Search results
(821 - 840 of 2,990)
- Title
- APPLYING THE PSYCHOLOGICAL FLEXIBILITY MODEL TO EXAMINE PREDICTORS OF ENGAGEMENT AND SUCCESS IN A WEIGHT MANAGEMENT PROGRAM FOR VETERANS
- Creator
- Pieczynski, Jessica
- Date
- 2015, 2015-07
- Description
Weight management success is contingent upon treatment utilization and engagement. Unfortunately, low enrollment, poor attendance, and high attrition from weight management programs are major barriers to long-term weight loss. This study applied the psychological flexibility model to the problem of weight management engagement. The current study evaluated the hypotheses that lower experiential avoidance (the process of changing, suppressing, or avoiding unpleasant experiences in an effort to regulate behavior) and higher values congruence (behaving consistently with one's values) predict treatment engagement and successful weight loss. Participants were 183 overweight and obese veterans (91.3% male, 77.6% African American). Participants completed a demographics questionnaire, the Acceptance and Action Questionnaire for Weight-Related Problems (AAQ-W), and the Valued Living Questionnaire (VLQ). Analyses revealed that experiential avoidance significantly predicted the probability of enrolling (OR = 1.03, p < .01). Experiential avoidance and values congruence were not significantly related to attendance, and experiential avoidance approached significance for dropout (OR = 6.54, p = .08). AAQ-W scores were related to baseline BMI (β = 7.49, p < .001) and 3-month BMI trajectory (β = 0.54, p < .01) for enrollees, while experiential avoidance predicted 3-month weight change for nonenrollees (β = 0.28, p < .05). The extant research on weight management suggests that much can be done to improve treatment outcomes, and increasing engagement is a major component of improving weight management success. The findings from this study suggest that targeting psychological flexibility can be a means of achieving this goal. Future weight management research should continue to explore this relationship.
Ph.D. in Psychology, July 2015
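The odds ratios above come from logistic regression models. As a minimal, purely illustrative sketch of how an odds ratio is obtained from a fitted coefficient (the data and predictor below are hypothetical, not taken from the study):

```python
import math

def fit_logistic(xs, ys, lr=0.1, steps=5000):
    """Single-predictor logistic regression via plain gradient descent."""
    b0 = b1 = 0.0
    n = len(xs)
    for _ in range(steps):
        g0 = g1 = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            g0 += (p - y) / n
            g1 += (p - y) * x / n
        b0 -= lr * g0
        b1 -= lr * g1
    return b0, b1

# Hypothetical questionnaire scores and enrollment outcomes (0/1).
scores = [1, 2, 3, 4, 5, 6, 7, 8]
enrolled = [0, 0, 0, 1, 0, 1, 1, 1]
_, b1 = fit_logistic(scores, enrolled)
odds_ratio = math.exp(b1)  # OR per one-point increase in the score
```

An OR above 1 means each one-point increase in the predictor multiplies the odds of the outcome by that factor, which is how a figure like OR = 1.03 in the abstract reads.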
- Title
- SCHEDULING FOR THROUGHPUT OPTIMIZATION IN WIMAX NETWORKS
- Creator
- Nusairat, Ashraf
- Date
- 2011-03-21, 2011-05
- Description
WiMAX emerged as one of the important Broadband Wireless Access (BWA) networks based on OFDMA technology and is anticipated to be an alternative to wired broadband networks. WiMAX supports emerging applications with different Quality of Service (QoS) requirements, such as voice over IP (VoIP), video conferencing, voice conferencing, and online gaming. These applications have high throughput demands and challenge the underlying Radio Access Network (RAN) scheduling algorithms. Efficient allocation of WiMAX shared resources, such as subchannels, is critical to meeting the high throughput demand. WiMAX resource allocation algorithms determine which users to schedule, how to allocate subcarriers to them, and the appropriate power level for each user on each subcarrier. In WiMAX, the DL TDD OFDMA subframe structure is a rectangular area of N subchannels × K time slots. Users are assigned rectangular bursts in the downlink subframe; the burst size varies with the user's channel quality and the data to be transmitted. In this dissertation, we study the problem of assigning users to DL bursts in a WiMAX TDD OFDMA system, with the objective of maximizing downlink system throughput for the PUSC subchannelization permutation mode. We show that finding the optimal burst assignment that maximizes throughput is NP-hard. We study this problem following two distinct approaches. (1) Integer Programming approach: we formulate the problem as an IP problem and then relax it to an LP; we propose different methods to resolve conflicts resulting from the LP relaxation and, through extensive simulations, compare the performance of the proposed conflict resolution methods to the optimal solution.
(2) Best Channel approach: we propose several efficient and effective methods to assign bursts to users based on channel quality; we prove that our Best Channel burst assignment method achieves a throughput within a constant factor of the optimal, and through extensive simulations with real system parameters, we study its performance. To the best of our knowledge, we are the first to study the problem of DL burst assignment in the DL OFDMA subframe for the PUSC subchannelization permutation mode that takes the user's channel quality into consideration in the assignment process.
Ph.D. in Computer Science, May 2011
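The Best Channel idea can be sketched in a few lines. The toy below is my own simplification, not the dissertation's algorithm: it ignores the rectangular-burst constraint of the PUSC DL subframe and simply hands each subchannel to the user reporting the best channel quality on it.

```python
def best_channel_assignment(quality):
    """Greedy 'best channel' sketch: give each subchannel to the user
    with the highest reported channel quality on it.

    quality[u][c] = achievable rate for user u on subchannel c.
    Returns (assignment, total_throughput)."""
    n_users = len(quality)
    n_chans = len(quality[0])
    assignment = {}
    total = 0.0
    for c in range(n_chans):
        best_u = max(range(n_users), key=lambda u: quality[u][c])
        assignment[c] = best_u
        total += quality[best_u][c]
    return assignment, total

# Two users, two subchannels: user 0 is strong on channel 0, user 1 on channel 1.
rates = [[3.0, 1.0], [2.0, 4.0]]
assign, tput = best_channel_assignment(rates)  # {0: 0, 1: 1}, throughput 7.0
```

A real burst assignment must additionally pack users into rectangles of the N × K subframe, which is what makes the optimization NP-hard.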
- Title
- INVESTIGATING DIRECTED EVOLUTION AND GENETIC ENGINEERING WITH VITREOSCILLA HEMOGLOBIN TO PRODUCE CULTURES FOR LOW AERATION BIOLOGICAL WASTEWATER TREATMENT
- Creator
- Kunkel, Stephanie
- Date
- 2014, 2014-07
- Description
The dominance of hemoglobin (Hb)-expressing bacteria in biological wastewater treatment systems could improve oxygen utilization under low dissolved oxygen (DO) conditions. Hb proteins are versatile molecules with several biological functions. Here, Nitrosomonas europaea was transformed with various plasmids; of particular interest is a recombinant plasmid bearing the constitutive Amo1 promoter and the gene (vgb) encoding the hemoglobin of the bacterium Vitreoscilla. Expression of VHb was assayed using various visible spectral methods, and VHb production was confirmed in this recombinant strain. Several positive effects on N. europaea metabolism were associated with VHb expression, specifically the ability of cultures to convert ammonia to nitrite at a slightly higher rate, as well as higher specific oxygen uptake rates (SOUR) at both high (near saturation, 7 mg O2/L) and low (< 2 mg O2/L) DO conditions. In parallel, two activated sludge cultures were cultivated using synthetic wastewater seeded with activated sludge from the same source and were operated at high DO (near saturation) and low DO (0.25 mg O2/L) concentrations for 370 days. There were significant changes in the bacterial species and phyla present in each culture at various time points during the 370-day operational period. In the low-DO culture, over time, there was much greater expression of single-domain and truncated Hbs, which may enhance utilization and delivery of oxygen to various enzymes as well as to the respiratory chain. A larger increase in heme b was also observed, consistent with this observation. By the end of the acclimation period, SOUR values were about 30% greater in the low-DO culture than in the high-DO culture, indicating that the low-DO culture successfully adapted to respire more efficiently and eventually outperform the high-DO culture.
Ph.D. in Biology, July 2014
- Title
- AN INTEGRATED RESOURCE MANAGEMENT AND SCHEDULING FRAMEWORK FOR PRODUCTION SUPERCOMPUTERS
- Creator
- Tang, Wei
- Date
- 2012-07-16, 2012-07
- Description
Resource management and job scheduling are crucial tasks on large-scale computing systems. Despite years of research, resource management and scheduling have not kept pace with modern changes and technology trends. This thesis is motivated by emerging issues observed in current production supercomputers, caused by factors such as human behavior, application characteristics, and increasing system complexity. Specifically, users tend to provide inaccurate parameters for their jobs, on which the scheduler depends, and system owners have diverse goals that often conflict with one another. Also, workload characteristics on production supercomputers change unpredictably, making it hard to sustain scheduling performance, since scheduling policies depend heavily on workload characteristics. Further, increasing hardware complexity causes system issues and creates new demands: node fragmentation, failure interruption, power consumption, and I/O overhead have become common concerns in large-scale systems, and existing resource management systems lack support for them. In this study, we present an integrated resource management and scheduling framework aimed at addressing these emerging issues and challenges for large-scale production supercomputers. We have designed a set of new schemes, including job parameter prediction, adaptive metric-aware job scheduling, cost-aware job scheduling, and multi-domain job coscheduling. We have implemented these approaches in the production resource manager Cobalt and evaluated them with real job traces from production supercomputers such as the Blue Gene/P system at Argonne National Laboratory. Experimental results show our schemes can effectively improve job scheduling with respect to both user satisfaction and system utilization.
Ph.D. in Computer Science, July 2012
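One of the listed schemes, job parameter prediction, addresses users' inaccurate walltime requests. The windowed-average predictor below is a deliberately simple, hypothetical stand-in for illustration, not Cobalt's actual scheme:

```python
from collections import defaultdict, deque

class WalltimePredictor:
    """Sketch: predict a job's runtime from the user's recent history,
    falling back to the requested walltime when no history exists."""

    def __init__(self, window=5):
        # Keep only the last `window` actual runtimes per user.
        self.history = defaultdict(lambda: deque(maxlen=window))

    def record(self, user, actual_runtime):
        self.history[user].append(actual_runtime)

    def predict(self, user, requested_walltime):
        runs = self.history[user]
        if not runs:
            return requested_walltime  # no history: trust the request
        avg = sum(runs) / len(runs)
        # Never predict above the user's own request.
        return min(requested_walltime, avg)

p = WalltimePredictor()
for rt in (55, 60, 65):          # past jobs actually ran ~1 hour
    p.record("alice", rt)
prediction = p.predict("alice", 120)  # request of 120 is corrected to 60.0
```

A scheduler using such predictions can backfill more aggressively, since jobs that habitually overestimate their walltime no longer block holes in the schedule.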
- Title
- APPLICATION OF THE FEAR-AVOIDANCE MODEL OF CHRONIC PAIN TO UNDERSTAND NEUROCOGNITIVE AND BEHAVIORAL FACTORS THAT CONTRIBUTE TO FUNCTIONAL IMPAIRMENT AND DEPRESSION IN ADULTS WITH SICKLE CELL DISEASE
- Creator
- Piper, Lauren E.
- Date
- 2017, 2017-07
- Description
Acute and chronic pain in sickle cell disease (SCD) are associated with functional impairment and depressive symptoms. Given the suboptimal management of pain in SCD and serious health risks associated with current treatment methods for pain, there is a need to identify factors associated with pain that impact functional outcomes and depression. The fear-avoidance (FA) model of chronic pain has been examined in other chronic pain populations as a means to understand how pain-related cognitive and behavioral factors contribute to functional impairment and depression, but has not been applied to individuals with SCD. The purpose of the present study was to apply the FA model of chronic pain to adults with SCD via mediation analyses. Additionally, mental flexibility was examined as a possible moderator in the FA model. Results demonstrated that pain catastrophizing mediated the relationship between pain severity and pain-related fear. No other mediators within the model were identified. Additionally, results did not demonstrate that mental flexibility moderated the relationship between pain severity and pain catastrophizing. Post-hoc exploratory analyses demonstrated that pain catastrophizing and pain-related fear significantly predicted functional impairment and depression, respectively, above and beyond pain severity. Overall, results suggest that the FA model of chronic pain does not apply to individuals with SCD and that the predictive roles pain catastrophizing and pain-related fear play in functional impairment and depression are not consistent with results in other chronic pain populations. Further studies are needed to identify factors that explain the relationship between pain, functional impairment, and depression so that these factors may be targeted for intervention as a means to improve pain, mood, and functional independence.
Ph.D. in Psychology, July 2017
- Title
- IDENTIFY AND IMPROVE OBSTACLE AVOIDANCE CAPABILITY OF UNMANNED GROUND VEHICLE
- Creator
- Nie, Chenghui
- Date
- 2015, 2015-05
- Description
An Unmanned Ground Vehicle (UGV) incorporating a high level of obstacle avoidance capability benefits field operations. Such a UGV would be better able to travel at a high average speed to finish tasks quickly, as well as to quickly alter its trajectory to avoid hostile situations. However, avoiding obstacles at high speed is challenging, since the danger of collision with an obstacle increases with vehicle speed. This thesis developed novel metrics to mathematically identify the obstacle avoidance capability of ground vehicles. The theory is applied to characterize the obstacle avoidance capability of generalized rigid bodies and three types of wheeled ground vehicles: Ackermann-steered, differential-steered, and omnidirectional vehicles. Design guidelines are provided in the final chapters to improve the obstacle avoidance capabilities of these three vehicle types. I demonstrated in this thesis that an Ackermann-steered vehicle's obstacle avoidance capability is related to the location of its center of mass, and I utilized the obstacle avoidance theory to create a novel Variable Inertial Vehicle (VIV), an unmanned ground vehicle able to control the location of its center of mass during locomotion. Experimental results demonstrate the improved obstacle avoidance capability. This thesis also experimentally evaluates the obstacle avoidance characteristics of an omnidirectional unmanned ground vehicle comprising four independent differential-steered units, Active Split Offset Casters (ASOC). Both the characteristics of the vehicle system and the ASOC kinematics are demonstrated, and experimental results validate its distinct obstacle avoidance capability in challenging outdoor terrain.
Ph.D. in Mechanical and Aerospace Engineering, May 2015
- Title
- SPECTRUM ALLOCATIONS ALGORITHMS IN WIRELESS NETWORKS
- Creator
- Xu, Ping
- Date
- 2011-04-26, 2011-05
- Description
All wireless devices rely on access to the radio frequency spectrum, which has historically been regulated by static spectrum allocation policies. With the recent rapid growth of spectrum-based services and devices, the spectrum remaining for future wireless services is being exhausted, a situation known as the spectrum scarcity problem. The current fixed spectrum allocation scheme leads to significant spectrum white spaces (spectral, temporal, and geographic), where many allocated spectrum blocks are used only in certain geographical areas and/or for brief periods of time. In this work, we design and analyze various spectrum allocation algorithms for better spectrum utilization and study fundamental performance bounds for networks with opportunistic spectrum utilization. We first propose spectrum allocation algorithms for the offline model, in which all spectrum requests are known when the allocation decision is made. We then address the online model, in which allocation decisions must be made when only a few spectrum requests are known. In the online model, we focus on two cases: the first assumes no statistics of future spectrum requests are known, while the second assumes some statistics are known or can be learned. For all these models, we design efficient spectrum allocation methods and analytically prove that most of them are asymptotically optimal. Our extensive simulation results also verify our theoretical conclusions.
Ph.D. in Computer Science, May 2011
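The offline/online distinction can be made concrete with a toy model (entirely illustrative; the dissertation's algorithms and proofs are far more involved). Offline, all requests are visible and can be sorted by value before allocating; online, each request must be accepted or rejected on arrival, here against a fixed value threshold:

```python
def offline_allocate(requests, channels):
    """Offline sketch: all requests known up front; greedily serve the
    most valuable ones until the channels run out."""
    granted, free = [], channels
    for req_id, value in sorted(requests, key=lambda r: -r[1]):
        if free > 0:
            granted.append(req_id)
            free -= 1
    return granted

def online_allocate(stream, channels, threshold):
    """Online sketch with no knowledge of the future: accept a request
    iff a channel is free and its value clears a fixed threshold."""
    granted, free = [], channels
    for req_id, value in stream:
        if free > 0 and value >= threshold:
            granted.append(req_id)
            free -= 1
    return granted

reqs = [("a", 5), ("b", 9), ("c", 2), ("d", 7)]
off = offline_allocate(reqs, 2)          # ["b", "d"]
on = online_allocate(reqs, 2, 6)         # ["b", "d"] with threshold 6
```

With a well-chosen threshold the online rule matches the offline choice here, but an adversarial arrival order can force any fixed threshold to lose value; bounding that loss is exactly what competitive analysis of such algorithms is about.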
- Title
- AUTOMATED SLICING METHODS FOR LARGE EVENT TRACES
- Creator
- Smith, Raymond D.
- Date
- 2012-05-02, 2012-05
- Description
Many long-running computer systems record events as they execute, producing a dynamic record of system behavior. In large systems, the event trace may contain thousands of entries; when faced with a problem to analyze, programmers must sort through many disparate events to find those related to the system behavior under study and eliminate those that are not. In this research we investigated automatic reduction of event traces to reduce the volume of events and assist in analyzing the behavior of large systems. Our approach was to adapt the techniques of program slicing to compute event trace slices as a means of reduction. Two methods for slicing event traces were proposed and investigated. The Event Dependence Based (EDB) method uses information available in the event trace to identify dependencies between events and to compute an event trace slice that meets a slicing criterion. The Model Dependence Based (MDB) method incorporates an executable state-based system model to achieve further reduction, identifying model-based dependencies in the trace to compute trace slices. An experimental study was performed on simulated systems, representative of state-based software systems in industry, to analyze and compare the EDB and MDB slicing methods. Both methods provided significant reduction of event traces, particularly for systems with a low degree of sharing and interaction among resources; however, the MDB method significantly outperformed the EDB method for systems with a high degree of resource sharing.
Ph.D. in Computer Science, May 2012
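The core of an EDB-style slice, a backward transitive closure over event dependencies, can be sketched in a few lines. The event names and dependence relation below are invented for illustration:

```python
def event_slice(trace, depends_on, criterion):
    """Backward slice of an event trace.

    trace      - ordered list of event ids
    depends_on - dict: event id -> set of event ids it depends on
    criterion  - the event of interest (the slicing criterion)
    Returns the sub-trace of events the criterion transitively
    depends on, in original trace order."""
    keep = {criterion}
    work = [criterion]
    while work:
        e = work.pop()
        for dep in depends_on.get(e, ()):
            if dep not in keep:
                keep.add(dep)
                work.append(dep)
    return [e for e in trace if e in keep]

trace = ["open", "read", "log", "parse", "close"]
deps = {"close": {"open"}, "parse": {"read"}, "read": {"open"}}
sliced = event_slice(trace, deps, "parse")  # ["open", "read", "parse"]
```

Unrelated events ("log", "close") are dropped, which is the reduction the research measures; the MDB method additionally derives dependence edges from an executable system model rather than from the trace alone.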
- Title
- INVESTIGATION AND MODELING OF PRESSURE DEPENDENT YIELD BEHAVIOR OF 3D STOCHASTIC AND PERIODIC FOAMS
- Creator
- Ayyagari Venkata S, Ravi Sastri
- Date
- 2013, 2013-07
- Description
With the growing potential of cellular solids in a multitude of diverse engineering applications, including but not limited to lightweight alternatives and space-filling cores in sandwich structures for the automotive, aerospace, naval, and biomedical industries, the need for predictive yield/failure criteria for these load-bearing members under multiaxial stress states becomes critical. Although several yield criteria have been proposed in the literature for highly porous solid foams, they are all phenomenological in nature, rely on relatively long lists of model parameters that require difficult experimentation not readily available to the end user, and none of them can handle the anisotropy observed in the majority of commercially available solid foams. Further, it is by now well established that, unlike commonly used engineering bulk solids, the yield behavior of highly porous solid foams is significantly influenced by the hydrostatic component of stress. In the majority of phenomenological yield criteria proposed for solid foams, this dependence is expressed by a quadratic pressure term. The scope of this study is quite comprehensive in that it integrates analytical and computational investigation of yield behavior in solid foams with extensive validation against recent experimental results produced in our lab. The present study proposes a physics-based approach by hypothesizing that the yielding of stochastic foams is governed by the total elastic strain energy density, which leads to an energy-based yield criterion for transversely isotropic foams and also provides a physical basis for the quadratic pressure dependence commonly adopted in existing phenomenological models.
An added benefit of the analytical framework proposed in this work is that it introduces new scalar measures of stress and strain, referred to as characteristic stress and characteristic strain, that function analogously to the effective (von Mises) stress and strain commonly used in analyzing the yield and post-yield behavior of bulk metals. Besides accommodating anisotropy, this energy-based yield criterion offers a unique advantage by relying only on the elastic properties and uniaxial yield strengths of the material, which makes the proposed criterion extremely practical for the end user. Experimental data from multiaxial testing of Divinycell H100 and H130 foams (Shafiq, 2009; Ehaab, 2011), as well as a series of extensive computational simulations performed in this study on (a) periodic Kelvin foam models (both isotropic and transversely isotropic) of varying relative densities and (b) stochastic Voronoi foams (both isotropic and transversely isotropic), point to an additional linear pressure dependence in the yield behavior of solid foams, from a load-sharing viewpoint. This dependence is observed to be more pronounced at lower relative densities. A simple quantitative technique based on the partition of elastic strain energy into bending and stretch components is used to identify the distribution of deformation modes at the microstructural level, along with its influence on load sharing as a function of stress path. Furthermore, a plasticity model incorporating a flow rule and a hardening law is presented, which allows the analysis of inelastic deformations in solid foams in a continuum framework. Such models facilitate development of a user-defined material model (UMAT) that allows evaluating the performance of the proposed yield criterion under complex loading scenarios, such as indentation and punch loading.
Ph.D. in Mechanical and Aerospace Engineering, July 2013
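The quadratic pressure dependence mentioned above is commonly written in a form like the following (a generic illustration with assumed symbols, not the dissertation's exact criterion):

```latex
% sigma_e : effective (von Mises) stress
% p       : hydrostatic pressure, p = -\sigma_{kk}/3
% alpha   : pressure-sensitivity parameter
% sigma_y : uniaxial yield strength
\[
  \hat{\sigma}^{2} = \sigma_{e}^{2} + \alpha\, p^{2},
  \qquad \text{yield when } \hat{\sigma} = \sigma_{y} .
\]
```

The additional linear pressure dependence the study reports would amount to a further term proportional to p in such an expression, becoming more significant at lower relative densities.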
- Title
- ANYTIME ACTIVE LEARNING
- Creator
- Ramirez Loaiza, Maria E.
- Date
- 2016, 2016-05
- Description
Machine learning is a subfield of artificial intelligence which deals with algorithms that can learn from data. These methods provide computers with the ability to learn from past data and make predictions for new data. A few examples of machine learning applications include automated document categorization, spam detection, speech recognition, face detection and recognition, language translation, and self-driving cars. A common scenario for machine learning is supervised learning, where the algorithm analyzes known examples to train a model that can identify a concept. For instance, given example documents that are pre-annotated as personal, work, family, etc., a machine learning algorithm can be trained to automate organizing your documents folder. In order to train a model that makes as few mistakes as possible, the algorithm needs many training examples (e.g., documents and their categories). Obtaining these examples often involves consulting the human user/expert whose time is limited and valuable. Hence, the algorithm needs to utilize the human's time as efficiently as possible by focusing on the most cost-effective and informative examples that would make learning more efficient. Active learning is a technique where the algorithm selects which examples would be most cost-effective and beneficial for consultation with the human. In a typical active learning setting, the algorithm simply chooses the examples that should be asked to the expert. In this thesis, we take this one step further: we observe that we can make even better use of the expert's time by showing not the full example but only the relevant pieces of it, so that the expert can focus on what is relevant and can provide the answer faster. For example, in document classification, the expert does not need to see the full document to categorize it; if the algorithm can show only the relevant snippet to the expert, the expert should be able to categorize the document much faster.
However, automatically finding the relevant snippet is not a trivial task; showing an incorrect snippet can either hinder the expert’s ability to provide an answer at all (if the snippet is irrelevant) or even cause the expert to provide incorrect information (if the snippet is misleading). For this to work, the algorithm needs to find a snippet to show the expert, estimate how much time the expert will spend on that snippet, and predict if the expert will return an answer at all. Further, the algorithm would estimate the likelihood of the expert returning the correct answer. Similar to anytime algorithms that can find better solutions as they are given more time, we call the proposed set of methods anytime active learning where the experts are expected to give better answers as they are shown longer snippets. In this thesis, we focus on three aspects of anytime active learning: i) anytime active learning with document truncation where the algorithm assumes that the first words, sentences, and paragraphs of the document are most informative and it has to decide on the snippet length, i.e., where to truncate the document, ii) given a document, the algorithm optimizes for both snippet location and length, and lastly, iii) the algorithm chooses not only the snippet location and size but also chooses which documents to choose snippets from so that the snippet length, the correctness of the expert’s response, and the informativeness of the document are all optimized in a unified framework.
Ph.D. in Computer Science, May 2016
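Aspect (i), choosing where to truncate a document, can be sketched as maximizing expected information per second of expert time. Everything below (the utility, the linear cost model, and the answer-probability curve) is a hypothetical illustration, not the thesis's actual objective:

```python
def choose_truncation(doc_sentences, informativeness, seconds_per_sentence,
                      answer_prob):
    """Pick the prefix length k that maximizes expected information
    gained per second of expert time:
        P(answer | k sentences shown) * info / (k * seconds_per_sentence)
    """
    best_k, best_utility = 1, float("-inf")
    for k in range(1, len(doc_sentences) + 1):
        cost = k * seconds_per_sentence
        utility = answer_prob(k) * informativeness / cost
        if utility > best_utility:
            best_k, best_utility = k, utility
    return best_k

# Toy answer-probability curve: one sentence is rarely enough, two
# almost always suffice, and longer snippets add little.
prob = lambda k: [0.1, 0.9, 1.0, 1.0, 1.0][k - 1]
k = choose_truncation(["s"] * 5, 1.0, 2.0, prob)  # -> 2
```

The sketch captures the anytime trade-off: longer snippets raise the chance of a (correct) answer but cost more expert time, so the optimum sits where the marginal sentence stops paying for itself.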
- Title
- THE VERY ENERGETIC RADIATION IMAGING TELESCOPE ARRAY SYSTEM OBSERVATIONS OF THE STARBURST GALAXY M82
- Creator
- Ratliff, Gayle
- Date
- 2015, 2015-07
- Description
This work describes the Very Energetic Radiation Imaging Telescope Array System (VERITAS) observations of the starburst galaxy M82, documenting the analysis of 231 quality-selected hours of observational data taken between 2008 and 2014. M82 is the prototypical starburst galaxy; its high supernova (SN) rate and dense central accumulation of molecular gas make it a promising candidate for studying cosmic ray (CR) acceleration and propagation through the detection of diffuse very high energy (VHE; approximately 100 GeV-100 TeV) γ-ray emission. This diffuse emission is predicted to result from proton-proton interactions within the galaxy's core that produce VHE γ-rays through neutral pion decay. This work confirms the results of the initial VERITAS publication covering 137 hours of M82 observations between January 2008 and April 2009, yielding a total of 103.5 excess γ-ray-like events (0.007 γ/min, 5.7σ pre-trial statistical significance) from the deeper 231-hour exposure. The spectral properties found are in agreement with the original detection within errors (Γ = 2.85 ± 0.39). These results are consistent with paradigms that describe the production of CRs via the conversion of mechanical energy generated in supernovae (SNe). These findings will improve current diffuse emission models by better constraining galaxy parameters and by providing insight into CR proton loss processes and timescales, with further understanding to be gained with the introduction of the Cherenkov Telescope Array (CTA).
Ph.D. in Physics, July 2015
- Title
- FIBRONECTIN INFLUENCES THE RATE OF ASSEMBLY AND STRUCTURAL CHARACTERISTICS OF THE FIBRIN MATRIX AND A MAP OF LYSINE PEGYLATION SITES IN FIBRONECTIN
- Creator
- Ramanathan, Anand
- Date
- 2015, 2015-07
- Description
-
Fibronectin serves multiple roles during tissue formation and wound healing, functioning through interactions with cells and extracellular molecules. The overall objective of my research was to investigate the role of fibronectin biochemistry in responses associated with wound healing. My approach was to engineer relevant in vitro models highlighting fibronectin functionality in tissues and to link this work to more complex wound healing systems. My research goals were accomplished through the following three specific aims: (1) Determine the role of fibronectin in the kinetics of formation and structure of a fibrin-fibronectin matrix, (2) Determine the effect of protease on the activity of fibronectin in decellularized extracellular matrices, and (3) Map the sites of polyethylene glycol conjugation, or PEGylation, to lysine residues in fibronectin. Aim 1: I demonstrated that fibronectin increased the initial rate of fibrin matrix formation and altered the fibrin matrix structure. These findings are novel because they link results from light absorbance studies to microscopy analyses and demonstrate the influence of fibronectin on fibrin matrix structural characteristics. Aim 2: I demonstrated a link between fibronectin proteolysis and reduced cell adhesion in decellularized extracellular matrices. This study demonstrates the susceptibility of fibronectin to proteolysis in the extracellular matrix and the resulting loss of matrix functionality, placing weight on bioengineering strategies to stabilize fibronectin against proteolysis. Aim 3: I examined proteolytic fragments of native and PEGylated fibronectin to map the fibronectin lysine residues that are conjugated to PEG. From four key chymotryptic fragments that span fibronectin and are recognized by specific monoclonal antibodies, I provide a map of lysine PEGylation sites for fibronectin. Moreover, I show that lysine PEGylation of fibronectin occurs asymmetrically on the dimer arms.
Knowledge of the lysine PEGylation sites can be used to plan future experiments for investigating fibronectin biochemical interactions in complex in vitro and in vivo models. In accomplishing these specific aims, I identified key biomolecular mechanisms involving fibronectin and created relevant in vitro models to study these interactions. The work detailed in this thesis lays the foundation for future experiments to investigate fibronectin functionality and develop therapeutic strategies targeting fibronectin biochemistry in tissue development.
Ph.D. in Chemical Engineering, July 2015
- Title
- EFFECT OF FIDELITY ON COMPUTERIZED SIMULATION ASSESSMENT OUTCOMES
- Creator
- Siskind, Ariel David
- Date
- 2012-10-09, 2012-12
- Description
-
Simulation fidelity refers to the level of realism with which the simulation is presented, as well as the method by which applicants can respond. Work simulations have been shown in previous literature to be beneficial selection tools. However, the research is less concrete with regard to the effects of various levels of fidelity (specifically, high-fidelity virtual environments) on important organizational outcomes. In the current study, a model of fidelity is presented and 322 participants completed one of four simulation conditions (high fidelity, low fidelity/no branching, low fidelity/branching, zero fidelity). Face validity, applicant reaction, presence/immersion/engagement, predictive validity, and reliability were measured as outcomes of interest from the model. The findings indicated that the high fidelity condition and the low fidelity conditions had increased fidelity, fairness, face validity, and presence/immersion/engagement compared to the zero fidelity condition. However, the addition of branching to the low fidelity simulation did not impact the hypotheses in the expected direction. The hypothesis regarding predictive validity was not supported, and the hypothesis regarding reliability was partially supported. Implications of the findings, limitations, and recommendations for future research are presented.
Ph.D. in Psychology, December 2012
- Title
- DESIGNING SMART ARTIFACTS FOR ADAPTIVE MEDIATION OF SOCIAL VISCOSITY: TRIADIC ACTOR-NETWORK ENACTMENTS AS A BASIS FOR INTERACTION DESIGN
- Creator
- Salamanca, Juan
- Date
- 2012-10-10, 2012-12
- Description
-
With the advent of ubiquitous computing, interaction design has broadened its object of inquiry into how smart computational artifacts inconspicuously act in people's everyday lives. Although user-centered design approaches remain useful for exploring how people cope with interactive systems, they cannot explain how this new breed of artifacts participates in people's sociality. User-centered design approaches assume that humans control interactive systems, disregarding the agency of smart artifacts. Based on Actor-Network Theory, this research recognizes that artifacts and humans share the capacity to influence society and mesh with each other, constituting hybrid social actors. From that standpoint, the research offers a triadic structure of networked social interaction as a methodological basis to investigate how smart devices perceive their social setting and adaptively mediate people's interactions within activities. These triadic units of analysis account for the interactions within and between human-nonhuman collectives in the actor-network. The within interactions are those that hold together humans and smart artifacts inside a collective and put forward the collective's assembled meaning for other actors in the network. The between interactions are those that occur among collectives and characterize the dominant relational model of the actor-network. This triadic approach was modeled and used to analyze the interactions of participants in three empirical studies of social activities with communal goals, each mediated by a smart artifact that enacted (signified) a balanced distribution of obligations and privileges among subjects. Overall, the studies found that actor-networks exhibit a social viscosity that hinders people's interactions. This is because when people try to collectively accomplish goals, they offer resistance to one another.
These design experiments also show that the intervention of smart artifacts can facilitate the achievement of cooperative and collaborative interaction between actors when the artifacts enact dominant moral principles which prompt the preservation of social balance, enhance the network's information integrity, and are located at the focus of activity. The articulation of Actor-Network Theory principles with interaction design methods opens up the traditional user-artifact dyad towards triadic collective enactments by embracing diverse kinds of participants and practices, thus facilitating the design of enhanced sociality.
Ph.D. in Design, December 2012
- Title
- ENVIRONMENTAL AND SOCIAL SUSTAINABILITY IMPLICATIONS OF DOWNTOWN HIGH-RISE VS. SUBURBAN LOW-RISE LIVING: A CHICAGO CASE STUDY
- Creator
- Du, Peng
- Date
- 2015, 2015-12
- Description
-
This research is focused on quantitatively investigating and comparing the environmental and social sustainability of people's lifestyles, in terms of embodied energy, operational energy use, and overall satisfaction with quality of life, in downtown high-rise and suburban low-rise living, using Chicago, IL and the surrounding suburban area of Oak Park, IL as a case study. Specifically, in both cases, the study seeks to evaluate factors such as the embodied energy of the materials that comprise buildings in each location; the predicted and actual monthly energy consumption of the homes; travel via all modes of transport including automobile, public transport, walking, and biking; and the embodied and operational energy of the infrastructure to support each mode of transportation. In addition, this research also engages with the individual building occupants, including single individuals, couples, and families, in a large subset of downtown and suburban Chicago households to directly evaluate perceptions of their life satisfaction and sense of community, which offers a unique direct comparison between dense high-rise and suburban low-rise living. The findings of the study show that downtown high-rise living in Chicago accounts for approximately 58% more life-cycle energy per person per year than Oak Park low-rise living, on average, contrary to some common beliefs (best estimates were ~260 and ~165 GJ/person/year, respectively). Building operational energy was estimated to be the single largest contributor to the total life-cycle energy in both the downtown high-rise and suburban low-rise cases, followed by vehicle operational energy. The findings of the study also show that downtown high-rise residents were associated with higher life satisfaction than suburban low-rise residents when controlling for demographic differences in the research sample.
Residence type was not found to be associated with sense of community when controlling for demographic differences, and the factor that was found to be significantly associated with sense of community was household size in the research sample. Also, accessibility and safety were found as the strongest predictors of overall residential environment for individuals.
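The headline 58% figure can be checked against the quoted best estimates (a sketch of the arithmetic only, using just the two numbers given in the abstract):

```python
# Rough check of the reported ~58% difference (illustrative; figures are the
# best estimates quoted in the abstract).
highrise_GJ = 260.0   # downtown high-rise, GJ/person/year
lowrise_GJ = 165.0    # suburban low-rise, GJ/person/year

pct_more = (highrise_GJ - lowrise_GJ) / lowrise_GJ * 100.0
print(f"{pct_more:.0f}% more")  # ~58% more, matching the abstract
```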
Ph.D. in Architecture, December 2015
- Title
- ENGINEERING OF CLINICAL-SCALE, IN VITRO VASCULARIZED BONE TISSUE FOR IMPLANTATION
- Creator
- Gandhi, Jarel K.
- Date
- 2016, 2016-05
- Description
-
Tissue engineering has been a rapidly expanding field dedicated to the regeneration of tissue. The field has focused on application through combinations of three key components: cells, signals, and scaffolds. One ambitious combination of all three is the desire to engineer functional tissues in vitro to meet the clinical demand for organ replacement. While major advances have been made, a critical obstacle that has yet to be overcome is the need to grow large volumes of complex 3D tissue. In this thesis, the issue is addressed in two ways: the use of a perfusion bioreactor system to culture 3D scaffolds to enhance mass transport, and the engineering of a vascular network within the scaffold for rapid perfusion once implanted in vivo. This thesis aims to address both aspects for bone tissue engineering by engineering pre-vascularized, mineralizing scaffolds that can be scaled up to clinically-relevant volumes by using a tubular perfusion bioreactor system (TPS). To this end, three aims were addressed. First, 3D culture of endothelial colony forming cells (ECFCs), a clinically-relevant cell population, was demonstrated utilizing fibrin gels within the TPS. The TPS allowed for viable culture of ECFCs within fibrin bead scaffolds for up to 1 week without a reduction in cell number or genomic quality of the cells. Second, a co-culture model of angiogenesis utilizing ECFCs and mesenchymal stem cells (MSCs) was demonstrated to reproducibly form pre-formed vessel networks within a mineralizing fibrin scaffold. Data show that MSC suspension concentration and fibrinogen concentration modulate the angiogenic response. Mineralization is demonstrated without the use of osteogenic media by utilizing shear stress within the TPS. Finally, functionality of the pre-formed vessels is demonstrated following implantation into a SCID mouse model.
Engineered human vessels showed anastomosis to the host vasculature, with evidence of interconnected host and human vessel networks as well as formation of hybrid vessels. Additionally, evidence of mineralization within the scaffolds is maintained in TPS-cultured samples. With these aims demonstrated, future work should focus on fortifying the scaffold material to enable implantation and persistence of clinically-relevant tissue volumes. In conclusion, pre-vascularization within bioreactor-cultured scaffolds represents a promising solution for future tissue engineering applications.
Ph.D. in Biomedical Engineering, May 2016
- Title
- DESIGN AND ANALYSIS OF DATAPATH CIRCUITS USING MULTI-GATE TRANSISTORS
- Creator
- Garcia Martin, Martin
- Date
- 2015, 2015-07
- Description
-
Multi-Gate Field-Effect Transistors are transistors with more than one gate, allowing continuation of Moore's Law and performance increases for CMOS transistors. The introduction of multi-gate devices has been a turning point for the semiconductor industry in facilitating the transition from planar to 3D structures. Intel first introduced commercial products using 3D structures (called Tri-Gate transistors) in late 2011 with Ivy Bridge CPUs using 22nm processes. Significant performance gains have been reported; i.e., a 37% performance increase at low voltage and a 50% power reduction. Multi-gate transistors based on 3D structures can vary greatly in their configuration and architectures, leading to ambiguity in their design. It is necessary to investigate the performance of datapath circuits when multi-gate and independent-gate devices replace the conventional planar transistors. Therefore, a key objective of this work has been to analyze these transistors' performance and to design new datapath circuits to leverage the inherent qualities of multi-gate transistor structures. Multiple-gate devices can be modeled using the BSIM-CMG (Common Multi-Gate) and BSIM-IMG (Independent Multi-Gate) compact models from the University of California, Berkeley Device Group. In this research, both device types have been characterized for a variety of parameters to study their basic properties and functionality and to build a foundation for improved circuit designs. In particular, BSIM-CMG devices have been compared with planar CMOS technology, demonstrating significant advantages in all design metrics, while BSIM-IMG devices have been used to design new gates and improve datapath designs. In the first part of this study, essential logic gates, i.e., Inverter, NAND and NOR, have been implemented using BSIM-CMG devices. After being analyzed and compared with the CMOS technology, a 32% reduction in dynamic power consumption and an 82% reduction in leakage current have been obtained.
For a comprehensive look at full adder designs, several novel adder architectures have been implemented, including ultra-low-power and minimum-number-of-transistors (10T) designs. The analysis of these implementations shows a 54% dynamic power reduction, a 98% static current reduction and a 26% delay reduction. These results lead to a 68% improvement in the Power-Delay product compared with the 32nm CMOS planar technology. In order to investigate dynamic logic circuits with multi-gate transistors, two recent dynamic circuit techniques have been implemented with novel enhancements to reduce the leakage current. Data Driven Dynamic Logic (D3L) and Split-Path Data Driven Dynamic Logic (SPD3L) have been used to analyze the dynamic logic circuits, resulting in 11% reduced dynamic power, 52% reduced leakage current and 33% reduced delay. The second part of this study deals with the independent-gate devices. Using the BSIM-IMG model, new XOR/XNOR logic gate designs are introduced for implementing novel low-power adders. With these new adder architectures, the average improvement in dynamic power is 8% and the designs are 6% faster. Furthermore, a new design technique is proposed combining the possible modes (Short Gate-SG, Low Power-LP, Independent Gate-IG) that the BSIM-IMG provides. Using this novel mixed design, the Power-Delay product is improved on average by 7.2% and 54%, compared to the Short-Gate (SG) and Low-Power (LP) modes, respectively. The properties of the BSIM-IMG logic have been applied to improve the dynamic logic designs as well. The Domino and SPD3L design techniques have been implemented, and enhancements such as merging the pull-up transistors have been proposed for sleep and power-gating techniques. With these enhancements, the dynamic power is reduced by 13% on average and the designs are 18% faster. The trade-off is an 8% increase in leakage current.
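As a rough cross-check of the reported adder figures (illustrative only; it assumes the 54% power and 26% delay reductions share a common baseline, which the abstract does not state), the Power-Delay product improvement compounds multiplicatively:

```python
# Back-of-envelope Power-Delay-product check (illustrative; baselines of the
# two reported reductions are assumed to coincide, which may not hold exactly).
power_reduction = 0.54   # 54% dynamic power reduction
delay_reduction = 0.26   # 26% delay reduction

pdp_ratio = (1.0 - power_reduction) * (1.0 - delay_reduction)
improvement = (1.0 - pdp_ratio) * 100.0
print(f"PDP improvement ~{improvement:.0f}%")  # ~66%, in line with the reported 68%
```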
Another major contribution of the work has been the development of shell script files for generating a custom toolbox for datapath designs with multi-gate and independent-gate transistors.
Ph.D. in Electrical and Computer Engineering, July 2015
- Title
- PMU DATA APPLICATIONS IN SMART GRID: LOAD MODELING, EVENT DETECTION AND STATE ESTIMATION
- Creator
- Ge, Yinyin
- Date
- 2016, 2016-05
- Description
-
The thesis mainly includes four parts of research: event detection, data archival reduction, load modeling, and state estimation. Firstly, we present methods for real-time event detection and data archival reduction based on synchrophasor data produced by phasor measurement units (PMUs). Event detection is performed with Principal Component Analysis (PCA) and a second-order difference method, with a hierarchical framework for the event notification strategy on a small-scale Microgrid. Compared with existing methods, the proposed method is more practical and efficient in the combined use of event detection and data archival reduction. Secondly, the proposed method for data reduction, an "event-oriented auto-adjustable sliding window method", implements a curve-fitting algorithm with a weighted exponential-function-based variable sliding window accommodating different event types. It works efficiently with minimal loss of data information, especially around detected events. The performance of the proposed method is shown on actual PMU data from the IIT campus Microgrid, thus successfully improving the situational awareness (SA) of the campus power system network. Thirdly, we present a new "event-oriented" method of online load modeling for the IIT Microgrid based on synchrophasor data produced by PMUs. Several load models and their parameter estimation methods are proposed, with great importance given to choosing the best models for the detected events. The online load modeling process is based on an adjustable sliding window applied to two different types of load step changes. The load modeling tests and related analysis on the synchrophasor data of the IIT Microgrid are demonstrated in this thesis. Finally, we present a three-phase unbalanced distribution system state estimation (DSSE) method based on Semidefinite Programming (SDP). A partitioning strategy with the aid of PMUs and a distributed optimization algorithm, the alternating direction method of multipliers (ADMM), are also proposed for large-scale DSSE. Compared with a traditional weighted least squares (WLS) method based on the Gauss-Newton iteration, the proposed SDP-based DSSE method delivers a more accurate estimation, and the application of ADMM can lead to high performance for large-scale DSSE while delivering satisfactory estimates.
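The PCA-based detection idea described above can be illustrated with a minimal sketch (hypothetical code, not from the thesis; the actual algorithm, thresholds, second-order difference step, and hierarchical notification logic are not specified in the abstract): project correlated channel measurements onto their leading principal components and flag samples whose reconstruction error is unusually large.

```python
import numpy as np

def pca_event_flags(X, n_components=1, threshold_sigmas=4.0):
    """Flag rows of X whose PCA reconstruction error (SPE) is unusually large.

    X: (n_samples, n_channels) array of synchrophasor-like measurements.
    Returns a boolean array marking candidate event samples.
    """
    Xc = X - X.mean(axis=0)                # center each channel
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    P = Vt[:n_components]                  # leading principal directions
    residual = Xc - (Xc @ P.T) @ P         # part not captured by the PCs
    spe = (residual ** 2).sum(axis=1)      # squared prediction error per sample
    cutoff = spe.mean() + threshold_sigmas * spe.std()
    return spe > cutoff

# Synthetic demo: three strongly correlated channels with one injected disturbance.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 600)
base = np.sin(t)
X = np.column_stack([base + 0.01 * rng.standard_normal(t.size) for _ in range(3)])
X[300:305, 0] += 0.5                       # abrupt event on one channel
flags = pca_event_flags(X)
print(flags[300:305].any(), flags[:250].any())
```

An event that breaks the normal cross-channel correlation leaves a large residual outside the principal subspace, while routine variation stays inside it.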
Ph.D. in Electrical Engineering, May 2016
- Title
- THE VETERAN/MILITARY COUPLE RELATIONSHIP IN THE CONTEXT OF POSTTRAUMATIC STRESS DISORDER: THE ROLE OF RELATIONSHIP-FOCUSED COPING AND CONGRUENCY/DISCREPANCY OF COPING
- Creator
- Gela, Natalie R.
- Date
- 2016, 2016-07
- Description
-
Intimate relationship functioning is an area of great concern for Veterans and military personnel coping with clinically significant posttraumatic stress disorder (PTSD), as well as for their significant others. Research findings based on couples affected by chronic physical illnesses indicate that specific relationship-focused coping strategies (active engagement, protective buffering, and overprotection) are linked to dyadic adjustment and individual well-being, yet this type of interpersonal coping has not been investigated in the context of Veteran/military samples affected by PTSD. The present study used a sample of Veterans diagnosed with PTSD and their significant others (N = 71 pairs) to examine associations between: (a) relationship-focused coping and dyadic adjustment; (b) relationship-focused coping and PTSD symptom severity; and (c) relationship-focused coping and significant other emotional distress. Actor-Partner Interdependence Models revealed significant associations between relationship-focused coping strategies and dyadic adjustment in the predicted directions. Furthermore, protective buffering and overprotection were positively associated with, and active engagement was negatively associated with, Veteran PTSD symptom severity and significant other emotional distress. Congruency/discrepancy of couple members' relationship-focused coping was also examined in order to investigate whether or not patterns of coping within a couple impact dyadic adjustment in the context of PTSD, and these findings were not significant. Overall, findings from the present study demonstrate the importance of interpersonal coping processes within the context of military Veterans diagnosed with PTSD and their significant others. The implications of these findings with regard to current theoretical models of PTSD and relationship functioning are discussed.
Implications for clinical interventions aimed at treating Veterans diagnosed with PTSD and/or couples coping with a Veteran’s PTSD diagnosis are also discussed.
Ph.D. in Psychology, July 2016
- Title
- CHARACTERIZATION AND MODELING OF A COMMERCIAL NATIONWIDE WI-FI HOTSPOT NETWORK
- Creator
- Divgi, Gautam
- Date
- 2014, 2014-12
- Description
-
We present a thorough analysis of a commercial nationwide Wi-Fi hotspot network. The analysis is approached in two ways: characterization and modeling. First, we characterize the network from a five-month-long log of user activity and traffic collected by a wireless network service provider operating hotspots in restaurants, serviced apartments, hotels and airports all over Australia. The users are categorized based on their account time limits to analyze the impact of account stratification on the overall user behavior. A similarity index is developed to compare two data sets. This is used to quantitatively measure how similar or different various types of accounts are. The user population in the network is found to be highly fluctuating; hence, user-specific, population-independent metrics are proposed to manage this transience. We also introduce metrics to measure account time and data utilization. We then follow through with detailed modeling of session and traffic parameters. We develop the truncated log-logistic (T-LL) distribution, which can model light- and heavy-tailed data using a modification of Lavalette's law. A novel method to fit the T-LL distribution to data by minimizing a goodness-of-fit metric is presented. The T-LL distribution and the fitting method are subsequently used to model session and traffic parameters of the network based on the categorization methodology developed previously. We address concerns about the specificity of the model by using it to model other publicly available Wi-Fi network traces. The ability of the introduced T-LL distribution to model both light- and heavy-tailed data makes it uniquely qualified for modeling web file sizes. Thus we extend the applicability of the introduced model by fitting it to publicly available web file size data. The T-LL models outperform those of the Pareto and lognormal distributions currently used to model such data.
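The fitting-by-goodness-of-fit idea can be sketched as follows (hypothetical code, not from the thesis; since the truncated log-logistic (T-LL) form itself is not specified in the abstract, this fits a standard log-logistic, SciPy's Fisk distribution, by minimizing the Kolmogorov-Smirnov statistic):

```python
import numpy as np
from scipy import stats, optimize

def fit_loglogistic_by_ks(data):
    """Fit a log-logistic (Fisk) distribution by minimizing the KS statistic.

    Illustrates the general fit-by-goodness-of-fit approach; the thesis's
    T-LL distribution itself is not reproduced here.
    """
    def ks_stat(params):
        c, scale = params
        if c <= 0 or scale <= 0:
            return np.inf        # keep the search in the valid region
        return stats.kstest(data, stats.fisk(c, scale=scale).cdf).statistic

    c0, _, scale0 = stats.fisk.fit(data, floc=0)   # MLE starting point
    res = optimize.minimize(ks_stat, x0=[c0, scale0], method="Nelder-Mead")
    return res.x                                   # (shape c, scale)

# Demo on synthetic data with known parameters.
rng = np.random.default_rng(1)
sample = stats.fisk(c=2.5, scale=60.0).rvs(size=2000, random_state=rng)
c_hat, scale_hat = fit_loglogistic_by_ks(sample)
print(f"shape ~ {c_hat:.2f}, scale ~ {scale_hat:.1f}")
```

Minimizing the KS statistic directly, rather than maximizing likelihood, trades some statistical efficiency for a fit whose worst-case CDF deviation is explicitly controlled.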
Ph.D. in Computer Science, December 2014