Search results
(821 - 840 of 2,944)
Pages
- Title
- THE CHARACTERIZATION FOR B2 STRUCTURE AND L21 STRUCTURE IN THE AG-MG AND AG-MG-IN SYSTEM
- Creator
- Kim, Do Hyung
- Date
- 2013, 2013-05
- Description
-
The concentration of point defects and the long-range order in ordered B2 AgMg alloys, quenched from 973 K, were investigated as a function of composition by statistical thermodynamic modeling, powder X-ray diffractometry, and diffraction simulation. The lattice parameter behavior on the Ag-rich side is consistent with constitutional and thermal anti-site defects on both the Ag and Mg sub-lattices, in agreement with literature data. The Mg-rich side, on the other hand, is dominated by thermal vacancy defects, as indicated by lattice parameters lower than previously reported values. The equilibrium point defect concentrations at 973 K were calculated from two thermodynamic models: the Ag-rich side was based on constitutional and thermal anti-site defect formation, and the Mg-rich side on hybrid defect formation consisting of vacancy, Mg, and Ag anti-site defects. The experimental long-range order at 973 K, determined from the integrated intensity ratio of the (100) super-lattice reflection to the (200) fundamental reflection, is in good agreement with the theoretical long-range order at 973 K based on integrated intensities calculated from the diffraction simulation with the equilibrium concentration of each point defect obtained from the two thermodynamic models. Furthermore, point defect hardening coefficients on both sides of stoichiometry were determined by measuring the Vickers hardness as a function of the equilibrium concentration of the main point defects deduced from the two models. The hardening coefficient is G/16 for the Ag-rich side with respect to Ag anti-site defects and G/3.1 for the Mg-rich side with respect to vacancy defects. These two coefficients are consistent with the empirical correlation for several binary B2 intermetallic compounds with anti-site defects (G/9 to G/85) and vacancy defects (G/3 to G/4). This suggests that the elastic size effect is the primary hardening mechanism on the Ag-rich side, due to constitutional and thermal Ag anti-site defects, while the Mg-rich side is likely governed by the elastic modulus effect due to constitutional and thermal vacancy defects. It also indicates that vacancy defects are a more significant hardener than Ag anti-site defects in the ordered B2 AgMg intermetallic system. The partial liquidus projection of the Ag-Mg-In ternary system was established from the primary phases and liquidus temperatures, using scanning electron microscopy, energy dispersive spectroscopy, and differential scanning calorimetry. The results showed that the AgMg1-xInx phase has a large primary solidification field extending up to 90 at.% In, so that most ternary invariant reactions of the In-rich field must occur beyond 90 at.% In. The liquid-solid schematic reactions in the Ag-rich field were experimentally confirmed, but those of the In-rich side have not been established. Furthermore, the ordering phase transition and melting temperature of the Heusler-phase AgMg1-xInx alloys were investigated using differential scanning calorimetry, scanning electron microscopy, and powder X-ray diffractometry. The DSC results indicated that the melting temperature decreased with increasing In composition, but a thermal peak for the ordering phase transition was not detected, owing either to a very small heat of transition or to a second-order transformation.
The XRD results showed that the L21 structure of the Heusler phase was observed for the 15 at.% In alloy and that the degree of L21 order increased continuously with In composition, as evidenced by the (111) super-lattice intensity. The L21 ordering of the 15 at.% and 20 at.% In alloys gradually decreased with increasing annealing temperature, corresponding to decreasing (111) super-lattice intensity and long-range order parameters of the L21 structure. These XRD observations suggest that the L21/B2 ordering transformation is second order with respect to temperature.
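For reference, the long-range order parameter quoted above is conventionally extracted from the superlattice-to-fundamental integrated intensity ratio; a common form of the standard relation (a sketch, not necessarily the exact expression used in the dissertation, which also folds in simulated intensities) is

```latex
S^{2} \;=\; \frac{\left( I_{100}/I_{200} \right)_{\mathrm{obs}}}{\left( I_{100}/I_{200} \right)_{\mathrm{calc},\,S=1}} ,
```

where the calculated ratio assumes perfect order (S = 1).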
Ph.D. in Material Science and Engineering, May 2013
- Title
- POWER OPTIMIZATION FROM REGISTER TRANSFER LEVEL TO TRANSISTOR LEVEL IN DEEPLY SCALED CMOS TECHNOLOGY
- Creator
- Li, Li
- Date
- 2012-04-25, 2012-05
- Description
-
With the progress of CMOS technology, there has been steady growth in clock frequency and chip capacity. As a result, the power dissipation of deeply scaled digital CMOS designs has increased tremendously. At the same time, low-power VLSI design is crucial in many areas, such as mobile phones. Furthermore, according to the 2011 International Technology Roadmap for Semiconductors (ITRS), projected power consumption is trending far beyond the allowable power requirement. Power optimization techniques are therefore essential in today's VLSI design. Low-power methodologies exist at every level from system level to layout level; this research focuses on techniques from the register transfer level (RTL) down to the transistor level. Clock gating (CG) is the most widely used technique for reducing dynamic power at RTL. One traditional CG style is XOR-based CG: it compares the inputs and outputs of flip-flops (FFs) and gates the FFs when they are equal. However, this CG is not effective because it does not take signal activities into account. In this thesis, an activity-driven optimized bus-specific clock gating (OBSC) is proposed. It uses fine-grained RTL power models to estimate dynamic power and selectively gates only a subset of FFs based on their switching activities. During the clock-gated period, the gated FFs' outputs are stable; consequently, the combinational logic that depends entirely on these stable outputs can be power gated to save leakage power. Thus, CG and power gating (PG) can be integrated to reduce dynamic and leakage power simultaneously. The sleep signal of our PG is the CG enable signal generated during the CG implementation, so no dedicated power management block is required, unlike traditional PG implementations. Moreover, to determine whether PG leads to leakage power savings, a minimum average idle time concept is proposed. Lastly, as a critical part of the integration of CG and PG, data retention logic (DRL) is required to hold the values of the power-gated logic's outputs so that the non-power-gated blocks that depend on those outputs can function correctly during the power-gated period. In this thesis, a low-power DRL design is presented. All the above techniques have been applied to ISCAS'89 benchmark circuits and their correctness verified successfully. Moreover, the whole experimental flow is automated in software, making it easy to integrate into current EDA tools.
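As a rough illustration of the activity-driven selection idea described above, the sketch below gates only low-activity flip-flops, for which the XOR-based enable (D differing from Q) is false in most cycles. The function names, traces, and threshold are illustrative assumptions, not the OBSC implementation itself:

```python
# Sketch: activity-driven selection of flip-flops for clock gating.
# Gating low-activity FFs saves clock power; gating high-activity FFs
# would burn more power in the gating logic than it saves.

def toggle_rate(trace):
    """Fraction of cycles in which a signal changes value."""
    return sum(a != b for a, b in zip(trace, trace[1:])) / max(len(trace) - 1, 1)

def select_gated_ffs(ff_traces, max_activity=0.2):
    """Gate only flip-flops whose switching activity is below the
    threshold; for these, the XOR enable (D != Q) is rarely asserted."""
    return [ff for ff, trace in ff_traces.items()
            if toggle_rate(trace) < max_activity]

# Example: ff_b toggles every cycle, so it stays ungated.
ff_traces = {
    "ff_a": [0, 0, 0, 0, 0, 0, 0, 1],   # rarely switches -> gate it
    "ff_b": [0, 1, 0, 1, 0, 1, 0, 1],   # switches every cycle -> leave it
}
print(select_gated_ffs(ff_traces))  # -> ['ff_a']
```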
Ph.D. in Computer Engineering, May 2012
- Title
- GALLIUM NITRIDE NANOSTRUCTURED POWER SEMICONDUCTOR DEVICES
- Creator
- Sabui, Gourab
- Date
- 2017, 2017-05
- Description
-
Gallium nitride (GaN) has emerged as a promising material for power semiconductor devices owing to its superior material characteristics. Fabricated GaN power devices have started to outperform their silicon (Si) counterparts with low conduction and switching losses, and they hold the key to extremely low-loss, high-efficiency power delivery circuits of the future. However, GaN power devices have been plagued by several inherent drawbacks that have prevented ubiquitous adoption of GaN as the material of choice for power switches. The most critical trade-off has been the choice of substrate for the growth of GaN epitaxy: a high-performance, high-cost native substrate or a low-cost, non-native substrate with reliability issues. For GaN to thrive as a superior successor to Si, a low-cost, high-performance epitaxy with improved reliability is needed moving forward. A novel nanostructured approach to GaN power devices is proposed in this dissertation. Nano-GaN power devices theoretically have the potential to bypass the reliability concerns associated with a non-native substrate while still delivering comparable performance. A comprehensive TCAD model of bulk GaN power devices is proposed to accurately model the nano-GaN devices. Through extensive modeling and simulation, design guidelines for Schottky barrier diodes and field-effect transistors based on the nano-GaN concept are laid out to extract the best performance from this architecture. Dielectric-semiconductor interaction is also exploited to push these devices beyond the unipolar material limit of GaN. The simulated and fabricated nano-GaN power devices show the potential to deliver performance equivalent or superior to present state-of-the-art GaN devices, but with improved reliability, ruggedness, and low cost.
Ph.D. in Electrical Engineering, May 2017
- Title
- HYBRID ELECTROSTATIC AND MICRO-STRUCTURED ADHESIVES FOR ROBOTIC APPLICATIONS
- Creator
- Ruffatto, Donald F., III
- Date
- 2015, 2015-07
- Description
-
Current adhesives and gripping mechanisms used in many robotics applications function only on very specific surface types or at defined attachment locations. A controllable, i.e. ON-OFF, adhesive mechanism that can operate on a wide range of surfaces would be very advantageous. Such a device would have applications ranging from robotic gripping and climbing to satellite docking and inspection/service missions. The main goal of the research presented here was to create such an attachment mechanism through a new hybrid adhesive technology: a hybridization of electrostatic and micro-structured dry adhesion. The result provides enhanced robustness and utility, particularly on rough surfaces. There were challenges not only in integrating these two adhesive elements but also in applying them in a complete gripping mechanism. Electrostatic and directional dry adhesives were each investigated individually. The electrode geometry of an electrostatic adhesive was optimized for maximum adhesion force using finite element analysis software, and the optimization results were then verified through experimental testing. New manufacturing techniques were also developed for electrostatic adhesives that utilize a metalized mesh embedded in a silicone polymer and a Kapton-film-based construction, greatly improving adhesion. The micro-structured dry adhesive used was provided by Dr. Parness of the NASA Jet Propulsion Laboratory (JPL) and consists of an array of vertical stalks with an angled front face, referred to as micro-wedges. The hybrid electrostatic dry adhesive (EDA) was created by fabricating the electrostatic adhesive directly on top of a dry adhesive mold, producing an array of dry adhesive micro-wedges directly on the surface of the electrostatic adhesive. In operation, the electrostatic adhesive provides a normal force that pulls the dry adhesive into the surface substrate. With greater surface contact, more of the dry adhesive engages, bringing the electrostatic adhesive even closer to the surface and increasing its effectiveness. The combination of the two technologies thus creates a positive feedback cycle whose whole is often greater than the sum of its parts. An interface mechanism is needed to transmit applied loads from a rigid structure to the flexible adhesive while maintaining its conformability. This is especially important for strong adhesion on rough surfaces such as tile and drywall. Different concepts, such as a structured fibrillar hierarchy and a fluid-filled backing pouch, were explored, and finite element analysis was used to evaluate different fibrillar shapes and geometries for the structured hierarchy. The goal was to equalize the load distribution across the adhesive while maintaining surface compliance. A gripper mechanism was also created that used a servo for actuation and three rigid tiles with a directional dry adhesive. It was tested on a perching Micro Air Vehicle (MAV) as well as in the RoboDome facility at NASA's Jet Propulsion Laboratory to simulate a satellite docking/capture maneuver.
Ph.D. in Mechanical and Aerospace Engineering, July 2015
- Title
- METHODOLOGY FOR URBAN AREA SNOW REMOVAL USING NEW MACHINE AND PERFORMANCE-BASED ANALYSIS
- Creator
- Neishapouri, Mohammad
- Date
- 2015, 2015-12
- Description
-
The need for alternative methods that facilitate snow removal on urban streets with minimal pavement and bridge damage, vehicle corrosion, and environmental impacts from chemicals and salts has been growing over time. Nevertheless, this issue has not been thoroughly investigated. This is particularly true for large urban areas, where snow removal machines and background traffic share already congested streets. In this research, a new methodology is introduced for effectively managing snow removal that involves a new machine and performance-based analysis. The new machine melts snow and ice using a mechanical system that includes a special engine, heat pumps, and very fast ventilation pumps to suck up and discharge water from the pavement surface to the roadsides. The performance-based analysis employs a life cycle cost analysis approach to estimate the reductions in expenditures on pavements and bridges, and in vehicle corrosion of background traffic, when the new machine is used for snow melting instead of chemicals and salts; and an optimization model for effectively dispatching the new machine across a large area, leading to significant travel time savings for background traffic owing to shorter durations of travel-way closures. The proposed methodology is implemented in a computational study of the current snow removal programs in the city of Chicago for a typical winter day involving moderate and severe snowfalls, which correspond to its 50 percent and 100 percent programs for field-dispatching one-half and all snow plow trucks.
Compared with snow removal trucks coupled with chemicals and salts, the new machine could yield greater equivalent annualized savings on the benefit side and lower costs, giving the project benefit-to-cost ratios of 2.15 and 2, respectively, by CPI analysis and 3 and 3.04 by CCI analysis. Compared with the current practice of field-dispatching snow plow trucks or the new machine for snow removal, the optimization model for vehicle dispatching could further improve snow removal productivity by 2-4 percent for the 100 percent program and 3-8 percent for the 50 percent program.
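For orientation, the benefit-to-cost ratios quoted above follow the usual life-cycle-cost definition (a sketch of the standard form; the dissertation's CPI- and CCI-based analyses apply this to specific benefit and cost streams not detailed here):

```latex
\mathrm{BCR} \;=\; \frac{\text{equivalent uniform annual benefits (EUAB)}}{\text{equivalent uniform annual costs (EUAC)}},
\qquad \mathrm{BCR} > 1 \;\Rightarrow\; \text{economically justified,}
```

so all four reported ratios (2.15, 2, 3, 3.04) indicate a justified project.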
Ph.D. in Civil Engineering, December 2015
- Title
- DEVELOPMENT OF A CREATIVE WORK ANALYSIS
- Creator
- Neuman, Brendan George
- Date
- 2014, 2014-05
- Description
-
Creativity researchers continue to debate whether the phenomenon of creativity is a uniform construct regardless of context, or whether creativity differs characteristically across domains. The present research contributes to this debate by way of an analysis of creative work. It was hypothesized that a comprehensive analysis of creative work would reflect a four-factor structure that is often used to organize the creativity research literature. Additionally, differences in both the level and nature of creativity were expected to emerge from incumbent data across occupational domains. An eight-factor, rather than four-factor, structure of creative work was observed. Incumbent ratings from seven distinct job families differed in the nature, but not the level, of creative work.
Ph.D. in Psychology, May 2014
- Title
- PHYSIOLOGICAL AND BEHAVIORAL EVIDENCE THAT CHEMICALS IN THE ENVIRONMENT INFLUENCE DEVELOPMENT OF THE ZEBRAFISH OLFACTORY SYSTEM
- Creator
- Valesio, Eric
- Date
- 2013, 2013-12
- Description
-
Olfactory imprinting is the process of producing life-long changes through neural modification independent of associative learning. Here, I provide data demonstrating that olfactory imprinting in zebrafish leads to neurobiological and behavioral changes. I treated zebrafish with an amino acid (AA) or a bile acid (BA) mixture from 4 days post-fertilization (dpf) to 40 dpf. Behavioral studies showed that fish treated with odorants for 40 days exhibited preferential responses to the treated odorants, unlike controls, and these behavioral changes were retained 3 months after the odor treatment ceased. Whole-mount immunohistochemistry was conducted using antibodies against parvalbumin (PV) and OTX. We discovered that treated fish had increased PV and OTX expression in the olfactory epithelium (OE) at 7 dpf and increased PV expression in the olfactory bulb (OB) at 12 dpf. Detailed analysis indicated that increased PV expression was observed in the apical region of the OE in treated groups, while OTX was increased in both the apical and basal regions of the OE. Of the three regions of the OB analyzed, BA-treated fish showed a doubling of PV expression in all regions, whereas in AA-treated fish the doubling occurred in two regions. OTX expression was increased in all three regions of AA-treated fish but not in BA-treated fish. These data demonstrate that exposure to AA or BA during zebrafish development leads to long-lasting physiological and behavioral changes. The report also includes a study of embryonic zebrafish treated with SP600125 (anthrapyrazolone), an inhibitor of the c-Jun N-terminal kinase (JNK) signaling pathway. Zebrafish embryos were treated with 1.25 μM, 5 μM, or 12.5 μM SP600125 from 18 to 48 hours post-fertilization (hpf), followed by evaluation at 120 hpf. Zebrafish treated at 1.25 μM were not affected developmentally, while embryos treated at 5 μM and higher displayed numerous morphological defects, including edemas, eye malformations, and reduced olfactory organ size. Overall, treatment with 5 μM SP600125 caused severe developmental defects, and these defects worsened with increasing concentration. Taken together, these data indicate that the environment has a profound influence on zebrafish development.
Ph.D. in Biology, December 2013
- Title
- APPLYING THE PSYCHOLOGICAL FLEXIBILITY MODEL TO EXAMINE PREDICTORS OF ENGAGEMENT AND SUCCESS IN A WEIGHT MANAGEMENT PROGRAM FOR VETERANS
- Creator
- Pieczynski, Jessica
- Date
- 2015, 2015-07
- Description
-
Weight management success is contingent upon treatment utilization and engagement. Unfortunately, low enrollment, poor attendance, and high attrition from weight management programs are major barriers to long-term weight loss. This study applied the psychological flexibility model to the problem of weight management engagement. The study evaluated the hypotheses that lower experiential avoidance (the process of changing, suppressing, or avoiding unpleasant experiences in an effort to regulate behavior) and higher values congruence (behaving consistently with one's values) predict treatment engagement and successful weight loss. Participants were 183 overweight and obese veterans (91.3% male, 77.6% African American). Participants completed a demographics questionnaire, the Acceptance and Action Questionnaire for Weight-Related Problems (AAQ-W), and the Valued Living Questionnaire (VLQ). Analyses revealed that experiential avoidance significantly predicted the probability of enrolling (OR=1.03, p<.01). Experiential avoidance and values congruence were not significantly related to attendance, and experiential avoidance approached significance for dropout (OR=6.54, p=.08). AAQ-W scores were related to baseline BMI (β=7.49, p<.001) and 3-month BMI trajectory (β=0.54, p<.01) for enrollees, while experiential avoidance predicted 3-month weight change for non-enrollees (β=0.28, p<.05). The extant research on weight management suggests that much can be done to improve treatment outcomes, and increasing engagement is a major component of improving weight management success. The findings from this study suggest that targeting psychological flexibility can be a means to achieving this goal. Future weight management research should continue to explore this relationship.
Ph.D. in Psychology, July 2015
- Title
- SCHEDULING FOR THROUGHPUT OPTIMIZATION IN WIMAX NETWORKS
- Creator
- Nusairat, Ashraf
- Date
- 2011-03-21, 2011-05
- Description
-
WiMAX has emerged as one of the important Broadband Wireless Access (BWA) networks based on OFDMA technology and is anticipated to be an alternative to wired broadband networks. WiMAX supports different emerging applications with different Quality of Service (QoS) requirements, such as voice over IP (VoIP), video conferencing, voice conferencing, and online gaming. These emerging wireless applications have high throughput demands and pose a challenge to the underlying Radio Access Network (RAN) scheduling algorithms. Efficient allocation of WiMAX shared resources, such as subchannels, is critical to meeting the high throughput demand. The WiMAX resource allocation algorithms determine which users to schedule, how to allocate subcarriers to them, and how to determine the appropriate power level for each user on each subcarrier. In WiMAX, the DL TDD OFDMA subframe structure is a rectangular area of N subchannels × K time slots. Users are assigned rectangular bursts in the downlink subframe; the burst size varies with the user's channel quality and the data to be transmitted for the assigned user. In this dissertation we study the problem of assigning users to DL bursts in a WiMAX TDD OFDMA system with the objective of maximizing downlink system throughput for the PUSC subchannelization permutation mode. We show that finding the optimal burst assignment that maximizes throughput is NP-hard. We study this problem following two distinct approaches. (1) Integer programming: we formulate the problem as an IP and relax it to an LP, propose different methods to resolve conflicts resulting from the LP relaxation, and compare, through extensive simulations, the performance of the proposed conflict resolution methods with the optimal solution. (2) Best Channel: we propose several efficient and effective methods to assign bursts to users based on channel quality, prove that our Best Channel burst assignment method achieves throughput within a constant factor of the optimal, and study its performance through extensive simulations with real system parameters. To the best of our knowledge, we are the first to study the problem of DL burst assignment in the DL OFDMA subframe for the PUSC subchannelization permutation mode while taking the user's channel quality into consideration in the assignment process.
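The following toy sketch conveys the flavor of a best-channel-style greedy burst assignment on an N-subchannel × K-slot grid. It is an assumption-laden simplification (one-subchannel-high bursts, no PUSC permutation details, invented rates and demands), not the dissertation's algorithm or its constant-factor-optimal method:

```python
from typing import Dict, List

def assign_bursts(rates: Dict[str, List[float]], n_sub: int, k_slots: int,
                  demand_slots: Dict[str, int]):
    """Greedy sketch: consider users in order of their best channel
    quality; give each a contiguous run of free slots on the
    highest-rate subchannel where its burst still fits."""
    free_from = [0] * n_sub          # next free slot index per subchannel
    assignment = {}
    order = sorted(rates, key=lambda u: max(rates[u]), reverse=True)
    for user in order:
        # Try this user's subchannels best-rate first.
        for ch in sorted(range(n_sub), key=lambda c: rates[user][c],
                         reverse=True):
            start, need = free_from[ch], demand_slots[user]
            if start + need <= k_slots:
                assignment[user] = (ch, start, need)   # (subchannel, slot, len)
                free_from[ch] = start + need
                break
    return assignment

# Invented example: 2 subchannels x 4 slots, 3 users.
rates = {"u1": [3.0, 1.0], "u2": [2.5, 2.0], "u3": [0.5, 2.2]}
print(assign_bursts(rates, n_sub=2, k_slots=4,
                    demand_slots={"u1": 2, "u2": 3, "u3": 2}))
```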
Ph.D. in Computer Science, May 2011
- Title
- INVESTIGATING DIRECTED EVOLUTION AND GENETIC ENGINEERING WITH VITREOSCILLA HEMOGLOBIN TO PRODUCE CULTURES FOR LOW AERATION BIOLOGICAL WASTEWATER TREATMENT
- Creator
- Kunkel, Stephanie
- Date
- 2014, 2014-07
- Description
-
The dominance of hemoglobin (Hb)-expressing bacteria in biological wastewater treatment systems could improve oxygen utilization under low dissolved oxygen (DO) conditions. Hb proteins are versatile molecules with several biological functions. Here, Nitrosomonas europaea was transformed with various plasmids; of particular interest is a recombinant plasmid bearing the constitutive Amo1 promoter and the gene (vgb) encoding the hemoglobin from the bacterium Vitreoscilla. Expression of VHb was assayed using various visible spectral methods, and VHb production was observed in this recombinant strain. Several positive effects of VHb expression on N. europaea metabolism were seen, specifically the ability of cultures to convert ammonia to nitrite at a slightly higher rate, as well as higher specific oxygen uptake rates (SOUR) at both high (near saturation, 7 mg O2/L) and low (< 2 mg O2/L) dissolved oxygen conditions. In parallel, two activated sludge cultures were cultivated using synthetic wastewater seeded with activated sludge from the same source and were operated at high DO (near saturation) and low DO (0.25 mg O2/L) concentrations for 370 days. There were significant changes in the bacterial species and phyla present in each culture at various time points during the 370-day operational period. In the low-DO culture, over time, there was much greater expression of single-domain and truncated Hbs, which may enhance utilization and delivery of oxygen to various enzymes as well as to the respiratory chain; a larger increase in heme b was also observed, consistent with this observation. By the end of the acclimation period, SOUR values were about 30% greater in the low-DO culture than in the high-DO culture, indicating that the low-DO culture successfully adapted to respire more efficiently and eventually outperform the high-DO culture.
Ph.D. in Biology, July 2014
- Title
- AN INTEGRATED RESOURCE MANAGEMENT AND SCHEDULING FRAMEWORK FOR PRODUCTION SUPERCOMPUTERS
- Creator
- Tang, Wei
- Date
- 2012-07-16, 2012-07
- Description
-
Resource management and job scheduling are crucial tasks on large-scale computing systems. Despite years of research, resource management and scheduling have not kept pace with modern changes and technology trends. This thesis is motivated by emerging issues observed on current production supercomputers, caused by factors such as human behavior, application characteristics, and increasing system complexity. Specifically, users tend to provide inaccurate parameters for their jobs, on which the scheduler depends, and system owners have diverse goals that often conflict with one another. Workload characteristics on production supercomputers also change unpredictably, making sustainable scheduling performance hard to achieve, since scheduling policies depend heavily on workload characteristics. Further, increasing hardware complexity causes system issues and creates new demands: node fragmentation, failure interruption, power consumption, and I/O overhead have become common in large-scale systems, and existing resource management systems lack support for these issues and demands. In this study, we present an integrated resource management and scheduling framework aimed at addressing these emerging issues and challenges for large-scale production supercomputers. We have designed a set of new schemes, including job parameter prediction, adaptive metric-aware job scheduling, cost-aware job scheduling, and multi-domain job coscheduling. We implemented these approaches in the production resource manager Cobalt and evaluated them with real job traces from production supercomputers such as the Blue Gene/P system at Argonne National Laboratory. Experimental results show our schemes can effectively improve job scheduling with respect to both user satisfaction and system utilization.
Ph.D. in Computer Science, July 2012
- Title
- APPLICATION OF THE FEAR-AVOIDANCE MODEL OF CHRONIC PAIN TO UNDERSTAND NEUROCOGNITIVE AND BEHAVIORAL FACTORS THAT CONTRIBUTE TO FUNCTIONAL IMPAIRMENT AND DEPRESSION IN ADULTS WITH SICKLE CELL DISEASE
- Creator
- Piper, Lauren E.
- Date
- 2017, 2017-07
- Description
-
Acute and chronic pain in sickle cell disease (SCD) are associated with functional impairment and depressive symptoms. Given the suboptimal management of pain in SCD and serious health risks associated with current treatment methods for pain, there is a need to identify factors associated with pain that impact functional outcomes and depression. The fear-avoidance (FA) model of chronic pain has been examined in other chronic pain populations as a means to understand how pain-related cognitive and behavioral factors contribute to functional impairment and depression, but has not been applied in individuals with SCD. The purpose of the present study was to apply the FA model of chronic pain to adults with SCD via mediation analyses. Additionally, mental flexibility was examined as a possible moderator in the FA model. Results demonstrated that pain catastrophizing mediated the relationship between pain severity and pain-related fear. No other mediators within the model were identified. Additionally, results did not demonstrate that mental flexibility moderated the relationship between pain severity and pain catastrophizing. Post-hoc exploratory analyses demonstrated that pain catastrophizing and pain-related fear significantly predicted functional impairment and depression, respectively, above and beyond pain severity. Overall, results suggest that the FA model of chronic pain does not apply to individuals with SCD and the predictive roles that pain catastrophizing and pain-related fear play in functional impairment and depression are not consistent with results in other chronic pain populations. Further studies are needed to identify factors that explain the relationship between pain, functional impairment, and depression so that these factors may be targeted for intervention as a means to improve pain, mood, and functional independence.
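For readers unfamiliar with mediation analysis, the generic regression-based sketch below (simulated data; the variable names, effect sizes, and Baron-and-Kenny-style procedure are illustrative assumptions, not the study's data or exact method) shows how a total effect decomposes into an indirect path a·b through the mediator and a direct path:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
pain = rng.normal(size=n)
catastrophizing = 0.6 * pain + rng.normal(size=n)               # a path: X -> M
fear = 0.5 * catastrophizing + 0.1 * pain + rng.normal(size=n)  # b and c' paths

def slopes(y, predictors):
    """OLS coefficients for y ~ predictors (intercept included, then dropped)."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

c_total = slopes(fear, [pain])[0]                   # total effect of X on Y
a = slopes(catastrophizing, [pain])[0]              # X -> M
b, c_direct = slopes(fear, [catastrophizing, pain]) # M -> Y and direct X -> Y
print(f"total={c_total:.2f}, indirect=a*b={a * b:.2f}, direct={c_direct:.2f}")
# Mediation signature: the indirect effect a*b carries most of the total effect.
```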
Ph.D. in Psychology, July 2017
- Title
- IDENTIFY AND IMPROVE OBSTACLE AVOIDANCE CAPABILITY OF UNMANNED GROUND VEHICLE
- Creator
- Nie, Chenghui
- Date
- 2015, 2015-05
- Description
-
An Unmanned Ground Vehicle (UGV) incorporating a high level of obstacle avoidance capability benefits field operations. Such a UGV would be better able to travel at a high average speed to finish tasks quickly, as well as to quickly alter its trajectory to avoid hostile situations. However, avoiding obstacles at high speed is challenging, since the danger of collision with obstacles increases with vehicle speed. This thesis develops novel metrics to mathematically identify the obstacle avoidance capability of ground vehicles. The theory is applied to demonstrate the characteristics of the obstacle avoidance capability of generalized rigid bodies and three types of wheeled ground vehicles: Ackermann-steered, differential-steered, and omni-directional. Design guidelines are provided in the final chapters to improve the obstacle avoidance capabilities of these three vehicle types. I demonstrate in this thesis that the Ackermann-steered vehicle's obstacle avoidance capability is related to the location of its center of mass, and I utilize the obstacle avoidance theory to create a novel Variable Inertia Vehicle (VIV), an unmanned ground vehicle with the capability to control the location of its center of mass during locomotion. Experimental results demonstrate the improved obstacle avoidance capability. This thesis also experimentally evaluates the obstacle avoidance characteristics of an omni-directional unmanned ground vehicle comprising four independent differential-steered units, Active Split Offset Casters (ASOC). Both the characteristics of the vehicle system and the ASOC kinematics are demonstrated, and experimental results validate its distinct obstacle avoidance capability in challenging outdoor terrain.
Ph.D. in Mechanical and Aerospace Engineering, May 2015
- Title
- SPECTRUM ALLOCATIONS ALGORITHMS IN WIRELESS NETWORKS
- Creator
- Xu, Ping
- Date
- 2011-04-26, 2011-05
- Description
-
All wireless devices rely on access to the radio frequency spectrum, which has long been regulated by static spectrum allocation policies. With the recent fast growth of spectrum-based services and devices, the remaining spectrum available for future wireless services is being exhausted, a problem known as spectrum scarcity. The current fixed spectrum allocation scheme leads to significant spectrum white spaces (spectral, temporal, and geographic), where many allocated spectrum blocks are used only in certain geographical areas and/or for brief periods of time. In this work, we design and analyze various spectrum allocation algorithms for better spectrum utilization and study some fundamental performance bounds for networks with opportunistic spectrum utilization. We first propose spectrum allocation algorithms for the offline model, in which all spectrum requests are known when the allocation decision is made. We then address the online model, where the allocation decision must be made when only a few spectrum requests are known. In the online model, we focus on two cases: the first assumes no statistics of future spectrum requests are known, and the second assumes some statistics are known or can be learned. For all these models, we design efficient spectrum allocation methods and analytically prove that most of them are asymptotically optimal. Our extensive simulation results also verify our theoretical conclusions.
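As a minimal illustration of the online setting with known statistics, the toy rule below accepts the first request whose value clears a threshold derived from an assumed value distribution. This generic threshold rule only illustrates the problem setting; it is not one of the dissertation's algorithms, and all names and numbers are invented:

```python
import random

def online_allocate(requests, threshold):
    """Scan requests in arrival order and grant the channel to the
    first one whose value meets the precomputed threshold."""
    for t, value in enumerate(requests):
        if value >= threshold:
            return t, value        # (arrival index, accepted value)
    return None                    # no request cleared the bar

random.seed(1)
requests = [random.random() for _ in range(20)]
# Assume request values ~ Uniform(0, 1) and we want roughly the top
# 10% of the distribution, giving a threshold of 0.9.
print(online_allocate(requests, threshold=0.9))
```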
Ph.D. in Computer Science, May 2011
- Title
- AUTOMATED SLICING METHODS FOR LARGE EVENT TRACES
- Creator
- Smith, Raymond D.
- Date
- 2012-05-02, 2012-05
- Description
-
Many long-running computer systems record events as they execute, producing a dynamic record of system behavior. In large systems, the event trace may contain thousands of entries; when faced with a problem to analyze, programmers must sort through many disparate events to find those related to the system behavior under study and eliminate those that are not. In this research we investigated automatic reduction of event traces to reduce the volume of events and assist in analyzing the behavior of large systems. Our approach was to adapt the techniques of program slicing to compute event trace slices as a means of reduction. Two methods for slicing event traces were proposed and investigated. The Event Dependence Based (EDB) method uses information available in the event trace to identify dependencies between events and compute an event trace slice that meets a slicing criterion. The Model Dependence Based (MDB) method incorporates an executable state-based system model to achieve further reduction of traces; it identifies model-based dependences in the trace to compute trace slices. An experimental study was performed on simulated systems, representative of state-based software systems in industry, to analyze and compare the EDB and MDB slicing methods. Both methods provided significant reduction of event traces, particularly for systems with a low degree of sharing and interaction among resources; however, the MDB method significantly outperformed the EDB method for systems with a high degree of resource sharing.
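In the spirit of the EDB method, a backward slice of an event trace can be computed by transitively following dependence edges from the criterion event. The sketch below assumes the dependence relation has already been extracted; the event names and dependencies are illustrative, and the real method's dependence identification is richer:

```python
def backward_slice(trace, deps, criterion):
    """Keep only events the criterion event transitively depends on.
    `deps` maps an event to the earlier events it depends on."""
    keep, stack = set(), [criterion]
    while stack:
        e = stack.pop()
        if e not in keep:
            keep.add(e)
            stack.extend(deps.get(e, []))
    return [e for e in trace if e in keep]   # preserve original trace order

trace = ["open", "read", "log", "parse", "close"]
deps = {"close": ["open"], "parse": ["read"], "read": ["open"]}
# Slicing on "parse" drops the unrelated "log" and "close" events.
print(backward_slice(trace, deps, "parse"))  # -> ['open', 'read', 'parse']
```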
Ph.D. in Computer Science, May 2012
- Title
- INVESTIGATION AND MODELING OF PRESSURE DEPENDENT YIELD BEHAVIOR OF 3D STOCHASTIC AND PERIODIC FOAMS
- Creator
- Ayyagari Venkata S, Ravi Sastri
- Date
- 2013, 2013-07
- Description
-
With the growing potential of cellular solids in a multitude of diverse engineering applications, including but not limited to the automotive, aerospace, naval, and biomedical industries as lightweight alternatives and space-filling cores in sandwich structures, the need for predictive yield/failure criteria for these load-bearing members under multiaxial stress states becomes critical. Although several yield criteria for highly porous solid foams have been proposed in the literature, they are all phenomenological in nature, rely on relatively long lists of model parameters that require difficult experimentation not readily available to the end user, and none of them can handle the anisotropy observed in the majority of commercially available solid foams. Further, it is by now well established that, unlike commonly used engineering bulk solids, the yield behavior of highly porous solid foams is significantly influenced by the hydrostatic component of stress. In the majority of phenomenological yield criteria proposed for solid foams, this dependence is expressed by a quadratic pressure term. The scope of this study is quite comprehensive in the sense that it integrates analytical and computational investigation of yield behavior in solid foams with extensive validation against recent experimental results produced in our lab. The present study proposes a physics-based approach by hypothesizing that the yielding of stochastic foams is governed by the total elastic strain energy density, which leads to an energy-based yield criterion for transversely isotropic foams and also provides a physical basis for the quadratic pressure dependence commonly adopted in existing phenomenological models. An added benefit of the analytical framework proposed in this work is that it introduces new scalar measures of stress and strain, referred to as characteristic stress and characteristic strain, that function analogously to the effective (von Mises) stress and strain commonly used in analyzing the yield and post-yield behavior of bulk metals. Besides accommodating anisotropy, this energy-based yield criterion offers a unique advantage by relying only on the elastic properties and uniaxial yield strengths of the material, which makes the proposed criterion extremely practical for the end user. Results from experimental data obtained from multiaxial testing of Divinycell H100 and H130 foams (Shafiq, 2009; Ehaab, 2011), as well as a series of extensive computational simulations performed in this study on (a) periodic Kelvin foam models (both isotropic and transversely isotropic) of varying relative densities and (b) stochastic Voronoi foams (both isotropic and transversely isotropic), point to an additional linear pressure dependence in the yield behavior of solid foams from a load-sharing viewpoint. This dependence is observed to be more pronounced at lower relative densities. A simple quantitative technique based on the partition of elastic strain energy into bending and stretch components is used to identify the distribution of deformation modes at the microstructural level, along with its influence on load sharing as a function of stress path. Furthermore, a plasticity model incorporating a flow rule and hardening law is presented, which allows the analysis of inelastic deformations in solid foams in a continuum framework.
Such models facilitate the development of user-defined material models (UMAT) that allow evaluating the performance of the proposed yield criterion under complex loading scenarios, such as indentation and punch loading.
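For context, the pressure dependence discussed above is often expressed through a yield function with a quadratic pressure term, to which this study adds a linear term; one generic form (a sketch, not the dissertation's exact energy-based criterion) is

```latex
\Phi(\boldsymbol{\sigma}) \;=\;
\sigma_{\mathrm{vM}}^{2} + \alpha^{2} p^{2} + \beta\, p - \sigma_{y}^{2} \;\le\; 0,
\qquad p = -\tfrac{1}{3}\,\operatorname{tr}\boldsymbol{\sigma},
```

where σ_vM is the von Mises effective stress, p the pressure, σ_y the uniaxial yield strength, and α, β calibration coefficients for the quadratic and linear pressure terms.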
Ph.D. in Mechanical and Aerospace Engineering, July 2013
- Title
- ANYTIME ACTIVE LEARNING
- Creator
- Ramirez Loaiza, Maria E.
- Date
- 2016, 2016-05
- Description
-
Machine learning is a subfield of artificial intelligence which deals with algorithms that can learn from data. These methods provide computers with the ability to learn from past data and make predictions for new data. A few examples of machine learning applications include automated document categorization, spam detection, speech recognition, face detection and recognition, language translation, and self-driving cars. A common scenario for machine learning is supervised learning where the algorithm analyzes known examples to train a model that can identify a concept. For instance, given example documents that are pre-annotated as personal, work, family, etc., a machine learning algorithm can be trained to automate organizing your documents folder. In order to train a model that makes as few mistakes as possible, the algorithm needs many training examples (e.g., documents and their categories). Obtaining these examples often involves consulting the human user/expert whose time is limited and valuable. Hence, the algorithm needs to utilize the human’s time as efficiently as possible by focusing on the most cost-effective and informative examples that would make learning more efficient. Active learning is a technique where the algorithm selects which examples would be most cost-effective and beneficial for consultation with the human. In a typical active learning setting, the algorithm simply chooses the examples that should be asked to the expert. In this thesis, we take this one step further: we observe that we can make even better use of the expert’s time by showing not the full example but only the relevant pieces of it, so that the expert can focus on what is relevant and can provide the answer faster. For example, in document classification, the expert does not need to see the full document to categorize it; if the algorithm can show only the relevant snippet to the expert, the expert should be able to categorize the document much faster. However, automatically finding the relevant snippet is not a trivial task; showing an incorrect snippet can either hinder the expert’s ability to provide an answer at all (if the snippet is irrelevant) or even cause the expert to provide incorrect information (if the snippet is misleading). For this to work, the algorithm needs to find a snippet to show the expert, estimate how much time the expert will spend on that snippet, and predict if the expert will return an answer at all. Further, the algorithm would estimate the likelihood of the expert returning the correct answer. Similar to anytime algorithms that can find better solutions as they are given more time, we call the proposed set of methods anytime active learning where the experts are expected to give better answers as they are shown longer snippets. In this thesis, we focus on three aspects of anytime active learning: i) anytime active learning with document truncation where the algorithm assumes that the first words, sentences, and paragraphs of the document are most informative and it has to decide on the snippet length, i.e., where to truncate the document, ii) given a document, the algorithm optimizes for both snippet location and length, and lastly, iii) the algorithm chooses not only the snippet location and size but also chooses which documents to choose snippets from so that the snippet length, the correctness of the expert’s response, and the informativeness of the document are all optimized in a unified framework.
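A minimal sketch of the kind of selection rule this implies: jointly score (document, snippet) candidates by informativeness, the chance of getting a correct answer, and estimated annotation time. The scoring inputs and candidate tuples here are illustrative placeholders, not the thesis's learned models:

```python
def expected_utility(informativeness, p_answer, p_correct, seconds):
    """Expected value of asking about this snippet, per unit of expert time."""
    return informativeness * p_answer * p_correct / seconds

def choose_query(candidates):
    """candidates: (doc_id, n_words, info, p_answer, p_correct, seconds)."""
    return max(candidates, key=lambda c: expected_utility(*c[2:]))

candidates = [
    ("doc1", 25,  0.9, 0.60, 0.7, 10.0),  # short, cheap, riskier snippet
    ("doc1", 100, 0.9, 0.95, 0.9, 30.0),  # longer, safer, slower snippet
    ("doc2", 50,  0.5, 0.90, 0.9, 15.0),  # less informative document
]
print(choose_query(candidates))  # here the short doc1 snippet wins per second
```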
Ph.D. in Computer Science, May 2016
- Title
- THE VERY ENERGETIC RADIATION IMAGING TELESCOPE ARRAY SYSTEM OBSERVATIONS OF THE STARBURST GALAXY M82
- Creator
- Ratliff, Gayle
- Date
- 2015, 2015-07
- Description
-
This work describes the Very Energetic Radiation Imaging Telescope Array System (VERITAS) observations of the starburst galaxy M82, documenting the analysis of 231 quality-selected hours of observational data taken between 2008 and 2014. The prototypical starburst galaxy, M82 has a high supernova (SN) rate and a dense central accumulation of molecular gas, making it a promising candidate for studying cosmic ray (CR) acceleration and propagation through the detection of diffuse very high energy (VHE; approximately 100 GeV-100 TeV) γ-ray emission. This diffuse emission is predicted to result from proton-proton interactions within the galaxy's core that produce VHE γ-rays through neutral pion decay. This work confirms the results of the initial VERITAS publication covering 137 hours of M82 observations between January 2008 and April 2009, yielding a total of 103.5 excess γ-ray-like events (0.007 γ/min, 5.7σ pre-trial statistical significance) from the deeper 231-hour exposure. The spectral properties found are in agreement with the original detection within errors (Γ = 2.85 ± 0.39). These results are consistent with paradigms that describe the production of CRs via the conversion of mechanical energy generated in supernovae (SNe). These findings will improve current diffuse emission models by better constraining galaxy parameters and by providing insight into CR proton loss processes and timescales, with further understanding to be gained with the introduction of the Cherenkov Telescope Array (CTA).
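As a quick consistency check on the quoted rate, the 231-hour exposure gives

```latex
\frac{103.5\ \text{excess events}}{231\ \text{h} \times 60\ \text{min/h}}
\;\approx\; 0.0075\ \gamma/\text{min},
```

matching the rounded 0.007 γ/min reported above.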
Ph.D. in Physics, July 2015
- Title
- FIBRONECTIN INFLUENCES THE RATE OF ASSEMBLY AND STRUCTURAL CHARACTERISTICS OF THE FIBRIN MATRIX AND A MAP OF LYSINE PEGYLATION SITES IN FIBRONECTIN
- Creator
- Ramanathan, Anand
- Date
- 2015, 2015-07
- Description
-
Fibronectin serves multiple roles during tissue formation and wound healing, functioning through interactions with cells and extracellular molecules. The overall objective of my research was to investigate fibronectin biochemistry in responses associated with wound healing. My approach was to engineer relevant in vitro models highlighting fibronectin functionality in tissues and link this work to more complex wound healing systems. My research goals were accomplished through three specific aims: (1) determine the role of fibronectin in the kinetics of formation and structure of a fibrin-fibronectin matrix, (2) determine the effect of protease on the activity of fibronectin in decellularized extracellular matrices, and (3) map the sites of polyethylene glycol conjugation, or PEGylation, to lysine residues in fibronectin. Aim 1: I demonstrated that fibronectin increased the initial rate of fibrin matrix formation and altered the fibrin matrix structure. These findings are novel because they link results from light absorbance studies to microscopy analyses and demonstrate the influence of fibronectin on fibrin matrix structural characteristics. Aim 2: I demonstrated a link between fibronectin proteolysis and reduced cell adhesion in decellularized extracellular matrices. This study demonstrates the susceptibility of fibronectin to proteolysis in the extracellular matrix and the resulting loss of matrix functionality, placing weight on bioengineering strategies to stabilize fibronectin against proteolysis. Aim 3: I examined proteolytic fragments of native and PEGylated fibronectin to map the fibronectin lysine residues that are conjugated to PEG. From four key chymotryptic fragments that span fibronectin and are recognized by specific monoclonal antibodies, I provide a map of lysine PEGylation sites for fibronectin. Moreover, I show that lysine PEGylation of fibronectin occurs asymmetrically on the dimer arms. Knowledge of the lysine PEGylation sites can be used to plan future experiments investigating fibronectin biochemical interactions in complex in vitro and in vivo models. In accomplishing these specific aims, I identified key biomolecular mechanisms involving fibronectin and created relevant in vitro models to study these interactions. The work detailed in this thesis lays the foundation for future experiments to investigate fibronectin functionality and to develop therapeutic strategies targeting fibronectin biochemistry in tissue development.
Ph.D. in Chemical Engineering, July 2015
- Title
- EFFECT OF FIDELITY ON COMPUTERIZED SIMULATION ASSESSMENT OUTCOMES
- Creator
- Siskind, Ariel David
- Date
- 2012-10-09, 2012-12
- Description
-
Simulation fidelity refers to the level of realism with which a simulation is presented, as well as the method by which applicants can respond. Work simulations have been shown in previous literature to be beneficial selection tools. However, the research is less concrete regarding the effects of various levels of fidelity (specifically, high-fidelity virtual environments) on important organizational outcomes. In the current study, a model of fidelity is presented and 322 participants completed one of four simulation conditions (high fidelity; low fidelity/no branching; low fidelity/branching; zero fidelity). Face validity, applicant reaction, presence/immersion/engagement, predictive validity, and reliability were measured as outcomes of interest from the model. The findings indicated that the high fidelity condition and the low fidelity conditions had increased fidelity, fairness, face validity, and presence/immersion/engagement compared to the zero fidelity condition. However, the addition of branching to the low fidelity simulation did not impact the hypotheses in the expected direction. The hypothesis regarding predictive validity was not supported, and the hypothesis regarding reliability was partially supported. Implications of the findings, limitations, and recommendations for future research are presented.
Ph.D. in Psychology, December 2012