Search results
(161 - 180 of 1,017)
- Title
- POWER PROFILING, ANALYSIS, LEARNING, AND MANAGEMENT FOR HIGH-PERFORMANCE COMPUTING
- Creator
- Wallace, Sean
- Date
- 2017, 2017-05
- Description
As the field of supercomputing continues its relentless push towards greater speeds and higher levels of parallelism, the power consumption of these large-scale systems is steadily transitioning from a burden to a serious problem. While the machines are highly scalable, the buildings, power supplies, etc. are not. Even the most power-efficient systems today consume one to two megawatts per petaflop/s. Multiplied by 1,000 to reach the next generation of supercomputer (i.e., exascale), the power necessary just to turn the machine on is simply impractical. Thus, power has become a primary design constraint for future supercomputing system designs. As such, it has become a matter of paramount importance to understand exactly how current-generation systems utilize power and what implications this has for future systems. As the saying goes, you can't manage what you don't measure. This work addresses several large hurdles in fully understanding the power consumption of current systems and making actionable decisions based on this understanding. First, by leveraging environmental data collected from runs of real leadership-class applications, we analyze power consumption and temperature as they pertain to scale on a production IBM Blue Gene/Q supercomputer. Then, through development of a new power monitoring library, MonEQ, we quantitatively studied how power is consumed in major portions of the system (e.g., CPU, memory, etc.) through profiling of microbenchmarks. Expanding on this, we then studied how scale and network topology affect power consumption for several well-known benchmarks. Wanting to increase the effectiveness of our power monitoring library, we extended it to work with many of the most common classes of hardware available in today's HPC landscape. In doing so, we provided an in-depth analysis of what data is obtainable, what the process of obtaining it is like, and how data from different systems compares.
Next, utilizing the knowledge gained from these experiences, we developed a new scheduling approach which, by utilizing power data, can effectively keep a production system's power consumption under a user-specified power cap without modification to the applications running on the system. Finally, we extend this scheduling approach to be applicable to more than one objective. In doing so, the scheduler can now optimize on multiple criteria instead of simply considering system utilization.
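The power-capped scheduling idea summarized above can be sketched as a greedy admission rule; this is a hypothetical illustration, not the dissertation's algorithm, and the job names and kilowatt figures below are invented:

```python
# Hypothetical sketch of power-aware job admission under a cap.
# Job names and power estimates are illustrative, not from the thesis.

def schedule_under_cap(queue, running_power, cap):
    """Admit queued jobs (greedy, FIFO) only while the estimated
    total power draw stays under the user-specified cap."""
    admitted = []
    for name, est_power in queue:
        if running_power + est_power <= cap:
            admitted.append(name)
            running_power += est_power
    return admitted, running_power

jobs = [("climate", 400.0), ("lattice-qcd", 700.0), ("cfd", 300.0)]  # kW estimates
admitted, total = schedule_under_cap(jobs, running_power=200.0, cap=1000.0)
print(admitted, total)  # ['climate', 'cfd'] 900.0
```

A production scheduler would, of course, also consider fairness and job priority rather than pure FIFO admission.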
Ph.D. in Computer Science, May 2017
- Title
- SEISMIC DESIGN STUDY OF STEEL PLATE SHEAR WALL
- Creator
- Moshiri, Ali
- Date
- 2012-04-20, 2012-05
- Description
Steel plate shear walls are an innovative lateral load-resisting system capable of effectively and economically bracing a building against both wind and earthquake forces. The system consists of infill steel plates connected to boundary beams and columns over the full height of the framed bay. Beam-to-column connections can be rigid or shear connections, and the infill plates can be either stiffened or unstiffened, depending on the design philosophy of the infill plates. The view of some structural designers is to use heavy stiffeners to reinforce and increase the buckling capacity of shear walls; whereas, if the walls are left unstiffened and allowed to buckle, their energy absorption will increase significantly due to the post-buckling capacity. The performance of a 9-story SPSW with moment-resisting beam-to-column connections was studied under quasi-static loading conditions and 10 earthquake records recorded in Los Angeles by developing nonlinear dynamic explicit finite element models in ABAQUS. All the models were validated with experimental results. The effects of the stiffness of the boundary elements (VBE and HBE) and the plate thickness on the general behavior of the structure were also investigated. In the design of SPSWs, vertical boundary elements play a major role in increasing the capacity of the system. In high seismic zones there is always a chance of plastic hinge formation in the boundary elements, especially columns, in any intermediate floor. It is recommended that SPSWs not be used for medium- to high-rise buildings in high seismic regions until the lack of capacity design requirements for this type of SPSW is rectified.
Ph.D. in Structural Engineering, May 2012
- Title
- POLARIZATION INDUCED BY A TERAHERTZ ELECTRIC FIELD ON A CONDUCTIVE PARTICLE
- Creator
- Shen, Tao
- Date
- 2013, 2013-05
- Description
Interactions of an electromagnetic wave with an object of dimensions small compared to the wavelength can often be accounted for by considering the dipole moments, which are effective in explaining the scattering characteristics in the frequency range referred to as the Rayleigh region. Dielectric functions derived from polarization processes due to molecular orientation or bound-charge displacements have been employed over the years to account for the scattering properties of particles. In the presence of mobile charges, bulk conductivity may be incorporated with a complex dielectric function to explain the peak in absorption near the plasma frequency exhibited by metallic particles in the optical region. With the current interest in nanostructures, an investigation of the electromagnetic properties of a conductive particle with attention given to space-charge effects would appear timely. This can be accomplished by coupling the transport equations of the charge carriers to Maxwell's equations. Results of computations performed for elementary structures such as plates and particles revealed the screening of the internal field, while dispersion and absorption effects are shown by the complex dipole moments. To gain insight into the nature of charge-wave interactions, results based on a quasi-static formulation for the electric field will be compared with those based on full-wave analysis, with special attention given to the charge and current distributions within the structure. By consideration of the physical process of charge-carrier motion and lattice polarization, an equivalent circuit model for a conductive nanoparticle in the terahertz frequency range is developed. All circuit elements are of an electrical nature and can be directly expressed in terms of material parameters. The equivalent circuit can serve as the basis of analysis for composite structures and aggregates of which the conductive nanoparticle is a constituent.
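The Rayleigh-region dipole picture described in this abstract can be illustrated with a textbook quasi-static (Clausius-Mossotti) sketch using a Drude dielectric function; this is a generic baseline, not the coupled transport-Maxwell model of the thesis, and all numerical values are assumed:

```python
import cmath

# Illustrative sketch (not the thesis model): dipole moment of a small
# conductive sphere in the Rayleigh limit, with a Drude dielectric function.
# All numerical values below are assumed for demonstration.

eps0 = 8.854e-12          # vacuum permittivity, F/m

def drude_eps(omega, omega_p, gamma):
    """Complex relative permittivity: eps_r = 1 - wp^2 / (w^2 + i*gamma*w)."""
    return 1.0 - omega_p**2 / (omega**2 + 1j * gamma * omega)

def sphere_dipole(a, eps_r, E0):
    """Quasi-static (Clausius-Mossotti) dipole moment of a sphere of
    radius a in a uniform field E0: p = 4*pi*eps0*a^3*(eps-1)/(eps+2)*E0."""
    return 4 * cmath.pi * eps0 * a**3 * (eps_r - 1) / (eps_r + 2) * E0

omega = 2 * cmath.pi * 1e12       # 1 THz drive
eps_r = drude_eps(omega, omega_p=2 * cmath.pi * 100e12, gamma=1e13)
p = sphere_dipole(a=50e-9, eps_r=eps_r, E0=1e5)
print(abs(p))   # magnitude of the complex dipole moment
```

The nonzero imaginary part of `p` is what carries the absorption; the thesis goes beyond this picture by resolving space-charge screening inside the particle.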
Ph.D. in Electrical Engineering, May 2013
- Title
- COMPUTATIONAL MODELS OF TRANSPARENT WATER STORAGE ENVELOPES FOR ENERGY EFFICIENT COMMERCIAL BUILDINGS
- Creator
- Liu, Xiangfeng
- Date
- 2012-04-25, 2012-05
- Description
Transparent Water Storage Envelopes (TWSEs) are climate-adaptive fenestration systems. The major part of the system is an array of modular transparent water containers which are integrated into frames of curtain walls and serve as both façade and auxiliary water tanks for a commercial building. The concept originates from the idea of combining transparency with the dynamic benefits of thermal mass in summer, as well as passive solar heating in winter. Optical and thermal characteristics of TWSEs, including their energy performance, have been studied systematically via numerical approaches. Two numerical procedures are covered in the thesis: one is based on a simplified synchronized one-dimensional nodal thermal model, and the other is based on a more complex and accurate synchronized CFD model. In each numerical procedure, a triple-step simulation methodology and the correlated computational models of TWSEs are employed. Based on the calculation and simulation results, it can be concluded that TWSEs are energy-efficient fenestration systems. They can outperform conventional glazing as long as they are designed carefully with consideration of their unique physical characteristics, applied under suitable climatic conditions, and operated with appropriate energy efficiency measures. Furthermore, the innovative technical paradigm of TWSEs and the numerical approach developed for energy simulation of TWSEs demonstrate great potential to be implemented in engineering practice for energy-efficient commercial buildings.
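The simplified one-dimensional nodal thermal model mentioned above can be illustrated with a minimal explicit finite-difference step; the material properties, node spacing, and boundary temperatures below are assumed, not taken from the thesis:

```python
# A minimal sketch of an explicit one-dimensional nodal conduction model,
# in the spirit of the simplified thermal model described above.
# Material properties, node spacing, and boundary temperatures are assumed.

def step_1d_conduction(T, alpha, dx, dt):
    """One explicit finite-difference time step of the 1-D heat equation.
    Boundary nodes are held fixed (Dirichlet). Stable if alpha*dt/dx^2 <= 0.5."""
    r = alpha * dt / dx**2
    assert r <= 0.5, "explicit scheme unstable"
    return [T[0]] + [T[i] + r * (T[i-1] - 2*T[i] + T[i+1])
                     for i in range(1, len(T)-1)] + [T[-1]]

alpha = 1.4e-7           # thermal diffusivity of water, m^2/s (approx.)
dx, dt = 0.01, 100.0     # 1 cm nodes, 100 s steps -> r = 0.14
T = [30.0] + [20.0]*9 + [10.0]   # hot outside face, cool inside face, deg C
for _ in range(500):
    T = step_1d_conduction(T, alpha, dx, dt)
print(round(T[5], 2))    # center node stays near 20 by symmetry
```

A full TWSE model would add solar absorption and convection terms at each node; this sketch shows only the conduction core of such a nodal scheme.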
Ph.D. in Architecture, May 2012
- Title
- VERIFICATION OF LARGE-SCALE ON-CHIP POWER GRIDS
- Creator
- Xiong, Xuanxing
- Date
- 2013, 2013-05
- Description
As technology scaling continues, the performance and reliability of integrated circuits become increasingly susceptible to power supply noises, such as IR drops and L·di/dt noises in the on-chip power grids. Reduced supply voltage levels in the grid can increase gate delay, leading to timing violations and logic failures. In order to ensure a reliable chip design, it is critical to verify that the power grid is robust, i.e., that the power supply noises are acceptable for all possible runtime situations. Hence, power grid verification has become an indispensable step in the modern design flow of integrated circuits. Nowadays, it is common practice to verify power grids by simulation. Typically, an equivalent RC/RLC circuit model of the grid is extracted from the layout, and designers perform simulations to evaluate the power supply noises based on the current waveforms drawn by the circuit. As power grid simulation can only be performed after the circuit design is done, vectorless power grid verification has been introduced to enable early power grid verification with incomplete current specifications, so that the power grid design can be better tuned and optimized at early design stages, thus reducing the design time. Due to the increasing complexity of modern chips, power grid verification has become very challenging. The broad goal of this dissertation is to explore efficient algorithms for verifying large-scale on-chip power grids. Specifically, we study parallel power grid transient simulation, vectorless steady-state verification, and vectorless transient verification. Parallel forward and back substitution algorithms are designed for efficient transient simulation; a set of novel algorithms is developed to incrementally improve the runtime efficiency of vectorless steady-state verification; and an efficient approach is proposed for vectorless transient verification with novel constraint setting.
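The simulation-based verification described above can be illustrated on a toy three-node resistive grid via nodal analysis, G·v = i; the conductances and load currents are invented, and this sketch is far simpler than the dissertation's large-scale algorithms:

```python
# Toy illustration (not the dissertation's algorithms): IR-drop check on a
# tiny resistive power-grid model via nodal analysis, G*v = i.
# The 3-node grid, conductances, and current loads are made up.

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]
        for r in range(k + 1, n):
            f = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= f * M[k][c]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        x[k] = (M[k][n] - sum(M[k][c] * x[c] for c in range(k + 1, n))) / M[k][k]
    return x

# Node conductance matrix (S) for a 3-node grid tied to the supply,
# and current drawn at each node (A). Solving G*d = i gives the drop d
# below Vdd at each node.
G = [[ 3.0, -1.0, -1.0],
     [-1.0,  2.0, -1.0],
     [-1.0, -1.0,  3.0]]
i_load = [0.01, 0.02, 0.01]
v_drop = solve(G, i_load)
print(all(d < 0.1 for d in v_drop))  # True: every node within a 100 mV budget
```

Vectorless verification replaces the fixed `i_load` with current constraints and maximizes the drop over all admissible load vectors, which is what makes the early-stage problem hard at scale.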
Ph.D. in Electrical Engineering, May 2013
- Title
- EXPLORING THE SHEAR-AND-TIME DEPENDENT DEGRADATION OF VON WILLEBRAND FACTOR UNDER VENTRICULAR ASSIST DEVICE-RELATED FLOW CONDITIONS
- Creator
- Yang, Shuo
- Date
- 2015, 2015-12
- Description
Abnormalities in von Willebrand factor (VWF) can cause impaired blood coagulation, which results in a higher bleeding tendency in patients with this disorder. Alteration in VWF is characteristic of subjects with failing hearts implanted with ventricular assist devices (VADs). The nature of the abnormalities produced and the conditions which produce such abnormalities are not fully understood. The studies in this thesis investigate quantitatively the effects of VADs and VAD-related flow conditions on VWF degradation. This thesis consists of three studies: 1) an in vitro VAD loop study, which investigated the degradation effects of three VADs either under preclinical development (VAD I) or commercially available (VAD II & III); 2) a viscometer shear study, which investigated a variety of factors under the controlled conditions of a modified Couette viscometer, namely shear stress, exposure time, pulsatile frequency, and protease function, with respect to VWF degradation; and 3) a tubular shear study, which investigated the relative degradation effects of shear stress versus exposure time under more VAD-related shear stresses (10-100 times higher than physiological levels) and exposure times of milliseconds. In the VAD flow loop, significant VWF degradation induced by VADs was observed, with an approximately 95% loss of high-molecular-weight VWF by 60 minutes. In the viscometer and tubular studies, the factors studied enhanced VWF degradation in the following manner: increased shear stress above physiological levels, prolonged exposure time, and higher pulsatile shear frequency were associated with greater degradation; shear stress was a more dominant factor than exposure time with respect to the degradation; and various shear stress regions demonstrated maximal degradation effects. In addition, calcium-dependent protease function was a necessity for VWF degradation at all shear stress levels investigated.
The studies also revealed that the unfolding of VWF to expose the cleavage sites appeared to take more time under shear than the refolding to re-cover those sites under static conditions. Critical shear regions may be important for unfolding and degrading VWF multimers of various sizes.
Ph.D. in Biomedical Engineering, December 2015
- Title
- TOPICS IN COUNTERPARTY RISK AND DYNAMIC CONIC FINANCE
- Creator
- Iyigunler, Ismail
- Date
- 2012-11-02, 2012-12
- Description
This thesis consists of three essays about modeling counterparty risk and pricing derivative securities. In the first essay, we analyze the counterparty risk embedded in CDS contracts in the presence of a bilateral margin agreement. We focus on the pricing of collateralized counterparty risk, and we derive the bilateral Credit Valuation Adjustment (CVA), unilateral Credit Valuation Adjustment (UCVA), and Debt Valuation Adjustment (DVA). We propose a model for the collateral by incorporating all related factors, such as the thresholds, haircuts, and margin period of risk. We derive the dynamics of the bilateral CVA in a general form with related jump martingales. The counterparty-risky and counterparty risk-free spread dynamics are derived, and the dynamics of the Spread Value Adjustment (SVA) are found as a consequence. We finally employ a Markovian copula model for default intensities and illustrate our findings with numerical results. In the second essay, we address the issue of computation of the bilateral CVA under rating triggers in the presence of ratings-linked margin agreements. We consider collateralized OTC contracts, subject to rating triggers, between two parties: an investor and a counterparty. Moreover, we model the margin process as a function of the credit ratings of the counterparty and the investor. We employ a Markovian approach for modeling the rating transitions and the default probabilities of the counterparties. In this framework, we derive the representation for the bilateral CVA. We also introduce a new component in the decomposition of the counterparty-risky price: namely, the Rating Valuation Adjustment (RVA), which accounts for the rating triggers. We consider several dynamic collateralization schemes where the margin thresholds are linked to the credit ratings of the counterparties. We account for the rehypothecation risk in the presence of independent amounts.
Our results are illustrated in terms of a CDS contract and an IRS contract. In the third essay, we study the problem of pricing in incomplete markets with risk measures and acceptability indices. We propose a model for finding the dynamic ask and bid prices of derivative securities using Dynamic Coherent Acceptability Indices (DCAIs) in the presence of transaction costs. In this framework, we define and prove a representation theorem for dynamic bid-ask prices. We show that our prices can be computed using the dynamic Gain-Loss Ratio (dGLR), which is a DCAI. To illustrate our results, we provide several numerical examples by pricing barrier options with dGLR.
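As a hedged illustration of the Gain-Loss Ratio underlying the dGLR index, a static version can be computed as expected gains over expected losses; the scenario payoffs and probabilities below are invented, and the dynamic, conditional version used in the essay is substantially richer:

```python
# A hedged sketch of the (static) Gain-Loss Ratio, the acceptability index
# behind the dGLR pricing mentioned above; payoffs and probabilities invented.

def gain_loss_ratio(payoffs, probs):
    """GLR = E[X+] / E[X-]: expected gains over expected losses.
    A position is acceptable at level x if GLR >= x."""
    gains  = sum(p * max(x, 0.0)  for x, p in zip(payoffs, probs))
    losses = sum(p * max(-x, 0.0) for x, p in zip(payoffs, probs))
    return float("inf") if losses == 0 else gains / losses

# Discounted P&L of a candidate price across three scenarios
payoffs = [5.0, -2.0, 1.0]
probs   = [0.3, 0.5, 0.2]
print(gain_loss_ratio(payoffs, probs))  # 1.7 = (1.5 + 0.2) / 1.0
```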
Ph.D. in Applied Mathematics, December 2012
- Title
- A METHODOLOGY FOR UTILIZATION OF DEGRADED WATER IN THERMOELECTRIC POWER PLANT COOLING SYSTEMS
- Creator
- Safari, Iman
- Date
- 2013, 2013-12
- Description
The overall objective of this study was to develop a comprehensive methodology to identify viable treatment strategies for utilization of degraded waters for cooling in thermoelectric power systems. To achieve this objective, a process simulation model was developed using Aspen Plus® with the OLI (OLI Systems, Inc.) water chemistry model to predict water quality and the rate of fouling in the recirculating cooling loop utilizing secondary-treated municipal wastewater (MWW) and tertiary-treated municipal wastewater as the sources of makeup water. This process simulation model includes sub-models for pre-treatment units; the cooling tower with water, CO2, and NH3 evaporation; and the recirculating cooling system and condenser with salt precipitation and fouling. The input parameters of the model, including CO2 mass transfer coefficients in the cooling tower and the kinetics of salt precipitation reactions, were determined by developing mathematical models and calibrating them with experimental data obtained from the literature. The process simulation module was used to predict the water quality in the recirculating cooling loop, and the results were compared with pilot-scale experimental data from the literature on makeup water alkalinity, loop pH, and ammonia evaporation. The effects of various parameters, including makeup water quality, salt formation, NH3 and CO2 evaporation mass transfer coefficients, heat load, and operating temperatures, were investigated. The results indicate that stripping of CO2 and NH3 in the cooling tower can significantly affect the cooling loop pH. The model was also used to determine the rate of fouling in the condenser. The results indicate that the fouling rate with MWW as makeup water is significantly higher than that expected with fresh water, and tertiary treatment of MWW such as nitrification and/or softening can significantly reduce the fouling potential.
Finally, the rate of fouling obtained from this study was integrated into an existing cost model developed earlier (at Illinois Institute of Technology) to perform the overall economic analysis. The results show that the use of municipal wastewater (MWW) to replace freshwater as makeup for the recirculating cooling loops of thermoelectric power plants is economically viable when tertiary treatments such as nitrification or softening are applied. Among the various treatment strategies studied, nitrification of MWW has the lowest cost, 0.29 $/m3, for utilization in a 550 MW power plant. Furthermore, it was concluded that utilization of secondary-treated municipal wastewater without tertiary treatments such as nitrification or softening is not economically viable due to its significant fouling costs.
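The makeup-water accounting behind such cost comparisons can be illustrated with a standard cycles-of-concentration balance; this is a textbook sketch, not the thesis model, and the flow figures below are assumed:

```python
# Back-of-the-envelope water balance for a recirculating cooling loop,
# illustrating the role of cycles of concentration (CoC); numbers assumed.

def cooling_water_balance(evaporation, coc):
    """Steady-state balance (drift neglected): blowdown = E / (CoC - 1),
    makeup = E + blowdown, where CoC is the concentration factor in the loop."""
    blowdown = evaporation / (coc - 1)
    makeup = evaporation + blowdown
    return makeup, blowdown

# ~0.4 m3/s of evaporation is a rough, assumed figure for a large plant
makeup, blowdown = cooling_water_balance(evaporation=0.4, coc=5)
print(round(makeup, 2), round(blowdown, 2))  # 0.5 0.1
```

Higher fouling potential effectively caps the achievable CoC, which raises the makeup demand and is one reason the treatment choice drives the overall cost.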
Ph.D. in Chemical Engineering, December 2013
- Title
- CREDIT DEFAULT SWAP SPREAD FORECASTING USING THE LINEAR BAYESIAN RANDOM COEFFICIENTS MODEL WITH BALANCED PANELS
- Creator
- Arifi, Imir
- Date
- 2014, 2014-05
- Description
This thesis predicts out-of-sample one- to five-year quarterly credit default swap spread curves for subsets of a population comprised of 308 companies via the linear Bayesian Random Coefficients Model (RCM) with balanced panel construction, capturing over 80% of reference entities with liquid CDS term structures. The use of scoring, structural, and reduced-form model variations generates credit spread tenor points and curves at the company level. The Altman Z-score and the classic Merton structural framework explain too little of the credit default swap spreads out of sample. However, the Merton structural framework works well in predicting out-of-sample credit default swap spreads when modified by deriving the implied leverage ratio via market spreads. The widely used, Bloomberg-implemented JPMorgan 2001 (CDSW) model works well for the period the study covers. The Bayesian Random Coefficients model explains 87% of observed credit default swap spreads one quarter out of sample, substantially exceeding any published research on credit spread forecasting.
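The shrinkage mechanism at the heart of a Bayesian random coefficients model can be sketched with a conjugate normal update; this is a generic illustration, not the study's estimator, and all inputs below are invented:

```python
# Illustrative sketch of the shrinkage at the heart of a Bayesian random
# coefficients model: each company's slope is pulled toward the population
# mean, weighted by the precision of its own data. All inputs are invented.

def posterior_coefficient(b_company, var_company, b_pop, var_pop):
    """Conjugate normal update: precision-weighted average of the
    company-level estimate and the population-level prior."""
    w = (1 / var_company) / (1 / var_company + 1 / var_pop)
    return w * b_company + (1 - w) * b_pop

# Noisy company estimate (variance 4.0) vs. tight population prior (variance 1.0)
post = posterior_coefficient(b_company=2.0, var_company=4.0, b_pop=1.0, var_pop=1.0)
print(round(post, 3))  # 1.2: mostly shrunk toward the population mean
```

Pooling the panel this way is what lets sparse company histories borrow strength from the cross-section, which plausibly underlies the out-of-sample gains reported above.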
Ph.D. in Management Science, May 2014
- Title
- THE DEVELOPMENT OF AN INSTRUMENT TO EVALUATE TEACHERS’ CONCEPTS ABOUT NATURE OF MATHEMATICAL KNOWLEDGE
- Creator
- Kean, Lesa L.
- Date
- 2012-12-10, 2012-12
- Description
While there does seem to be widespread consensus that teachers' beliefs and concepts influence the way they teach, even the most recent international studies suggest that research-based evidence for this consensus is limited. In an effort to enlarge and enhance the pool of evidence that shows specific relationships between teacher beliefs and practice, the present author undertook to write an attitude survey and interview protocol that identifies and distinguishes teachers' concepts on eight different aspects of the nature of mathematical knowledge (NOMK). Such a survey seems to be a natural first step toward providing evidence for the larger question of which beliefs correlate with which teacher behaviors. Eight NOMK aspects were identified and defined based on a review of over 68 resources, including twelve that contained an existing assessment addressing NOMK concepts. While superficial inspection of the assessments referenced may suggest that the best solution would be to use an existing assessment or to compile a list of items from these various assessments and use that to assess NOMK, the researcher identifies four major issues that suggest otherwise. The items of the assessment, and the assessment as a whole, were validated through several steps. First, the author started with over 40 survey items, distributed evenly over her eight aspects and including both Likert-type and open-ended items. Second, the items were randomized and distributed to practicing mathematics teachers for their feedback. Third, the items were revised and sent back out to teachers for additional feedback. Fourth, the resulting survey was piloted with over 20 community college teachers. Fifth, their responses were coded; the open-ended items were coded by rubric and confirmed by a second coder. Sixth, the survey was revised once again and piloted with another sample of 20, with similar analysis.
Finally, she conducted several forms of qualitative and quantitative analysis to cull the items down to those that produced the most valid and reliable survey item set possible. The resulting survey addresses six of the eight aspects proposed by the researcher and includes both Likert-type and open-ended items intended to be confirmed and clarified through interview. The researcher suggests further research be done in order to design items that validly and reliably identify teachers' concepts of NOMK on the remaining two aspects.
Ph.D. in Mathematics Education, December 2012
- Title
- SIMULATION AND DEVELOPMENT OF A CLINICAL ANALYZER-BASED IMAGING SYSTEM
- Creator
- Majidi, Keivan
- Date
- 2013, 2013-12
- Description
The analyzer-based phase-sensitive X-ray imaging method (ABI) is emerging as a potential alternative to conventional radiography. ABI simultaneously generates a number of planar images containing information about the scattering, refraction, and absorption properties of the object. These parametric images are acquired by sampling the angular intensity profile (AIP) of an X-ray beam passing through the object at different positions of the analyzer crystal. Like many modern imaging techniques, ABI is a computed imaging method (meaning that the images are calculated from raw data). Therefore, the noise in ABI depends on the imaging conditions, such as source flux, the number of analyzer positions, and the analyzer positions themselves, as well as on the estimation method for the parameters. In the first part of this thesis, we use the Cramér-Rao lower bound (CRLB) to quantify the noise in ABI images and then investigate the effect of different analyzer-sampling strategies on this bound. The CRLB is the minimum bound for the variance of an unbiased estimator and defines the best noise performance that one can obtain regardless of which estimation method is used to estimate ABI parametric images. We then use this bound to evaluate three ABI methods: Multiple-Image Radiography (MIR), Diffraction Enhanced Imaging (DEI), and Scatter Diffraction Enhanced Imaging (S-DEI). The proposed methodology can be used to evaluate any other ABI parametric image estimation technique. Synchrotron radiation has been the main source for experimental ABI and for developing its methodologies; therefore, the application of ABI to clinical imaging has been very limited. Conventional X-ray sources must be used for ABI in order to utilize the technique in clinical applications; however, due to the limited intensity of these sources and their finite source size, developing such systems is very challenging.
In the second part of this thesis, we use computer simulations to better understand the above challenges. We measure the properties of this imaging system, such as flux and point-spread function, for various design parameters and discuss how to find an "optimal" setup based on these properties. The optimality of an imaging setup depends on the specific application that one wants to perform using the system; however, the results and discussions in this section lay out a design procedure for clinical ABI systems. In the last part of this thesis, we review the steps we took in the Advanced X-ray Imaging Laboratory (AXIL) toward developing a clinical ABI system.
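The Cramér-Rao reasoning used above can be illustrated with the simplest photon-counting case; this worked example is generic, not taken from the thesis:

```python
# A small worked example of the Cramer-Rao lower bound used above to
# benchmark estimator noise: for N i.i.d. Poisson photon counts with mean
# lam, the Fisher information is N/lam, so any unbiased estimator of lam
# has variance >= lam/N. The values below are illustrative.

def crlb_poisson(lam, n_samples):
    """CRLB for the mean of n_samples i.i.d. Poisson(lam) observations."""
    fisher_info = n_samples / lam
    return 1.0 / fisher_info          # = lam / n_samples

print(crlb_poisson(lam=100.0, n_samples=25))  # 4.0
```

In ABI the same logic applies to a multi-parameter Fisher information matrix over the analyzer positions, which is why the choice of sampling strategy moves the bound.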
Ph.D. in Electrical Engineering, December 2013
- Title
- GOTTA EAT TO LIVE, GOTTA STEAL TO EAT: THE INVESTIGATION OF SERIOUS DISRUPTIVE BEHAVIOR, TEMPERAMENT, AND EXECUTIVE DYSFUNCTION AMONG HOMELESS YOUTH
- Creator
- Kaszynski, Katie
- Date
- 2014, 2014-07
- Description
Background: Homeless youth are at risk for many adverse outcomes, including poor physical health, traumatic experiences, victimization, poor academic achievement, cognitive deficits, psychopathology, and substance use. Research demonstrates that these individuals engage in substantial disruptive behavior (e.g., stealing, dealing drugs, breaking and entering, engaging in prostitution), which further increases their risk of negative outcomes. Individual factors, including innate temperament and executive functioning skills, have been shown to relate to one another and to be independently related to behavior problems, as evidenced by research investigating housed youth. Homeless youth are shown to exhibit poor effortful control, high distress, executive dysfunction, and substance abuse; factors that have not been fully examined in relation to persistent behavior problems as reflected in antisocial personality disorder (ASPD). Study Aim: The current study evaluated the association between temperament, executive functioning, and substance use disorders in their relation to the likelihood of meeting criteria for ASPD among homeless youth (ages 18-22). It was hypothesized that these variables would significantly relate to meeting criteria for ASPD in this population. Procedure: 87 homeless individuals (mean age = 19.27) who were residing at a homeless shelter at the time of the study (in Chicago or Los Angeles) participated over the course of two testing sessions. Each individual completed measures of ASPD and substance use disorders (MINI), temperament (ATQ), and executive functioning (D-KEFS), among other measures that are part of a larger study conducted at the University of Chicago Medical Center (UCMC).
Results: Results suggested that temperament (specifically effortful control), executive dysfunction (specifically cognitive shifting), and substance use disorder (specifically substance abuse) were significantly related to the likelihood of a homeless individual meeting criteria for ASPD. Youth who showed poorer effortful control, better ability to shift attention between sets of information, and substance abuse were at a greater likelihood of meeting criteria for ASPD. Conclusions: These findings indicate that aspects of temperament, specific executive skills, and substance abuse are important variables in determining the likelihood of ASPD among a population of homeless individuals. Clinical implications, limitations, and suggestions for interventions are discussed.
Ph.D. in Psychology, July 2014
- Title
- THE VAPORIZATION PHENOMENA OF FUEL DROPLETS EXPOSED TO ASYMMETRIC RADIANT HEATING USING PLANAR LASER-INDUCED FLUORESCENCE
- Creator
- Ammigan, Kavin
- Date
- 2012-04-17, 2012-05
- Description
Droplet vaporization under asymmetric conditions is prevalent in many combustion-related devices, where fuel droplets may either experience asymmetric thermal radiant heating or travel in velocity and temperature gradients. Asymmetric radiant heating is particularly common in spray flames, counter-flow diffusion flames, regions close to the walls of conventional combustion chambers, and, more importantly, in liquid-fueled microcombustors. In this study, experiments are carried out to observe how droplets vaporize when exposed to asymmetric radiant heating. The experimental set-up consists of applying radiant heating, through a radiant panel heater, to one face of a monodisperse droplet stream while using the planar laser-induced fluorescence (PLIF) diagnostic tool to reveal the spatial vapor distribution around vaporizing droplets. Since most fuels are made up of multiple components, bicomponent droplets are also investigated. Pure acetone droplets as well as mixtures of acetone/alkanes (octane and hexane) and acetone/alcohols (ethanol and 2-propanol) droplets are investigated. Results, in the form of PLIF images, reveal asymmetric vapor distributions around the droplets with the apparent induction of Stefan flow from the irradiated droplet surface. Such phenomena have not previously been reported in the literature and have relevance to the overall fuel vaporization process as well as to subsequent ignition and pollutant formation processes. To further investigate the experimental results, a convective and radiative heat transfer model is employed to simulate the droplets under corresponding experimental conditions. Results from the model show convective cooling and strong thermal radiation absorption near the droplets' surface. The induced asymmetric Stefan flow observed experimentally is therefore a consequence of the high thermal radiation absorption at the droplets' surface.
This study gives both experimental and theoretical results of the vaporization phenomena of asymmetrically irradiated fuel droplets with varying compositions, diameters and irradiation temperatures.
Ph.D. in Mechanical and Aerospace Engineering, May 2012
- Title
- ANALYSIS AND CONTROL OF COMPRESSION-IGNITION AND SPARK-IGNITED ENGINES OPERATING WITH DUAL-FUEL COMBUSTION STRATEGY
- Creator
- Kassa, Mateos
- Date
- 2017, 2017-07
- Description
-
In recent years, the implementation of a dual-fuel combustion strategy has been explored as a means to improve the thermal efficiency of internal combustion engines while simultaneously reducing their emissions. The dual-fuel combustion strategy was introduced in compression-ignition engines to control the combustion phasing by varying the proportion of two simultaneously injected fuels, thereby altering the combustion timing. The dual-fuel injection strategy also made it possible to extend the load limit of advanced combustion engines, since the two injected fuels ignite in succession, reducing the high peak pressures that generally act as a limiting factor. In spark-ignited (SI) engines, the implementation of a dual-fuel combustion strategy serves as an alternative approach to avoid knock (the inadvertent auto-ignition of the fuel mixture). Although conventional engines rely on delaying spark timing to avoid knocking cycles (which significantly reduces the thermal efficiency), the dual-fuel SI engine relies on the simultaneous injection of a low-knock-resistance and a high-knock-resistance fuel to dynamically adjust the fuel's resistance to knock as required. The dual-fuel SI engine thereby successfully suppresses knock without compromising engine efficiency. Despite the benefits of the dual-fuel combustion strategy, several challenges arise in its implementation, especially when it is implemented alongside other advanced combustion strategies leveraging variable valve timing, exhaust gas recirculation, turbocharging, and so forth. This study explores some of these challenges and addresses them from a control standpoint. Cylinder-to-cylinder variation is identified as one of the main challenges. An in-cylinder oxygen estimation strategy and a modification to the conventional fueling strategy are proposed as approaches to reduce the combustion variations.
In SI engines, the valve dynamics in transient operations are shown to negatively impact the dual-fuel control strategy. The effect of the valve timing on knock propensity, and the resulting effect on the fueling strategy, is investigated. Finally, the dual-fuel SI engine relies on measurements of the combustion intensity to adjust the fuel split between the low-RON and high-RON fuels. The implementation of a conventional knock controller is shown to be counterintuitive for dual-fuel SI engines due to the highly reactive nature of the controller and the deterministic approach that assumes cycle-to-cycle correlation of the combustion intensity. A statistical investigation of the combustion intensity metric is conducted to identify key properties that can be leveraged for a more effective control strategy.
Ph.D. in Mechanical, Material and Aerospace Engineering
- Title
- SLIP-LINK MODELING OF ENTANGLED POLYMERS: RHEOLOGICAL APPLICATIONS AND EXTRACTING FRICTION FROM ATOMISTIC SIMULATION
- Creator
- Katzarova, Maria
- Date
- 2016, 2016-05
- Description
-
The Discrete Slip-link Model (DSM) is a robust mesoscopic theory that has had great success predicting the rheology of flexible entangled polymer liquids and gels. In the most coarse-grained version of the DSM, we exploit the universality observed in the shape of the relaxation modulus of linear monodisperse melts. For this type of polymer we present analytic expressions for the relaxation modulus. The high-frequency dynamics, which are typically coarse-grained out of the DSM, are added back into these expressions by using a Rouse chain with fixed ends. We find consistency in the friction used for both fast and slow modes. Using these analytic expressions, the polymer density, the molecular weight of a Kuhn step, Mk, and the low-frequency cross-over between the storage and loss moduli, G' and G", it is now straightforward to estimate model parameter values and obtain predictions over the experimentally accessible frequency range. Moreover, it has previously been shown that the two static parameters can be obtained from primitive-path analysis of molecular dynamics simulations. In this work, two ways are shown for obtaining the friction parameter: (i) from atomistic simulations of short chains using free-volume theory, and (ii) from atomistic simulations of entangled chains by scaling the chain center-of-mass mean-square displacement from the slip-link model to that of the atomistic simulation. Furthermore, three standing challenges for molecular theories of polymers are addressed here using the DSM: (i) predictions for uniaxial extension of star-branched polymer melts, (ii) predictions for blends of star-branched and linear chains, and (iii) predictions for normal stress differences in start-up of shear flow and following cessation. Additionally, the DSM is used to predict the mechanical properties of a cross-linked polydimethylsiloxane (PDMS) network swollen with non-reactive entangled PDMS solvent.
These successful predictions strongly suggest that the observed rheological modification in the swollen blend arises from the constraint dynamics between the network chains and the dangling ends.
Ph.D. in Chemical Engineering, May 2016
- Title
- ENACTMENT OF COMMON CORE STATE STANDARDS FOR MATHEMATICS: RELATIONSHIP BETWEEN TEACHERS’ CHOICES OF CURRICULUM, TEACHING, AND PROFESSIONAL DEVELOPMENT
- Creator
- Kartal, Ozgul
- Date
- 2015, 2015-07
- Description
-
In response to perceived problems with the United States mathematics curriculum, the Common Core State Standards (CCSS) were developed under the leadership of the National Governors Association (NGA) and the Council of Chief State School Officers (CCSSO), and were released in 2010. As of the time of this study, forty-four states, the District of Columbia, four territories, and the Department of Defense Education Activity had adopted the CCSS. The CCSS for Mathematics (CCSSM) initiative has raised many research questions for the field concerning the quality, enactment, effectiveness, and impact of the standards. There is a great deal of concern, in particular, about the enactment of the standards because, as pointed out by Heck, Weiss, and Pasley (2011), if standards have not been well implemented in a particular setting, then failure or ineffectiveness should not be blamed on the standards. Various researchers have identified the key components of a successful enactment of a set of standards as curriculum, assessment, professional development, and teachers and teaching practice (e.g., Confrey & Krupa, 2010; Goertz, 2010; Weiss et al., 2002; Wu, 2011b). Therefore, this research study focused on the enactment of the CCSSM, and analyzed curriculum, teaching, assessment, and teacher professional development as the key components of the enactment process. This study focused on the state of Illinois, one of the states that began fully implementing the new academic standards in the 2013-14 school year and hence had ample preparation and trial time between the adoption and full implementation years. This study investigated the alignment between teachers’ choices of curriculum and the CCSSM, and the relation between curriculum resources, professional development, and enactment of the CCSSM. The focus of the study was on the content of basic algebra and the concepts of solving equations and slope while investigating the alignment of the enactment of the CCSSM.
The sample comprised twelve 9th-grade algebra teachers from six different schools in the state of Illinois. The criteria for selecting the schools were their geographic location, the types of the schools, the curricula used at the schools, and the professional development on the CCSSM offered at the schools. Results of this study found that the curricula have limited alignment with the CCSSM, and that teachers’ enactment of mathematical practices was affected by the availability of a variety of standards for mathematical practice in their curriculum as well as by professional development opportunities. The curricula provided opportunities for various mathematical practice standards throughout the content of basic algebra, but some practice standards were left out. Teachers provided opportunities for a subset of the standards that were present in the instructional segments of their curriculum. Otherwise, they provided opportunities for practice standards acquired through professional development. The impact of professional development was most evident when teachers using the same curriculum differed in their enactment of the practices. This study portrayed the relations between (low/high) enactment of the CCSSM, curriculum resources (aligned or not aligned), and professional development. Many states and districts were just beginning to incorporate the CCSSM into their mathematics curricula at the time of this study. Therefore, the findings of this study will guide them as they make their textbook, curriculum, and professional development choices and decisions. In addition, this research generated valuable knowledge that would be useful not only in improving the enactment of the CCSSM, but also in improving the enactment of future sets of standards. There are implications for curriculum designers, administrators, school and district leaders, professional development designers, and teacher educators.
Ph.D. in Mathematics Education, July 2015
- Title
- IRRITABILITY IN CHILDREN: SAME AS FRUSTRATION AND ANGER?
- Creator
- Kozy, Karyn Brasky
- Date
- 2013, 2013-12
- Description
-
The primary aims of this study were four-fold. The first aim was to examine which of three alternative models of irritability provided a better fit to the data. The second aim was to further refine the model of irritability by examining the gender and age invariance of the best-fitting models. After establishing which model showed the best fit, the third aim was to empirically examine the reliability and validity of the irritability scale that included items from both temperament and psychopathology scales. Finally, the fourth aim was to examine the rank-order stability and mean levels of irritability between the ages of 4 and 6. Participants included a diverse community sample of 796 children and their parents. Irritability, frustration, and anger were measured by selected items from temperament and psychopathology scales, including the Children’s Behavior Questionnaire (CBQ; Rothbart et al., 2001), Child Symptom Inventory (CSI; Gadow & Sprafkin, 1994, 1997), and Eyberg Child Behavior Inventory (ECBI; Eyberg & Pincus, 1999). Results indicate that the three-factor and two-factor measurement models were viable alternative models at age 4. Contrary to expectation, neither the three-factor nor the two-factor model was invariant for both genders combined, or between the ages of 4 and 6. Based on the definition of irritability in the three-factor model, the irritability scale demonstrated adequate internal consistency, convergent validity, and divergent validity. Finally, the rank-order stability of irritability was in the moderate range during the period from preschool through kindergarten and formal school entry, but mean levels of irritability did not differ across time. Implications of the findings and suggestions for future research are discussed.
Ph.D. in Psychology, December 2013
- Title
- RHEOLOGY OF ENTANGLED POLYMER LIQUIDS IN EQUIBIAXIAL ELONGATIONAL FLOWS
- Creator
- Mick, Rebecca M.
- Date
- 2015, 2015-05
- Description
-
Equibiaxial deformation is an important flow in industrial processes such as film blowing and blow molding. Unfortunately, it is very difficult to implement experimentally, which has led to empirical design of these processes. A technique called continuous lubricated squeezing flow (CLSF) has been developed to perform equibiaxial deformation on systems such as polymer melts. This technique is used in this study to measure the behavior of entangled polymer melts in equibiaxial elongation to further the understanding of these materials in industrially relevant flows. The results of CLSF experiments on three linear-chain polymer systems show strain softening for strain rates resulting in Weissenberg numbers Wi = ε̇_B τ_d > 1. Higher rates lead to greater softening. The deviation from the linear viscoelastic (LVE) prediction occurs at a strain of about one for all the materials. Equibiaxial and shear behavior were compared for two monodisperse linear systems. When normalized by LVE behavior, the two flows yield similar behavior, such that the equibiaxial rheology could be inferred from the shear rheology. Unfortunately, polydisperse linear and branched systems did not show the same behavior. The two monodisperse systems were compared to the GLaMM and Discrete Slip-Link molecular theories. Neither model could successfully predict the equibiaxial behavior; both predicted excessive strain softening and a premature deviation from LVE. Recent literature has suggested, based on uniaxial measurements, that dilution changes the behavior of an entangled polymer system. This is contrary to theories of polymer dynamics. A pure melt and a diluted melt with the same entanglement density were compared in shear and equibiaxial flows after adjusting for changes in friction. The results were consistent with universality principles of entangled polymers; the uniaxial results require further investigation.
Ph.D. in Chemical Engineering, May 2015
- Title
- THE IMPACT OF TRUST ON LEADER EMPOWERING BEHAVIOR
- Creator
- Sternburgh, Angela M.
- Date
- 2011-04-22, 2011-05
- Description
-
This study examined the relationship between trust and leader empowering behaviors across 250 matched pairs of leaders and employees in a Fortune 500 Midwestern U.S. company. The relationships between propensity to trust, trustworthiness, trust, a meta-perception of trust, and leader empowering behavior were examined. The goal of this study was to test the mediating role of trust and/or the meta-perception of trust on the relationship between trustworthiness and leader empowering behavior. This study obtained both leader and employee ratings, which permitted the examination of both single-source and multi-source data. Results supported a partial mediation effect, indicating that trust and the meta-perception of trust partially mediated the relationship between trustworthiness and leader empowering behavior. This study is important because previous research has predominantly focused on examining employee perceptions of trust; this was the first study to explore the meta-perception of trust, and it transferred measures of leader empowering behaviors to more behaviorally based statements. Implications of this study are explored.
Ph.D. in Psychology, May 2011
- Title
- THE IMPACT OF EXPLICIT AND IMPLICIT ATTITUDES COMPRISING MENTAL ILLNESS STIGMA ON TAKING PSYCHOTROPIC MEDICATIONS AS PRESCRIBED
- Creator
- Michaels, Patrick
- Date
- 2015, 2015-07
- Description
-
Research suggests mental illness stigma adversely impacts psychotropic medication use. Few studies have examined stigma and psychotropic medication use with a naturalistic design. This study assessed the independent impact of attitudes toward psychiatric medication, cognitive insight, and explicit and implicit attitudes of public stigma and self-stigma on psychotropic medication use for people with serious mental illnesses. Medication use was examined in this one-month longitudinal study via self-reported medication use, desire to take medication as directed, pill-count use rates over a one-month period, and pharmacy records including maximum continuous gap, number of gaps, and medication possession ratios. The primary expectation, that explicit and implicit attitudes would independently explain lower psychotropic medication use, was mostly not supported. On average, participants took 82% of psychotropic medication as prescribed, indicating medication was taken at a therapeutic level despite stigma. The most consistent association across time was a positive relationship between desire to take medication and self-application of negative stereotypes. The second finding was that attitudes toward psychotropic medication may be associated with self-reported use, maximum continuous gap, and medication possession. Implications for clinical practice recommend that providers be aware of, discuss, and intervene in consumers’ experiences with stigma, which can improve medication use and psychological stability. Future research should specifically enroll participants who concurrently take suboptimal doses of medication (<80% of medication) to study stigma and non-adherence. Research should seek to understand how internalized stigma and psychotropic medication stigma are related to suboptimal medication use behaviors among people with mental illness in longitudinal non-intervention studies.
Ph.D. in Psychology, July 2015