Search results
(41 - 60 of 1,076)
Pages
- Title
- ECONOMIC BASED CONTROL SYSTEM DESIGN
- Creator
- Omell, Benjamin Peter
- Date
- 2013, 2013-12
- Description
-
EMPC differs from traditional MPC by directly using a profit-based function as the objective, as opposed to a quadratic function that minimizes the distance from a predetermined set point. However, implementation of EMPC can result in unexpected and at times pathological closed-loop behavior, including inventory creep, bang-bang actuation, and instability. To address these issues, an infinite-horizon version of EMPC is developed and shown to avoid many of the performance issues observed in the finite-horizon version. First, modifications to the EMPC problem are used for the conceptual development of the Economic Linear Optimal Controller (ELOC), a statistically constrained linear feedback controller. Then, pointwise-in-time constraints can be reintroduced using one of two methods: Constrained ELOC or Infinite-Horizon EMPC (IH-EMPC). We also investigate the impact of problem-formulation modifications on the ELOC. The first issue is disturbance modeling and the second is the impact of controller sample time. The third topic concerns the incorporation of computational delay in the feedback loop, using both full and partial state information structures. Finally, an illustration of the impact of plant-model mismatch is presented. The Constrained ELOC formulation is further modified to allow for market-responsive smart grid applications. In particular, an Integrated Gasification Combined Cycle (IGCC) process with hydrogen storage is used to demonstrate the Constrained ELOC for such applications. The ELOC serves as a vehicle to exploit dispatch capabilities by directly pursuing the objective of maximizing revenue: process modifications that enable dispatch allow power production to be time-shifted away from periods of low energy value to periods of high value. An in-depth discussion is provided on how energy-value forecasts are incorporated into the design of the Constrained ELOC.
Finally, an extension of the ELOC to controller-embedded equipment design is provided. The work concludes with a discussion of the computational aspects of solving the ELOC problem. In particular, the impact of the reverse-convex constraints inherent to the ELOC problem is discussed along with existing solution methods. The main contribution of this final chapter is a novel application of the Generalized Benders Decomposition (GBD) algorithm to the ELOC problem. This new approach is shown to retain global optimality, reduce computational effort (by orders of magnitude), and expand the class of problems one can solve.
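The contrast between a tracking objective and an economic objective can be sketched numerically. The following toy is an illustration, not the dissertation's formulation: a hypothetical scalar plant, a made-up price/cost structure, and a crude coordinate search stand in for a real EMPC solver. It shows the economic objective driving the input to its bound (the bang-bang behavior noted above), while the tracking objective steers the state toward the set point.

```python
import numpy as np

# Hypothetical scalar plant x+ = a*x + b*u; all numbers are illustrative.
a, b = 0.9, 0.5
x0, x_sp = 1.0, 0.0            # initial state and predetermined set point
price, cost = 2.0, 1.0         # unit value of the state, unit cost of actuation
N = 5                          # finite horizon
u_grid = np.linspace(-1.0, 1.0, 201)   # actuator bounds |u| <= 1

def rollout(u_seq):
    """Simulate the plant, returning states x_1..x_N."""
    xs, x = [], x0
    for u in u_seq:
        x = a * x + b * u
        xs.append(x)
    return np.array(xs)

def tracking_cost(u_seq):
    # Traditional MPC: quadratic distance from the set point.
    xs = rollout(u_seq)
    return np.sum((xs - x_sp) ** 2) + 0.1 * np.sum(np.asarray(u_seq) ** 2)

def economic_cost(u_seq):
    # EMPC: negative profit (value produced minus actuation cost).
    xs = rollout(u_seq)
    return -(price * np.sum(xs) - cost * np.sum(np.abs(u_seq)))

def coordinate_search(cost_fn):
    """Crude coordinate descent over the input grid; adequate for a scalar toy."""
    u = np.zeros(N)
    for _ in range(10):
        for k in range(N):
            trials = [cost_fn(np.r_[u[:k], g, u[k + 1:]]) for g in u_grid]
            u[k] = u_grid[int(np.argmin(trials))]
    return u

u_track = coordinate_search(tracking_cost)
u_econ = coordinate_search(economic_cost)
```

With these numbers, each early unit of input earns more than it costs, so the economic controller saturates the actuator bound, whereas the tracking controller applies negative input to pull the state toward zero.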
Ph.D. in Chemical Engineering, December 2013
- Title
- POWER OPTIMIZATION IN DEEP SUBMICRON VLSI CIRCUITS: FROM SYSTEM LEVEL TO CIRCUIT LEVEL
- Creator
- Tong, Qiang
- Date
- 2017, 2017-07
- Description
-
As VLSI technology advances into the deep sub-micron regime, power consumption has become a critical concern in VLSI circuits, and power optimization is now mandatory in VLSI design. To reduce power consumption, many techniques have been proposed at various levels of VLSI circuit design: system level, register-transfer level (RTL), and circuit/transistor level. This dissertation starts with a review of system-level power optimization techniques. Experiments on a computer-architecture simulation system were conducted to compare the impact of different programming styles on power consumption at the system level. The results can serve as intuitive guidance for programmers who intend to implement power-aware systems. The second topic in this dissertation is a clustering-based clock gating technique targeting power reduction at the RT level. Clock gating is an effective and popular method of reducing dynamic power in VLSI circuits; it can be applied at both the RT level and the gate level. The basic idea of clock gating is to disable the clock of one or more sequential logic cells (mainly flip-flops) when the input data of those cells do not change. In this dissertation, a clustering-based clock gating technique is proposed that exploits the activity information of each flip-flop and clusters flip-flops into groups according to their activity correlations. As leakage power has become a major concern in VLSI design, the proposed clustering method is extended down to the gate level, and a clustering-based hybrid clock gating and power gating technique is proposed. The technique can reduce both dynamic power and leakage power in VLSI circuits. As process technology scales down to the deep submicron regime, bulk CMOS technology has encountered many challenges due to the short channel effect (SCE), which degrades the reliability and feasibility of MOSFET devices.
New technologies such as the FinFET and the carbon nanotube FET (CNFET) are two promising substitutes for addressing the SCE issue in the coming decade. Part of this dissertation presents circuit designs using these new process technologies for low-power VLSI circuits. More specifically, two SRAM cell designs using FinFET and CNFET devices are proposed. The new designs improve performance while reducing power consumption.
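The clustering idea, grouping flip-flops whose activity traces are highly correlated so that one gated clock can serve each group, can be sketched as follows. Everything here (the toy activity patterns, the greedy grouping, the 0.6 threshold) is an illustrative assumption, not the dissertation's algorithm:

```python
import numpy as np

# Toy toggle-activity traces for six flip-flops over 100 cycles: 1 marks a
# cycle where the flip-flop's input changes.
base_a = np.tile([1, 0], 50)         # activity pattern A (period 2)
base_b = np.tile([1, 1, 0, 0], 25)   # pattern B (period 4, uncorrelated with A)

def perturb(base, k):
    """Copy of `base` with its first k cycles toggled: near-identical activity."""
    out = base.copy()
    out[:k] ^= 1
    return out

activity = np.array([base_a, perturb(base_a, 4), perturb(base_a, 6),
                     base_b, perturb(base_b, 4), perturb(base_b, 6)])

def cluster_by_correlation(act, threshold=0.6):
    """Greedy clustering: each flip-flop joins the first cluster whose seed
    activity vector it correlates with above the threshold."""
    clusters = []
    for i, row in enumerate(act):
        for c in clusters:
            if np.corrcoef(row, act[c[0]])[0, 1] >= threshold:
                c.append(i)
                break
        else:
            clusters.append([i])
    return clusters

clusters = cluster_by_correlation(activity)
# One gated clock per cluster: the clock is enabled only on cycles where
# at least one member's input actually changes.
enables = [activity[c].any(axis=0) for c in clusters]
```

Grouping correlated flip-flops keeps each gated clock's enable signal cheap while still idling most members on most cycles, which is the trade-off the clustering is meant to balance.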
Ph.D. in Electrical Engineering, July 2017
- Title
- BIOPHYSICAL AND BIOCHEMICAL STUDY OF NATIVE AND EDITED DYSTROPHIN ROD REGION
- Creator
- Mangat, Khushdeep
- Date
- 2014, 2014-12
- Description
-
Duchenne Muscular Dystrophy (DMD) is a severe X-linked recessive disease affecting 1 in 3,500 boys that is characterized by the degeneration of muscle function and strength. The cause of this disease lies in gene defects that eliminate expression of the protein dystrophin. Becker Muscular Dystrophy (BMD) is a milder form of the disease with a later onset and much longer survival (up to the seventh decade of life, compared to a median survival of 25 years for DMD patients) because of the presence of low levels of modified dystrophin protein. BMD is very heterogeneous, however, and many cases are nearly as severe as DMD. A major therapy for DMD involves exon skipping, which produces modified forms of dystrophin very similar to those of BMD. However, how these edits impact the function of dystrophin, and how they are linked to the severity of BMD or of the BMD-like state produced by DMD exon-skip therapy, is unknown. We investigated this in two specific cases involving a panel of BMD defects linked to a major cause of death, dilated cardiomyopathy (DCM). We also investigated the contribution of various exons to the interaction with a signaling partner of dystrophin, neuronal nitric oxide synthase (nNOS).
Ph.D. in Biological and Chemical Sciences, December 2014
- Title
- PHYSICS-PRESERVING FINITE DIFFERENCE SCHEMES FOR THE POISSON-NERNST-PLANCK EQUATIONS
- Creator
- Flavell, Allen
- Date
- 2014, 2014-07
- Description
-
The Poisson-Nernst-Planck equations are a system of nonlinear differential equations that describe the flow of charged particles in solution. This dissertation concerns the design of numerical schemes for solving this system that preserve global properties exhibited by the system. There are two major advances. The first is the design of schemes that conserve mass globally when the system is coupled with no-flux boundary conditions. Most notably, a scheme using central differencing and TR-BDF2 is presented that achieves second-order accuracy in both space and time while also conserving global mass. The second is the design of a more general scheme that preserves the time-varying properties of the free energy of the system. One such scheme uses central differencing in space and trapezoidal integration in time to achieve second-order accuracy in both space and time while also preserving the energy dynamics, but at the cost of requiring positivity of the solution. There is also a discussion of solution methods: the classic Newton iteration scheme is compared with a modified Gummel iteration scheme for the purpose of solving the transient equations. The intended application of this work is the modeling of ion channels, and many of the simulations presented use parameters consistent with models of ion channels.
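The mass-conservation mechanism, writing the scheme in terms of fluxes on cell faces so that interior fluxes telescope and no-flux boundaries contribute nothing, can be sketched for a single species in 1-D. This is only a minimal stand-in: it uses forward Euler with a fixed potential rather than the dissertation's TR-BDF2 coupled scheme, and all parameters are illustrative:

```python
import numpy as np

# 1-D single-species Nernst-Planck step (scaled units), central differencing,
# no-flux boundaries. The point is exact discrete mass conservation, not accuracy.
n, L = 100, 1.0
dx = L / n
x = (np.arange(n) + 0.5) * dx
D = 1.0
dt = 0.2 * dx**2 / D                 # diffusion-limited explicit step
c = np.exp(-100 * (x - 0.3) ** 2)    # initial concentration bump
phi = 5.0 * x                        # fixed, illustrative potential

def step(c, phi):
    """One explicit step; fluxes live on cell faces, so interior mass telescopes."""
    dc = (c[1:] - c[:-1]) / dx           # central difference of c on faces
    dphi = (phi[1:] - phi[:-1]) / dx     # potential gradient on faces
    c_face = 0.5 * (c[1:] + c[:-1])      # face-averaged concentration
    F = -D * (dc + c_face * dphi)        # drift-diffusion flux, length n-1
    F = np.concatenate([[0.0], F, [0.0]])  # no-flux boundary conditions
    return c - dt / dx * (F[1:] - F[:-1])

mass0 = c.sum() * dx
for _ in range(500):
    c = step(c, phi)
# Because every interior flux enters two neighboring cells with opposite sign
# and the boundary fluxes are zero, c.sum()*dx is conserved to rounding error.
```

The same telescoping argument is what makes a flux-form discretization conserve mass regardless of the time integrator placed on top of it.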
Ph.D. in Applied Mathematics, July 2014
- Title
- HYBRID TO SOCIAL CONDENSER: COMPETING APPROACHES TO MIXED-USE DEVELOPMENT
- Creator
- Zagow, Maged
- Date
- 2016, 2016-12
- Description
-
In the last two decades, mixed use has taken center stage in urban planning and development in the United States. The research frequently cites this development model as one that can address a variety of socioeconomic problems. It has also enjoyed a recent surge in popularity in redeveloping cities for providing more affordable housing opportunities, ensuring safety, reducing auto-dependency, and providing a sense of place and community. However, its affordability, physical design, and outcomes are highly variable. This study is particularly interested in whether and how mixed use affects the socioeconomic configuration of the built environment. This study uses multilevel data, from the county level to the zip-code level, representing all US neighborhoods. I examine the data across different implementation methods of mixed-use development and different cultural and historical backgrounds. The study adopts six mixed-use models that present different methodological interactions between socioeconomic spatial metrics and urban forms. These models represent the realistic constraints of urban geometry and of the socioeconomic structure, comprising the characteristics of race, income, accessibility, safety, adjacency, environment, and density. This study finds that the built environment produces a rich landscape of information that appears to guide the opportunities for facilities. The analysis shows that mixed-use development may have certain effects on the number of facilities, housing, income, diversity, crime rate, employment, health, and environment. The analysis of this research works in two dimensions. The first is urban models (Hybrid and Social Condenser in general, each under two categories: Metropolis and Neighborhood Community).
The second dimension is the urban characteristics (zoning programming, land-use mix, street fabric), socioeconomic variables (population density, occupied housing, median age, diversity of race, income, and employment rates), and location variation (states and cities). The results confirm that mixing facilities in Hybrid communities creates more job opportunities but limits housing affordability, social cohesion, and racial diversity, whereas Social Condenser models show more racial diversity, safety, and a healthier environment. These results reflect that complexity demands more than mixed-use development, beyond Jane Jacobs' requirements and beyond the designation of selected mixed-use zones. This study contributes to the study of how mixed-use development models shift under various social and economic conditions. The findings can inform architects, investors, policymakers, economists, and planners about factors that sustain mixed-use neighborhoods in the United States and beyond. Urban designers will be able to see how the seemingly necessary act of laying out mixed-use development can affect the socioeconomic structure of a city. Thus, this study is a useful source for more accurate planning ideas than generic abstract theories or slogans.
Ph.D. in Architecture, December 2016
- Title
- STUDIES ON SYNTHETIC APPLICATIONS OF STEREOSELECTIVE AND REGIOSELECTIVE RING OPENING REACTIONS OF AZIRIDINIUM IONS
- Creator
- Chen, Yunwei
- Date
- 2014, 2014-12
- Description
-
Aziridinium ions are valuable reactive intermediates in organic synthesis. Regioselective and stereoselective ring-opening reactions of aziridinium ions can provide various useful building blocks, including optically active vicinal amines, amino alcohols, and amino esters. Aziridinium ions are also involved in the biological processes of anti-cancer agents. However, aziridinium ions are under-utilized in organic synthesis. In this thesis, we utilize stereoselective and regioselective ring-opening reactions of aziridinium ions for the synthesis of enantiomerically enriched compounds. Ring-opening reactions of aziridinium ions were utilized in intramolecular Friedel-Crafts (FC) reactions for the stereoselective and regioselective synthesis of 4-substituted tetrahydroisoquinolines. A series of β-haloamines were prepared as precursors of aziridinium ions. The reaction conditions for the ring opening of aziridinium ions in the FC reactions, including temperature, catalysts, and solvents, were optimized. Further, the reaction mechanism was studied to prove that aziridinium ions form as the key intermediates during the intramolecular FC reaction. The intermolecular nucleophilic ring-opening reaction of aziridinium ions was studied as a convenient method of carbon-carbon bond formation. Regioselective and stereoselective nucleophilic substitution reactions of aziridinium ions with indole analogues were carried out for the synthesis of optically active tryptamine analogues. The reactions proceeded smoothly to provide the tryptamine analogues in high yield in the presence of halo-sequestering agents, while they provided the tryptamine products in significantly lower yield in the absence of halo-sequestering agents. Ring-opening reactions of aziridinium ions with malonic esters and Grignard reagents were carried out for the respective syntheses of optically active tryptamine analogues, γ-aminobutyric acid (GABA), and α-amine derivatives.
The regiospecific ring-opening reactions of aziridinium ions were directly applied to the synthesis of bifunctional ligands with potential use in targeted therapy and imaging of cancers. The novel bifunctional chelates with a shorter alkyl spacer, C-NETA and 2E-C-NETA, as well as a chelate with a longer alkyl spacer, 5p-C-NETA, were prepared. 5p-C-NETA was conjugated to a cyclic peptide, c(RGDyK), as a targeting moiety for use in targeted radiation therapy. In addition, 2E-C-NETA was conjugated to a fluorescent dye, Cy5.5, for theranostic applications. The experimental results indicated that the new bifunctional ligands have promising applications in the biomedical field. In summary, stereoselective and regioselective ring-opening reactions of aziridinium ions have been successfully applied to the synthesis of optically active compounds such as 4-substituted tetrahydroisoquinolines, tryptamines, γ-aminobutyric acid, α-amine derivatives, and the bifunctional chelators. We demonstrated that ring opening of versatile aziridinium intermediates is a straightforward and convenient method for the synthesis of various optically active compounds.
Ph.D. in Chemistry, December 2014
- Title
- ELIASHBERG ANALYSIS OF CUPRATE OXIDE SUPERCONDUCTORS
- Creator
- Ahmadi, Omid
- Date
- 2011-11, 2011-12
- Description
-
In this thesis, evidence for antiferromagnetic spin fluctuations as the pairing glue in high-temperature superconductors is presented through a modified Eliashberg analysis of experimental tunneling data of Bi2212 over a wide range of doping. In particular, the normalized conductance data of the junctions, from optimal to overdoped, are fitted at T = 0 K using d-wave Eliashberg equations in which the spectral function is modeled after the spin fluctuation spectra seen in experiments. The corresponding real and imaginary diagonal and anomalous self-energy curves are extracted and compare well with photoemission experiments. This is followed by a temperature-dependent Eliashberg analysis in which the spectral function is itself temperature dependent, based on trends seen in inelastic neutron scattering experiments. New results for temperature-dependent self-energy curves are also compared to experiment, with slight deviations. Finally, the Josephson product is calculated as an independent check of the tunneling matrix used in fitting the data.
Ph.D. in Physics, December 2011
- Title
- DEVELOPMENT OF A NEW EMERGENCY EVACUATION SYSTEM FOR MINES
- Creator
- Qian, Qingyi
- Date
- 2011-08, 2011-07
- Description
-
Underground mining is a very high-risk industry. There are many potential hazards in underground mining, including fire, explosion, inundation, roof collapse, toxic gases, and chemical pollution. Over the past centuries, in the US alone, more than 100,000 miners have lost their lives in different accidents. The primary safety methods used in underground mines concentrate on the monitoring of hazardous gases, fire detection, and ventilation. The use of advanced instruments and monitoring techniques has significantly reduced accidents in modern mines. However, despite these monitoring facilities, accidents still occur annually in underground mining around the world, and many miners have been killed because they were trapped and unable to escape due to blocked exit access. This thesis describes the development of a new emergency evacuation system for underground mines and analyzes its advantages and disadvantages. It is expected that the new system will greatly improve emergency exit methods and save more lives in the future. The new emergency evacuation system consists of a vertical concrete mineshaft, high-capacity mineshaft elevators, a surface terminal, and underground support structures. In addition, a numerical simulation study was carried out to observe the ground response during excavation. A typical ground profile for underground mining in the southern part of China was used in this analysis. The results selected from the shaft excavation simulation indicate that the fluid drilling method effectively prevents the soil around the mineshaft from collapsing. Compared to soil strength, soil stiffness has a more significant influence on the soil response induced by excavating shafts.
Ph.D. in Civil Engineering, July 2011
- Title
- LARGE-SCALE SIMULATION OF ELECTRIC POWER SYSTEMS FOR WIND
- Creator
- Wei, Tian
- Date
- 2011-08, 2011-07
- Description
-
The utilization of wind energy offers great socioeconomic benefits through reductions in power plant emissions and the supply of zero-cost energy; however, large-scale wind energy integration could introduce inevitable challenges to regional transmission systems and hourly system operations. This thesis addresses congestion identification, the simulation and analysis of large-scale electric power systems in different scenarios, large-scale wind energy integration, and related transmission expansion issues. A methodology based on security-constrained unit commitment (SCUC) is applied to analyze transmission congestion in the Eastern Interconnection of the United States. The identified congestion is visualized along with Geographical Information System (GIS) data and compared with the results in the National Electric Transmission Congestion Study (NETCS) published by the US Department of Energy in 2006. The study also provides locational marginal price (LMP) information for the Eastern Interconnection, which is not available in the NETCS report. This thesis implements a comprehensive simulation and scenario analysis of the Illinois electric power system for the year 2011. Possible scenarios representing electrical load sensitivities to economic growth, fuel price variations, and the impact of carbon cost are studied. This thesis also presents hourly simulation results for large-scale wind energy integration in the Eastern Interconnection of the United States. An hourly unit commitment is applied to simulate the economics of wind energy integration in the year 2030. The energy portfolio for supplying the hourly load in 2030 is developed based on wind integration levels. The sensitivities of fuel price, wind energy quantity, load forecast, carbon cost, and load management with respect to the proposed 2030 wind integration are studied.
This thesis identifies transmission congestion and expands the existing transmission system in the Eastern Interconnection of the United States to accommodate large-scale integration of wind energy. Violated transmission flows that would cause infeasibility of the hourly SCUC are identified. An iterative transmission expansion analysis is implemented to identify the minimum required additions to the Eastern Interconnection for mitigating hourly transmission congestion.
Ph.D. in Electrical Engineering, July 2011
- Title
- DIRECT DIFFEOMORPHIC REPARAMETERIZATION FOR CORRESPONDENCE OPTIMIZATION IN STATISTICAL SHAPE MODELING
- Creator
- Li, Kang
- Date
- 2015, 2015-05
- Description
-
This dissertation proposes an efficient optimization approach for obtaining shape correspondence across a group of objects for statistical shape modeling. With each shape represented in a B-spline-based parametric form, the correspondence across the shape population is cast as the problem of seeking a reparameterization for each shape so that a quality measure of the resulting shape correspondence across the group is optimized. The quality measure is the description length of the covariance matrix of the shape population, with landmarks sampled on each shape. The movement of landmarks on each B-spline shape is controlled by the reparameterization of that shape. The reparameterization itself is also represented with B-splines, and the B-spline coefficients are used as optimization parameters. We have developed formulations for ensuring the bijectivity of the reparameterization. A gradient-based optimization approach is developed, including techniques such as constraint aggregation and adjoint sensitivity, for efficient, direct diffeomorphic reparameterization of landmarks to improve the group-wise shape correspondence. Numerical experiments on both synthetic and real 2D and 3D data sets demonstrate the efficiency and effectiveness of the proposed approach.
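The core idea, a reparameterization that slides landmarks along each shape while a positivity constraint on its derivative guarantees bijectivity, can be sketched with a single sinusoidal mode standing in for the dissertation's B-spline representation. The ellipse, the mode, and the coefficient bound below are all illustrative assumptions:

```python
import numpy as np

# Stand-in reparameterization gamma(t) = t + (c / 2*pi) * sin(2*pi*t).
# It maps [0, 1) bijectively onto itself exactly when |c| < 1, because its
# derivative 1 + c*cos(2*pi*t) then stays strictly positive -- the same
# positivity-of-derivative idea used to enforce bijectivity of B-spline maps.
def gamma(t, c):
    assert abs(c) < 1.0, "bijectivity requires |c| < 1"
    return t + c * np.sin(2 * np.pi * t) / (2 * np.pi)

def curve(t):
    """An illustrative closed shape: an ellipse parameterized on [0, 1)."""
    return np.stack([2.0 * np.cos(2 * np.pi * t),
                     np.sin(2 * np.pi * t)], axis=-1)

# Landmarks move along the shape under reparameterization: the geometry is
# unchanged, only the correspondence (where landmark k sits) changes.
t = np.linspace(0.0, 1.0, 64, endpoint=False)
landmarks = curve(gamma(t, c=0.5))
```

In the dissertation the coefficients of such a map (there, B-spline coefficients) are the optimization variables, tuned so that the description length of the landmark covariance across the population decreases.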
Ph.D. in Mechanical and Aerospace Engineering, May 2015
- Title
- DEPTH MAP PROCESSING FOR MULTI-VIEW VIDEO PLUS DEPTH
- Creator
- Vijayanagar, Krisha Rao
- Date
- 2014, 2014-05
- Description
-
The world of multimedia and visual entertainment has grown in leaps and bounds in the past decade, with 3-D television being one of its biggest technologies. Amongst the several formats proposed for representing 3-D content, the multi-view video plus depth (MVD) format has gained a lot of interest in the past few years. MVD requires that each view of a particular scene be accompanied by a per-pixel depth map. This introduces new problems for the compression and transmission of MVD content, because a depth map has different characteristics from a color image. Keeping the MVD format and depth map characteristics in mind, we highlight three major problems that plague the MVD format: (1) depth map refinement, (2) depth map compression, and (3) novel view synthesis using the depth map at the decoder side. In order to refine a depth map, we propose a multi-resolution anisotropic diffusion algorithm that is optimized to run in real time, thus ensuring that the encoder does not suffer additional latency. Next, we propose two unique solutions for compressing depth maps. We first propose a solution based on the Layered Depth Video (LDV) concept, using a rate-distortion-optimized quadtree decomposition of the LDV with a novel two-mode block truncation code with improved prediction. We also propose a compression solution using compressive sensing (CS) concepts by creating a hybrid rate-optimized CS codec. This codec achieves two goals: first, block classification to ensure lower decoder complexity, and second, rate-distortion optimization of the measurement rate for each block that is to be compressively sensed. We then look at the view synthesis component of the MVD tool-chain, which is a time-sensitive process. Keeping decoding latency in mind, we propose a lookup-table-based approach to the 3-D warping process with a simplified hole-filling algorithm that is not only competitive quality-wise with other schemes but also several times faster.
It is hoped that the presented techniques can be used successfully to create MVD architectures for applications that need low-complexity encoding solutions.
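The refinement step rests on anisotropic diffusion: flat regions are smoothed while an edge-stopping function suppresses diffusion across large depth discontinuities, preserving object boundaries. A minimal single-resolution Perona-Malik-style sketch follows; the conductance function, parameters, and toy depth map are assumptions, and the dissertation's algorithm is multi-resolution and optimized for real time:

```python
import numpy as np

def anisotropic_step(d, kappa=0.1, lam=0.2):
    """One anisotropic diffusion step on a 2-D depth map."""
    # Differences to the four neighbours, with replicated borders.
    n = np.roll(d, 1, axis=0);  n[0] = d[0]
    s = np.roll(d, -1, axis=0); s[-1] = d[-1]
    w = np.roll(d, 1, axis=1);  w[:, 0] = d[:, 0]
    e = np.roll(d, -1, axis=1); e[:, -1] = d[:, -1]
    diffs = [n - d, s - d, w - d, e - d]
    # Edge-stopping conductance: near zero where |difference| >> kappa,
    # so depth steps (object edges) are left intact.
    return d + lam * sum(g * np.exp(-(g / kappa) ** 2) for g in diffs)

# Toy depth map: two flat planes separated by a step edge, plus mild noise.
rng = np.random.default_rng(1)
depth = np.where(np.arange(64)[None, :] < 32, 1.0, 2.0) * np.ones((64, 64))
noisy = depth + 0.01 * rng.standard_normal((64, 64))

refined = noisy
for _ in range(20):
    refined = anisotropic_step(refined)
```

After a few iterations the noise in each plane is smoothed away while the depth step between them survives, which is exactly the behavior a depth map needs before view synthesis.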
Ph.D. in Electrical Engineering, May 2014
- Title
- COMPUTER MODELING OF BREAST LESIONS AND STUDIES OF ANALYZER-BASED X-RAY IMAGING
- Creator
- Garcia, Luis De Sisternes
- Date
- 2011-11, 2011-12
- Description
-
Phase-contrast x-ray imaging is an emerging technique that promises to yield highly sensitive medical images of soft tissue, which is...
Show morePhase-contrast x-ray imaging is an emerging technique that promises to yield highly sensitive medical images of soft tissue, which is difficult to observe via conventional radiography given its low X-ray attenuation differences. One of these phase-contrast techniques, known as analyzer-based imaging, has demonstrated that highly detailed breast tissue images can be obtained using synchrotron radiation. However, synchrotron facilities are impractical for clinical use. This thesis introduces studies and exposure consideration towards the application of analyzer-based imaging in a clinical environment, particularly in the context of breast imaging. It also introduces a computational breast lesion model that generates randomized three-dimensional phantoms which follow realistically the characteristics observed in real lesions. Moving analyzer-based imaging to clinical application requires the consideration of photon noise, inherent from the use of a photon-limited conventional source. We summarize the statistical properties in the presence of photon noise of two popular analyzer-based imaging techniques, known as diffraction-enhanced imaging (DEI) and multiple-image radiography (MIR). The statistics for MIR have not been previously derived and are introduced in this thesis. Comparison of the resulting statistical predictions with results obtained by Monte Carlo simulation validated the analysis. An expression for the maximum-likelihood (ML) solution for analyzer-based imaging is presented as a way of minimizing the effects of photon noise in the reconstruction of the object’s absorption, refraction and ultra-small angle scattering properties, and more practical maximum-likelihood expectation-maximization (ML-EM) and maximum-a-posteriori expectation-maximization (MAP-EM) solutions are also introduced. 
The behavior of the ML-EM and MAP-EM solutions was compared to the results produced by the five best-known analyzer-based reconstruction methods using computer simulations. The ML-EM and MAP-EM reconstructions proved closer to the theoretical values as they do not rely on commonly known limitations and approximations introduced by the other techniques. We introduce the development and evaluation of a new computational breast lesion phantom model that can simulate either massess or microcalcifications. The proposed tool allows the generation of a large number of randomized three-dimensional breast lesion simulations following desired characteristics normally used to describe breast lesions in clinical practice. The initial motivation for the development of this new phantom model was to enable the proposed evaluations of analyzer-based imaging to be achieved. However, the model became a major focus of this thesis because it improves significantly upon those that can be found in previous literature. The proposed lesion model can be used for evaluation studies across different breast imaging techniques, as well as for training purposes, so it is our hope that it could become an important resource for the broader mammography research community. As part of the lesion modeling research, we also introduce methods to computationally modify experimental mammography and analyzer-based images of breast tissue so that they present the generated tumor simulations embedded within their parenchyma realistically. The realism of the simulated lesion images was evaluated by comparison of 83 real tumor cases observed in mammograms with 83 constructed hybrid images in which simulated tumors matching the characteristics observed in the real cases were embedded, with healthy tissue acting as background. As a quantitative comparison, extracted features describing tumor shape and density showed no statistically significant differences between real and simulated tumors. 
A known computational tumor classification technique based on their shape observed in mammography was implemented and showed no significant performance differences between real and simulated cases, as well as showing good correlation with previously published performance results in real tumors. To measure the realism for use in human observer studies, we conducted a reader study in which 5 experienced radiologists were asked to judge whether each of the 166 images was real or simulated by assigning a score on a 7-point scale. The results were analyzed in a multiple-reader multiple-case statistical framework. The conclusion of the study was that the readers’ accuracy in assessing whether the lesions were real or simulated was not significantly better than random chance. This thesis also incorporates a reader study to evaluate the degree to which photon-limited analyzer-based images may be effective for visualization of breast cancer features. Our motivation was to establish the x-ray intensity that would be required to make these methods feasible, the purpose being to serve as a guide in parameter selection for future design of imaging hardware. We conducted a series of observer studies that quantify the performance of analyzer-based refraction images at different noise levels for the task of identifying subtle details present in breast tumors which are relevant to clinical diagnosis. The cases shown to the readers consisted of hybrid images where simulated lesions of known characteristics were computationally embedded in real breast analyzer-based background images. The original phase-contrast data was obtained using synchrotron radiation and was later modified to simulate the noise and blurring effects produced from a photon-limited source with a 300μm aperture size, similar to those used in a laboratory environment. 
Results showed that the analyzer-based imaging techniques statistically outperformed conventional mammography for the given task with an average of just 128 recorded photons per pixel in background image regions.
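The abstract does not give the authors' exact ML-EM formulation, so the following is a generic sketch of the classical ML-EM multiplicative update for emission tomography; the names `A` (system matrix) and `y` (measured counts) are illustrative, not from the thesis:

```python
import numpy as np

def mlem(A, y, n_iters=500):
    """Classical ML-EM multiplicative update for emission tomography:
    x <- (x / (A^T 1)) * A^T (y / (A x)).
    A is the (detectors x voxels) system matrix, y the measured counts."""
    x = np.ones(A.shape[1])              # uniform, strictly positive start
    sens = A.sum(axis=0)                 # sensitivity image, A^T 1
    for _ in range(n_iters):
        proj = A @ x                     # forward projection of the estimate
        ratio = y / np.maximum(proj, 1e-12)
        x = x * (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x
```

A MAP-EM variant adds the derivative of a prior term to the denominator; the specific prior used in this work is not stated in the abstract.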
Ph.D. in Electrical Engineering, December 2011
- Title
- COOPERATIVE BATCH SCHEDULING FOR HPC SYSTEMS
- Creator
- Yang, Xu
- Date
- 2017, 2017-05
- Description
-
The batch scheduler is an important piece of system software serving as the interface between users and HPC systems. Users submit their jobs via a batch scheduling portal, and the batch scheduler makes a scheduling decision for each job based on its request for system resources and on system availability. Jobs submitted to HPC systems are usually parallel applications whose lifecycle consists of multiple running phases, such as computation, communication and input/output. Running such applications can thus involve various system resources, such as power, network bandwidth, I/O bandwidth and storage, and most of these resources are shared among concurrently running jobs. However, today's batch schedulers do not take the contention and interference between jobs over these resources into consideration when making scheduling decisions, which has been identified as one of the major culprits for both system and application performance variability. In this work, we propose a cooperative batch scheduling framework for HPC systems. The motivation of our work is to take important factors about jobs and the system, such as job power, job communication characteristics and network topology, into account when making orchestrated scheduling decisions, in order to reduce the contention between concurrently running jobs and to alleviate the performance variability. Our contributions are the design and implementation of several coordinated scheduling models and algorithms for addressing some chronic issues in HPC systems. The proposed models and algorithms have been evaluated by means of simulation using workload traces and application communication traces collected from production HPC systems. Preliminary experimental results show that our models and algorithms can effectively improve application and overall system performance, reduce HPC facilities' operation costs, and alleviate the performance variability caused by job interference.
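For orientation, the baseline this work improves on, a strict first-come-first-served node allocator that ignores power, topology and inter-job interference, can be sketched in a few lines; the function and its inputs are illustrative, not from the thesis:

```python
def fcfs_schedule(jobs, total_nodes):
    """Strict first-come-first-served allocation: jobs are (job_id,
    nodes_requested) in submission order; the first job that does not
    fit blocks everything behind it (no backfilling). Returns the
    lists of started and waiting job ids."""
    free = total_nodes
    started, waiting = [], []
    blocked = False
    for job_id, nodes in jobs:
        if not blocked and nodes <= free:
            free -= nodes                # job starts now
            started.append(job_id)
        else:
            blocked = True               # queue head cannot start; hold the rest
            waiting.append(job_id)
    return started, waiting
```

Production schedulers additionally weigh walltime estimates and backfilling; the coordinated models proposed here further fold in resource contention between the jobs that would run concurrently.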
Ph.D. in Computer Science, May 2017
- Title
- SPEECH INTELLIGIBILITY AND ACCENTS IN SPEECH-MEDIATED INTERFACES: RESULTS AND RECOMMENDATIONS
- Creator
- Lawrence, Halcyon M.
- Date
- 2013, 2013-07
- Description
-
There continues to be significant growth in the development and use of speech-mediated devices and technology products; however, there is no evidence that non-native English speech is used in these devices, despite the fact that English is now spoken by more non-native speakers than native speakers worldwide. This relative absence of non-native English speech in devices may be due in part to the costs associated with localizing speech devices, but it may also be attributable to the fact that not enough is known about user performance with accented speech in speech-mediated environments. In the absence of targeted research, developers may be relying on existing studies, which focus on perception (impression) of accented speech, as a basis for decision-making. However, perception paints only part of the picture when it comes to understanding how and why people perform in certain ways and in certain environments. Three studies were conducted to answer the following questions: (1) What are the acoustic-phonetic characteristics of negatively- and positively-perceived accented speech, and how are these characteristics related to markers of intelligible speech? (2) How do participants perform on different types of accented-speech tasks? (3) What is the relationship between user perception of accented speech and user performance in response to accented speech? (4) How do participants perform on accented speech tasks of varying complexity? Arising out of this research, six recommendations are made for the use of accented speech in speech-mediated devices. The findings of this study also raise questions about inherent linguistic stereotypes, which impact both our perceptions and our choices about the accents we want to hear on our speech devices. A discussion of whether and how these stereotypes can be altered and measured is included. Future research should examine the role of experienced non-native talkers in speech devices.
Results of study one demonstrated that some experienced non-native talkers were positively-perceived by raters and may be good candidates for talkers in speech devices. A study like this would explicitly establish if listeners consistently make native vs. non-native distinctions in their preferences or if a prestige continuum emerges.
Ph.D. in Technical Communication, July 2013
- Title
- SPECTRUM OBSERVATORY BASED TRAFFIC MODELING AND CHANNEL SELECTION IN SUPPORT OF DYNAMIC SPECTRUM ACCESS
- Creator
- Bacchus, Brent Roger
- Date
- 2015, 2015-05
- Description
-
It is well known that the exponential growth in popularity of wireless devices has created a demand for radio spectrum that cannot be met with current regulatory policies. Despite the difficulty in procuring access to new spectrum resources, many empirical studies have indicated that the majority of spectrum is in fact unused in the temporal, spatial and/or spectral domains, representing an untapped wealth that must be exploited. Dynamic Spectrum Access (DSA) is a promising technology which aims to improve the efficiency of future radios and alleviate the issue of spectrum under-utilization. This dissertation utilizes data from the IIT Spectrum Observatory to develop models of channel activity on the Land Mobile Radio (LMR) band (used for critical communication by organizations such as public safety) and shows how such models can be applied to improve the performance of DSA. We demonstrate that LMR traffic may possess multi-timescale behavior, such as clustering and dispersion over different time periods, and propose a novel statistical model to account for these observations based on a multiple-emission hidden Markov model. We then use this model to design a collision-constrained channel selection algorithm that can permit the re-use of licensed spectrum while minimizing interference with incumbent users. The findings in this work are primarily developed for public safety; however, the techniques are general enough to be applied to other types of traffic possessing similar characteristics. The proposed model, in particular, is well suited for further analytic work and simulation studies in this area.
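The multiple-emission hidden Markov model itself is specific to this dissertation, but the clustered occupancy it captures can be illustrated with its simplest special case, a two-state (idle/busy) Markov chain for a single channel; the transition probabilities below are made-up values, not measurements from the IIT Spectrum Observatory:

```python
import numpy as np

def simulate_channel(P, n_steps, rng):
    """Simulate channel occupancy as a 2-state Markov chain.
    P[i, j] = Pr(next state = j | current state = i); state 1 = busy."""
    states = np.empty(n_steps, dtype=int)
    s = 0                                 # start idle
    for t in range(n_steps):
        states[t] = s
        s = rng.choice(2, p=P[s])
    return states

rng = np.random.default_rng(42)
P = np.array([[0.95, 0.05],               # idle tends to stay idle
              [0.20, 0.80]])              # busy slots cluster into bursts
occ = simulate_channel(P, 100_000, rng)
duty = occ.mean()                         # empirical fraction of busy slots
# Stationary busy probability is 0.05 / (0.05 + 0.20) = 0.2.
```

Because the chain is sticky, busy slots arrive in runs rather than independently, which is exactly the kind of temporal structure a memoryless occupancy model would miss and a hidden Markov model can represent.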
Ph.D. in Electrical and Computer Engineering, May 2015
- Title
- OPTIMAL DECISION-MAKING OF INTERDEPENDENT TRANSPORTATION INVESTMENT ALTERNATIVES UNDER RISK AND UNCERTAINTY
- Creator
- Zhou, Bei
- Date
- 2012-07-12, 2012-07
- Description
-
With increasing demand for a more efficient transportation system and decreasing budget levels, transportation investment decision-making that aims to select the optimal project portfolio yielding maximized overall networkwide benefits in terms of economy, society and environment has become increasingly important. This dissertation conducts an in-depth investigation into project evaluation and project selection, which are crucial steps of transportation decision-making. It begins with a review of existing methods for project evaluation and selection, which reveals several limitations. In particular, existing methods lack consideration of the network impacts of a single investment project, the interdependencies of simultaneously implementing multiple projects, and restrictions keeping the total risk of the overall benefits of selected projects within an acceptable level. A new methodology is then proposed for networkwide traffic assignments, project evaluation, and project selection. A state-of-the-art large-scale transportation simulation package, the TRansportation ANalysis and SIMulation System (TRANSIMS) toolbox, is utilized to perform networkwide dynamic traffic assignments to generate the redistributed traffic volumes after project implementation that are needed as inputs for project evaluation. For project evaluation, a life-cycle cost analysis approach is developed to consider all agency costs and user costs over the service life-cycle of two primary categories of highway facilities: pavements and bridges. To enhance the robustness of the analytical results, risk and uncertainty in input factors concerning traffic volumes, project costs, and discount rates are incorporated into the life-cycle cost computation using Palisade @Risk software, Version 5.5.
For project selection, a two-stage enhanced Knapsack model, a hypergraph Knapsack model, and a two-stage hypergraph Knapsack model are proposed to choose the best sub-collection of interdependent projects yielding maximized overall benefits at various budget levels, while controlling the total risk within an acceptable level. In the two-stage Knapsack model, the Markowitz mean-variance model is utilized in stage-one optimization to generate the minimized total risk of all projects subject to constraints on the available budget and the minimum benefits expected for individual projects. At the second stage, the Knapsack model is enhanced by adding the stage-one optimization solution as one more constraint. Such a treatment helps control the total risk of the overall benefits of all selected projects at a desirable level. Moreover, a hypergraph Knapsack model is introduced to capture project network impacts and interdependency relationships. In order to simultaneously address the issues of networkwide project impacts, interdependencies, and total risk levels, a two-stage hypergraph Knapsack model is developed. Efficient solution algorithms are developed and coded in Frontline Solver Xpress V55 software to solve the two-stage Knapsack model, the hypergraph Knapsack model, and the two-stage hypergraph Knapsack model, respectively. Three computational studies are performed to apply the proposed methodology using two sets of data: six-year data on 672 candidate projects proposed by the Indiana Department of Transportation for state highway programming, and 6 mega projects proposed by the Illinois State Toll Highway Authority for tollway network major capital improvements.
It was generally found that the two-stage Knapsack model could readily control the total risk of the overall benefits of selected projects at a desirable level, but it may result in significant changes in the overall benefits at budget levels where significant differences in risk are associated with individual projects. The hypergraph Knapsack model could effectively handle networkwide project impacts and interdependency relationships. The two-stage hypergraph Knapsack model, however, appears to be the most robust in that it could simultaneously resolve the issues of networkwide project impacts, interdependency relationships, and the total risk of overall project benefits, thus generating the most reliable information to support rational transportation investment decision-making.
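The two-stage models above are solved with commercial software, but the core selection problem, maximize total benefit subject to a budget and a cap on total risk, can be sketched by brute force for a handful of projects. This sketch treats risk as additive per project, whereas the dissertation's stage-one Markowitz model uses a full mean-variance formulation, and all numbers are illustrative:

```python
from itertools import combinations

def select_projects(costs, benefits, risks, budget, risk_cap):
    """Enumerate all project subsets; keep the feasible one (total cost
    within budget, summed risk within the cap) with maximum benefit."""
    n = len(costs)
    best, best_benefit = set(), 0.0
    for r in range(n + 1):
        for subset in combinations(range(n), r):
            if sum(costs[i] for i in subset) > budget:
                continue                      # violates budget constraint
            if sum(risks[i] for i in subset) > risk_cap:
                continue                      # violates risk constraint
            b = sum(benefits[i] for i in subset)
            if b > best_benefit:
                best, best_benefit = set(subset), b
    return best, best_benefit
```

Enumeration is exponential in the number of projects, which is why the dissertation resorts to Knapsack formulations and dedicated solvers for portfolios of hundreds of candidates; the interdependency terms captured by the hypergraph variants are omitted here.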
Ph.D. in Civil Engineering, July 2012
- Title
- SYSTEM SUPPORT FOR RESILIENCE IN LARGE-SCALE PARALLEL SYSTEMS: FROM CHECKPOINTING TO MAPREDUCE
- Creator
- Jin, Hui
- Date
- 2012-05-31, 2012-05
- Description
-
High-Performance Computing (HPC) has passed the Petascale mark and is moving toward Exascale. As system ensemble sizes continue to grow, failures become the norm rather than the exception during the execution of parallel applications. Resilience is widely recognized as one of the key obstacles on the road to Exascale computing. Checkpointing is currently the de facto fault tolerance mechanism for parallel applications. However, parallel checkpointing at scale usually generates bursts of concurrent I/O requests, imposes considerable overhead on I/O subsystems, and limits the scalability of parallel applications. Although doubt about the feasibility of checkpointing continues to increase, there is still no promising alternative on the horizon to replace it. MapReduce is a new programming model for massive data processing. It has demonstrated a compelling potential to reshape the landscape of HPC from various perspectives. The resilience of MapReduce applications and its potential to benefit HPC fault tolerance are active research topics that require extensive investigation. This thesis work aims to build a systematic framework to support resilience in large-scale parallel systems. We address the identified checkpointing performance issue through a three-fold approach: reduce the I/O overhead, exploit storage alternatives, and determine the optimal checkpointing frequency. This three-fold approach is achieved with three different mechanisms, namely system coordination and scheduling, utilization of the MapReduce framework, and stochastic modeling. To address the increasing concerns about MapReduce resilience, we also strive to improve the reliability of MapReduce applications and investigate the tradeoffs in programming model selection (e.g., MPI vs. MapReduce) from the perspective of resilience.
This thesis provides a thorough study and a practical solution for solving the outstanding resilience problem of large-scale MPI-based HPC applications and beyond. It makes a noticeable contribution to the state-of-the-art and opens a new research direction for many to follow.
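The thesis's own stochastic model for checkpointing frequency is not reproduced in the abstract; the classical first-order result such models refine is Young's approximation, which balances the cost of writing a checkpoint against the expected rework lost to a failure:

```python
import math

def young_interval(checkpoint_cost_s, mtbf_s):
    """Young's first-order approximation of the optimal compute time
    between checkpoints: tau_opt = sqrt(2 * C * MTBF), where C is the
    time to write one checkpoint and MTBF is the system mean time
    between failures (both in seconds)."""
    return math.sqrt(2.0 * checkpoint_cost_s * mtbf_s)
```

For example, a 60-second checkpoint on a system with a one-day MTBF gives an interval of roughly 3,220 s, i.e. a checkpoint about every 54 minutes; as MTBF shrinks at scale, the formula shows why checkpoint I/O bursts become more frequent and more burdensome.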
Ph.D. in Computer Science, May 2012
- Title
- ASYMPTOTIC SIMILARITY IN TURBULENT BOUNDARY LAYERS
- Creator
- Duncan, Richard D.
- Date
- 2011-05-10, 2011-05
- Description
-
The turbulent boundary layer is one of the most fundamental and important applications of fluid mechanics. Despite great practical interest and its direct impact on frictional drag, among its many important consequences, no theory free of significant inference or assumption exists. Numerical simulations and empirical guidance are used to produce models and adequate predictions, but even minor improvements in modeling parameters or physical understanding could translate into significant improvements in the efficiency of aerodynamic and hydrodynamic vehicles. Classically, turbulent boundary layers and fully-developed turbulent channels and pipes are considered members of the same “family,” with similar “inner” versus “outer” descriptions. However, recent advances in experiments, simulations, and data processing have called this into question, and with it their fundamental physics. To address a full range of pressure-gradient boundary layers, a new approach to the governing equations and physical description of wall-bounded flows is formulated, using a two-variable similarity approach and many of the tools of the classical method with slight but significant variations. A new set of similarity requirements for the characteristic scales of the problem is found, and when these requirements are applied to the classical “inner” and “outer” scales, a “similarity map” is developed, providing a clear prediction of which flow conditions should result in self-similar forms. An empirical model with a small number of parameters and a form reminiscent of Coles’ “wall plus wake” is developed for the streamwise Reynolds stress, and shown to fit experimental and numerical data from a number of turbulent boundary layers as well as other wall-bounded flows. It appears from this model and its scaling with the free-stream velocity that the true asymptotic form of u′² may not become self-evident until Re ≈ 275,000 or δ⁺ ≈ 10⁵, if not higher.
A perturbation expansion, made possible by the novel inclusion of the scaled streamwise coordinate, is used to make an excellent prediction of the Reynolds shear stress in zero-pressure-gradient boundary layers and channel flows, requiring only a streamwise mean velocity profile and the new similarity map. Extension to other flows is promising, though more information about the normal Reynolds stresses is needed. The expansion is further used to infer a three-layer structure in the turbulent boundary layer, and a modified two-layer structure in fully-developed flows, by using the classical inner and logarithmic profiles to determine which portions of the boundary layer are dominated by viscosity, inertia, or turbulence. A new inner function for U⁺ is developed, based on the three-layer description, providing a much simpler representative form of the streamwise mean velocity nearest the wall.
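The thesis's own models are not given in the abstract; for orientation, the classical Coles “wall plus wake” composite that the new Reynolds-stress model is said to resemble can be written down directly, with the constants κ, B and Π taking commonly quoted textbook values rather than values fitted in this work:

```python
import math

def coles_wall_wake(y_plus, y_over_delta, kappa=0.41, B=5.0, Pi=0.55):
    """Coles' law of the wall plus wake for the mean streamwise velocity
    in the log region of a turbulent boundary layer:
    U+ = (1/kappa)*ln(y+) + B + (2*Pi/kappa)*sin^2((pi/2)*(y/delta))."""
    log_law = math.log(y_plus) / kappa + B        # logarithmic law of the wall
    wake = (2.0 * Pi / kappa) * math.sin(0.5 * math.pi * y_over_delta) ** 2
    return log_law + wake
```

The wake term vanishes at the wall and contributes its full strength 2Π/κ at the boundary-layer edge, which is the kind of inner-plus-outer composite form the thesis generalizes to the streamwise Reynolds stress.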
Ph.D. in Mechanical and Aerospace Engineering, May 2011
- Title
- MECHANICAL PROPERTIES AND SINTERING MECHANISMS OF POWDER METALLURGY TI6AL4V
- Creator
- Xu, Xiaoyan
- Date
- 2013, 2013-05
- Description
-
Titanium has been identified as one of the key materials with a high strength-to-weight ratio that can reduce the weight of components and thereby reduce energy consumption. Single press-and-sinter, as a powder metallurgy technique, has the potential to provide cost-effective components. Armstrong prealloyed Ti6Al4V, HDH prealloyed Ti6Al4V, HDH blended Ti6Al4V powder and their mixtures were pressed and sintered under different conditions. The chemistry and the mechanical and microstructural properties were investigated to establish optimum processing parameters. Sintered parts were sent to Oshkosh Truck for testing and comparison with aluminum and steel parts. The titanium and Ti6Al4V parts were successfully applied and tested; all the specimens passed the load test without failure. The sintering mechanisms of Armstrong prealloyed Ti6Al4V powder were investigated. At relative sintered densities of 75% to 90% (around 900°C), surface diffusion cooperates with grain boundary diffusion, which leads to densification of the powder compact. Around 900°C, grain boundary diffusion controls the sintering process. At 1000°C, boundary diffusion made little contribution to the densification of the Ti6Al4V powder compact. Above 900°C and below 91% sintered density, boundary diffusion controls sintering. Lattice diffusion dominates the densification process at higher temperatures (1100°C~1300°C). The sintering of master alloy blended Ti6Al4V powder has been investigated in order to elucidate the mechanism of sintering. Both blended powder compacts and diffusion couples were investigated using backscattered imaging and energy dispersive analysis to determine the phases present and the diffusion path on sintering at 1000°C and 1100°C.
It is shown that transient liquid phase sintering does not occur and that the rapid sintering of this material is due to enhanced diffusion kinetics resulting from a combination of the concentration gradient and the stress induced by a phase transformation in the ternary system.
Ph.D. in Materials Science and Engineering, May 2013
- Title
- EUTECTIC γ(NI)/γ′(NI3AL)-δ(NI3NB) POLYCRYSTALLINE NICKEL-BASE SUPERALLOYS: CHEMISTRY, PROCESSING, MICROSTRUCTURE AND PROPERTIES
- Creator
- Xie, Mengtao
- Date
- 2012-12-03, 2012-12
- Description
-
Directionally solidified γ(Ni)/γ′(Ni3Al)-δ(Ni3Nb) eutectic alloys possess attractive high-temperature mechanical properties and were considered as candidate turbine blade materials. Currently, the properties of polycrystalline γ/γ′-δ alloys are of interest as they inherit many advantageous attributes from the directionally solidified γ/γ′-δ alloys, including a high volume fraction of reinforcing phases, exceptional thermal stability and resistance to segregation-induced defect formation. If these attributes are properly harnessed, these γ/γ′-δ eutectic alloys might provide a unique solution to the problems experienced by traditional γ/γ′ polycrystalline Ni-base superalloys. This thesis is therefore dedicated to the development of a fundamental understanding of this novel class of eutectic alloys from several important perspectives. To enrich our understanding of this alloy system, the thesis first focuses on quantifying the specific effect of individual alloying elements on this γ/γ′-δ eutectic system. A set of quaternary Ni-Cr-Al-Nb alloy compositions with increasing levels of chromium (Cr) was designed to investigate the detailed influence of this element on primary phase formation, solidus and liquidus temperatures and γ-δ eutectic morphology. The alloying effect of tantalum (Ta), which shares many similarities with niobium (Nb), was studied by designing a matrix of multi-component γ/γ′-δ alloy compositions with nominally the same overall (Ta+Nb) content but varying Ta/Nb ratios. Here, the different solidification segregation and solid-state partitioning behaviors of Ta and Nb in this γ/γ′-δ eutectic system are discussed, as well as the influence of the Ta/Nb ratio on solidification characteristics and equilibrium/non-equilibrium phase volume fractions. Thermodynamic calculations using the Computherm Pandat database (PanNi7) were compared to experimental results in these investigations.
The second part of this thesis aims to provide a more general understanding of the effect of various alloying elements, including Cr, Co, Al, Ti, Mo, W, Ta and Nb, on this γ/γ′-δ system. A large number of experimental γ/γ′-δ alloys covering a broad range of compositions was selected for analysis in this study. Important alloy attributes, such as primary phase formation, overall δ volume fraction, phase transformation temperatures and ternary eutectic initiation, were quantitatively characterized as a function of individual alloying element concentrations or the combined content of multiple elements. Linear regression analysis was performed to reveal the relative effectiveness of these elements in this eutectic system. Meanwhile, an extensive comparison between the experimental observations and the Pandat predictions was provided to critically evaluate the strengths and weaknesses of the existing thermodynamic database model in predicting trends in this eutectic alloy system, which has substantially higher Nb content than traditional γ/γ′ superalloys. The last part of this thesis emphasizes the development of cast and wrought manufacturing processes for cast γ/γ′-δ eutectic alloys as a cost-effective alternative to the powder metallurgy route. Hot rolling of workpieces encapsulated within a steel can was performed on a simple model cast γ/γ′-δ alloy (897) to simulate ingot-to-billet conversion. The influence of different deformation levels on breaking down the dendritic structure and promoting a fine, homogenized microstructure was investigated. The mechanical soundness associated with the different microstructures generated by different hot rolling processes was compared via compression and creep testing. Microstructural parameters that contribute to better mechanical properties are discussed.
Ph.D. in Materials Science and Engineering, December 2012