Search results
(41 - 60 of 1,036)
- Title
- PHYSICS-PRESERVING FINITE DIFFERENCE SCHEMES FOR THE POISSON-NERNST-PLANCK EQUATIONS
- Creator
- Flavell, Allen
- Date
- 2014, 2014-07
- Description
-
The Poisson-Nernst-Planck equations are a system of nonlinear differential equations that describe the flow of charged particles in solution. This dissertation concerns the design of numerical schemes that solve this system while preserving global properties exhibited by the system. There are two major advances presented. The first is the design of schemes that conserve mass globally when the system is coupled with no-flux boundary conditions. Most notably, a scheme using central differencing and TR-BDF2 is presented that achieves second-order accuracy in both space and time while also conserving global mass. The second is the design of a more general scheme that preserves the time-varying properties of the free energy of the system. One such scheme uses central differencing in space and trapezoidal integration in time to achieve second-order accuracy in both space and time while also preserving the energy dynamics, but at the cost of requiring positivity of the solution. There is also a discussion of solution methods: the classic Newton iteration scheme is compared with a modified Gummel iteration scheme for the purpose of solving the transient equations. The intended application of this work is the modeling of ion channels, and many of the simulations presented use parameters consistent with models of ion channels.
Ph.D. in Applied Mathematics, July 2014
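The mass-conservation property described in this abstract can be illustrated with a minimal sketch. Note the hedges: the dissertation's scheme uses TR-BDF2 time stepping and couples drift through the Poisson equation, while the sketch below uses pure diffusion and explicit Euler, purely to show why a conservative (flux-form) central-difference update with no-flux boundaries preserves total mass to round-off.

```python
import numpy as np

# Minimal 1D illustration of discrete mass conservation with central
# differencing and no-flux boundary conditions (diffusion only; the full
# PNP system also has drift, and the dissertation uses TR-BDF2, not Euler).
N, L, D, dt, steps = 50, 1.0, 1.0, 1e-4, 200
dx = L / N
x = np.linspace(dx / 2, L - dx / 2, N)        # cell centers
c = np.exp(-((x - 0.5) ** 2) / 0.01)          # initial concentration
mass0 = c.sum() * dx                          # total mass at t = 0

for _ in range(steps):
    flux = np.zeros(N + 1)                    # fluxes at the N+1 cell faces
    flux[1:-1] = -D * (c[1:] - c[:-1]) / dx   # central difference, interior faces
    # flux[0] = flux[-1] = 0 encodes the no-flux boundary conditions
    c -= dt * (flux[1:] - flux[:-1]) / dx     # conservative (flux-form) update

mass1 = c.sum() * dx                          # equals mass0 to round-off
```

Because every interior flux appears once with each sign in the update, the cell-wise masses telescope and only the (zero) boundary fluxes can change the total.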
- Title
- HYBRID TO SOCIAL CONDENSER: COMPETING APPROACHES TO MIXED-USE DEVELOPMENT
- Creator
- Zagow, Maged
- Date
- 2016, 2016-12
- Description
-
In the last two decades, mixed use has taken center stage in urban planning and development in the United States. The research frequently cites this development as a model that can address a variety of socioeconomic problems. It has also enjoyed a recent surge in popularity in redeveloping cities by providing more affordable housing opportunities, ensuring safety, reducing auto-dependency, and providing a sense of place and community. However, its affordability, physical design, and outcomes are highly variable. This study is particularly interested in whether and how mixed use affects the socioeconomic configuration of the built environment. This study uses multilevel data from the county level to the zip code level that represents all US neighborhoods. I use different implementation methods of mixed-use development and different cultural and historical backgrounds to examine the data. The study adopts six mixed-use models that present different methodological interactions between socioeconomic spatial metrics and urban forms. These models represent the realistic constraints of urban geometry and of the socioeconomic structure that comprises the characteristics of race, income, accessibility, safety, adjacency, environment, and density. This study finds that the built environment produces a rich landscape of information that appears to guide the opportunities for facilities. The analysis shows that mixed-use development may have certain effects on the number of facilities, housing, income, diversity, crime rate, employment, health, and environment. The analysis of this research works in two dimensions. The first is urban models (Hybrid and Social Condenser in general, and under the two categories Metropolis and Neighborhood Community).
The second dimension is urban characteristics (zoning programming, land use mix, street fabric), socioeconomic variables (population density, occupied housing, median age, diversity of race, income, and employment rates), and location variation (states and cities). The results confirm that mixing facilities in hybrid communities creates more job opportunities but limits housing affordability, social cohesion, and racial diversity, while Social Condenser models show greater racial diversity, safety, and a healthier environment. These results reflect demands for complexity beyond mixed-use development alone, beyond Jane Jacobs' requirements, and beyond the designation of selected mixed-use zones. This study contributes to the study of how mixed-use development models shift under various social and economic conditions. The findings from this study can inform architects, investors, policymakers, economists, and planners about the factors that sustain mixed-use neighborhoods in the United States and beyond. Urban designers will be able to see how the seemingly routine act of laying out mixed-use development can affect the socioeconomic structure of a city. Thus, this study is a useful source for more accurate planning ideas than generic abstract theories or slogans.
Ph.D. in Architecture, December 2016
- Title
- STUDIES ON SYNTHETIC APPLICATIONS OF STEREOSELECTIVE AND REGIOSELECTIVE RING OPENING REACTIONS OF AZIRIDINIUM IONS
- Creator
- Chen, Yunwei
- Date
- 2014, 2014-12
- Description
-
Aziridinium ions are valuable reactive intermediates in organic synthesis. Regioselective and stereoselective ring opening reactions of aziridinium ions can provide various useful building blocks, including optically active vicinal amines, amino alcohols, and amino esters. Aziridinium ions are also involved in the biological processes of anti-cancer agents. However, aziridinium ions are under-utilized in organic synthesis. In this thesis, we utilize stereoselective and regioselective ring opening reactions of aziridinium ions for the synthesis of enantiomerically enriched compounds. Ring opening reactions of aziridinium ions were utilized in intramolecular Friedel-Crafts (FC) reactions for the stereoselective and regioselective synthesis of 4-substituted tetrahydroisoquinolines. A series of β-haloamines were prepared as precursors of aziridinium ions. The reaction conditions for ring opening of aziridinium ions in the FC reactions, including temperature, catalysts, and solvents, were optimized. Further, the reaction mechanism was studied to prove that aziridinium ions form as the key intermediates during the intramolecular FC reaction. The intermolecular nucleophilic ring opening reaction of aziridinium ions was studied as a convenient method of carbon-carbon bond formation. Regioselective and stereoselective nucleophilic substitution reactions of aziridinium ions with indole analogues were carried out for the synthesis of optically active tryptamine analogues. The reactions proceeded smoothly to provide the tryptamine analogues in high yield in the presence of halo-sequestering agents, while the reactions provided the tryptamine products in significantly lower yield in the absence of halo-sequestering agents. Ring opening reactions of aziridinium ions with malonic esters and Grignard reagents were carried out for the synthesis of optically active tryptamine analogues, γ-aminobutyric acid (GABA), and α-amine derivatives.
The regiospecific ring opening reactions of aziridinium ions were directly applied to the synthesis of bifunctional ligands with potential use in targeted therapy and imaging of cancers. The novel bifunctional chelates with a shorter alkyl spacer, C-NETA and 2E-C-NETA, as well as the chelate with a longer alkyl spacer, 5p-C-NETA, were prepared. 5p-C-NETA was conjugated to a cyclic peptide, c(RGDyK), as a targeting moiety for use in targeted radiation therapy. In addition, 2E-C-NETA was conjugated to the fluorescent dye Cy5.5 for theranostic applications. The experimental results indicated that the new bifunctional ligands have promising applications in the biomedical field. In summary, stereoselective and regioselective ring opening reactions of aziridinium ions have been successfully applied to the synthesis of optically active compounds such as 4-substituted tetrahydroisoquinolines, tryptamines, γ-aminobutyric acid, α-amine derivatives, and the bifunctional chelators. We demonstrated that ring opening of versatile aziridinium intermediates is a straightforward and convenient method for the synthesis of various optically active compounds.
Ph.D. in Chemistry, December 2014
- Title
- ELIASHBERG ANALYSIS OF CUPRATE OXIDE SUPERCONDUCTORS
- Creator
- Ahmadi, Omid
- Date
- 2011-11, 2011-12
- Description
-
In this thesis, evidence for antiferromagnetic spin fluctuations as the pairing glue in high temperature superconductors is presented through a modified Eliashberg analysis of experimental tunneling data of Bi2212 over a wide range of doping. In particular, the normalized conductance data of the junctions, from optimal to overdoped, will be fitted at T = 0 K using d-wave Eliashberg equations where the spectral function is modeled after spin fluctuation spectra seen in experiments. The corresponding real and imaginary diagonal and anomalous self-energy curves are extracted and compare well to photoemission experiments. This is followed by a temperature-dependent Eliashberg analysis where the spectral function is now temperature dependent, based on trends seen in inelastic neutron scattering experiments. New results for temperature-dependent self-energy curves are also compared to experiment, with slight deviations. Finally, the Josephson product is calculated as an independent check of the tunneling matrix used in fitting the data.
Ph.D. in Physics, December 2011
- Title
- LARGE-SCALE SIMULATION OF ELECTRIC POWER SYSTEMS FOR WIND
- Creator
- Wei, Tian
- Date
- 2011-08, 2011-07
- Description
-
The utilization of wind energy offers great socioeconomic benefits through reductions in power plant emissions and the supply of zero-cost energy; however, large-scale wind energy integration could introduce inevitable challenges to regional transmission systems and hourly system operations. This thesis addresses congestion identification, the simulation and analysis of large-scale electric power systems in different scenarios, large-scale wind energy integration, and related transmission expansion issues. A methodology based on security-constrained unit commitment (SCUC) is applied to analyze transmission congestion in the Eastern Interconnection of the United States. The identified congestion is visualized along with Geographical Information System (GIS) data and compared with the results of the National Electric Transmission Congestion Study (NETCS) published by the Department of Energy of the United States in 2006. The study also provides locational marginal price (LMP) information for the Eastern Interconnection, which is not available in the NETCS report. This thesis implements a comprehensive simulation and scenario analysis of the Illinois electric power system for the year 2011. Possible scenarios representing electrical load sensitivities to economic growth, fuel price variations, and the impact of carbon cost are studied. This thesis presents hourly simulation results for large-scale wind energy integration in the Eastern Interconnection of the United States. An hourly unit commitment is applied to simulate the economics of wind energy integration in the year 2030. The energy portfolio for supplying the hourly load in 2030 is developed based on wind integration levels. The sensitivities of fuel price, wind energy quantity, load forecast, carbon cost, and load management to the proposed 2030 wind integration are studied.
This thesis identifies transmission congestion and expands the existing transmission system in the Eastern Interconnection of the United States to accommodate large-scale integration of wind energy. Violated transmission flows that would cause infeasibility of the hourly SCUC are identified. An iterative transmission expansion analysis is implemented to identify the minimum required additions to the Eastern Interconnection for mitigating hourly transmission congestion.
Ph.D. in Electrical Engineering, July 2011
- Title
- DIRECT DIFFEOMORPHIC REPARAMETERIZATION FOR CORRESPONDENCE OPTIMIZATION IN STATISTICAL SHAPE MODELING
- Creator
- Li, Kang
- Date
- 2015, 2015-05
- Description
-
This dissertation proposes an efficient optimization approach for obtaining shape correspondence across a group of objects for statistical shape modeling. With each shape represented in a B-spline based parametric form, correspondence across the shape population is cast as the problem of seeking a reparameterization for each shape so that a quality measure of the resulting shape correspondence across the group is optimized. The quality measure is the description length of the covariance matrix of the shape population, with landmarks sampled on each shape. The movement of landmarks on each B-spline shape is controlled by the reparameterization of that shape. The reparameterization itself is also represented with B-splines, and the B-spline coefficients are used as optimization parameters. We have developed formulations for ensuring the bijectivity of the reparameterization. A gradient-based optimization approach is developed, including techniques such as constraint aggregation and adjoint sensitivity, for efficient, direct diffeomorphic reparameterization of landmarks to improve the group-wise shape correspondence. Numerical experiments on both synthetic and real 2D and 3D data sets demonstrate the efficiency and effectiveness of the proposed approach.
Ph.D. in Mechanical and Aerospace Engineering, May 2015
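The description-length quality measure mentioned in this abstract can be sketched concretely. The sketch below uses the commonly cited simplified description-length objective over the eigenvalues of the population covariance (an assumption for illustration, not necessarily the exact objective used in the dissertation); shapes whose landmarks correspond well have a more compact covariance and hence a smaller description length.

```python
import numpy as np

def description_length(shapes, lam_cut=1e-3):
    """Simplified MDL-style objective over a shape population.

    shapes: (n_shapes, n_landmarks * dim) array of flattened landmark
    coordinates. Hypothetical simplified form: modes whose covariance
    eigenvalue exceeds lam_cut pay a logarithmic cost; tiny modes pay a
    linear one, so tighter correspondence gives a smaller value.
    """
    X = shapes - shapes.mean(axis=0)           # center the population
    cov = X @ X.T / (shapes.shape[0] - 1)      # dual covariance (n x n)
    lam = np.clip(np.linalg.eigvalsh(cov), 0.0, None)
    big = lam >= lam_cut
    return (1.0 + np.log(lam[big] / lam_cut)).sum() + (lam[~big] / lam_cut).sum()
```

In a correspondence optimizer of the kind described above, each shape's reparameterization coefficients would be adjusted (here by gradient descent with adjoint sensitivities) to drive this scalar down.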
- Title
- DEPTH MAP PROCESSING FOR MULTI-VIEW VIDEO PLUS DEPTH
- Creator
- Vijayanagar, Krisha Rao
- Date
- 2014, 2014-05
- Description
-
The world of multimedia and visual entertainment has grown in leaps and bounds in the past decade, with 3-D television being one of the biggest technologies. Amongst several formats proposed for representing 3-D content, the multi-view video plus depth (MVD) format has gained a lot of interest in the past few years. MVD requires that each view of a particular scene be accompanied by a per-pixel depth. This introduces new problems for compression and transmission of MVD content because a depth map has different characteristics from a color image. Keeping the MVD format and depth map characteristics in mind, we highlight three major problems that plague the MVD format, namely: (1) depth map refinement; (2) depth map compression; and (3) novel view synthesis using the depth map at the decoder side. In order to refine a depth map, we propose a multi-resolution anisotropic diffusion algorithm that is optimized to run in real time, thus ensuring that the encoder does not suffer from additional latency. Next, we propose two unique solutions for compressing depth maps. We first propose a solution based on the Layered Depth Video (LDV) concept, using a rate-distortion optimized quadtree decomposition of the LDV with a novel two-mode block truncation code with improved prediction. We also propose a compression solution using compressive sensing (CS) concepts by creating a hybrid rate-optimized CS codec. This codec achieves two goals: first, block classification to ensure lower decoder complexity, and second, rate-distortion optimization of the measurement rate for each block that is to be compressively sensed. We then look at the view synthesis component of the MVD tool-chain, which is a time-sensitive process. Keeping decoding latency in mind, we propose a lookup-table based approach to the 3-D warping process with a simplified hole-filling algorithm that is not only competitive quality-wise with other schemes but also several times faster.
It is hoped that the presented techniques can be used successfully to create MVD architectures for applications that need low-complexity encoding solutions.
Ph.D. in Electrical Engineering, May 2014
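The anisotropic diffusion idea for depth map refinement mentioned above can be sketched in a few lines. This is a generic Perona-Malik-style step, an assumption for illustration only; the dissertation's multi-resolution, real-time variant differs in detail. The edge-stopping weight g shrinks toward zero across large depth discontinuities, so flat regions are denoised while object boundaries are preserved.

```python
import numpy as np

def anisotropic_diffusion_step(d, kappa=0.1, lam=0.2):
    """One Perona-Malik-style diffusion step on a 2D depth map.

    g(x) = 1 / (1 + (x / kappa)^2) down-weights diffusion across strong
    depth edges; lam is the step size. A hypothetical sketch, not the
    dissertation's optimized multi-resolution algorithm.
    """
    p = np.pad(d, 1, mode="edge")        # replicate the border
    n = p[:-2, 1:-1] - d                 # differences to the four neighbours
    s = p[2:, 1:-1] - d
    w = p[1:-1, :-2] - d
    e = p[1:-1, 2:] - d
    g = lambda x: 1.0 / (1.0 + (x / kappa) ** 2)
    return d + lam * (g(n) * n + g(s) * s + g(w) * w + g(e) * e)
```

In practice the step would be iterated a few times per pyramid level, coarse to fine, which is what makes a real-time implementation plausible.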
- Title
- COMPUTER MODELING OF BREAST LESIONS AND STUDIES OF ANALYZER-BASED X-RAY IMAGING
- Creator
- Garcia, Luis De Sisternes
- Date
- 2011-11, 2011-12
- Description
-
Phase-contrast x-ray imaging is an emerging technique that promises to yield highly sensitive medical images of soft tissue, which is difficult to observe via conventional radiography given its low x-ray attenuation differences. One of these phase-contrast techniques, known as analyzer-based imaging, has demonstrated that highly detailed breast tissue images can be obtained using synchrotron radiation. However, synchrotron facilities are impractical for clinical use. This thesis introduces studies and exposure considerations toward the application of analyzer-based imaging in a clinical environment, particularly in the context of breast imaging. It also introduces a computational breast lesion model that generates randomized three-dimensional phantoms that realistically follow the characteristics observed in real lesions. Moving analyzer-based imaging to clinical application requires consideration of photon noise, inherent in the use of a photon-limited conventional source. We summarize the statistical properties, in the presence of photon noise, of two popular analyzer-based imaging techniques, known as diffraction-enhanced imaging (DEI) and multiple-image radiography (MIR). The statistics for MIR have not been previously derived and are introduced in this thesis. Comparison of the resulting statistical predictions with results obtained by Monte Carlo simulation validated the analysis. An expression for the maximum-likelihood (ML) solution for analyzer-based imaging is presented as a way of minimizing the effects of photon noise in the reconstruction of the object's absorption, refraction, and ultra-small-angle scattering properties, and more practical maximum-likelihood expectation-maximization (ML-EM) and maximum-a-posteriori expectation-maximization (MAP-EM) solutions are also introduced.
The behavior of the ML-EM and MAP-EM solutions was compared to the results produced by the five best-known analyzer-based reconstruction methods using computer simulations. The ML-EM and MAP-EM reconstructions proved closer to the theoretical values because they do not rely on the commonly known limitations and approximations introduced by the other techniques. We introduce the development and evaluation of a new computational breast lesion phantom model that can simulate either masses or microcalcifications. The proposed tool allows the generation of a large number of randomized three-dimensional breast lesion simulations following desired characteristics normally used to describe breast lesions in clinical practice. The initial motivation for developing this new phantom model was to enable the proposed evaluations of analyzer-based imaging. However, the model became a major focus of this thesis because it improves significantly upon those found in the previous literature. The proposed lesion model can be used for evaluation studies across different breast imaging techniques, as well as for training purposes, so it is our hope that it could become an important resource for the broader mammography research community. As part of the lesion modeling research, we also introduce methods to computationally modify experimental mammography and analyzer-based images of breast tissue so that the generated tumor simulations appear realistically embedded within their parenchyma. The realism of the simulated lesion images was evaluated by comparing 83 real tumor cases observed in mammograms with 83 constructed hybrid images in which simulated tumors matching the characteristics observed in the real cases were embedded, with healthy tissue acting as background. As a quantitative comparison, extracted features describing tumor shape and density showed no statistically significant differences between real and simulated tumors.
A known computational tumor classification technique based on tumor shape observed in mammography was implemented and showed no significant performance differences between real and simulated cases, as well as good correlation with previously published performance results on real tumors. To measure realism for use in human observer studies, we conducted a reader study in which 5 experienced radiologists were asked to judge whether each of the 166 images was real or simulated by assigning a score on a 7-point scale. The results were analyzed in a multiple-reader multiple-case statistical framework. The conclusion of the study was that the readers' accuracy in assessing whether the lesions were real or simulated was not significantly better than random chance. This thesis also incorporates a reader study to evaluate the degree to which photon-limited analyzer-based images may be effective for visualization of breast cancer features. Our motivation was to establish the x-ray intensity that would be required to make these methods feasible, the purpose being to serve as a guide in parameter selection for the future design of imaging hardware. We conducted a series of observer studies that quantify the performance of analyzer-based refraction images at different noise levels for the task of identifying subtle details present in breast tumors that are relevant to clinical diagnosis. The cases shown to the readers consisted of hybrid images where simulated lesions of known characteristics were computationally embedded in real breast analyzer-based background images. The original phase-contrast data was obtained using synchrotron radiation and was later modified to simulate the noise and blurring effects produced by a photon-limited source with a 300 μm aperture size, similar to those used in a laboratory environment.
Results showed that the analyzer-based imaging techniques statistically outperformed conventional mammography for the given task with an average of just 128 recorded photons per pixel in background image regions.
Ph.D. in Electrical Engineering, December 2011
- Title
- COOPERATIVE BATCH SCHEDULING FOR HPC SYSTEMS
- Creator
- Yang, Xu
- Date
- 2017, 2017-05
- Description
-
The batch scheduler is an important piece of system software serving as the interface between users and HPC systems. Users submit their jobs via the batch scheduling portal, and the batch scheduler makes a scheduling decision for each job based on its request for system resources and on system availability. Jobs submitted to HPC systems are usually parallel applications whose lifecycle consists of multiple running phases, such as computation, communication, and input/output. The running of such parallel applications can thus involve various system resources, such as power, network bandwidth, I/O bandwidth, and storage, most of which are shared among concurrently running jobs. However, today's batch schedulers do not take the contention and interference between jobs over these resources into consideration when making scheduling decisions, which has been identified as one of the major culprits for both system and application performance variability. In this work, we propose a cooperative batch scheduling framework for HPC systems. The motivation of our work is to take important factors about jobs and the system, such as job power, job communication characteristics, and network topology, into account to make orchestrated scheduling decisions that reduce the contention between concurrently running jobs and alleviate performance variability. Our contributions are the design and implementation of several coordinated scheduling models and algorithms for addressing some chronic issues in HPC systems. The proposed models and algorithms have been evaluated by means of simulation using workload traces and application communication traces collected from production HPC systems. Preliminary experimental results show that our models and algorithms can effectively improve application and overall system performance, reduce HPC facilities' operating costs, and alleviate the performance variability caused by job interference.
Ph.D. in Computer Science, May 2017
- Title
- SPEECH INTELLIGIBILITY AND ACCENTS IN SPEECH-MEDIATED INTERFACES: RESULTS AND RECOMMENDATIONS
- Creator
- Lawrence, Halcyon M.
- Date
- 2013, 2013-07
- Description
-
There continues to be significant growth in the development and use of speech-mediated devices and technology products; however, there is no evidence that non-native English speech is used in these devices, despite the fact that English is now spoken by more non-native speakers than native speakers worldwide. This relative absence of non-native English speech in devices may be due in part to the costs associated with localizing speech devices, but it may also be attributable to the fact that not enough is known about user performance with accented speech in speech-mediated environments. In the absence of targeted research, developers may be relying on existing studies, which focus on perception (impression) of accented speech, as a basis for decision-making. However, perception paints only part of the picture when it comes to understanding how and why people perform in certain ways and in certain environments. Three studies were conducted to answer the following questions: (1) What are the acoustic-phonetic characteristics of negatively- and positively-perceived accented speech, and how are these characteristics related to markers of intelligible speech? (2) How do participants perform on different types of accented-speech tasks? (3) What is the relationship between user perception of accented speech and user performance in response to accented speech? (4) How do participants perform on accented speech tasks of varying complexity? Arising out of this research, there are six recommendations for the use of accented speech in speech-mediated devices. The findings of this study also raise questions about inherent linguistic stereotypes that impact both our perceptions and our choices about the accents we want to hear on our speech devices. A discussion about whether and how these stereotypes can be altered and measured is included. Future research should examine the role of experienced non-native talkers in speech devices.
Results of study one demonstrated that some experienced non-native talkers were positively perceived by raters and may be good candidates for talkers in speech devices. A study like this would explicitly establish whether listeners consistently make native vs. non-native distinctions in their preferences or whether a prestige continuum emerges.
Ph.D. in Technical Communication, July 2013
- Title
- SPECTRUM OBSERVATORY BASED TRAFFIC MODELING AND CHANNEL SELECTION IN SUPPORT OF DYNAMIC SPECTRUM ACCESS
- Creator
- Bacchus, Brent Roger
- Date
- 2015, 2015-05
- Description
-
It is well known that the exponential growth in the popularity of wireless devices has created a demand for radio spectrum that cannot be met with current regulatory policies. Despite the difficulty in procuring access to new spectrum resources, many empirical studies have indicated that the majority of spectrum is in fact unused in the temporal, spatial, and/or spectral domains, representing an untapped wealth that must be exploited. Dynamic Spectrum Access (DSA) is a promising technology which aims to improve the efficiency of future radios and alleviate the issue of spectrum under-utilization. This dissertation utilizes data from the IIT Spectrum Observatory to develop models of channel activity in the Land Mobile Radio (LMR) band (used for critical communication by organizations such as public safety agencies) and shows how such models can be applied to improve the performance of DSA. We demonstrate that LMR traffic may possess multi-timescale behavior, such as clustering and dispersion over different time periods, and propose a novel statistical model to account for these observations based on a multiple-emission hidden Markov model. We then use this model to design a collision-constrained channel selection algorithm that can permit the re-use of licensed spectrum while minimizing interference with incumbent users. The findings in this work are primarily developed for public safety; however, the techniques developed are general enough to be applied to other types of traffic possessing similar characteristics. The proposed model, in particular, is well suited for further analytic work and simulation studies in this area.
Ph.D. in Electrical and Computer Engineering, May 2015
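The channel-activity modeling idea in this abstract can be illustrated with a far simpler stand-in than the dissertation's multiple-emission hidden Markov model: a two-state (idle/busy) Markov chain, whose long-run idle fraction a DSA radio could estimate when ranking candidate channels. The transition probabilities below are arbitrary illustration values.

```python
import random

def simulate_channel(p_busy_to_idle, p_idle_to_busy, steps, seed=0):
    """Simulate channel occupancy as a two-state Markov chain
    (0 = idle, 1 = busy). A deliberately simplified stand-in for the
    dissertation's multiple-emission HMM, just to show the modelling idea."""
    rng = random.Random(seed)
    state, trace = 0, []
    for _ in range(steps):
        trace.append(state)
        if state == 1:
            state = 0 if rng.random() < p_busy_to_idle else 1
        else:
            state = 1 if rng.random() < p_idle_to_busy else 0
    return trace

# The long-run idle fraction approaches p_bi / (p_bi + p_ib) = 0.3 / 0.4
trace = simulate_channel(0.3, 0.1, 20000)
idle_frac = 1 - sum(trace) / len(trace)
```

A collision-constrained selector would prefer channels whose estimated idle probability keeps the predicted collision rate with incumbents below a target threshold.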
- Title
- SYSTEM SUPPORT FOR RESILIENCE IN LARGE-SCALE PARALLEL SYSTEMS: FROM CHECKPOINTING TO MAPREDUCE
- Creator
- Jin, Hui
- Date
- 2012-05-31, 2012-05
- Description
-
High-Performance Computing (HPC) has passed the Petascale mark and is moving forward to Exascale. As the system ensemble size continues to...
Show moreHigh-Performance Computing (HPC) has passed the Petascale mark and is moving forward to Exascale. As the system ensemble size continues to grow, the occurrence of failures is the norm rather than the exception during the execution of parallel applications. Resilience is widely recognized as one of the key obstacles towards Exascale computing. Checkpointing is currently the de-facto fault tolerant mechanism for parallel applications. However, parallel checkpointing at scale usually generates bursts of concurrent I/O requests, imposes considerable overhead to I/O subsystems, and limits the scalability of parallel applications. Despite the doubt in the feasibility of checkpointing continues to increase, there is still no promising alternative on the horizon yet to replace checkpointing. MapReduce is a new programming model for massive data processing. It has demonstrated a compelling potential in reshaping the landscape of HPC from various perspectives. The resilience of MapReduce applications and its potential in benefiting HPC fault tolerance are active research topics that require extensive investigation. This thesis work targets at building a systematic framework to support resilience in large-scale parallel systems. We address the identified checkpointing performance issue through a three-fold approach: reduce the I/O overhead, exploit storage alternatives, and determine the optimistic checkpointing frequency. This three-fold approach is achieved with three different mechanisms, namely system coordination and scheduling, the utilization of MapReduce framework, and stochastic modeling. To deal with the increasing concerns about MapReduce resilience, we also strive to improve the reliability of MapReduce applications, and investigate the tradeoffs in the programming model selection (e.g., MPI v.s. MapReduce) from the perspective of resilience. 
This thesis provides a thorough study and a practical solution to the outstanding resilience problem of large-scale MPI-based HPC applications and beyond. It makes a notable contribution to the state of the art and opens a new research direction for others to follow.
Ph.D. in Computer Science, May 2012
- Title
- ASYMPTOTIC SIMILARITY IN TURBULENT BOUNDARY LAYERS
- Creator
- Duncan, Richard D.
- Date
- 2011-05-10, 2011-05
- Description
-
The turbulent boundary layer is one of the most fundamental and important applications of fluid mechanics. Despite great practical interest, and despite its direct impact on frictional drag among its many important consequences, no theory free of significant inference or assumption exists. Numerical simulations and empirical guidance are used to produce models and adequate predictions, but even minor improvements in modeling parameters or physical understanding could translate into significant improvements in the efficiency of aerodynamic and hydrodynamic vehicles. Classically, turbulent boundary layers and fully-developed turbulent channels and pipes are considered members of the same “family,” with similar “inner” versus “outer” descriptions. However, recent advances in experiments, simulations, and data processing have called this view, and consequently their shared fundamental physics, into question. To address the full range of pressure-gradient boundary layers, a new approach to the governing equations and physical description of wall-bounded flows is formulated, using a two-variable similarity approach and many of the tools of the classical method with slight but significant variations. A new set of similarity requirements for the characteristic scales of the problem is found, and when these requirements are applied to the classical “inner” and “outer” scales, a “similarity map” is developed providing a clear prediction of which flow conditions should result in self-similar forms. An empirical model with a small number of parameters and a form reminiscent of Coles’ “wall plus wake” is developed for the streamwise Reynolds stress, and shown to fit experimental and numerical data from a number of turbulent boundary layers as well as other wall-bounded flows. It appears from this model and its scaling with the free-stream velocity that the true asymptotic form of u′² may not become self-evident until Re ≈ 275,000 or δ⁺ ≈ 10⁵, if not higher.
A perturbation expansion made possible by the novel inclusion of the scaled streamwise coordinate is used to make an excellent prediction of the shear Reynolds stress in zero-pressure-gradient boundary layers and channel flows, requiring only a streamwise mean velocity profile and the new similarity map. Extension to other flows is promising, though more information about the normal Reynolds stresses is needed. This expansion is further used to infer a three-layer structure in the turbulent boundary layer, and a modified two-layer structure in fully-developed flows, by using the classical inner and logarithmic profiles to determine which portions of the boundary layer are dominated by viscosity, inertia, or turbulence. A new inner function for U⁺ is developed, based on the three-layer description, providing a much simpler representative form of the streamwise mean velocity nearest the wall.
Ph.D. in Mechanical and Aerospace Engineering, May 2011
- Title
- MECHANICAL PROPERTIES AND SINTERING MECHANISMS OF POWDER METALLURGY TI6AL4V
- Creator
- Xu, Xiaoyan
- Date
- 2013, 2013-05
- Description
-
Titanium has been identified as one of the key materials with a high strength-to-weight ratio that can reduce the weight of components and thereby reduce energy consumption. Single press-and-sinter, as a powder metallurgy technique, has the potential to provide cost-effective components. Armstrong prealloyed Ti6Al4V, HDH prealloyed Ti6Al4V, HDH blended Ti6Al4V powder, and their mixtures were pressed and sintered under different conditions. The chemistry, mechanical properties, and microstructure were investigated to establish optimum processing parameters. Sintered parts were sent to Oshkosh Truck for testing and compared with aluminum and steel parts. The titanium and Ti6Al4V parts were successfully applied and tested; all of the specimens passed the load test without failure. The sintering mechanisms of Armstrong prealloyed Ti6Al4V powder were investigated. At relative sintered densities of 75% to 90% (around 900°C), surface diffusion cooperates with grain boundary diffusion, which leads to densification of the powder compact. Around 900°C, grain boundary diffusion controls the sintering process. At 1000°C, boundary diffusion makes little contribution to the densification of the Ti6Al4V powder compact. Above 900°C and below 91% sintered density, boundary diffusion controls sintering. Lattice diffusion dominates the densification process at higher temperatures (1100–1300°C). The sintering of master-alloy blended Ti6Al4V powder was also investigated in order to elucidate its mechanism. Both blended powder compacts and diffusion couples were examined using backscattered imaging and energy-dispersive analysis to determine the phases present and the diffusion path on sintering at 1000°C and 1100°C.
It is shown that transient liquid phase sintering does not occur and that the rapid sintering of this material is due to enhanced diffusion kinetics resulting from a combination of the concentration gradient and stress induced by a phase transformation in the ternary system.
Ph.D. in Materials Science and Engineering, May 2013
- Title
- EUTECTIC γ(NI)/γ′(NI3AL)-δ(NI3NB) POLYCRYSTALLINE NICKEL-BASE SUPERALLOYS: CHEMISTRY, PROCESSING, MICROSTRUCTURE AND PROPERTIES
- Creator
- Xie, Mengtao
- Date
- 2012-12-03, 2012-12
- Description
-
Directionally solidified γ(Ni)/γ′(Ni3Al)-δ(Ni3Nb) eutectic alloys possess attractive high-temperature mechanical properties and were considered as candidate turbine blade materials. Currently, the properties of polycrystalline γ/γ′-δ alloys are of interest as they inherit many advantageous attributes from the directionally solidified γ/γ′-δ alloys, including a high volume fraction of reinforcing phases, exceptional thermal stability, and resistance to segregation-induced defect formation. If these attributes are properly harnessed, these γ/γ′-δ eutectic alloys might provide a unique solution to the problems experienced by traditional γ/γ′ polycrystalline Ni-base superalloys. This thesis is therefore dedicated to developing a fundamental understanding of this novel class of eutectic alloys from several important perspectives. To enrich our understanding of this alloy system, the thesis first focuses on quantifying the specific effect of individual alloying elements on this γ/γ′-δ eutectic system. A set of quaternary Ni-Cr-Al-Nb alloy compositions with increasing levels of chromium (Cr) was designed to investigate the detailed influence of this element on primary phase formation, solidus and liquidus temperatures, and γ-δ eutectic morphology. The alloying effect of tantalum (Ta), which shares many similarities with niobium (Nb), was studied by designing a matrix of multi-component γ/γ′-δ alloy compositions with nominally the same overall (Ta+Nb) content but varying Ta/Nb ratios. Here, the different solidification segregation and solid-state partitioning behaviors of Ta and Nb in this γ/γ′-δ eutectic system are discussed, as well as the influence of the Ta/Nb ratio on solidification characteristics and equilibrium/non-equilibrium phase volume fractions. Thermodynamic calculations using the Computherm Pandat database (PanNi7) were compared to experimental results in these investigations.
The second part of this thesis aims to provide a more general understanding of the effect of various alloying elements, including Cr, Co, Al, Ti, Mo, W, Ta, and Nb, on this γ/γ′-δ system. A large number of experimental γ/γ′-δ alloys covering a broad range of compositions was selected for analysis in this study. Important alloy attributes, such as primary phase formation, overall δ volume fraction, phase transformation temperatures, and ternary eutectic initiation, were quantitatively characterized as functions of individual alloying element concentrations or the combined content of several elements. Linear regression analysis was performed to reveal the relative effectiveness of these elements in this eutectic system. Meanwhile, an extensive comparison between the experimental observations and Pandat predictions was provided to critically evaluate the strengths and weaknesses of the existing thermodynamic database in predicting trends in this eutectic alloy system, which has substantially higher Nb content than traditional γ/γ′ superalloys. The last part of this thesis emphasizes the development of cast-and-wrought manufacturing processes for cast γ/γ′-δ eutectic alloys as a cost-effective alternative to the powder metallurgy route. Hot rolling of workpieces encapsulated within a steel can was performed on a simple model cast γ/γ′-δ alloy (897) to simulate the ingot-to-billet conversion. The influence of different deformation levels on breaking down the dendritic structure and promoting a fine, homogenized microstructure was investigated. The mechanical soundness associated with the different microstructures generated by different hot rolling processes was compared via compression and creep testing. Microstructural parameters that contribute to better mechanical properties are discussed.
Ph.D. in Materials Science and Engineering, December 2012
- Title
- AN ADAPTIVE RESCALING SCHEME FOR COMPUTING HELE-SHAW PROBLEMS
- Creator
- Zhao, Meng
- Date
- 2017, 2017-07
- Description
-
In this thesis, we develop efficient adaptive rescaling schemes to investigate interface instabilities associated with moving-interface problems. The idea of rescaling is to map the current time-space onto a new time-space frame such that the interfaces evolve at a chosen speed in the new frame. We couple the rescaling idea with a boundary integral method to demonstrate its efficiency, though it can be applied to Cartesian-grid-based methods in general. As an example, we use the Hele-Shaw problem to examine the efficiency of the rescaling scheme. First, we apply the rescaling scheme to a slowly expanding interface. In the new frame, the evolution is dramatically accelerated, while the underlying physics remains unchanged. In particular, at long times numerical results reveal that there exist nonlinear, stable, self-similarly evolving morphologies. The rescaling idea can also be used to simulate a fast shrinking interface, e.g. the Hele-Shaw problem with a time-dependent gap. In this case, the rescaling scheme slows down the interface evolution in the new frame to remove the severe time-step constraint that makes long-time simulations prohibitive. Finally, we study an analytical solution for the stability of the interface of the Hele-Shaw problem, assuming a small surface tension under a time-dependent flux Q(t). Following [116, 109], we find that the motions of the daughter singularity ζd and the simple singularity ζ0 do not depend on the flux Q(t). We also find a criterion to identify the relation between ζ0 and ζd.
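To illustrate the rescaling idea in the simplest possible setting, the toy sketch below (not the thesis's boundary integral scheme) evolves a circular Hele-Shaw interface of radius R(t), fed by a constant flux, in a rescaled time t̄ chosen so that the interface advances at unit speed in the new frame; the flux value and step sizes are assumptions:

```python
import math

# Toy model: circular interface with radial speed dR/dt = Q / (2*pi*R).
# We march in a rescaled time tbar defined so that dR/dtbar = 1, i.e. the
# interface advances at a chosen (unit) speed in the new frame, removing
# the slowdown of the physical dynamics as R grows.
def speed(R, Q=2.0 * math.pi):
    return Q / (2.0 * math.pi * R)

R, t, tbar, dtbar = 1.0, 0.0, 0.0, 0.01
while tbar < 5.0 - 1e-12:
    dt = dtbar / speed(R)        # physical time spanned by one rescaled step
    t += dt
    R += speed(R) * dt           # forward-Euler update in physical variables
    tbar += dtbar
# R has grown linearly in tbar, while t has covered a much longer physical span.
```

The recovered radius agrees with the exact solution R(t) = sqrt(1 + 2t) of this toy model, showing that the mapping accelerates the evolution without changing the physics.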
Ph.D. in Applied Mathematics, July 2017
- Title
- WIRELESS SCHEDULING IN MULTI-CHANNEL MULTI-RADIO MULTIHOP WIRELESS NETWORKS
- Creator
- Wang, Zhu
- Date
- 2014, 2014-07
- Description
-
Maximum multiflow (MMF) and maximum concurrent multiflow (MCMF) in multi-channel multi-radio (MC-MR) wireless networks have been well studied in the literature. They are NP-hard even in single-channel single-radio (SC-SR) wireless networks when all nodes have uniform (and fixed) interference radii and the positions of all nodes are available. This dissertation studies MMF and MCMF in multi-channel multi-radio multihop wireless networks under the protocol interference model in the bidirectional mode or the unidirectional mode. We introduce a fine-grained network representation of multi-channel multi-radio multihop wireless networks and present some essential topological properties of its associated conflict graph. It was proved that if the number of channels is bounded by a constant (which is typical in practical networks), both MMF and MCMF admit a polynomial-time approximation scheme under the protocol interference model in the bidirectional mode, or in the unidirectional mode with some additional mild conditions. However, the running time of these algorithms grows quickly with the number of radios per node (at least in the sixth order) and the number of channels (at least in the cubic order). Such poor scalability stems intrinsically from the exploding size of the fine-grained network representation upon which those algorithms are built. In Chapter 2 of this dissertation, we introduce a new structure, termed the concise conflict graph, on the node-level links directly. This structure succinctly captures the essential advantage of multiple radios and multiple channels. By exploring and exploiting the rich structural properties of concise conflict graphs, we are able to develop fast and scalable link scheduling algorithms for either minimizing the communication latency or maximizing the (concurrent) multiflow.
These algorithms have running time growing linearly in both the number of radios per node and the number of channels, while not sacrificing the approximation bounds. While the algorithms we develop in Chapter 2 admit a polynomial-time approximation scheme (PTAS) when the number of channels is bounded by a constant, such a PTAS is quite infeasible in practice. Other than the PTAS, all other known approximation algorithms, in both SC-SR and MC-MR wireless networks, resort to solving a polynomial-sized linear program (LP) exactly; the scalability of their running time is fundamentally limited by general-purpose LP solvers. In Chapter 3 of this dissertation, we first introduce the concepts of interference costs and prices of a path and explore their relations with the maximum (concurrent) multiflow. Then we develop purely combinatorial approximation algorithms which compute a sequence of least-interference-cost routing paths along which the flows are routed. These algorithms are faster and simpler, and achieve nearly the same approximation bounds known in the literature. This dissertation also explores the stability analysis of two link scheduling schemes in MC-MR wireless networks under the protocol interference model in the bidirectional mode or the unidirectional mode. Longest-queue-first (LQF) link scheduling is a greedy link scheduling scheme for multihop wireless networks. Its stability performance in single-channel single-radio (SC-SR) wireless networks has been well studied recently. However, its stability performance in multi-channel multi-radio (MC-MR) wireless networks is largely under-explored. We present a closed-form stability subregion for LQF scheduling in MC-MR wireless networks, which is within a constant factor of the network stability region. We also obtain constant lower bounds on the efficiency ratio of LQF scheduling in MC-MR wireless networks under the protocol interference model in the bidirectional or unidirectional mode.
Static greedy link schedulings are much simpler to implement than dynamic greedy link schedulings such as longest-queue-first (LQF) scheduling. However, their stability performance in multi-channel multi-radio (MC-MR) wireless networks is also largely under-explored. In this dissertation, we present a closed-form stability subregion for a static greedy link scheduling in MC-MR wireless networks under the protocol interference model in the bidirectional mode. By adopting certain special static link orderings, the stability subregion comes within a constant factor of the stable capacity region of the network. We also obtain constant lower bounds on the throughput efficiency ratios of static greedy link schedulings under these special static link orderings.
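As a minimal sketch of the greedy principle behind LQF scheduling (illustrative only; the queue lengths and conflict pairs below are hypothetical, and the dissertation's MC-MR setting is far richer than a single conflict graph):

```python
# Longest-queue-first (LQF) greedy link scheduling on a conflict graph.
# Links are considered in decreasing order of queue length; a link is
# scheduled only if it does not conflict with any already-scheduled link.
queues = {"a": 7, "b": 5, "c": 4, "d": 2}          # hypothetical per-link backlogs
conflicts = {("a", "b"), ("b", "c"), ("a", "c")}   # hypothetical interference pairs

def conflict(x, y):
    return (x, y) in conflicts or (y, x) in conflicts

schedule = []
for link in sorted(queues, key=queues.get, reverse=True):
    if all(not conflict(link, s) for s in schedule):
        schedule.append(link)
# Result: the longest-backlogged link "a" is served first, which blocks its
# neighbors "b" and "c"; the conflict-free link "d" is then added.
```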
Ph.D. in Computer Science, July 2014
- Title
- INDUSTRIAL UPGRADING IN KOREA
- Creator
- Lee, Woosik
- Date
- 2014, 2014-05
- Description
-
One of the most difficult obstacles facing non-Western nations is the issue of technology transfer. The main objective of this dissertation is to analyze how South Korea succeeded, through industrial upgrading driven by technology transfer, in achieving the Han River Miracle, making it in 2011 the fourth largest economy in Asia and the ninth largest in the world. From 1910 to 1945, Korean modernization developed continuously under the Japanese war economy and its military policy. Japanese capital, technology, and entrepreneurs were transferred to Korea to supplement the shortages of Japanese industries or to take advantage of the low labor costs in Korea, in preparation for the Sino-Japanese War in 1936 and the Pacific War in 1941. There is no doubt that President Chung-Hee Park (1961-1979) was the architect of the Korean economic miracle. During his authoritarian regime, the government played an important role in the creation and financing of the modern Korean industrial groupings, called the chaebols, and also intervened directly in the formation of their policies. In the 1980s, when the country embarked on financial liberalization, the degree of intervention started to decrease. Finally, the 1997 crisis is examined, with special attention to the introduction of reforms required by the International Monetary Fund (IMF). In the industrial arena, the focus is on the rationalization policies undertaken to increase total factor productivity (TFP). The dissertation covers the currently important industries of steel, automobiles, and semiconductors, as well as the promising industries that have led the development of South Korea's knowledge-intensive economy. An integral part of the analysis studies the repercussions of the 1997 financial reforms on both large and small and medium-sized industries.
Conventional wisdom assumes that it was under President Park's rule that South Korea had its first experience with industrialization. This assumption, however, ignores the significant industrialization that took place during the colonial period. It also does not take into account the admittedly limited industrial development that took place before the 1961 coup d'état, when civilian governments were in charge. This dissertation sheds light on these overlooked periods.
Ph.D. in Management Science, May 2014
- Title
- LOG ANALYSIS FOR RELIABILITY MANAGEMENT IN LARGE-SCALE SYSTEMS
- Creator
- Zheng, Ziming
- Date
- 2012-07-16, 2012-07
- Description
-
With the increasing scale and complexity of high performance computing (HPC) systems, reliability management is becoming a major concern. System logs are the primary source of information for understanding and analyzing system problems. Nevertheless, manual log processing is time-consuming, error-prone, and not scalable, and to date little study has been done on automated log analysis for practical use in HPC systems. In this thesis, we present a log analysis infrastructure that exploits data mining and machine learning technologies. Our work can be broadly divided into four parts: log pre-processing, online failure prediction, automatic root cause diagnosis, and reliability modeling. We evaluate our results by means of system logs collected from production HPC systems. This work can greatly improve our understanding of faults and failures arising from hardware/software components and their interactions, and can further facilitate reliability management for HPC systems.
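A minimal sketch of the log pre-processing step, assuming a simple regex-based template extraction (the messages and masking rules below are hypothetical, not the thesis's actual pipeline):

```python
import re
from collections import Counter

def template(msg):
    # Collapse variable fields so syntactically similar messages share one template.
    msg = re.sub(r'0x[0-9a-fA-F]+', '<HEX>', msg)   # mask hex addresses first
    msg = re.sub(r'\d+', '<NUM>', msg)              # then mask plain integers
    return msg

logs = [
    "node 12 memory error at 0x7ffe01",
    "node 97 memory error at 0x10ab33",
    "fan speed 4200 rpm",
]
# Two distinct node messages collapse into one event template with count 2.
counts = Counter(template(m) for m in logs)
```

Grouping raw messages into event templates like this is typically the prerequisite for downstream steps such as failure prediction and root cause diagnosis.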
Ph.D. in Computer Science, July 2012
- Title
- APPLICATION OF SPECTRUM OBSERVATORY MEASUREMENTS TO SUPPORT TRAFFIC MODEL-BASED DYNAMIC SPECTRUM ACCESS
- Creator
- Taher, Tanim Mohammed
- Date
- 2014, 2014-07
- Description
-
In a 2012 report, the President’s Council of Advisors on Science and Technology (PCAST) published a memorandum calling for the identification of 1000 MHz of Federal spectrum to be shared with private (commercial) users. This dissertation proposes a system that employs RF measurements for spectrum usage modeling, together with Dynamic Spectrum Access (DSA) methodologies that use the modeling information to permit sharing of wireless resources. A procedure called the Comprehensive Band Modeling (CBM) procedure is developed that automatically models measured RF data from any band of interest and identifies the locations of signals and holes present in the band. The output of the CBM procedure is summarized in a compact, versatile format that makes DSA applications feasible. The research focuses primarily on the 450-474 MHz land mobile radio (LMR) band, along with several additional bands such as the TV band and the 2.5-2.7 GHz band; however, the methodology and techniques are broadly applicable to many more frequency ranges. The research has four main areas: (a) spectrum sensor design and measurements, (b) occupancy modeling, (c) communicating the modeling information in a compact form to secondary users to support DSA algorithms and protocols, and (d) tools and metrics for spectrum sharing favorability analysis. Three spectrum sensor platforms were employed in the measurements: (1) a spectrum-analyzer-based Spectrum Observatory (SO) that was developed earlier, (2) a specially purposed software-defined radio (SDR) for measuring LMR channels, and (3) a high-speed, portable SO system based on a sensor called the RFeye. An SO continually measures RF data in a band at a temporal resolution high enough that channel switching activity, e.g. transmitters turning on and off, is seen. Spectrum measurements of the individual RF channels in the 450-474 MHz LMR band and the two commercial bands are used to generate statistical traffic and occupancy models.
Long-term measurement data is used to assess how stationary each channel is and how often the model parameters need to be updated. The spectrum observatory supports a network of Secondary Users (SUs) by communicating the traffic model parameters in a compact format to the SUs, which share Primary User (PU) channels via DSA techniques. The DSA algorithms take advantage of the model parameters provided by the SO to maximize SU throughput with limited interference to the PU. The DSA coexistence techniques are evaluated via simulation, and the simulation results, including Spectrum Opportunity Accessed (SOA), SU throughput, and collision rates, are analyzed to provide an assessment of DSA-based spectrum sharing in each band. The main contribution of this dissertation is the aforementioned CBM procedure. The white spaces in the frequency and time domains, that is, the underutilized spectrum opportunities available for possible secondary use via DSA, are automatically identified, as are the frequency locations that are not conducive to DSA due to frequent primary licensee transmissions. In CBM, white spaces are referred to as ‘Holes’ and the licensed primary transmission frequencies as ‘Signals’. Useful information about the duty cycles and traffic patterns of incumbent users’ activity within possible secondary-use channels is extracted and modeled. The model enables prospective secondary users of white spaces to predict the expected level of interference in any channel, which allows for channel ranking and optimal selection of DSA transmission parameters. The CBM model has a tiered structure: the first tier identifies the holes and signals; the second tier ranks the holes in terms of available bandwidth and incumbent duty cycle; and the third tier models the infrequent incumbent transmissions. With these three tiers of information, an SU can readily identify all the suitable DSA channels within the entire spectrum band.
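As a toy illustration of second-tier hole ranking by incumbent duty cycle (the channel names and occupancy samples below are invented; the actual CBM procedure is far more detailed):

```python
# Rank candidate DSA channels by incumbent duty cycle, computed from
# hypothetical binary occupancy samples (1 = primary signal sensed).
occupancy = {
    "ch_452.1MHz": [0, 0, 1, 0, 0, 0, 0, 0],
    "ch_453.5MHz": [1, 1, 1, 0, 1, 1, 1, 0],
    "ch_460.0MHz": [0, 0, 0, 0, 0, 0, 0, 0],
}
duty = {ch: sum(s) / len(s) for ch, s in occupancy.items()}
ranked = sorted(duty, key=duty.get)   # lowest duty cycle first = best hole
```

A secondary user consulting this ranking would try the idle 460.0 MHz channel first and avoid the heavily used 453.5 MHz channel.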
This essential summary information is retrieved as a “Hole Descriptor Object” (HDO) that is both compact and tractable. Empirical spectrum measurement data obtained from the three SO platforms is used to test the performance of the CBM procedure in the 2500-2700 MHz frequency range, which currently has WiMAX deployments, in the TV white space band, and in the 450-474 MHz LMR band in Chicago. Raw spectrum measurement data runs into hundreds of megabytes or gigabytes and as such is not very usable in practical wireless networks. The HDOs, on the other hand, are compact, only kilobytes in size, and contain all the useful, applicable information necessary for any smart radio (primary or secondary) to select transmission parameters such as frequency of operation and bandwidth so that it can operate efficiently. Thus, the advantage of the CBM procedure is that it summarizes gigabytes of raw spectrum measurements in a usable, compact format that can be used directly by practical smart radios operating under DSA paradigms. Another advantage of CBM is that it is comprehensive and automatically identifies all holes and signals. The research findings are of interest and value to a variety of Federal and commercial entities. The models and relevant model parameters for public safety radio in the LMR band have been provided on request to the Public Safety and Homeland Security Bureau of the Federal Communications Commission (FCC). The DSA feasibility analysis methodology is of great national economic interest given the contents of the PCAST report, which recommends finding 1000 MHz of federal frequencies to be allocated for shared commercial and federal use.
However, the technology for doing so and for identifying the suitable bands requires measurements of actual spectrum usage, modeling of the occupancy and existing traffic activity, and assessment of DSA feasibility; these important research aspects are all addressed in this dissertation. The results are of crucial importance to policy makers such as the FCC and NTIA, who will ultimately make spectrum allocation decisions. A future network of commercial DSA SU radios operating in a shared band is likely to need access to a system providing live information about PU activity in order to operate in the band with high throughput and low interference. The overall system proposed in this thesis, based on the CBM procedure and HDOs, describes a framework for providing this information as a service to DSA networks, and hence the work is also of practical relevance to radio system designers.
Ph.D. in Electrical Engineering, July 2014