Search results
- Title
- DEVELOPMENT OF A NEW EMERGENCY EVACUATION SYSTEM FOR MINES
- Creator
- Qian, Qingyi
- Date
- 2011-08, 2011-07
- Description
-
Underground mining is a very high-risk industry. There are many potential hazards in underground mining; these include fire, explosion, inundation, roof collapse, toxic gases, chemical pollution, etc. Over the past centuries, in the US alone, more than 100,000 miners have lost their lives in different accidents. The primary safety methods used in underground mines concentrate on the monitoring of hazardous gases, fire detection, and ventilation. The use of advanced instruments and monitoring techniques has significantly reduced accidents in modern mines. However, despite the advancement of these monitoring facilities, accidents still occur in underground mines annually around the world, and many miners have been killed because they were trapped and unable to escape due to blocked exit access. This thesis describes the development of a new emergency evacuation system for underground mines and analyzes the advantages and disadvantages of the system. It is expected that the new system will greatly improve emergency exit methods and save more lives in the future. The new emergency evacuation system consists of a vertical concrete mineshaft, high-capacity mineshaft elevators, a surface terminal, and underground support structures. In addition, a numerical simulation study was carried out to observe the ground response during excavation. A typical ground profile for underground mining in southern China was used in this analysis. The results selected from the shaft excavation simulation indicate that the fluid drilling method effectively protects the soil around the mineshaft from the collapse hazard. Compared to soil strength, soil stiffness has a significant influence on the soil response induced by excavating shafts.
Ph.D. in Civil Engineering, July 2011
- Title
- DIRECT DIFFEOMORPHIC REPARAMETERIZATION FOR CORRESPONDENCE OPTIMIZATION IN STATISTICAL SHAPE MODELING
- Creator
- Li, Kang
- Date
- 2015, 2015-05
- Description
-
This dissertation proposes an efficient optimization approach for obtaining shape correspondence across a group of objects for statistical shape modeling. With each shape represented in a B-spline-based parametric form, correspondence across the shape population is cast as the problem of seeking a reparameterization for each shape so that a quality measure of the resulting shape correspondence across the group is optimized. The quality measure is the description length of the covariance matrix of the shape population, with landmarks sampled on each shape. The movement of landmarks on each B-spline shape is controlled by the reparameterization of that shape. The reparameterization itself is also represented with B-splines, and the B-spline coefficients are used as optimization parameters. We have developed formulations for ensuring the bijectivity of the reparameterization. A gradient-based optimization approach is developed, including techniques such as constraint aggregation and adjoint sensitivity, for efficient, direct diffeomorphic reparameterization of landmarks to improve the group-wise shape correspondence. Numerical experiments on both synthetic and real 2D and 3D data sets demonstrate the efficiency and effectiveness of the proposed approach.
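The bijectivity requirement on the reparameterization can be illustrated with a minimal sketch. This is not the thesis's B-spline formulation: a piecewise-linear map stands in for the B-spline, and the function and variable names are hypothetical. The point is only that exponentiating unconstrained optimization variables into positive increments guarantees a strictly increasing, endpoint-preserving map, hence a bijection of [0, 1].

```python
import math

def bijective_reparam(knot_deltas):
    """Build a strictly increasing reparameterization phi: [0,1] -> [0,1]
    from unconstrained variables. Positivity of the increments (via exp)
    guarantees monotonicity, hence bijectivity, with phi(0)=0, phi(1)=1."""
    pos = [math.exp(d) for d in knot_deltas]        # strictly positive increments
    total = sum(pos)
    cum = [0.0]
    for p in pos:
        cum.append(cum[-1] + p / total)             # normalized cumulative sum, ends at 1

    def phi(t):
        # piecewise-linear interpolation between knots (stand-in for B-splines)
        n = len(cum) - 1
        i = min(int(t * n), n - 1)
        frac = t * n - i
        return cum[i] + frac * (cum[i + 1] - cum[i])

    return phi

phi = bijective_reparam([0.3, -1.2, 0.5, 0.0])
```

In an actual correspondence optimization, the `knot_deltas` would be the optimization parameters, and landmarks sampled at parameter values t would move to phi(t) on each shape.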
Ph.D. in Mechanical and Aerospace Engineering, May 2015
- Title
- COOPERATIVE BATCH SCHEDULING FOR HPC SYSTEMS
- Creator
- Yang, Xu
- Date
- 2017, 2017-05
- Description
-
The batch scheduler is an important piece of system software serving as the interface between users and HPC systems. Users submit their jobs via a batch scheduling portal, and the batch scheduler makes a scheduling decision for each job based on its request for system resources and on system availability. Jobs submitted to HPC systems are usually parallel applications, and their lifecycle consists of multiple running phases, such as computation, communication, and input/output. The running of such parallel applications thus involves various system resources, such as power, network bandwidth, I/O bandwidth, storage, etc., and most of these resources are shared among concurrently running jobs. However, today's batch schedulers do not take the contention and interference between jobs over these resources into consideration when making scheduling decisions, which has been identified as one of the major culprits for both system and application performance variability. In this work, we propose a cooperative batch scheduling framework for HPC systems. The motivation of our work is to take important factors about jobs and the system, such as job power, job communication characteristics, and network topology, into account in making orchestrated scheduling decisions that reduce the contention between concurrently running jobs and alleviate the performance variability. Our contributions are the design and implementation of several coordinated scheduling models and algorithms for addressing some chronic issues in HPC systems. The proposed models and algorithms have been evaluated by means of simulation using workload traces and application communication traces collected from production HPC systems. Preliminary experimental results show that our models and algorithms can effectively improve application and overall system performance, reduce HPC facilities' operating cost, and alleviate the performance variability caused by job interference.
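As a toy illustration of resource-aware admission (not the framework proposed above; the job tuples and the single power budget are assumptions made for this sketch), a batch scheduler that accounts for a shared, contended resource alongside node counts can be reduced to a greedy pass:

```python
def schedule(jobs, free_nodes, power_budget):
    """Greedy FCFS pass over (name, nodes, power) job tuples: start a job
    only if both its node request and its estimated power draw fit within
    what remains. Power here stands in for any shared, contended resource;
    network or I/O bandwidth would work the same way."""
    started = []
    for name, nodes, power in jobs:
        if nodes <= free_nodes and power <= power_budget:
            started.append(name)
            free_nodes -= nodes
            power_budget -= power
    return started

# job "B" is blocked by the node limit, so "C" slides past it
picked = schedule([("A", 64, 120.0), ("B", 128, 300.0), ("C", 32, 50.0)],
                  free_nodes=100, power_budget=200.0)
```

A production scheduler would add reservations and backfilling so that a large blocked job is not starved; here later jobs simply slide past it.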
Ph.D. in Computer Science, May 2017
- Title
- SPEECH INTELLIGIBILITY AND ACCENTS IN SPEECH-MEDIATED INTERFACES: RESULTS AND RECOMMENDATIONS
- Creator
- Lawrence, Halcyon M.
- Date
- 2013, 2013-07
- Description
-
There continues to be significant growth in the development and use of speech-mediated devices and technology products; however, there is no evidence that non-native English speech is used in these devices, despite the fact that English is now spoken by more non-native speakers than native speakers worldwide. This relative absence of non-native English speech in devices may be due in part to the costs associated with localizing speech devices, but it may also be attributable to the fact that not enough is known about user performance with accented speech in speech-mediated environments. In the absence of targeted research, developers may be relying on existing studies, which focus on perception (impression) of accented speech, as a basis for decision-making. However, perception paints only part of the picture when it comes to understanding how and why people perform in certain ways and in certain environments. Three studies were conducted to answer the following questions: (1) What are the acoustic-phonetic characteristics of negatively and positively perceived accented speech, and how are these characteristics related to markers of intelligible speech? (2) How do participants perform on different types of accented-speech tasks? (3) What is the relationship between user perception of accented speech and user performance in response to accented speech? And (4) how do participants perform on accented-speech tasks of varying complexity? Arising out of this research, there are six recommendations for the use of accented speech in speech-mediated devices. The findings of this study also raise questions about inherent linguistic stereotypes that impact both our perceptions and our choices about the accents we want to hear on our speech devices. A discussion about if and how these stereotypes can be altered and measured is included. Future research should examine the role of experienced non-native talkers in speech devices. Results of study one demonstrated that some experienced non-native talkers were positively perceived by raters and may be good candidates for talkers in speech devices. A study like this would explicitly establish whether listeners consistently make native vs. non-native distinctions in their preferences or whether a prestige continuum emerges.
Ph.D. in Technical Communication, July 2013
- Title
- OPTIMAL DECISION-MAKING OF INTERDEPENDENT TRANSPORTATION INVESTMENT ALTERNATIVES UNDER RISK AND UNCERTAINTY
- Creator
- Zhou, Bei
- Date
- 2012-07-12, 2012-07
- Description
-
With increasing demand for a more efficient transportation system and decreasing budget levels, transportation investment decision-making that aims to select the optimal project portfolio yielding maximized overall networkwide benefits in terms of economy, society, and environment has become increasingly important. This dissertation conducts an in-depth investigation into project evaluation and project selection, which are crucial steps of transportation decision-making. It begins with an information search through a review of existing methods for project evaluation and selection. Several limitations of existing methods are revealed. In particular, they lack consideration of the network impacts of a single investment project, the interdependencies of simultaneously implementing multiple projects, and restrictions keeping the total risk of the overall benefits of selected projects within an acceptable level. A new methodology is then proposed for networkwide traffic assignment, project evaluation, and project selection. A state-of-the-art large-scale transportation simulation software package, the TRansportation ANalysis and SIMulation System (TRANSIMS) toolbox, is utilized to perform networkwide dynamic traffic assignments to generate the redistributed traffic volumes after project implementation that are needed as inputs for project evaluation. For project evaluation, a life-cycle cost analysis approach is developed to consider all agency costs and user costs in the service life cycle of two primary categories of highway facilities: pavements and bridges. In order to enhance the robustness of the analytical results, risk and uncertainty of input factors concerning traffic volumes, project costs, and discount rates are incorporated into the life-cycle cost computation using @Risk Palisade software, Version 5.5.
For project selection, a two-stage enhanced Knapsack model, a hypergraph Knapsack model, and a two-stage hypergraph Knapsack model are proposed to choose the best sub-collection of interdependent projects yielding maximized overall benefits at various budget levels, while controlling the total risk within an acceptable level. In the two-stage Knapsack model, the Markowitz mean-variance model is utilized for stage-one optimization to generate the minimized total risk of all projects subject to constraints on the available budget and the minimum benefits expected for individual projects. At the second stage, the Knapsack model is enhanced by adding the stage-one optimization solution as one more constraint. Such a treatment helps control the total risk of the overall benefits of all selected projects at a desirable level. Moreover, a hypergraph Knapsack model is introduced to capture project network impacts and interdependency relationships. In order to simultaneously address the issues of networkwide project impacts, interdependencies, and total risk levels, a two-stage hypergraph Knapsack model is developed. Efficient solution algorithms are developed and coded in Frontline Solver Xpress V55 software to solve the two-stage Knapsack model, the hypergraph Knapsack model, and the two-stage hypergraph Knapsack model, respectively. Three computational studies are performed to apply the proposed methodology using two sets of data: six-year data on 672 candidate projects proposed by the Indiana Department of Transportation for state highway programming, and 6 mega projects proposed by the Illinois State Toll Highway Authority for tollway network major capital improvements.
It was generally found that the two-stage Knapsack model could readily control the total risk of the overall benefits of selected projects at a desirable level, but it may result in significant changes in the overall benefits for different budget levels where significant differences in risk are associated with individual projects. The hypergraph Knapsack model could effectively handle the issues of networkwide project impacts and interdependency relationships. However, the two-stage hypergraph Knapsack model appears to be the most robust in that it could simultaneously resolve the issues of networkwide project impacts, interdependency relationships, and the total risk of overall project benefits, thus generating the most reliable information to support rational transportation investment decision-making.
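A drastically simplified sketch of the two-stage idea, with hypothetical numbers and independent project variances (the dissertation's formulation uses the full Markowitz mean-variance model, LP/Knapsack solvers, and hypergraph interdependencies, all omitted here):

```python
from itertools import combinations

def two_stage_knapsack(projects, budget, risk_slack):
    """Two-stage selection over (cost, benefit, variance) projects.
    Stage one: find the minimum total risk any feasible portfolio achieves.
    Stage two: maximize benefit among portfolios whose risk stays within
    that minimum scaled by (1 + risk_slack). Variances are treated as
    independent, so portfolio risk is a plain sum (no covariance terms)."""
    idx = range(len(projects))
    feasible = [s for r in range(1, len(projects) + 1)
                for s in combinations(idx, r)
                if sum(projects[i][0] for i in s) <= budget]
    min_risk = min(sum(projects[i][2] for i in s) for s in feasible)  # stage one
    cap = min_risk * (1 + risk_slack)
    best = max((s for s in feasible                                   # stage two
                if sum(projects[i][2] for i in s) <= cap),
               key=lambda s: sum(projects[i][1] for i in s))
    return sorted(best)

# the high-variance project 0 is priced out by the risk cap
portfolio = two_stage_knapsack([(4, 10, 9.0), (3, 6, 1.0), (5, 11, 4.0)],
                               budget=8, risk_slack=4.0)
```

Brute force over subsets is exponential, of course; the dissertation develops efficient solution algorithms instead, but the mechanics of capping portfolio risk with a stage-one bound are the same.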
Ph.D. in Civil Engineering, July 2012
- Title
- SYSTEM SUPPORT FOR RESILIENCE IN LARGE-SCALE PARALLEL SYSTEMS: FROM CHECKPOINTING TO MAPREDUCE
- Creator
- Jin, Hui
- Date
- 2012-05-31, 2012-05
- Description
-
High-Performance Computing (HPC) has passed the Petascale mark and is moving forward to Exascale. As system ensemble sizes continue to grow, the occurrence of failures is the norm rather than the exception during the execution of parallel applications. Resilience is widely recognized as one of the key obstacles on the way to Exascale computing. Checkpointing is currently the de facto fault-tolerance mechanism for parallel applications. However, parallel checkpointing at scale usually generates bursts of concurrent I/O requests, imposes considerable overhead on I/O subsystems, and limits the scalability of parallel applications. Although doubt about the feasibility of checkpointing continues to grow, there is still no promising alternative on the horizon to replace it. MapReduce is a new programming model for massive data processing. It has demonstrated a compelling potential to reshape the landscape of HPC from various perspectives. The resilience of MapReduce applications and its potential to benefit HPC fault tolerance are active research topics that require extensive investigation. This thesis work aims to build a systematic framework to support resilience in large-scale parallel systems. We address the identified checkpointing performance issue through a three-fold approach: reduce the I/O overhead, exploit storage alternatives, and determine the optimal checkpointing frequency. This three-fold approach is achieved with three different mechanisms, namely system coordination and scheduling, the utilization of the MapReduce framework, and stochastic modeling. To deal with the increasing concerns about MapReduce resilience, we also strive to improve the reliability of MapReduce applications and investigate the tradeoffs in programming model selection (e.g., MPI vs. MapReduce) from the perspective of resilience.
This thesis provides a thorough study and a practical solution for the outstanding resilience problem of large-scale MPI-based HPC applications and beyond. It makes a noticeable contribution to the state of the art and opens a new research direction for many to follow.
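For context on the checkpointing-frequency question, the classical first-order baseline is Young's approximation. The thesis derives its frequency from its own stochastic model, so this is only a familiar point of comparison, not its result:

```python
import math

def young_interval(checkpoint_cost_s, mtbf_s):
    """Young's approximation of the compute interval between checkpoints
    that minimizes expected lost work: tau = sqrt(2 * C * MTBF), valid
    when the checkpoint cost C is small relative to the MTBF."""
    return math.sqrt(2.0 * checkpoint_cost_s * mtbf_s)

# a 60 s checkpoint on a machine with a 24 h mean time between failures
tau = young_interval(60.0, 24 * 3600.0)   # roughly 3220 s, i.e. about 54 minutes
```

The square-root dependence is the key point: as systems grow and the MTBF shrinks, the optimal interval shrinks only as its square root, so the fraction of time spent checkpointing grows, which is exactly the scalability pressure the abstract describes.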
Ph.D. in Computer Science, May 2012
- Title
- ASYMPTOTIC SIMILARITY IN TURBULENT BOUNDARY LAYERS
- Creator
- Duncan, Richard D.
- Date
- 2011-05-10, 2011-05
- Description
-
The turbulent boundary layer is one of the most fundamental and important applications of fluid mechanics. Despite great practical interest and its direct impact on frictional drag, among its many important consequences, no theory free of significant inference or assumption exists. Numerical simulations and empirical guidance are used to produce models and adequate predictions, but even minor improvements in modeling parameters or physical understanding could translate into significant improvements in the efficiency of aerodynamic and hydrodynamic vehicles. Classically, turbulent boundary layers and fully developed turbulent channels and pipes are considered members of the same “family,” with similar “inner” versus “outer” descriptions. However, recent advances in experiments, simulations, and data processing have called this into question, and with it their fundamental physics. To address the full range of pressure-gradient boundary layers, a new approach to the governing equations and physical description of wall-bounded flows is formulated, using a two-variable similarity approach and many of the tools of the classical method, with slight but significant variations. A new set of similarity requirements for the characteristic scales of the problem is found, and when these requirements are applied to the classical “inner” and “outer” scales, a “similarity map” is developed, providing a clear prediction of which flow conditions should result in self-similar forms. An empirical model with a small number of parameters and a form reminiscent of Coles’ “wall plus wake” is developed for the streamwise Reynolds stress, and is shown to fit experimental and numerical data from a number of turbulent boundary layers as well as other wall-bounded flows. It appears from this model and its scaling with the free-stream velocity that the true asymptotic form of u′² may not become self-evident until Re ≈ 275,000 or δ⁺ ≈ 10⁵, if not higher.
A perturbation expansion, made possible by the novel inclusion of the scaled streamwise coordinate, is used to make an excellent prediction of the Reynolds shear stress in zero-pressure-gradient boundary layers and channel flows, requiring only a streamwise mean velocity profile and the new similarity map. Extension to other flows is promising, though more information about the normal Reynolds stresses is needed. This expansion is further used to infer a three-layer structure in the turbulent boundary layer, and a modified two-layer structure in fully developed flows, by using the classical inner and logarithmic profiles to determine which portions of the boundary layer are dominated by viscosity, inertia, or turbulence. A new inner function for U⁺ is developed, based on the three-layer description, providing a much simpler representative form of the streamwise mean velocity nearest the wall.
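For readers unfamiliar with the reference, Coles' "wall plus wake" form for the mean velocity, which the Reynolds-stress model above is said to resemble (not reproduce), is conventionally written as:

```latex
U^{+} \;=\; \frac{1}{\kappa}\,\ln y^{+} \;+\; B \;+\; \frac{2\Pi}{\kappa}\,
w\!\left(\frac{y}{\delta}\right),
\qquad
w\!\left(\frac{y}{\delta}\right) \approx \sin^{2}\!\left(\frac{\pi}{2}\,\frac{y}{\delta}\right),
```

with κ ≈ 0.41, B ≈ 5.0, and Π the wake-strength parameter: a logarithmic inner contribution plus a wake function that carries the outer-layer departure from the log law.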
Ph.D. in Mechanical and Aerospace Engineering, May 2011
- Title
- MECHANICAL PROPERTIES AND SINTERING MECHANISMS OF POWDER METALLURGY TI6AL4V
- Creator
- Xu, Xiaoyan
- Date
- 2013, 2013-05
- Description
-
Titanium has been identified as one of the key materials with a high strength-to-weight ratio that can reduce the weight of components and thereby reduce energy consumption. Single press-and-sinter, as a powder metallurgy technique, has the potential to provide cost-effective components. Armstrong prealloyed Ti6Al4V, HDH prealloyed Ti6Al4V, HDH blended Ti6Al4V powder, and their mixtures were pressed and sintered under different conditions. The chemistry, mechanical properties, and microstructure have been investigated to establish optimum processing parameters. Sintered parts were sent to Oshkosh Truck to be tested and compared with aluminum and steel parts. The titanium and Ti6Al4V parts were successfully applied and tested, and all the specimens passed the load test without failures. The sintering mechanisms of Armstrong prealloyed Ti6Al4V powder were investigated. At relative sintered densities of 75% to 90% (around 900°C), surface diffusion cooperates with grain boundary diffusion, which leads to densification of the powder compact. Around 900°C, grain boundary diffusion controls the sintering process. At 1000°C, grain boundary diffusion made little contribution to the densification of the Ti6Al4V powder compact. Above 900°C and below 91% sintered density, grain boundary diffusion controls sintering. Lattice diffusion dominates the densification process at higher temperatures (1100°C~1300°C). The sintering of master-alloy blended Ti6Al4V powder has been investigated in order to elucidate the mechanism of sintering. Both blended powder compacts and diffusion couples were investigated using backscattered imaging and energy-dispersive analysis to determine the phases present and the diffusion path on sintering at 1000ºC and 1100ºC.
It is shown that transient liquid phase sintering does not occur and that the rapid sintering of this material is due to enhanced diffusion kinetics resulting from a combination of the concentration gradient and the stress induced by a phase transformation in the ternary system.
Ph.D. in Materials Science and Engineering, May 2013
- Title
- EUTECTIC γ(NI)/γ′(NI3AL)-δ(NI3NB) POLYCRYSTALLINE NICKEL-BASE SUPERALLOYS: CHEMISTRY, PROCESSING, MICROSTRUCTURE AND PROPERTIES
- Creator
- Xie, Mengtao
- Date
- 2012-12-03, 2012-12
- Description
-
Directionally solidified γ(Ni)/γ′(Ni3Al)-δ(Ni3Nb) eutectic alloys possess attractive high-temperature mechanical properties and were considered candidate turbine blade materials. Currently, the properties of polycrystalline γ/γ′-δ alloys are of interest, as they inherit many advantageous attributes from the directionally solidified γ/γ′-δ alloys, including a high volume fraction of reinforcing phases, exceptional thermal stability, and resistance to segregation-induced defect formation. If these attributes are properly harnessed, these γ/γ′-δ eutectic alloys might provide a unique solution to the problems experienced by traditional γ/γ′ polycrystalline Ni-base superalloys. This thesis is therefore dedicated to developing a fundamental understanding of this novel class of eutectic alloys from several important perspectives. To enrich our understanding of this alloy system, the thesis first focuses on quantifying the specific effect of individual alloying elements on this γ/γ′-δ eutectic system. A set of quaternary Ni-Cr-Al-Nb alloy compositions with increasing levels of chromium (Cr) was designed to investigate the detailed influence of this element on primary phase formation, solidus and liquidus temperatures, and γ-δ eutectic morphology. The alloying effect of tantalum (Ta), which shares many similarities with niobium (Nb), was studied by designing a matrix of multi-component γ/γ′-δ alloy compositions with nominally the same overall (Ta+Nb) content but varying Ta/Nb ratios. Here, the different solidification segregation and solid-state partitioning behaviors of Ta and Nb in this γ/γ′-δ eutectic system are discussed, as well as the influence of the Ta/Nb ratio on solidification characteristics and equilibrium/non-equilibrium phase volume fractions. Thermodynamic calculations using the Computherm Pandat database (PanNi7) were compared to experimental results in these investigations.
The second part of this thesis aims to provide a more general understanding of the effect of various alloying elements, including Cr, Co, Al, Ti, Mo, W, Ta, and Nb, on this γ/γ′-δ system. A large number of experimental γ/γ′-δ alloys covering a broad range of compositions was selected for the analysis in this study. Important alloy attributes, such as primary phase formation, overall δ volume fraction, phase transformation temperatures, and ternary eutectic initiation, were quantitatively characterized as functions of individual alloying element concentrations or of the combined content of several elements. Linear regression analysis was performed to reveal the relative effectiveness of these elements in this eutectic system. Meanwhile, an extensive comparison between the experimental observations and Pandat predictions was provided to critically evaluate the strengths and weaknesses of the existing thermodynamic database model in predicting trends in this eutectic alloy system, which has substantially higher Nb content than traditional γ/γ′ superalloys. The last part of this thesis emphasizes the development of cast-and-wrought manufacturing processes for cast γ/γ′-δ eutectic alloys as a cost-effective alternative to the powder metallurgy route. Hot rolling of workpieces encapsulated within a steel can was performed on a simple model cast γ/γ′-δ alloy (897) to simulate the ingot-to-billet conversion. The influence of different deformation levels on breaking down the dendritic structure and promoting a fine and homogenized microstructure was investigated. The mechanical soundness associated with the different microstructures generated by different hot rolling processes was compared via compression and creep testing. Microstructural parameters that contribute to better mechanical properties are discussed.
Ph.D. in Materials Science and Engineering, December 2012
- Title
- MONITORING, MODELING, AND TREATMENT OF ODORS/ODORANTS AT WATER RECLAMATION PLANTS
- Creator
- Zhang, Yanming
- Date
- 2012-04-23, 2012-05
- Description
-
A thorough study covering odor monitoring, modeling, and treatment, three important aspects of odor control at water reclamation plants (WRPs), has been performed in this research. Measurement of H2S emissions from odor sources was proven to be an essential step in an odor monitoring program. The H2S emission rates were measured from various sources throughout a WRP during 9 sampling events in winter and summer. During summer, both the average and the maximum emission rates of H2S from liquid treatment processes increased significantly compared to those measured during winter. However, for solids-handling processes, the emission rates remained constant because sludge characteristics did not vary throughout the year. The total sulfide concentrations in the liquid treatment processes were higher in preliminary and primary treatment units but at much lower levels in secondary treatment. Rates of H2S emission from the headworks were correlated with daily average wastewater temperature, TKN concentration, and flow rate. AERMOD was used as the modeling tool to evaluate the odor impact of the Egan WRP on the surrounding communities. The emission rates could significantly affect the modeling results. Long-term H2S monitoring increases the possibility of developing the proper emission rate for the worst-case scenario. Excluding nighttime periods from the modeling would avoid overestimation of odor impact and excessive odor control. In the laboratory-scale study of O3 oxidation of H2S, O3 oxidation was proven to be a fast and effective method to remove H2S from the odorous air emitted from wastewater treatment processes. An increased initial O3/H2S ratio enhances the removal rate of H2S. The O3/H2S consumption ratio is a function of the input reactant ratios. A multiple linear regression model (R² = 0.84) has been developed to predict the H2S residual for given initial H2S and O3 concentrations and reaction time.
Increased moisture content of the odorous air enhanced the H2S removal, while DMS and DMDS inhibit H2S removal by competing for the limited O3 supply.
Ph.D. in Environmental Engineering, May 2012
- Title
- AN ADAPTIVE RESCALING SCHEME FOR COMPUTING HELE-SHAW PROBLEMS
- Creator
- Zhao, Meng
- Date
- 2017, 2017-07
- Description
-
In this thesis, we develop efficient adaptive rescaling schemes to investigate interface instabilities associated with moving-interface problems. The idea of rescaling is to map the current time-space frame onto a new one such that the interface evolves at a chosen speed in the new frame. We couple the rescaling idea with a boundary integral method to demonstrate its efficiency, though it can be applied to Cartesian-grid-based methods in general. As an example, we use the Hele-Shaw problem to examine the efficiency of the rescaling scheme. First, we apply the rescaling scheme to a slowly expanding interface. In the new frame, the evolution is dramatically accelerated, while the underlying physics remains unchanged. In particular, at long times numerical results reveal that there exist nonlinear, stable, self-similarly evolving morphologies. The rescaling idea can also be used to simulate a fast-shrinking interface, e.g. the Hele-Shaw problem with a time-dependent gap. In this case, the rescaling scheme slows down the interface evolution in the new frame to remove the severe time-step constraint that makes long-time simulations prohibitive. Finally, we study an analytical solution for the stability of the interface of the Hele-Shaw problem, assuming a small surface tension under a time-dependent flux Q(t). Following [116, 109], we find that the motions of the daughter singularity ζd and the simple singularity ζ0 do not depend on the flux Q(t). We also find a criterion to identify the relation between ζ0 and ζd.
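One common way to formalize such a time-space mapping (a generic sketch; the thesis's particular choice of scaling functions may differ) is to factor the interface position into a time-dependent size and a shape evolving in a rescaled time:

```latex
x \;=\; \bar{R}(t)\,\bar{x}(\bar{t}),
\qquad
\bar{t} \;=\; \int_{0}^{t} \frac{ds}{\rho(s)},
```

where the scale factor R̄(t) carries the overall expansion or shrinkage and the clock ρ(s) is chosen so that the rescaled interface x̄ evolves at a prescribed speed in the new frame, accelerating a slowly expanding interface or slowing a fast collapse without altering the underlying dynamics.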
Ph.D. in Applied Mathematics, July 2017
- Title
- WIRELESS SCHEDULING IN MULTI-CHANNEL MULTI-RADIO MULTIHOP WIRELESS NETWORKS
- Creator
- Wang, Zhu
- Date
- 2014, 2014-07
- Description
-
Maximum multi ow (MMF) and maximum concurrent multi ow (MCMF) in multi-channel multi-radio (MC-MR) wireless networks have been well-studied in...
Show moreMaximum multi ow (MMF) and maximum concurrent multi ow (MCMF) in multi-channel multi-radio (MC-MR) wireless networks have been well-studied in the literature. They are NP-hard even in single-channel single-radio (SC-SR) wireless networks when all nodes have uniform (and xed) interference radii and the positions of all nodes are available. This disertation studies maximum multi ow (MMF) and maximum concur- rent multi ow (MCMF) in muliti-channel multi-radio multihop wireless networks under the protocol interference model in the bidirectional mode or the unidirectional mode. We introduce a ne-grained network representation of multi-channel multi- radio multihop wireless networks and present some essential topological properties of its associated con ict graph. It was proved that if the number of channels is bounded by a constant (which is typical in practical networks), both MMF and MCMF admit a polynomial-time ap- proximation scheme under the protocol interference model in the bidirectional mode or the unidirectional mode with some additional mild conditions. However, the run- ning time of these algorithms grows quickly with the number of radios per node (at least in the sixth order) and the number of channels (at least in the cubic order). Such poor scalability stems intrinsically from the exploding size of the ne-grained network representation upon which those algorithms are built. In Chapter 2 of this dissertation, we introduce a new structure, termed as concise con ict graph, on the node-level links directly. Such structure succinctly captures the essential advantage of multiple radios and multiple channels. By exploring and exploiting the rich structural properties of the concise con ict graphs, we are able to develop fast and scalable link scheduling algorithms for either minimizing the communication latency or maximizing the (concurrent) multi ow. 
These algorithms have running time growing linearly in both the number of radios per node and the number of channels, while not sacrificing the approximation bounds. While the algorithms we develop in Chapter 2 admit a polynomial-time approximation scheme (PTAS) when the number of channels is bounded by a constant, such a PTAS is quite infeasible in practice. Other than the PTAS, all other known approximation algorithms, in both SC-SR wireless networks and MC-MR wireless networks, resort to solving a polynomial-sized linear program (LP) exactly. The scalability of their running time is fundamentally limited by general-purpose LP solvers. In Chapter 3 of this dissertation, we first introduce the concepts of interference costs and prices of a path and explore their relations with the maximum (concurrent) multiflow. Then we develop purely combinatorial approximation algorithms which compute a sequence of least-interference-cost routing paths along which the flows are routed. These algorithms are faster and simpler, and achieve nearly the same approximation bounds known in the literature. This dissertation also explores the stability analysis of two link schedulings in MC-MR wireless networks under the protocol interference model in the bidirectional mode or the unidirectional mode. Longest-queue-first (LQF) link scheduling is a greedy link scheduling in multihop wireless networks. Its stability performance in single-channel single-radio (SC-SR) wireless networks has been well studied recently. However, its stability performance in multi-channel multi-radio (MC-MR) wireless networks is largely under-explored. We present a closed-form stability subregion of LQF scheduling in MC-MR wireless networks, which is within a constant factor of the network stability region. We also obtain constant lower bounds on the efficiency ratio of LQF scheduling in MC-MR wireless networks under the protocol interference model in the bidirectional or unidirectional mode. 
Static greedy link schedulings have much simpler implementations than dynamic greedy link schedulings such as Longest-queue-first (LQF) link scheduling. However, their stability performance in multi-channel multi-radio (MC-MR) wireless networks is largely under-explored. In this dissertation, we present a closed-form stability subregion of a static greedy link scheduling in MC-MR wireless networks under the protocol interference model in the bidirectional mode. By adopting some special static link orderings, the stability subregion is within a constant factor of the stable capacity region of the network. We also obtain constant lower bounds on the throughput efficiency ratios of static greedy link schedulings under these special static link orderings.
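The static greedy idea described in this abstract can be illustrated with a minimal sketch (an illustrative assumption for this listing, not the dissertation's algorithm): links are visited in a fixed ordering, and each link joins the first time slot that contains no conflicting link from the conflict graph.

```python
def greedy_schedule(links, conflicts):
    """Assign each link a time slot so no two conflicting links share a slot.

    links:     iterable of link identifiers, in a fixed (static) ordering
    conflicts: dict mapping each link to the set of links it interferes with
    """
    slots = []  # each slot is a set of mutually non-conflicting links
    for link in links:
        for slot in slots:
            # the link fits in this slot if it conflicts with none of its members
            if not (conflicts.get(link, set()) & slot):
                slot.add(link)
                break
        else:
            slots.append({link})  # no existing slot fits: open a new one
    return slots

# Hypothetical example: links "ab" and "bc" conflict (shared node b),
# while "ab" and "cd" can transmit concurrently.
schedule = greedy_schedule(
    ["ab", "bc", "cd"],
    {"ab": {"bc"}, "bc": {"ab", "cd"}, "cd": {"bc"}},
)
# "ab" and "cd" share a slot; "bc" gets its own.
```

As the abstract notes, the quality of the resulting schedule (and the stability subregion it supports) depends on the static link ordering chosen.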
Ph.D. in Computer Science, July 2014
- Title
- INDUSTRIAL UPGRADING IN KOREA
- Creator
- Lee, Woosik
- Date
- 2014, 2014-05
- Description
-
One of the most difficult obstacles facing non-Western nations is the issue of technology transfer. The main objective of this dissertation is to analyze how South Korea succeeded, through industrial upgrading via technology transfer, in achieving the Han River Miracle, making it by 2011 the fourth largest economy in Asia and the ninth largest in the world. From 1910 to 1945, Korean modernization developed continuously under the Japanese war economy and its military policy. Japanese capital, technology and entrepreneurs were transferred to Korea to supplement the shortages of Japanese industries or to take advantage of the low labor costs in Korea in order to prepare for the Sino-Japanese War in 1936 and the Pacific War in 1941. There is no doubt that President Chung-Hee Park (1961-1979) was the architect of the Korean economic miracle. During his authoritarian regime, the government played an important role in the creation and financing of the modern Korean industrial groupings, called the Chaebols. The government also intervened directly in the formation of their policies. In the 1980s, when the country embarked on financial liberalization, the degree of intervention started to decrease. Finally, the 1997 crisis is examined, with special attention to the introduction of reforms required by the International Monetary Fund (IMF). In the industrial arena, the focus is on the rationalization policies undertaken to increase total factor productivity (TFP). The dissertation covers the currently important industries of steel, automobiles and semiconductors, as well as the promising industries that have led the development of South Korea's knowledge-intensive economy. An integral part of the analysis studies the repercussions of the 1997 financial reforms on both large and small and medium-size industries. 
Conventional wisdom assumes that it was under President Park's rule that South Korea had its first experience with industrialization. This assumption, however, ignores the significant industrialization that took place during the colonial period. It also does not take into account the admittedly limited industrial development that took place before the 1961 coup d'état, when civilian governments were in charge. This dissertation sheds light on these overlooked periods.
Ph.D. in Management Science, May 2014
- Title
- LASER MICROMACHINING, SINTERING, AND LASER-INDUCED PLASMA DEBURRING
- Creator
- Gao, Yibo
- Date
- 2013, 2013-12
- Description
-
Lasers can provide non-mechanical-contact, localized and concentrated energy input to materials, with controlled durations and high spatial resolutions down to a few microns or less. Therefore, lasers have found more and more applications in manufacturing and materials processing, such as laser micromachining (the creation of micro-scale features through laser-induced material removal) and laser sintering. Despite previous research in the literature, many laser-based manufacturing and materials processing areas still require substantial further study. Specifically, the following topics are investigated in this thesis: nanosecond-pulsed laser ablation of silicon carbide at an infrared wavelength, nanosecond laser-induced plasma deburring, two-step nanosecond laser surface texturing, and the fabrication of carbon nanotube (CNT)-ceramic composites through the laser sintering process.
Ph.D. in Mechanical and Aerospace Engineering, December 2013
- Title
- COLLABORATIVE CONSUMPTION: PROFITS, CONSUMER BENEFITS, AND ENVIRONMENTAL IMPACTS
- Creator
- Supangkat, Hendrarto Kurniawan
- Date
- 2014, 2014-05
- Description
-
With increasingly connected consumers and technological advancement, peer-to-peer sharing is emerging as a consumer-led initiative aimed at exploiting slack capacities and lowering the cost of consuming private goods. Sharing is praised for its potential benefits of improving consumer access, consumer surplus, and environmental impact. On the other hand, sharing may pose credible threats to producers because of cannibalization and reduced sales quantity. This thesis is composed of three papers on the subject of peer-to-peer sharing of durable goods, e.g., cars, bikes, gadgets, and household appliances. The first paper studies the pricing and product design decisions of a single-product monopolist in a market. We identify the conditions under which a firm would accommodate or hinder peer-to-peer sharing by pricing the product appropriately. We find that the firm's profit can be enhanced only when the consumer valuation heterogeneity is neither too high nor too low, and the product's intrinsic value is sufficiently high. In addition, contrary to conventional wisdom, we show that sharing does not always improve consumer access to products. Furthermore, some consumers may end up being worse off. Finally, we find that social sharing may enhance or impede product innovation, depending on consumer heterogeneity and the size of sharing groups. In the second paper, we study whether social sharing will encourage or discourage product differentiation. We find that the two ways of expanding the market, one consumer-initiated and one firm-initiated, can be strategic complements or substitutes, depending on consumer heterogeneity, group size, product intrinsic value, and cost structure. We characterize such conditions. For example, we show that accommodating sharing gives the firm a higher incentive to introduce a differentiated product when the product intrinsic value and consumer heterogeneity are both low, or are both high. 
We also extend the study by allowing consumers to endogenously choose their sharing group size, and show that this may enhance or worsen the firm's profit. The third paper focuses on the environmental impact stemming from production and consumption in the presence of peer-to-peer sharing. The product usage of sharing consumers is modeled as a function of capacity congestion and group size. We show that a "danger" zone exists where sharing is profitable for the firm but is not friendly to the environment. When the firm has an influence on the sharing group size (e.g., by promoting sharing programs in metropolitan areas or college towns), the economic incentive and the environmental impact can be aligned. Specifically, we find that stronger congestion effects may induce the producer to promote sharing in larger groups, which in turn results in a more positive environmental impact. Such situations are more likely to occur when the product unit cost is large. Moreover, we characterize conditions under which the firm may prefer heterogeneous networks composed of groups with different sizes, or social networks with lower homophily, while the environmental impact is improved.
Ph.D. in Management Science, May 2014
- Title
- FUNCTIONAL ANALYSIS OF UBIQUITIN-LIKE PROTEIN 4A
- Creator
- Zhao, Yu
- Date
- 2015, 2015-12
- Description
-
Ubiquitin-like protein 4A (Ubl4A) was identified as a housekeeping gene on the X chromosome. It participates in the guided entry of tail-anchored (GET) protein pathway, in which tail-anchored (TA) proteins are transported to the endoplasmic reticulum (ER). However, Ubl4A also has functions unrelated to the GET pathway, such as tumor suppression and DNA damage-mediated apoptosis. To date, the function of Ubl4A in mammals is still largely unknown. We found that either overexpression or knockdown of Ubl4A promoted cell death in a cell culture system. Using an in vivo genetic knockout system, we found that Ubl4A knockout mice displayed high neonatal mortality and had a defect in glycogen synthesis, which is mainly controlled by the key protein kinase Akt. Loss of Ubl4A resulted in the impairment of insulin-induced Akt translocation to the plasma membrane, an essential step for Akt activation. We demonstrated that Ubl4A directly interacted with the actin-related protein 2/3 (Arp2/3) complex and accelerated Arp2/3 complex-dependent actin branching, thereby bringing Akt into proximity with the plasma membrane for activation. Furthermore, we showed that Ubl4A-mediated actin branching also plays important roles in other cellular activities, such as the formation of lamellipodia and filopodia, macrophage phagocytosis, wound healing, and neutrophil chemotaxis. These findings provide new insight into the roles of Ubl4A in cellular function and a molecular basis for the treatment of related human diseases.
Ph.D. in Biology, December 2015
- Title
- APPLICATION-AWARE OPTIMIZATIONS FOR BIG DATA ACCESS
- Creator
- Yin, Yanlong
- Date
- 2014, 2014-07
- Description
-
Many High-Performance Computing (HPC) applications spend a significant portion of their execution time accessing data from files, and they are becoming increasingly data-intensive. For them, I/O performance is a significant bottleneck leading to wasted CPU cycles and the corresponding wasted energy consumption. Various optimization techniques exist to improve data access performance. However, the existing general-purpose optimization techniques are not able to satisfy diverse applications' demands. On the other hand, application-specific optimization is usually a difficult task due to the complexity involved in understanding the parallel I/O system and the applications' I/O behaviors. To address these challenges, this thesis proposes an application-aware data access optimization framework and claims that it is feasible and useful to utilize applications' characteristics to improve the performance and efficiency of the parallel I/O system. Under this framework, an optimization may consist of several basic but challenging steps, including capturing the application's characteristics, identifying the causes of I/O performance degradation, and delivering optimization solutions. To make these steps easier, we design and implement the IOSIG toolkit as an essential system support for the default parallel I/O system. The toolkit is able to profile the applications' I/O behaviors and then generate comprehensive characteristics through trace analysis. With the help of IOSIG, we design several optimization techniques for data layout optimization, data reorganization, and I/O scheduling. The proposed framework has significant potential to boost application-aware I/O optimization. The results prove that the proposed optimization techniques can significantly improve data access performance.
Ph.D. in Computer Science, July 2014
- Title
- THE EUML-ARC PROGRAMMING MODEL
- Creator
- Marth, Kevin
- Date
- 2014, 2014-07
- Description
-
The EUML-ARC programming model shows that the increasing parallelism available on multi-core processors requires evolutionary (not revolutionary) changes in software design. The EUML-ARC programming model combines and extends software technology available even before the introduction of multi-core processors to provide software engineers with the ability to specify software systems that expose abstract, platform-independent parallelism. The EUML-ARC programming model is a synthesis of Executable UML, the Actor model, role-based modeling, split objects, and aspect-based coordination. Computation in the EUML-ARC programming model is structured in terms of semantic entities composed of actor-based agents whose behaviors are expressed in hierarchical state machines. An entity is composed of a base intrinsic agent and multiple extrinsic role agents, all with dedicated conceptual threads of control. Entities interact through their role agents in the context of feature-oriented collaborations orchestrated by coordinator agents. The conceptual threads of control associated with the agents in a software system expose both intra-entity and inter-entity parallelism, which is mapped by the EUML-ARC model compiler to the hardware threads available on the target multi-core processor. The hardware and software efficiency achieved with representative benchmark systems shows that the EUML-ARC programming model and its compiler can exploit multi-core parallelism while providing a productive model-driven approach to software development.
Ph.D. in Computer Science, July 2014
- Title
- DETECTING GNSS SPOOFING ATTACKS USING INS COUPLING
- Creator
- Tanil, Cagatay
- Date
- 2016, 2016-12
- Description
-
Vulnerability of Global Navigation Satellite Systems (GNSS) users to signal spoofing is a critical threat to positioning integrity, especially in aviation applications, where the consequences are potentially catastrophic. In response, this research describes and evaluates a new approach to directly detect spoofing using integrated Inertial Navigation Systems (INS) and fault detection concepts based on integrity monitoring. The monitors developed here can be implemented in positioning systems using INS/GNSS integration via 1) tightly coupled, 2) loosely coupled, and 3) uncoupled schemes. New evaluation methods enable the statistical computation of the integrity risk resulting from a worst-case spoofing attack, without needing to simulate an unmanageably large number of individual aircraft approaches. Integrity risk is an absolute measure of safety and a well-established metric in aircraft navigation. A novel closed-form solution for the worst-case time sequence of GNSS signals is derived to maximize the integrity risk for each monitor, and is used in the covariance analyses. This methodology tests the performance of the monitors against the most sophisticated spoofers, capable of tracking the aircraft position, for example by means of remote tracking or onboard sensing. Another contribution is a comprehensive closed-loop model that encapsulates the vehicle and compensator (estimator and controller) dynamics. A sensitivity analysis uses this model to quantify the leveraging impact of the vehicle's dynamic responses (e.g., to wind gusts or to the autopilot's acceleration commands) on the monitors' detection capability. The performance of the monitors is evaluated for two safety-critical terminal area navigation applications: 1) autonomous shipboard landing and 2) Boeing 747 (B747) landing assisted by Ground Based Augmentation Systems (GBAS). 
It is demonstrated that for both systems, the monitors are capable of meeting the most stringent precision approach and landing integrity requirements of the International Civil Aviation Organization (ICAO). The statistical evaluation methods developed here can be used as a baseline procedure in the Federal Aviation Administration's (FAA) certification of spoof-free navigation systems. The final contribution is an investigation of the effect of INS sensor quality on detection performance, which determines the minimum sensor requirements for standalone GNSS positioning in general en route applications with guaranteed spoofing detection integrity.
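The fault detection concept underlying such monitors can be illustrated with a generic, textbook innovation-based test (a sketch under stated assumptions, not the dissertation's monitors): the discrepancy between a GNSS position measurement and the INS-predicted position is compared against a threshold sized from the innovation's predicted standard deviation.

```python
def innovation_test(gnss_pos, ins_pred_pos, sigma, k=5.0):
    """Flag a potential spoofing fault if the normalized innovation is large.

    gnss_pos, ins_pred_pos: scalar position estimates along one axis (m)
    sigma: predicted standard deviation of the innovation (m)
    k: detection threshold in sigmas (in practice set from an
       integrity-risk allocation, not chosen ad hoc as here)
    """
    innovation = gnss_pos - ins_pred_pos  # GNSS-minus-INS discrepancy
    return abs(innovation) > k * sigma

# Hypothetical values: a consistent measurement raises no alert,
# while a large spoofing-induced offset trips the monitor.
assert not innovation_test(100.0, 100.4, sigma=0.5)
assert innovation_test(100.0, 110.0, sigma=0.5)
```

A key point of the research summarized above is that a worst-case spoofer can shape the GNSS signal time sequence to slip past such a test, which is why the dissertation derives the worst-case sequence and evaluates integrity risk against it.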
Ph.D. in Mechanical and Aerospace Engineering, December 2016
- Title
- Development and Application of an Occupational Odor Hazard Index
- Creator
- Wang, Tingting
- Date
- 2011-04-24, 2011-05
- Description
-
Odors emitted from wastewater treatment and sludge processing facilities may lead to employee complaints regarding discomfort, stress or disease, and affect productivity and worker turnover in Water Reclamation Plants (WRPs). This study reports and assesses a comprehensive method that estimates odor perception and the associated hazards from exposures to odors in a post-digestion dewatering building in a WRP and its vicinity. An Odor Reference Concentration (ORfC) is developed as an index of acceptable odor level. This index is applied to ensure that the majority of building occupants (80 percent or more) do not perceive the odor. It is developed to fill the lack of a uniform standard and method for assessing the hazard to individuals exposed to odors in occupational environments and for regulating odor exposures. A comprehensive odor and odorant concentration database was formulated through a monitoring study in the occupational environment of a post-digestion dewatering building. Odorants are present in the building at concentrations below occupational exposure limits but above odor detection threshold values. This finding indicates that reducing odorant concentrations below exposure limits does not assure an odor-free environment. A model is formulated and validated for this dewatering building, associating odor perception with concentrations of total sulfur compounds and relative humidity, and is used for prediction of indoor odor concentrations under various conditions. Odor and odorant emission rates, as the strength of sources, are input variables of the indoor air quality model. In this study, odor and odorant emission rates from freshly dewatered biosolids in a dewatering building were measured using two widely used dynamic methods, the USEPA flux chamber and the wind tunnel, and the results from the two methods are not significantly different. 
Comparison of the two methods indicates that both can be used to estimate odor and odorant emission rates, but the most effective and efficient method depends on prevailing environmental conditions. The ORfC established from the comprehensive odor and odorant concentration database for this dewatering building is 13 D/T (dilution to threshold). This index is used to evaluate seven control strategies recommended to reduce odor levels. If indoor odor concentrations in the occupational environment exceed the ORfC, then the hazard of odor exposure is unacceptable. Deterministic results of this study indicate that if an appropriate control strategy is applied, odor concentrations in the dewatering building would fall below levels that cause unnecessary stress and other effects. The control strategy focus of this work is reduction of indoor odor perception, but indoor control strategies must not cause outdoor odor problems in surrounding residential areas. Therefore, the potential impact of the recommended control strategies is also investigated in this thesis using the US EPA-recommended air dispersion model AERMOD. Predictions of hydrogen sulfide concentrations in areas surrounding the plant indicate that only one strategy, which proposes to add a new exhaust system in the dewatering building, would cause the ambient hydrogen sulfide concentration to be 7% higher than the odor detection threshold; the other six strategies would not induce odor annoyance in surrounding areas. Acute and long-term ambient hydrogen sulfide exposure limits based on human health and irritation effects would not be violated under any of the seven control strategies.
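The acceptance rule stated in this abstract, that odor exposure is acceptable only if the indoor odor concentration does not exceed the 13 D/T ORfC, can be sketched as follows (the function name and structure are illustrative assumptions; the 13 D/T value is taken from the abstract):

```python
ORFC_DT = 13  # ORfC for the dewatering building, in dilution-to-threshold units

def odor_exposure_acceptable(odor_dt, orfc_dt=ORFC_DT):
    """Return True if the measured odor concentration (D/T) is at or below
    the Odor Reference Concentration; above it, the hazard is unacceptable."""
    return odor_dt <= orfc_dt

assert odor_exposure_acceptable(10)       # below the 13 D/T index: acceptable
assert not odor_exposure_acceptable(20)   # exceeds the ORfC: unacceptable hazard
```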
Ph.D. in Environmental Engineering, May 2011