Search results
(21 - 40 of 1,080)
Pages
- Title
- BIG DATA SYSTEM INFRASTRUCTURE AT EXTREME SCALES
- Creator
- Zhao, Dongfang
- Date
- 2015, 2015-07
- Description
- Rapid advances in digital sensors, networks, storage, and computation, along with their availability at low cost, are leading to the creation of huge collections of data, dubbed "Big Data." This data has the potential to enable new insights that can change the way businesses, science, and governments deliver services to their consumers, and can impact society as a whole. This has led to the emergence of the Big Data Computing paradigm, which focuses on the sensing, collection, storage, management, and analysis of data from a variety of sources to enable new value and insights. To realize the full potential of Big Data Computing, we need to address several challenges and develop suitable conceptual and technological solutions for dealing with them. Today's and tomorrow's extreme-scale computing systems, such as the world's fastest supercomputers, are generating orders of magnitude more data from a variety of scientific computing applications across all disciplines. This dissertation addresses several big data challenges at extreme scales. First, we quantitatively studied through simulations the predicted performance of existing systems at future scales (for example, exascale: 10^18 ops). Simulation results suggested that current systems would likely fail to deliver the needed performance at exascale. Then, we proposed a new system architecture and implemented a prototype that was evaluated on tens of thousands of nodes, on par with the scale of today's largest supercomputers. Microbenchmarks and real-world applications demonstrated the effectiveness of the proposed architecture: the prototype achieved up to two orders of magnitude higher data movement rates than existing approaches. Moreover, the system prototype incorporated features that were not well supported in conventional systems, such as distributed metadata management, distributed caching, lightweight provenance, transparent compression, acceleration through GPU encoding, and parallel serialization. Towards exploring the proposed architecture at million-node scales, simulations were conducted and evaluated with a variety of workloads, showing near-linear scalability and orders of magnitude better performance than today's state-of-the-art storage systems.
Ph.D. in Computer Science, July 2015
- Title
- BIO-COMPOSITE NANOMATERIALS: CHARACTERIZATION, MANIPULATION AND APPLICATION
- Creator
- Kim, Taeyoung
- Date
- 2013, 2013-05
- Description
- Biomaterials are natural or synthetic nonviable materials that interact with biological systems. An ideal biomaterial must be physically and chemically compatible for biological applications in order to avoid toxicity and immune reactions. Collagen is a widely abundant, robust biopolymer found in the extracellular matrix (ECM) of many tissues. In terms of its structure and organization, collagen is a bio-nano material. The use of collagen in stem cell differentiation is particularly enticing since it is now evident that ECM proteins can regulate a cell's fate and growth through their molecular and physical properties. In this thesis research, we characterized the physical and structural properties of collagen and its composite materials. We found that collagen interacts with NaCl and other nanomaterials, such as carbon nanotubes (CNT) and TiO2 nanotubes, resulting in the modification of collagen structure and an increase in its stiffness. In particular, CNT changed collagen structure at both the molecular level and the higher organizational level. The effects of these matrices on cell growth and development were studied. It was found that the collagen/CNT matrix specifies and accelerates stem cell differentiation toward neural cells. Since neurons form the foundation of biological circuits, manipulating neuron growth has the potential to form desired biological circuits. We also examined collagen in vaginal wall connective tissues and found that the alteration of collagen properties in clinical tissues correlates with the onset of pelvic organ prolapse.
Ph.D. in Chemistry, May 2013
- Title
- PROBING THE PAN-GENOME OF LISTERIA MONOCYTOGENES
- Creator
- Deng, Xiangyu
- Date
- 2011-04-26, 2011-05
- Description
- Bacterial pathogens often show significant intra-species variation in ecological fitness, host preference, and pathogenic potential to cause infectious disease. The species Listeria monocytogenes, a facultative intracellular pathogen and the causative agent of human listeriosis, consists of at least three distinct genetic lineages. Two of these lineages predominantly cause human sporadic and epidemic infections, whereas the third lineage has never been implicated in human disease outbreaks despite its overall conservation of many known virulence factors. The genomes of 26 L. monocytogenes strains representing the three lineages were compared using both in silico comparative genomic analysis and high-density, pan-genomic DNA microarray hybridizations. We uncovered 86 genes and 8 small regulatory RNAs that likely make L. monocytogenes lineages differ in carbohydrate utilization and stress resistance during their residence in natural habitats and passage through the host gastrointestinal tract. We also identified 2,330 to 2,456 core genes that define this species, along with an open pan-genome pool that contains more than 4,052 genes. Phylogenomic reconstructions based on 3,560 homologous groups allowed robust estimation of phylogenetic relatedness among L. monocytogenes strains. The pan-genome approach enables accurate co-analysis of DNA sequence and hybridization array data for both core gene estimation and phylogenomic reconstruction. Applying our method to the pan-genome of L. monocytogenes provides new insights into the intra-species genomic diversification, niche expansion, and evolution of this important foodborne pathogen.
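The core-genome/pan-genome distinction described above reduces to simple set operations on gene presence/absence data. A minimal sketch, using hypothetical strain and gene names rather than data from this work:

```python
# Sketch: core- and pan-genome estimation from gene presence/absence sets.
# Strain names and gene sets are hypothetical placeholders, not data from
# the dissertation.
from functools import reduce

genomes = {
    "strain_A": {"geneA", "geneB", "geneC", "geneD"},
    "strain_B": {"geneA", "geneB", "geneD", "geneE"},
    "strain_C": {"geneA", "geneB", "geneF"},
}

# Core genome: genes shared by every strain (intersection of all gene sets).
core = reduce(set.intersection, genomes.values())

# Pan-genome: the union of all genes observed in any strain.
pan = reduce(set.union, genomes.values())
```

In practice the presence/absence calls come from sequence or hybridization-array data, so the sets carry uncertainty; the set algebra itself, however, is exactly this simple.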
Ph.D. in Biology, May 2011
- Title
- OPTIMIZATION AND MARKET CLEARING IN THE POWER SYSTEMS WITH HIGH-LEVEL RENEWABLES
- Creator
- Ye, Hongxing
- Date
- 2016, 2016-05
- Description
- The increasing penetration of renewable energy sources (RES), such as wind and solar power generation, to meet various renewable portfolio standards (RPS) has led to more uncertainties in power systems in recent years. The RES penetration level is expected to increase further in order to reduce emissions and fight climate change. The growing uncertainties caused by RES pose new challenges in power and energy systems; advanced models and technologies are urgently needed to provide secure, affordable, and clean energy to customers. The Security-Constrained Unit Commitment (SCUC) problem is one of the most important tools in modern power systems: it determines the optimal short-term generation schedule, and electricity is priced and settled based on its solution. To manage the uncertainties caused by renewables, it is urgent to develop new SCUC models and solution approaches, and SCUC formulations that consider uncertainty have become a focus of research in recent years. The proposed optimistic robust SCUC combines the idea of robust optimization with the reserve concept in electricity markets. The merit of robust optimization is that its solution can be immunized against any uncertainty, which exactly meets the first priority, reliability, in power system operation. Robust optimization is attractive in theory; however, a solution is robust if and only if the system can survive the worst-case scenario, so the key task is to identify that scenario. Unfortunately, finding the worst-case scenario is in general a non-deterministic polynomial-time hard (NP-hard) problem. This creates issues in satisfying the timeliness requirement that the optimal schedule be obtained quickly (e.g., within several hours) in day-ahead electricity markets. This dissertation proposes a fast solution approach for finding the worst-case scenario by exploiting the special structure of the SCUC problem.

This dissertation also proposes a new market mechanism for managing uncertainties caused by high-level RES based on robust optimization. A new concept, the Uncertainty Marginal Price (UMP), is proposed to charge uncertainty sources and to credit flexible sources. For the first time, explicit price signals are provided and used to manage uncertainties at any level within a robust optimization framework. The proposed mechanism manages uncertainties from both the source side (uncertainty reduction) and the resource side (uncertainty accommodation). In the short term, it provides incentives for RES operators to improve forecasting accuracy (i.e., to reduce uncertainties) and for existing flexible resources (e.g., storage) to participate in uncertainty accommodation. In the long term, the proposed mechanism provides price signals for siting new flexible resources (e.g., energy storage) to accommodate uncertainties from increasing RES penetration.
Ph.D. in Electrical Engineering, May 2016
- Title
- Relationship of Executive Function Patterns and Academic Achievement Across DSM-IV Subtypes of Attention-Deficit/Hyperactivity Disorder
- Creator
- Young, Crystal
- Date
- 2012-07-18, 2012-07
- Description
- The aim of this study was to expand our understanding of the impact of executive function skills on academic achievement in children with ADHD, as well as to investigate trends in executive functioning and academic achievement between ADHD subtypes in an adolescent sample. Each participant completed neuropsychological assessments, including a measure of intelligence and academic functioning as well as several measures of executive functioning. Results suggested that children with ADHD – Combined Type do not significantly differ from children with ADHD – Inattentive Type on tasks of academic or executive functioning. Issues regarding outcomes associated with ADHD as well as diagnostic procedures are discussed.
Ph.D. in Psychology, July 2012
- Title
- SPATIO-TEMPORAL RECONSTRUCTION FOR GATED CARDIAC SPECT
- Creator
- Niu, Xiaofeng
- Date
- 2011-07, 2011-07
- Description
- In myocardial perfusion imaging using single photon emission computed tomography (SPECT), gated acquisition is often used to deal with blur caused by cardiac motion in the resulting images. While this can provide useful information about myocardial function, it also inevitably reduces the signal-to-noise ratio of the acquired data due to gating. In this work, we aim to investigate and evaluate image reconstruction methods for improving the quality of reconstructed images in cardiac gated SPECT imaging. First, we propose a spatio-temporal (4D) reconstruction procedure for gated images based on the use of discrete Fourier transform (DFT) basis functions, wherein the image activity at each spatial location is regulated by a Fourier representation along the gate dimension. The gated images are then reconstructed by determining the coefficients of the Fourier representation. We explore two different reconstruction algorithms: one is a penalized least-squares approach, and the other a maximum a posteriori approach. Our simulation results demonstrate that the use of DFT basis functions in gated imaging can improve the accuracy of the reconstruction. While in gated imaging the tracer distribution is traditionally treated as constant, a recent development is gated dynamic imaging, where the goal is to obtain, from a single acquisition, an image sequence that simultaneously shows both cardiac motion and the change in tracer distribution over the course of imaging. In this work, we further develop and demonstrate a fully 5D (3D space plus time plus gate) reconstruction procedure for cardiac gated, dynamic SPECT imaging, where the challenge is even greater without the use of multiple fast camera rotations. We develop and compare two iterative reconstruction algorithms: one based on the modified block sequential regularized EM (BSREM-II) algorithm, and the other based on a B-splines algorithm. Our simulation results demonstrate that the 5D reconstruction procedure can yield gated dynamic images that show quantitative information for both perfusion defect detection and cardiac motion. Based upon the success of 5D reconstruction, we also study the saliency of 5D images for the detection of perfusion defects. We explore efficient ways to characterize and visualize information pertinent to perfusion defects in a 5D image sequence, and apply various metrics to quantify the degree to which perfusion deficits can be detected. We show that these metrics can be used to produce new types of visualizations, showing wall motion and perfusion information, that may potentially be useful for clinical evaluation. Finally, with the ultimate goal of effective detection of lesion defects for clinical use, we also investigate a direct reconstruction approach to determine a sequence of gated kinetic parameter images from a single acquisition, which can provide information simultaneously for both tracer kinetics and wall motion. To combat the greatly under-determined nature of the problem, we apply smoothness constraints to exploit similarity both among the different gates and within the local spatial neighborhood. The parameter images of the different gates are then determined jointly using maximum a posteriori estimation from all the available image data.
Ph.D. in Electrical Engineering, July 2011
- Title
- EFFECT OF TEMPERATURE, MOISTURE CONTENT AND SOIL SURFACE AREA ON THE BIOLOGICAL DEGRADATION OF BIOSOLIDS ADDED TO SOIL
- Creator
- Wen, Dongqi
- Date
- 2016, 2016-05
- Description
- The objective is to develop and apply an organic matter degradation model (OMDM) that predicts changes in the biological degradation of organic matter added to soils as a function of soil temperature, moisture content, and mineral surface area, using data from field and laboratory measurements. A one-year-long laboratory study was conducted to determine biosolids microbial degradation rates (21 °C and 20% moisture content) for soils from eight different fields. Biosolids microbial degradation rates were also determined for the eight soils under field conditions (10 °C and 10% moisture content) by applying a degradation rate model (DRM). The total mineral surface areas for 27 soil samples were also determined. Regression analysis revealed that the degradation rate was positively associated with mineral soil surface area (R2 = 0.85) and that the biosolids application rate was not significantly associated with the biosolids degradation rate (p = 0.35 > 0.01). The OMDM yields a decomposition rate of 1.04 yr-1 at 30 °C, 20% moisture content, and 10 m2/g soil, and provides correction factors for changes in soil temperature, moisture, and mineral surface area. The model predicted changes in the biological degradation rate of biosolids added to soils with mineral surface areas varying from 1 to 10 m2/g soil under laboratory and field conditions. A comparison of the measured and simulated degradation rates for eight different soils produced R2 = 0.87 for field data, and the model was also able to predict the degradation rate of biosolids for 19 additional soils under field conditions (R2 = 0.89). The model was also applied to agricultural plant litter; the estimated degradation rates at 30 °C, 20% moisture content, and 10 m2/g are 0.45 yr-1 for corn stover, 0.35 yr-1 for oats, 0.26 yr-1 for wheat, and 0.13 yr-1 for millet. This study demonstrates that environmental conditions and soil mineral surface area have a significant impact on the degradation rates of organic matter and must be considered when assessing degradation rates for organic matter added to soil. Higher temperatures, higher moisture content, and finer soils are associated with higher degradation rates.
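The abstract gives the OMDM's reference decomposition rate but not the functional forms of its correction factors. A minimal sketch of one possible multiplicative formulation, with an assumed Q10-style temperature factor and assumed linear moisture and surface-area factors (illustrative assumptions, not the dissertation's fitted model):

```python
# Sketch of a multiplicative correction-factor model in the spirit of the
# OMDM above. The base rate (1.04 yr^-1 at 30 C, 20% moisture, 10 m^2/g)
# comes from the abstract; the factor forms below are illustrative
# assumptions, not the fitted model.

BASE_RATE = 1.04                     # yr^-1 at reference conditions
T_REF, M_REF, SA_REF = 30.0, 20.0, 10.0

def degradation_rate(temp_c, moisture_pct, surface_area, q10=2.0):
    """First-order degradation rate (yr^-1) under the given conditions."""
    f_temp = q10 ** ((temp_c - T_REF) / 10.0)   # assumed Q10 temperature response
    f_moist = moisture_pct / M_REF              # assumed linear moisture response
    f_area = surface_area / SA_REF              # assumed linear surface-area response
    return BASE_RATE * f_temp * f_moist * f_area

# Laboratory (21 C, 20% moisture) vs. field (10 C, 10% moisture) conditions
# from the abstract, at 10 m^2/g:
lab = degradation_rate(21.0, 20.0, 10.0)
field = degradation_rate(10.0, 10.0, 10.0)
```

At the reference conditions all three factors equal 1, so the function returns the base rate; colder, drier conditions yield a lower rate, consistent with the study's qualitative conclusion.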
Ph.D. in Environmental Engineering, May 2016
- Title
- INSTANTANEOUS DETECTION OF SPATIAL GRADIENT ERRORS IN DIFFERENTIAL GNSS
- Creator
- Jing, Jing
- Date
- 2014, 2014-12
- Description
- Differential GNSS (Global Navigation Satellite System) is currently being extended using a high-integrity Ground Based Augmentation System (GBAS) to enable civil aircraft precision approach and landing. Differential GPS carrier phase measurements between ground-based GBAS reference antennas can provide the means to detect and isolate certain signal-in-space (SIS) failures and anomalies that are hazardous to aircraft, most notably ionospheric anomalies and ephemeris failures, which can be characterized as spatial range error gradients. In this research, we develop a monitor capable of instantaneously detecting multi-dimensional spatial gradient faults. The existence of large gradients, while rare, has prompted the development of numerous ground monitors for their detection. One drawback of previously proposed monitors is that their performance for a given ground antenna configuration is dependent on how antennas are paired to form measurement differences. In contrast, in this work a new monitor approach is developed to provide consistent detection performance, regardless of how the antennas are paired, by combining measurements from multiple, spatially separated ground antennas through a null space transformation. It is shown that the 'null space' monitor significantly improves detection performance over existing fault detection algorithms and enables GBAS to support Category III precision approach and landing. The instantaneous carrier phase monitor cannot detect all gradients due to the presence of integer cycle ambiguities. These ambiguities cannot be resolved because the gradient magnitude is unknown a priori. Furthermore, it has been shown that the performance of such monitors is highly dependent on the reference antenna topology. The range of detectable gradients for all carrier phase monitors depends on two factors: the number of antennas and their configuration. One can always expand the detection range by using many antennas, but only at greatly increased operational cost. Optimizing the antenna configuration has been overlooked as a means to improve performance; simple, heuristic arguments typically prevail in the associated siting decisions. For example, when given four antennas it is generally assumed that they should be sited in a square arrangement to balance performance in all directions. However, such heuristics do not provide the maximum detectable range of gradients, and exploiting the freedom to choose the antenna topology can dramatically expand the detectable range. Due to the presence of carrier phase cycle ambiguities, the resulting optimization problem is a constrained, mixed-integer nonlinear programming problem. By solving the nonlinear program, the optimal GBAS antenna topology that maximizes the range of detectable gradients can be found for any number of antennas. For ionospheric fronts, the magnitude of the resulting gradient is bounded; however, orbit ephemeris faults can be arbitrarily large, and the monitor must be able to detect all large gradients. To detect gradients beyond the capability of the carrier phase monitor, code phase measurements are integrated into the monitor. Although we show this allows for the detection of all gradients, the number and spacing of ground antennas may not be suitable for all ground stations. Therefore, dual-frequency carrier phase measurements are also considered to reduce the number of antennas required to achieve the desired detection performance. Finally, a search algorithm is developed to find the antenna topology that enables the null space monitor to detect all hazardous gradients using code and dual-frequency carrier phase measurements.
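The null-space idea described above, combining redundant measurements so that the nominal content cancels and only faults remain, can be illustrated with a generic parity-space sketch. The geometry matrix, states, and fault values below are made-up illustrations, not the GBAS monitor developed in this work:

```python
# Generic parity-space sketch of fault detection via a null-space
# transformation. All matrices and values here are illustrative.
import numpy as np
from scipy.linalg import null_space

# Measurement model: y = H x + f  (x = nominal states, f = fault vector)
H = np.array([[1.0,  0.0],
              [0.0,  1.0],
              [1.0,  1.0],
              [1.0, -1.0]])        # 4 measurements, 2 nominal states

# Columns of N span the left null space of H, so N.T @ H == 0.
N = null_space(H.T)

x = np.array([3.0, -2.0])          # arbitrary nominal states
y_nominal = H @ x
p_nominal = N.T @ y_nominal        # parity vector: ~0 for any fault-free x

fault = np.array([0.0, 0.0, 0.5, 0.0])    # gradient-like error on one measurement
p_fault = N.T @ (y_nominal + fault)       # the fault leaks into the parity space
```

Because the parity vector is identically zero for every fault-free state, a threshold on its magnitude detects faults without estimating the nominal states, independent of how measurements are paired.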
Ph.D. in Mechanical and Aerospace Engineering, December 2014
- Title
- COMPUTATION AND ANALYSIS OF TUMOR GROWTH
- Creator
- Turian, Emma
- Date
- 2016, 2016-05
- Description
- The ability of tumors to metastasize is preceded by morphological instabilities, such as chains or fingers that invade the host environment. Parameters that control tumor morphology may also contribute to its invasive ability. In this thesis, we investigate tumor growth using a two-phase Stokes model. We first examine the morphological changes using the surface energy of the tumor-host interface and investigate its nonlinear dynamics using a boundary integral method. In an effort to understand interface stiffness, we then model the tumor-host interface as an elastic membrane governed by the Helfrich bending energy. Using an energy variation approach, we derive a modified Young-Laplace condition for the stress jump across the interface and perform a linear stability analysis to evaluate the effects of viscosity, bending rigidity, and apoptosis on tumor morphology. Results show that increased bending rigidity relative to the mitosis rate contributes to more stable growth; on the other hand, increased tumor viscosity or apoptosis may lead to an invasive fingering morphology. Comparison with experimental data on glioblastoma spheroids shows good agreement, especially for tumors with high adhesion and low proliferation. Next, we evaluate tumor regression during cancer therapy by a combined modality involving chemotherapy and radiotherapy. The goal is to address the complexities of a vascular tumor (e.g., apoptosis and vascularization) during treatment. We introduce an apoptotic time delay and study its impact on tumor regression using numerical and asymptotic techniques. In particular, we implement the linear-quadratic model and identify two extreme sets of parameter data, namely the slow and fast tumor responses to therapy. Numerical simulations for the slow-response set show good agreement with data representing non-small cell lung carcinoma. Using the evolution equation for tumor radius with time delay, we find that tumors with a shorter time interval to the onset of apoptosis shrink faster.
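The linear-quadratic model mentioned above is the standard radiobiological dose-response, with surviving fraction S = exp(-(alpha*D + beta*D^2)). A minimal sketch with illustrative parameter values (not the parameter sets identified in the dissertation):

```python
# Linear-quadratic (LQ) dose-response sketch. The alpha/beta values are
# illustrative defaults, not fitted parameters from this work.
import math

def surviving_fraction(dose_gy, alpha=0.3, beta=0.03):
    """LQ-model cell surviving fraction after a single dose (Gy)."""
    return math.exp(-(alpha * dose_gy + beta * dose_gy ** 2))

def fractionated_survival(dose_per_fraction, n_fractions, alpha=0.3, beta=0.03):
    """Survival after n equal fractions, assuming full repair between fractions."""
    return surviving_fraction(dose_per_fraction, alpha, beta) ** n_fractions

# e.g. compare 30 fractions of 2 Gy against a single 60 Gy dose
s_fractionated = fractionated_survival(2.0, 30)
s_single = surviving_fraction(60.0)
```

The quadratic term makes fractionated delivery spare more cells than a single equal total dose, which is why the choice of fractionation scheme matters when modeling tumor response to therapy.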
Ph.D. in Applied Mathematics, May 2016
- Title
- Large Scale Integration of Sustainable Energy and Congestion Management in Western Interconnection
- Creator
- Aflaki Khosrosha, Kaveh
- Date
- 2012-07-12, 2012-07
- Description
- Large-scale integration of sustainable energy, such as wind and solar, into the bulk grid can introduce inevitable challenges to regional transmission and generation systems. The most important challenges for the transmission system are congestion management and planning for transmission expansion to transfer the zero-cost generated electricity. Another major challenge is the competition of current fuel-based generation units with zero-cost sustainable energy in the electricity market. In this dissertation, all these challenges are identified and analyzed for a large-scale grid. This thesis presents a new method for studying transmission congestion in the Western Interconnection of the United States; the process involves a Security-Constrained Unit Commitment (SCUC) formulation and applies its results to the analysis of transmission congestion. This thesis also presents results and findings from simulations of system operation in the Western Interconnection with the inclusion of large-scale wind and solar energy integration for the year 2030. High levels of wind and solar energy, with forecasted wind and solar time-series profiles, were integrated into the Western Interconnection grid, and their impact on the different existing types of generation plants is studied. The sensitivity to fuel prices, wind turbine power output, load volatility, and demand-side management, as well as a carbon tax, is analyzed in different possible scenarios. In order to incorporate large-scale sustainable energy into a bulk electricity grid footprint, planned transmission expansion was shown to be necessary. Transmission expansion reduces grid congestion and balances Locational Marginal Prices (LMP). This thesis explores advancements in high-performance computing and visual analytics for economic-based transmission expansion in the Western Electricity Coordinating Council (WECC), based on 2018 and 2029 forecasted data. It identifies transmission congestion and the average LMP for each area, and expands the transmission system while accommodating large-scale wind and solar energy to achieve the Department of Energy's renewable energy vision for the year 2030. An iterative transmission expansion analysis, based on the average LMP for each area, is used to identify the minimum WECC transmission lines required. All results are visualized on a Geographical Information System (GIS) map of North America.
Ph.D. in Electrical Engineering, July 2012
- Title
- CORROSION-RESISTANT ELECTRO-CATALYSTS AND SUPPORTS FOR ELECTROCHEMICAL ENERGY CONVERSION
- Creator
- Wang, Guanxiong
- Date
- 2016, 2016-12
- Description
- Polymer electrolyte fuel cells (PEFCs) convert the chemical energy of fuels (e.g., hydrogen) directly to electrical energy with excellent power density, high efficiency, and zero emissions. Several challenges have delayed the commercialization of fuel cells, one being the high cost and limited durability of carbon-supported platinum-based (Pt/C) electrocatalysts. The lifetime/durability issue is critical, as insufficient durability/reliability of the catalysts affects the lifetime and economic viability of these devices. Carbon support corrosion is a major durability issue since the corrosion reaction is thermodynamically favorable but kinetically sluggish under normal operating conditions. The potential transients that occur during start and stop in automotive applications can lead to electrode potential excursions of up to 1.5 V and contribute to carbon corrosion. The best way to mitigate support corrosion in PEFCs is to replace the carbon supports with alternatives that have high electronic conductivity, surface area, and porosity. This dissertation investigates the following carbon alternatives: (i) tin-doped indium oxide (ITO) and (ii) 1:1 mixed oxides of ruthenia and silica (RSO). Microstructure characterization and electrochemical evaluations, including accelerated stress tests (start-up/shut-down and load cycling protocols), were performed to evaluate ORR activity, fuel cell performance, and electrochemical stability under PEFC operating conditions. The ITO support and 40% Pt/ITO catalysts demonstrated exceptional electrochemical stability (and reasonable ORR activity) in rotating disk electrode (RDE) experiments under accelerated potential cycling that mimicked automotive drive cycles. However, Pt/ITO exhibited poor performance and stability during MEA evaluation in a PEFC. X-ray photoelectron spectroscopy (XPS) was employed to reveal the degradation modes of Pt/ITO during PEFC operation, and it was found that an increase in the surface hydroxide concentration generates a passivating In(OH)3 layer that increases electrode resistance and undermines PEFC performance. The influence of the catalyst support on PEM degradation during PEFC operation was also studied. Rotating ring-disk electrode (RRDE) experiments were employed to estimate the fraction of H2O2 generated during the ORR on the supports (C and RSO) and catalysts (benchmark Pt/C and Pt/RSO). The percentage of H2O2 generated on C and Pt/C was 50% higher than that on RSO and Pt/RSO, explaining the observed oxidative degradation resistance of the PEM with the latter supports/catalysts.
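The RRDE peroxide yield mentioned above is commonly estimated from the disk and ring currents and the ring collection efficiency as %H2O2 = 200 * (I_ring/N) / (I_disk + I_ring/N). A minimal sketch with illustrative current values, not measurements from this work:

```python
# RRDE peroxide-yield estimate from disk/ring currents. The formula is the
# standard one; the example currents and collection efficiency below are
# illustrative, not data from this dissertation.

def h2o2_percent(i_disk, i_ring, collection_eff):
    """Percentage of H2O2 generated during the ORR, from RRDE currents."""
    ir_n = i_ring / collection_eff        # ring current corrected for collection efficiency
    return 200.0 * ir_n / (i_disk + ir_n)

# Example: disk current 1.0 mA, ring current 0.01 mA, N = 0.37
frac = h2o2_percent(1.0, 0.01, 0.37)
```

The factor of 200 arises because each H2O2 molecule corresponds to a 2-electron pathway out of the 4-electron ORR; a zero ring current means a pure 4-electron reduction with no peroxide.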
Ph.D. in Chemical Engineering, December 2016
- Title
- AUTOMATIC SUMMARIZATION OF CLINICAL ABSTRACTS FOR EVIDENCE-BASED MEDICINE
- Creator
- Summerscales, Rodney L.
- Date
- 2013, 2013-12
- Description
- The practice of evidence-based medicine (EBM) encourages health professionals to make informed treatment decisions based on a careful analysis of current research. However, after caring for their patients, medical practitioners have little time to spend reading even a small fraction of the rapidly growing body of medical research literature. As a result, physicians must often rely on potentially outdated knowledge acquired in medical school. Systematic reviews of the literature exist for specific clinical questions, but these must be manually created and updated as new research is published. Abstracts from well-written clinical research papers contain key information regarding the design and results of clinical trials. Unfortunately, the free-text nature of abstracts makes them difficult for computer systems to use and time-consuming for humans to read. I present a software system that reads abstracts from randomized controlled trials, extracts key clinical entities, computes the effectiveness of the proposed interventions, and compiles this information into machine-readable and human-readable summaries. This system uses machine learning and natural language processing techniques to extract the key clinical information describing the trial and its results. It extracts the names and sizes of treatment groups, population demographics, the outcomes measured in the trial, and the outcome results for each treatment group. Using the extracted outcome measurements, the system calculates key summary measures used by physicians when evaluating the effectiveness of treatments: it computes absolute risk reduction (ARR) and number needed to treat (NNT) values, complete with confidence intervals. The extracted information and computed statistics are automatically compiled into XML and HTML summaries that describe the details and results of the trial. Extracting the information needed to calculate these measures is not trivial. While there have been various approaches to generating summaries of medical research, most of this work has focused on extracting trial characteristics (e.g., population demographics and intervention/outcome information). No one has attempted to extract all of the information needed, nor has anyone attempted to solve many of the tasks needed to reliably calculate the summary statistics.
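The summary measures the system computes, ARR and NNT with confidence intervals, follow standard epidemiological definitions. A minimal sketch with hypothetical event counts (not extracted trial data), using a normal-approximation confidence interval:

```python
# ARR and NNT from two-group outcome counts. The event counts in the
# example are hypothetical, not extracted from any trial.
import math

def arr_nnt(events_ctrl, n_ctrl, events_treat, n_treat, z=1.96):
    """Return ARR, its ~95% CI, and NNT from two-group outcome counts."""
    p_c = events_ctrl / n_ctrl            # control event rate
    p_t = events_treat / n_treat          # treatment event rate
    arr = p_c - p_t                       # absolute risk reduction
    se = math.sqrt(p_c * (1 - p_c) / n_ctrl + p_t * (1 - p_t) / n_treat)
    ci = (arr - z * se, arr + z * se)     # normal-approximation interval
    nnt = 1.0 / arr if arr != 0 else float("inf")
    return arr, ci, nnt

# Example: 20/100 events in the control group, 10/100 under treatment
arr, ci, nnt = arr_nnt(20, 100, 10, 100)  # ARR = 0.10, NNT = 10
```

The hard part, as the abstract notes, is not this arithmetic but reliably extracting the group sizes and event counts from free-text abstracts in the first place.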
Ph.D. in Computer Science, December 2013
- Title
- OIL RECOVERY IN SINGLE CAPILLARIES AND POROUS MEDIA USING WETTING NANOFLUIDS
- Creator
- Zhang, Hua
- Date
- 2016, 2016-05
- Description
-
Recent experiments and modeling conducted in our laboratory have demonstrated that the spreading of nanofluids, liquid suspensions of nanosized particles, on solids is enhanced due to self-structuring of nanoparticles in the confined three-phase oil-nanofluid-solid contact region. Nanofluids have recently been proposed as agents for enhanced oil recovery (EOR). Despite extensive recent research on nanofluids for EOR, the underlying operating mechanism is not well understood. In this thesis, we attempt to understand the mechanism of nanofluid-based EOR and evaluate its performance using reservoir core samples and model systems (glass capillary and sintered bead-pack). To visualize how oil is displaced from rock pores by a nanofluid, we conducted a model study using hexadecane and a single glass capillary and observed the oil film dynamics in air and in nanofluid after the oil was displaced in the capillary. Building on this understanding of the role of the nanofluid in oil displacement in capillaries, we conducted imbibition tests using Berea sandstones and flooding experiments in sintered glass-beads. X-ray microtomography was used to visualize and analyze fluid distribution and to assess the effect of the nanofluid in EOR. We finally considered fractured media by fabricating such structures. The dynamics of a cylindrical hexadecane layer deposited inside glass capillaries after the oil/air displacement was studied experimentally and by modeling. The oil layer, subject to surface perturbation, becomes unstable, forming uniform, regularly spaced double concave menisci across the capillary that are bridged with dimples (collars). In order to reveal the film thinning and stability between the double concave meniscus and the dimple, we monitored an air bubble approaching a flat glass surface in hexadecane.
We found that oil film thinning in a cylindrical glass capillary and on a flat glass substrate were similar. We adapted the model proposed by Gauglitz and Radke for our system (oil-air displacement) and solved it numerically. The numerical result shows a stable film between the liquid bridge and the dimple, which is consistent with our experimental observations. We also estimated the meniscus-film-dimple thickness profile and found it to be in fair agreement with the model prediction. The dynamics of a cylindrical hexadecane film after displacement by a nanofluid in a glass capillary was also studied. We found that the thick hexadecane film is unstable and over time breaks and forms a thin film. Once the thick film ruptures, it retracts and forms an annular rim (liquid ridge) that collects liquid. As the volume of the annular rim increases over time, it forms a double concave meniscus across the capillary and dewetting stops. The thin film on the right side of the double concave meniscus then breaks and the contact angle increases. The process repeats until droplets build up along the capillary wall. Finally, the droplets are displaced from the capillary wall by the nanofluid and spherical droplets appear inside the capillary. This is a novel phenomenon not observed during dewetting by a solution without nanoparticles. A theoretical model based on the lubrication approximation using the capillary pressure gradient was developed to estimate the annular rim dewetting velocity. The predicted dewetting velocity is found to be in fair agreement with the experimental value. We conducted imbibition tests using a reservoir crude oil, a reservoir brine solution with high salinity, and a suitable nanofluid that displaces crude oil from Berea sandstone and single glass capillaries.
We present visual evidence of the underlying mechanism, based on the structural disjoining pressure, for crude oil displacement using a polymeric nanofluid (by which we mean a suspension of polymeric particles in an aqueous substrate) in high salinity brine. The polymeric nanofluid is specially formulated to survive in a high salinity environment and is found to result in an increased recovery efficiency of 50% for Berea sandstone, compared to 17% using the brine alone, at a reservoir temperature of 55 °C. These results aid our understanding of the role of the nanofluid in displacing crude oil from the rock, especially in a high salinity environment containing Ca²⁺ and Mg²⁺ ions. Results are also reported using Berea sandstone and a nanofluid containing silica nanoparticles. We conducted a series of flooding experiments at different capillary numbers to quantify the performance of a polymeric nanofluid compared to brine using the sintered glass-beads. High-resolution X-ray microtomography (microCT) was used to visualize oil and brine distribution in a sintered bead-pack before and after nanofluid flooding. The flooding experiments showed that an additional oil recovery of approximately 15% is possible with nanofluids compared to brine at low capillary numbers, making nanofluid flooding as effective as high-capillary-number brine flooding. The additional oil recovery induced by the nanofluid decreases as the capillary number increases, and the total oil recovered shows a marginal decrease. At first glance, these results are the opposite of what one expects in conventional EOR, where oil recovery is known to increase progressively with increasing capillary number. These results cannot be explained by mobilization theories based on reduced capillarity.
Our results, however, are consistent with the mechanism of wettability alteration caused by structural disjoining pressure, leading to the formation of a wetting nanofluid film between oil and substrate. We also presented experimental studies of nanofluid flooding in fractured porous media formed with sintered glass-beads. The nanofluid injection was conducted at a rate where structural-disjoining-pressure-driven recovery is operational. We found that an additional 23.8% of oil can be displaced using the nanofluid after brine injection, with an overall displacement efficiency of 90.4%, provided the matrix was in its native wettability state. In summary, nanofluids are excellent EOR agents and their economic viability needs to be examined.
Ph.D. in Chemical Engineering, May 2016
- Title
- COVERAGE AND CONNECTIVITY IN WIRELESS NETWORKS
- Creator
- Xu, Xiaohua
- Date
- 2012-04-25, 2012-05
- Description
-
The limited energy resources, instability, and lack of central control in wireless networks motivate the study of the connected dominating set (CDS), which serves as a routing backbone to support service discovery, area monitoring and broadcasting. The construction of a CDS involves both coverage and connectivity. We first study several problems related to coverage. Given a set of nodes and targets in a plane, the problem Minimum Wireless Cover (MWC) seeks the fewest nodes to cover the targets. If all nodes are associated with positive prices, the problem Cheapest Wireless Cover (CWC) seeks a cheapest set of nodes to cover the targets. If all nodes have bounded lives, the problem Max-Life Wireless Cover (MLWC) seeks a wireless coverage schedule of maximum life subject to the life constraints of individual nodes. We present a polynomial time approximation scheme (PTAS) for MWC, and two randomized approximation algorithms for CWC and MLWC respectively. Given a node-weighted graph, the problem Minimum-Weighted Dominating Set (MWDS) is to find a minimum-weighted vertex subset such that every vertex is either contained in this subset or has a neighbor contained in it. We propose a (4+ε)-approximation algorithm for MWDS in unit disk graphs. For the connectivity part, given a node-weighted connected graph and a subset of terminals, the problem Node-Weighted Steiner Tree (NWST) seeks a lightest tree connecting the given set of terminals. We present three approximation algorithms for NWST restricted to unit disk graphs (UDGs). This dissertation also explores applications of the CDS and develops efficient algorithms for applications such as real-time aggregation scheduling in wireless networks.
Given a set of periodic aggregation queries, where each query has its own period and a subset of source nodes Si containing the data, we first propose a family of efficient and effective real-time scheduling protocols that can answer every job of each query task within a relative delay under resource constraints by addressing the following tightly coupled tasks: routing, transmission plan construction, node activity scheduling, and packet scheduling. Based on our protocol design, we further propose schedulability test schemes to efficiently and effectively test whether, for a set of queries, each query job can be finished within a finite delay. We also conduct extensive simulations to validate the proposed protocol and evaluate its practical performance. The simulations corroborate our theoretical analysis.
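To give a concrete feel for the dominating-set problems above, here is a simple greedy heuristic for weighted dominating set: repeatedly pick the node with the smallest weight per newly dominated vertex. This is only an illustrative sketch, not the (4+ε)-approximation algorithm proposed in the dissertation; the function name and adjacency-dict representation are assumptions:

```python
def greedy_weighted_dominating_set(adj, weight):
    """Greedy heuristic for weighted dominating set.
    adj: dict mapping each node to the set of its neighbors.
    weight: dict mapping each node to a positive weight.
    Returns a set S such that every node is in S or adjacent to S."""
    undominated = set(adj)
    chosen = set()
    while undominated:
        # Newly dominated nodes if v were added (v dominates itself too).
        def gain(v):
            return len(({v} | adj[v]) & undominated)
        # Pick the cheapest node per newly dominated vertex.
        best = min((v for v in adj if v not in chosen and gain(v) > 0),
                   key=lambda v: weight[v] / gain(v))
        chosen.add(best)
        undominated -= {best} | adj[best]
    return chosen
```

On a 5-node path with unit weights the heuristic returns a 2-node dominating set; guaranteeing a (4+ε) factor in unit disk graphs requires the more careful techniques developed in the dissertation.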
Ph.D. in Computer Science, May 2012
- Title
- AN INTEGRATED DATA ACCESS SYSTEM FOR BIG COMPUTING
- Creator
- Yang, Xi
- Date
- 2016, 2016-07
- Description
-
Big data has entered every corner of science and engineering and has become a part of human society. Scientific research and commercial practice increasingly depend on the combined power of high-performance computing (HPC) and high-performance data analytics. Due to its importance, several commercial computing environments have been developed in recent years to support big data applications. MapReduce is a popular mainstream paradigm for large-scale data analytics. MapReduce-based data analytic tools commonly rely on underlying MapReduce file systems (MRFS), such as the Hadoop Distributed File System (HDFS), to manage massive amounts of data. At the same time, conventional scientific applications usually run in HPC environments, such as the Message Passing Interface (MPI), and their data are kept in parallel file systems (PFS), such as Lustre and GPFS, for high-speed computing and data consistency. As scientific applications become data intensive and big data applications become computing hungry, there is a surging interest and need to integrate HPC power and data processing power to support HPC on big data, so-called big computing. A fundamental issue of big computing is the integration of data management and interoperability between the conventional HPC ecosystem and the newly emerged data processing/analytics ecosystem. However, data sharing between PFS and MRFS is currently limited, due to semantic mismatches, the lack of communication middleware, and divergent design philosophies and goals. Challenges also exist in cross-platform task scheduling and parallelism. At the application layer, the data model mismatch between the raw data kept on file systems and the data management software of an application impedes cross-platform data processing as well. To support cross-platform integration, we propose and develop the Integrated Data Access System (IDAS) for big computing.
IDAS extends the accessibility of programming models and integrates the HPC environment with the MapReduce/Hadoop data processing environment. Under IDAS, MPI applications and MapReduce applications can share and exchange data across PFS and MRFS transparently and efficiently. Through this sharing and exchange, MPI and MapReduce applications can collaboratively provide both high-performance computing and data processing power for a given application. IDAS achieves its goal in several steps. First, IDAS enhances MPI-IO so that MPI-based applications can access data stored in HDFS efficiently; here, "efficiently" means that HDFS itself is enhanced to support MPI-based applications. For instance, we have enhanced HDFS to transparently support N-to-1 file writes for better write concurrency. Second, IDAS enhances the Hadoop framework to enable MapReduce-based applications to process data that resides on PFS transparently. Note that we have carefully chosen the term "enhance": MPI-based applications can not only access data stored on HDFS but also continue to access data stored on PFS, and the same holds for MapReduce-based applications. Through these enhancements, we achieve seamless data sharing. In addition, we have integrated data access with several application tools; in particular, we have integrated image plotting, query, and data subsetting within one application for Earth Science data analysis. Many data centers prefer erasure coding over triplication to achieve data durability, trading data availability for lower storage cost. To this end, we have also investigated performance optimization of the erasure-coded Hadoop system, to enhance the Hadoop system in IDAS.
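The storage trade-off between triplication and erasure coding mentioned above comes down to simple arithmetic. A back-of-the-envelope sketch (illustrative only; RS(6,3) is one common Hadoop erasure-coding policy, and the function name is an assumption):

```python
def storage_overhead(data_units, parity_units):
    """Raw-to-usable storage ratio for a scheme that stores
    `data_units` data blocks plus `parity_units` redundant blocks."""
    return (data_units + parity_units) / data_units

# 3-way replication: 1 data copy + 2 replicas -> 3.0x raw storage,
# and any block can be served from a surviving replica directly.
# Reed-Solomon RS(6,3): 6 data + 3 parity blocks, tolerating any 3
# lost blocks -> 1.5x raw storage, but lost blocks must be
# reconstructed from the survivors (the availability/cost trade-off
# noted in the abstract).
```

Halving the raw-storage footprint at equal fault tolerance is what motivates the erasure-coded Hadoop optimization work described above.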
Ph.D. in Computer Science, July 2016
- Title
- DYNAMIC COHERENT ACCEPTABILITY INDICES AND THEIR APPLICATIONS IN FINANCE
- Creator
- Zhang, Zhao
- Date
- 2011-05-02, 2011-05
- Description
-
This thesis presents a unified framework for studying coherent acceptability indices in a dynamic setup. We study dynamic coherent acceptability indices and dynamic coherent risk measures. In particular, we establish a duality between them. We derive representation theorems for both dynamic coherent acceptability indices and dynamic coherent risk measures in terms of so-called dynamically consistent sequences of sets of probability measures. In addition, we present an alternative approach to studying dynamic coherent acceptability indices and the representation theorem. Finally, we provide examples and counterexamples of dynamic coherent acceptability indices, and their applications in portfolio management.
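For orientation, a classical static example of a coherent acceptability index (standard in the literature, not specific to this thesis) is the Bernardo-Ledoit gain-loss ratio of a terminal cash flow X,

```latex
\mathrm{GLR}(X) \;=\; \frac{\mathbb{E}\!\left[X^{+}\right]}{\mathbb{E}\!\left[X^{-}\right]},
\qquad X^{+} = \max(X,0),\quad X^{-} = \max(-X,0),
```

which is scale invariant and grows as the position becomes more acceptable. The dynamic theory developed in the thesis extends such measures to an adapted, dynamically consistent setting indexed by time.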
Ph.D. in Applied Mathematics, May 2011
- Title
- ENVIRONMENTAL PERFORMANCE VS. FINANCIAL PERFORMANCE, MARKET INEFFICIENCY AND INVESTMENTS
- Creator
- He, Chaohua
- Date
- 2014, 2014-07
- Description
-
It is challenging to define corporate environmental performance or corporate financial performance. In this study, a company is considered to have good environmental performance (namely, to be green, environment-friendly or environmentally responsible) if it is among the Top 100 of the 500 US greenest companies ranked by Newsweek, or has environmental strength(s) and no environmental concern in terms of the KLD ratings. A company is regarded to have good financial performance if it has a high raw return, Sharpe ratio, and excess (or abnormal) return over various benchmarks. Preference is given to excess return estimated using the Carhart four-factor model [14]. A previously published longitudinal study, co-authored with my advisor [13], reveals that: 1) environmentally responsible companies tend to experience significantly positive abnormal performance over the long horizon (e.g., from the fourth to the seventh year after being selected); 2) the value-adding effect and the market's upward price adjustments on undervalued intangible environmental strength(s) might have resulted in the long-term outperformance. Would environmentally responsible companies still outperform over shorter horizons, such as the event period of an environmental disclosure? Using event study methodologies, this paper investigates market responses to independent Newsweek environmental disclosures by analyzing cross-sectional and time-series abnormal security returns. Results suggest that the Top 100 greenest companies tend to display significant abnormal returns within 4 days after a disclosure, and the significant abnormal returns generally persist for no more than 3 trading days; e.g., the Carhart four-factor abnormal return, with statistical significance, averages 0.50% per day over the four disclosure events.
The findings are robust to different models of normal return, removal of outliers, elimination of confounding effects, controlling for characteristic factors, and adjusting test statistics for cross-sectional correlation and volatility shift using the BMP adjustment [56]. Significant abnormal returns over the event period may indicate inefficiency of the financial market. Fama-Macbeth regressions further reveal that short-horizon abnormal returns could be explained by a spectrum of characteristic variables, green investing, arbitrage trading, and/or various psychological biases. Complementing the cited longitudinal study, a portfolio-level comparison reveals that an actively managed green portfolio outperforms an actively managed non-green portfolio in terms of raw return and risk-adjusted measures such as the Sharpe ratio, Jensen's alpha and the Fama-French alpha over the long horizon. The results are robust to different portfolio weighting techniques and the consideration of turnover costs. In addition, the green portfolio's outperformance is driven by a group of small, aggressive and relatively inactive stocks that perform better than the market predicts. No evidence shows that the ever-increasing demand for green securities leads to the green portfolio's outperformance, because green stocks are actually less actively traded. Panel regressions further indicate that long-horizon corporate economic performance correlates positively with historical corporate environmental performance.
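The Carhart four-factor abnormal return referenced above is the intercept of an OLS regression of a stock's excess return on the market, size, value and momentum factor returns. A minimal sketch (illustrative only, not the study's estimation code; the function name and the synthetic data in the test are assumptions):

```python
import numpy as np

def carhart_abnormal_return(excess_ret, mkt, smb, hml, mom):
    """OLS fit of the Carhart four-factor model
        r_t - rf_t = alpha + b*MKT_t + s*SMB_t + h*HML_t + m*MOM_t + e_t
    returning the intercept alpha (the abnormal return) and the
    four factor loadings (b, s, h, m)."""
    X = np.column_stack([np.ones(len(mkt)), mkt, smb, hml, mom])
    coef, *_ = np.linalg.lstsq(X, excess_ret, rcond=None)
    return coef[0], coef[1:]
```

Fed with daily factor series over an event window, `alpha` estimates the daily abnormal return net of the four factor exposures; the study's significance tests then require the BMP-style variance adjustments it describes.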
Ph.D. in Management Science, July 2014
- Title
- QUALITY-OF-SERVICE AWARE SCHEDULING AND DEFECT TOLERANCE IN REAL-TIME EMBEDDED SYSTEMS
- Creator
- Li, Zheng
- Date
- 2015, 2015-05
- Description
-
For real-time embedded systems, such as control systems used in the medical, automotive and avionics industries, deployed tasks often have stringent real-time, reliability and energy consumption constraints. How to schedule real-time tasks under various QoS constraints is a challenging issue that has drawn attention from the research community for decades. In this thesis, we study task execution strategies that not only minimize system energy consumption but also guarantee deadline and reliability satisfaction. We first consider the scenario in which all tasks are of the same criticality. For this case, two task execution strategies, a checkpointing-based and a task re-execution-based strategy, are developed. Second, for the scenario in which tasks are of different criticalities, a heuristic-search-based energy minimization strategy is proposed. When tasks are of different criticalities, a commonly used approach to guaranteeing high-criticality task deadlines is to remove low-criticality tasks whenever the system is overloaded. With such an approach, the QoS provided to low-criticality tasks is rather poor: they can suffer a high deadline miss rate and little accumulated execution time. To overcome this shortcoming, we develop a time-reservation-based scheduling algorithm and a two-step optimization algorithm that meet high-criticality task deadlines while minimizing the low-criticality task deadline miss rate and maximizing their accumulated execution time, respectively. As many-core techniques mature, many real-time embedded systems are built upon many-core platforms. However, many-core platforms have a high wear-out failure rate. Hence, the last issue addressed in the thesis is how to replace defective cores on many-core platforms so that deployed applications' real-time properties can be maintained.
We develop an offline and an online application-aware system reconfiguration strategy to minimize the impact of the physical layer changes on deployed real-time applications. All the developed approaches are evaluated through extensive simulations. The results indicate that the developed approaches are more effective in addressing the identified problems compared to the existing ones in the literature.
Ph.D. in Computer Science, May 2015
- Title
- UNDERSTANDING REACTION MECHANISMS AND CONTROLLING REACTIVE SURFACE SPECIES DURING ATOMIC LAYER DEPOSITION OF METAL CHALCOGENIDES
- Creator
- Weimer, Matthew S.
- Date
- 2016, 2016-07
- Description
-
Atomic layer deposition (ALD) is a thin film growth technique that relies on self-limiting reactions between vapor precursors and a surface. Significant progress has been made in the scope of materials grown by ALD, enabled by precursor development and investigation of surface reaction mechanisms; yet further progress is still necessary. This dissertation has two parts. The first is the development of organometallic compounds to enable new material growth by ALD. The second is the development of X-ray absorption spectroscopy (XAS) tools and techniques for observing the metal coordination environment during ALD. One class of materials that can be difficult to make is doped materials, whose composition determines the film properties. One example is vanadium-doped indium sulfide. This material was proposed as an intermediate-bandgap solar absorbing material that can absorb multiple ranges of light. Two new precursors were developed for the growth of indium sulfide and vanadium sulfide. The indium sulfide material grown showed superior film photocurrent qualities, and doping of vanadium into indium sulfide with fine control over atomic placement was achieved. Films were grown that showed secondary absorptions aligned with theoretical calculations. The flexibility of the V(III) precursor is shown by a short study on the control of the difficult vanadium-oxygen system in the deposition of stoichiometric oxides. To illustrate how XAS can be used in conjunction with other techniques, tin dioxide growth from a stannylene precursor and hydrogen peroxide was studied. From ex situ XAS measurements, the nucleation, growth and termination reaction mechanisms were discovered. Next, a mobile ALD reactor allowed in situ XAS measurements to be performed on "flat" substrates for the first time.
Specifically, the local coordination environment and surface reactions were followed while erbium was doped into alumina, and the coordination environment of manganese was investigated in the growth of manganese-doped zinc oxide, with the ability to probe in-plane versus out-of-plane bonds. These unique in situ XAS experiments allow for a greater understanding of metal and non-metal precursors as they interact with surfaces. This enhanced understanding enables new precursor development, which leads to better control over surface chemistry and new materials.
Ph.D. in Chemistry, July 2016
- Title
- DISTRIBUTED VIDEO CODING FOR RESOURCE CONSTRAINED VIDEO APPLICATIONS
- Creator
- Liu, Wenhui
- Date
- 2014, 2014-05
- Description
-
Video coding technology has played a key role in the explosion of today's multimedia society, with ever-increasing resolution and quality. This success is largely built on the conventional video coding paradigm, in which motion estimation and compensation are performed at the encoder. This asymmetry in complexity is well suited for applications where the video sequence is encoded once and decoded many times. However, some emerging applications such as wireless video surveillance, wireless PC cameras and multimedia sensor networks require low-complexity encoding, while possibly affording high-complexity decoding. Therefore, a challenging problem that emerges with this new type of visual communication system is how to achieve low-complexity video compression while maintaining good coding efficiency. Distributed video coding (DVC) provides low-complexity encoding solutions for video communication under limited computational power or energy constraints. In DVC, the source video information is independently encoded at lightweight encoders. At the decoder, the statistical dependencies among the received bitstreams are jointly exploited. In this way, motion estimation and its computational complexity are shifted from the encoder to the decoder. However, DVC also has its own restrictions. Low coding efficiency remains a challenging issue for DVC compared to conventional video coding. Although DVC is robust to channel loss due to its intrinsic feature of independent encoders and a joint decoder, its error resiliency for medium to large transmission errors is weak. In this dissertation, the previously proposed low-complexity DVC (LC-DVC) architecture is first introduced. After that, continued work is presented to further improve the quality of the side information (SI).
The proposed method is called spatio-temporal joint bilateral upsampling (STJBU) based SI generation, in which the geometric closeness of pixels and their photometric similarity are exploited to reduce noise while preserving edge information. Moreover, a distributed multiple description coding (DMDC) scheme is proposed by combining multiple description (MD) coding with LC-DVC to improve its error resiliency. All the proposed schemes are described in detail and their rate-distortion analyses are presented in this dissertation. These features make LC-DVC a strong solution for resource-constrained applications.
Ph.D. in Electrical Engineering, May 2014