Search results
(81 - 100 of 1,034)
Pages
- Title
- COMPUTATIONAL FLUID DYNAMICS AND POPULATION BALANCE MODEL FOR SIMULATION OF DRY SORBENT BASED CO2 CAPTURE PROCESS
- Creator
- Abbasi, Emadoddin
- Date
- 2013, 2013-12
- Description
-
Carbon capture and sequestration (CCS) is one of the key technologies needed to mitigate carbon dioxide emissions from industrial sources and power plants. The development of a CFD-based design tool for predicting the extent of CO2 capture in a regenerable dry-sorbent-based technology, in an efficient power plant design (i.e., modern IGCC power plants), was the driving force behind this project. In this study, we established a systematic methodology, starting from investigating the properties of the sorbent and its reaction mechanism, to developing a model for the design and scale-up of the reactors needed to deploy this technology at larger scales. This dissertation provides a coupled CFD-PBE model based on the novel FCMOM approach, with broad application in reaction engineering and reactor design where the polydisperse nature of the phases has a strong effect on the hydrodynamics of the system. Detailed investigations of the MgO-based sorbent and its performance in capturing CO2 from a coal gas stream were performed, resulting in the development of the two-zone variable-diffusivity shrinking-core reaction model. Furthermore, a baseline design for a circulating fluidized bed (CFB) reactor, using numerical modeling and three-dimensional simulations of a full-loop circulating fluidized bed reactor, was provided based on the coupled CFD-PBE model, which in combination with the reaction model can serve as a basis for parametric studies and optimization of the process.
Ph.D. in Chemical Engineering, December 2013
- Title
- AN INTRINSICALLY CONDUCTING POLYMER-BASED COATING SYSTEM FOR CORROSION PROTECTION OF STEELS
- Creator
- Yu, Qifeng
- Date
- 2016, 2016-12
- Description
-
Among the various corrosion protection strategies for structural steels, coating techniques provide the most cost-effective protection and have been used as the primary mode of corrosion protection. Existing coating techniques have been used mainly for their barrier capability, and all have a limited service life. In this research, a waterborne two-strand polyaniline:poly(acrylic acid) complex was synthesized and used to fabricate the primer layer of a two-layer coating system. Scanning Kelvin Probe Force Microscopy (SKPFM) and Electrochemical Impedance Spectroscopy (EIS) were used to evaluate the anti-corrosion capability of the polymeric complex when mixed into an epoxy matrix and coated on steel samples as the primer layer. The evaluation results show that coating systems including a PANi-based primer have measurable anti-corrosion capability, and that the anti-corrosion capability of a PANi-based primer depends on the amount of PANi used and the type of matrix material in the primer layer. Under laboratory conditions, a prototype two-layer coating system comprising the PANi-based primer and a polyurethane topcoat was manufactured. The ASTM salt-spray test and EIS were used to demonstrate the anti-corrosion performance of the prototype, using a two-layer polyurethane-over-epoxy system (no PANi) as the control system. After this proof of concept, a non-waterborne epoxy was used to fabricate a different PANi-based primer. The two types of primers and two commercial primers (a zinc-rich primer and an epoxy-only primer) were combined with two widely used topcoats to make a total of eight two-layer coating systems. The salt-spray test, cyclic salt fog/UV exposure test, pull-off adhesion test, and the techniques of EIS, SKPFM, and scanning electron microscopy (SEM) were used to evaluate the long-term performance of the eight systems.
Based on the laboratory results, six groups of two-layer coating systems were then subjected to outdoor-exposure testing to evaluate their anti-corrosion durability at two testing sites. The field durability of the coating systems was evaluated in terms of surface gloss reduction, color change, adhesion change, and surface deterioration. The matrix material in which the PANi is mixed plays an important role in the long-term anti-corrosion performance of coatings. The waterborne epoxy is effective in dispersing PANi nanoparticles and has zero VOC; however, it does not bond to the steel surface as strongly as the regular non-waterborne epoxy. The topcoat material also plays an important role in the long-term anti-corrosion performance of coatings; polyurethane has higher durability than epoxy as a topcoat material. Based on the one-year field evaluation, the PANi-based systems provide long-term corrosion protection comparable to that of the conventional zinc-rich three-layer system.
Ph.D. in Civil Engineering, December 2016
- Title
- QUANTITATIVE ANALYSIS OF THE EFFECTS OF BIOFUNCTIONAL AND PHYSICAL GRADIENTS ON CELL BEHAVIOR IN POLY (ETHYLENE GLYCOL) DIACRYLATE HYDROGELS
- Creator
- Turturro, Michael
- Date
- 2012-10-29, 2012-12
- Description
-
The continued enhancement of tissue-engineered scaffolds relies on their ability to stimulate the formation of a stable microvascular network within the biomaterial. In vivo, the spatial presentation of immobilized extracellular matrix cues and matrix mechanical properties plays an important role in directing and guiding cell behavior and neovascularization. The overall goals of this thesis are to develop a technique for generating gradients of physical properties and incorporated biofunctionality within poly(ethylene glycol) diacrylate (PEGDA) scaffolds, and to investigate the effects of these gradients on 3D cell invasion and neovascularization. To this end, a novel photopolymerization technique for generating spatial variations in matrix properties and incorporated biofunctionality within synthetic PEGDA hydrogels, perfusion-based frontal polymerization (PBFP), was developed. This technique relies on the controlled perfusion of a photoinitiator into a reaction chamber containing a precursor solution, resulting in the propagation of a polymerization front that travels through the monomer solution and creates a gradient in hydrogel crosslinking. The magnitude of the gradient can be manipulated through alterations in the polymerization conditions. Scaffolds with embedded gradients were designed and optimized based on a range of properties shown to support 2D cell adhesion, proliferation, and 3D vascular cell invasion in bulk photopolymerized hydrogels with homogeneous properties. An in vitro model of neovascularization was used to evaluate the effect of these gradients on vascular sprout formation. Sprout invasion in gradient hydrogels occurred bi-directionally, with sprout alignment observed in the direction parallel to the gradient, while control hydrogels with homogeneous properties showed uniform invasion.
In PBFP gradient hydrogels, sprout length was found to be twice as long in the direction parallel to the gradient as in the perpendicular direction after three weeks in culture. This directionality was more prominent in gradient regions of increased stiffness, crosslinked matrix metalloproteinase (MMP)-sensitive peptide presentation, and immobilized YRGDS concentration. In vivo tissue invasion was shown to be directly related to gradient properties and orientation. Alterations in the magnitude of the gradient in elastic modulus enhanced the directionality of invading vascular sprouts while restricting in vivo tissue invasion.
Ph.D. in Biomedical Engineering, December 2012
- Title
- PSYCHOMETRIC PROPERTIES OF THE CENTER FOR EPIDEMIOLOGICAL STUDIES DEPRESSION SCALE (CES-D) USED AMONG NATIVE CHINESE INDIVIDUALS WITH SPINAL CORD INJURY
- Creator
- Xiong, Ying
- Date
- 2014, 2014-07
- Description
-
Depressive symptoms are highly prevalent among people with spinal cord injury (SCI), yet there is a lack of consensus over psychometrically sound diagnostic criteria or screening tools for depression. This is particularly true for the SCI population in China. Currently, there is limited information regarding the prevalence, severity, and symptomatology of depression among individuals with SCI in China. The CES-D 10 is a simple and quick tool to use, and it avoids over-estimating depression due to the frequent somatic complaints associated with SCI. To the best of our knowledge, the CES-D 10 had not previously been used among the native Chinese population with SCI. The current study used the CES-D 10 to measure depressive symptoms among individuals with SCI in China. The purpose of this study was to examine the factorial validity, internal consistency, construct validity, and concurrent validity of the CES-D 10 among 260 Chinese individuals with SCI. Results showed an alarmingly high prevalence of depressive symptoms in the sample. Consistent with the existing literature and hypotheses, a two-factor structure of the CES-D 10 was replicated based on a confirmatory factor analysis. Hierarchical regression analyses showed that several important psychosocial constructs, such as acceptance of disability, social support, and functional disability, were predictors of overall depressive symptoms. Surprisingly, depressive symptoms were not predictive of employment status. The scale showed low internal consistency and a cultural response bias in which participants in the current sample were less likely to endorse positively stated CES-D items. This finding is consistent with past studies of East Asian populations. Limitations and implications of the study are discussed.
Ph.D. in Psychology, July 2014
- Title
- CAPACITY BOUNDS FOR LARGE SCALE WIRELESS SENSOR NETWORKS
- Creator
- Tang, Shaojie
- Date
- 2012-11-20, 2012-12
- Description
-
We study the network capacity of large-scale wireless sensor networks under both the Gaussian channel model and the protocol interference model. To study network capacity under the Gaussian channel model, we assume n wireless nodes {v1, v2, · · · , vn} are randomly or arbitrarily distributed in a square region Ba with side length a. We randomly choose ns multicast sessions. For each source node vi, we randomly select k points pi,j (1 ≤ j ≤ k) in Ba, and the node closest to pi,j serves as a destination node of vi. The per-flow unicast (multicast) capacity is defined as the minimum data rate over all unicast (multicast) sessions in the network. We derive achievable upper bounds on the unicast capacity and an upper bound (partially achievable) on the multicast capacity of wireless networks under the Gaussian channel model. We find that the unicast (multicast) capacity for wireless networks under both models has three regimes. Under the protocol interference model, we assume that n wireless nodes are randomly deployed in a square region with side length a, and that all nodes have a uniform transmission range r and a uniform interference range R > r. We further assume that each wireless node can transmit/receive at W bits/second over a common wireless channel. For each node vi, we randomly pick k − 1 nodes from the other n − 1 nodes as the receivers of the multicast session rooted at node vi. The aggregated multicast capacity is defined as the total data rate of all multicast sessions in the network. In this work we derive matching asymptotic upper and lower bounds on the multicast capacity of large-scale random wireless networks under the protocol interference model.
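As background, per-link rates in a Gaussian channel model of this kind are conventionally grounded in the Shannon capacity of an additive white Gaussian noise channel; this is standard information theory, not a result of the dissertation, and the notation here is ours:

```latex
% Shannon capacity of a point-to-point AWGN channel.
% W: bandwidth (Hz), P: received signal power, N_0: noise power spectral density.
C \;=\; W \log_2\!\left(1 + \frac{P}{N_0 W}\right) \quad \text{bits per second}
```

Capacity-scaling analyses typically combine such a per-link rate with path-loss and interference assumptions to obtain the network-wide asymptotic bounds described above.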
Ph.D. in Computer Science, December 2012
- Title
- EMPIRICALLY KEYING PERSONALITY MEASURES TO MITIGATE FAKING EFFECTS AND IMPROVE VALIDITY: A MONTE CARLO INVESTIGATION
- Creator
- Tawney, Mark Ward
- Date
- 2012-12-05, 2012-12
- Description
-
Personality-type measures should be viable tools for selection. They have incremental validity over cognitive measures, and they add this incremental validity while decreasing adverse impact (Hough, 1998; Ones, Viswesvaran & Schmidt, 1993; Ones & Viswesvaran, 1998a). However, personality measures are susceptible to faking; individuals instructed to fake on personality measures are able to increase their scores (Barrick & Mount, 1996; Ellingson, Sackett & Hough, 1999; Hough, Eaton, Dunnette, Kamp, & McCloy, 1990). Further, personality measures often show less than optimal validity estimates, as research continually finds meta-analytic coefficients near .2 (e.g., Morgeson, Campion, Dipboye, Hollenbeck, Murphy, & Schmitt, 2007). Some researchers have suggested that these two problems are linked, as faking on personality measures may reduce their ability to predict job performance (e.g., Tett & Christiansen, 2007). Empirically keyed instruments traditionally enhance prediction and have been found to mitigate the effects of faking (Kluger, Reilly & Russell, 1991; Scott & Sinar, 2011). Recently suggested as a means of keying personality measures (e.g., Tawney & Mead, in prep), empirical keying methods are further investigated in this dissertation both as a means to mitigate faking effects and as a means to increase the validity of personality-type measures. A Monte Carlo methodology is used because of the difficulty of obtaining accurate measures of faking. As such, this dissertation investigates faking under controlled and known parameters, allowing for more robust conclusions compared to prior faking research.
Ph.D. in Psychology, December 2012
- Title
- URBAN SPRAWL AND SUSTAINABLE DEVELOPMENT IN CHINA
- Creator
- Wang, Xiaoxiao
- Date
- 2012-07-11, 2012-07
- Description
-
Compared to the rich literature on urban sprawl in Western cities, relatively little is known of the driving factors, processes, and future trends of urban sprawl in China. This research analyzes the socio-economic forces behind two components of urban sprawl in China, urban decentralization (the creation of development zones and new towns) and urban renewal (infrastructural changes to existing urban fabrics), and reveals two basic characteristics of Chinese urban sprawl: a). de-densification; and b). expansion of urbanized (built-up) areas. It uses the term "urban sprawl" to consider the reasons behind urban land-use changes and urban pattern transformations at the regional level. It begins with definitions of sprawl in Western and Eastern countries, followed by an analysis of the social, political, and cultural factors of sprawl. Three case studies focus on three urban centers in China: Beijing, Shanghai, and Guangzhou. Another component is data analysis in SPSS, based on related indices of urban sprawl and sustainable development for the 15 largest urban regions in China over 10 years. This research identifies causes of urban sprawl in China: a). the changing residential preferences of some residents, who are willing to move out of the core; and b). overcrowded, deteriorated, and old-fashioned structures in central cities becoming targets for demolition in pursuit of a new era of modernity, prosperity, and renaissance. The research then points out that: a). uneven land reform is the key to understanding Chinese-style urban sprawl and is also a necessary condition for the paradox posed by development zones and urbanized villages; b). China's urban sprawl is driven by both market and government forces; and c).
there is a series of new conditions for urban sprawl in China, for example: rising private automobile ownership, rising demand for space and changing residential preferences, local public policy, and the real-estate industry. This research provides a comprehensive definition of "urban sprawl" in China, identifies the patterns of urban sprawl and growth in three urban regions (Beijing, Shanghai, and Guangzhou), and illustrates the concepts and possible alternative strategies for green urban growth and change in China. Finally, it offers suggestions on how to effectively control urban sprawl in China, as well as a pathway to achieving sustainable development.
Ph.D. in Architecture, July 2012
- Title
- INJECTION LOCKING BASED ULTRA LOW POWER RADIO FREQUENCY INTEGRATED CIRCUITS
- Creator
- Zhu, Qiang
- Date
- 2012-05-31, 2012-07
- Description
-
Recent advances in radio frequency integrated circuit (RFIC) technology enable various innovative and versatile applications through ultra-low-power wireless links, such as mesh sensor networks, personal area networks (PANs), and semi-active RFID. This thesis introduces energy-efficient demodulator and transceiver designs for wireless communications. At the receiver front end, an ultra-low-power BPSK demodulator based on injection-locked oscillators (ILOs) is introduced. Two second-harmonic ILOs are employed to convert BPSK signals to ASK signals, which are then demodulated to baseband by an envelope detector. For sub-GHz applications, the ILOs are implemented using ring oscillators to allow compact chip area and ultra-low power dissipation. Bit error rate (BER) analysis of this demodulator indicates erroneous polarity flipping of demodulated bits due to phase noise of the ILO. The prototype chip, fabricated in a 65 nm CMOS technology, consumes 228 μW of power and occupies 0.014 mm2 of die area. Measurement results demonstrate demodulation of a 750 MHz, 5 Mb/s differential BPSK signal with a sensitivity of -43 dBm. The theoretical BER analysis is verified by the erroneous flipping observed in measurement, whose probability is close to the prediction. Then, an innovative injection-locking-based transceiver architecture for ultra-low-power operation is proposed. It applies the ILO-based BPSK demodulator at the receiver side. The oscillating signal from one receiver ILO is also injected into a transmitter ILO for accurate carrier generation; thus the local frequency synthesis circuit, which consumes a considerable portion of the power in a traditional transceiver, is not required. This design is implemented in a 45 nm CMOS SOI technology. Measurement results indicate that the transceiver achieves downlink demodulation of a -35 dBm BPSK signal at a 5 Mb/s data rate and uplink transmission of a -23 dBm ASK signal at a 1 Mb/s data rate, with 0.93 mA current consumption from a 1 V power supply.
Ph.D. in Electrical Engineering, July 2012
- Title
- A MULTI-CURVE LIBOR MARKET MODEL WITH UNCERTAINTIES DESCRIBED BY RANDOM FIELDS
- Creator
- Xu, Shengqiang
- Date
- 2012-12-19, 2012-12
- Description
-
The LIBOR (London Interbank Offered Rate) market model has been widely used as an industry-standard model for interest rate modeling and interest rate derivatives pricing. In this thesis, a multi-curve LIBOR market model with uncertainty described by random fields is proposed and investigated; this new model is called the multi-curve random fields LIBOR market model (MRFLMM). First, the LIBOR market model is reviewed and the closed-form formulas for pricing caplets and swaptions are provided. It is then extended to the case where the uncertainty terms are modeled as random fields, and closed-form formulas for pricing caplets and swaptions are derived accordingly; this new model is called the random fields LIBOR market model (RFLMM). Second, local volatility models and stochastic volatility models are combined with the RFLMM to explain the volatility skews and smiles observed in the market. Closed-form volatility formulas are derived via the lognormal mixture model in the local volatility case, while an approximation scheme for the stochastic volatility case is obtained by a stochastic Taylor expansion method. Moreover, the above work is further extended to a multi-curve framework, where the curves for generating future forward rates and the curve for discounting cash flows are modeled distinctly but jointly. This multi-curve methodology was recently introduced by several pioneers to explain the inconsistency of interest rates after the 2008 credit crunch; both the LIBOR market model and the RFLMM mentioned above are models in the single-curve framework. Third, analogous to the single-curve framework, the multi-curve random fields LIBOR market model is derived, and caplets and swaptions are priced with closed-form formulas that reduce exactly to Black's formulas.
Meanwhile, local volatility and stochastic volatility models are also combined with the multi-curve LIBOR market model to explain the volatility skews and smiles in the market. Fourth, the calibration of the above models is considered. Taking a two-curve setting as an example, four different models are compared: the single-curve LIBOR market model, the single-curve RFLMM, the two-curve LIBOR market model, and the two-curve RFLMM. The calibration is based on spot market data from one trading day. The four models are calibrated to the European cap volatility surface and swaption volatilities, given specified parameterized forms of the correlation and instantaneous volatility. The calibration results show that the random fields models capture the volatility smiles better than the non-random-fields models and have smaller pricing errors. Moreover, the multi-curve models perform better than the single-curve models, especially during and after the credit crunch. Finally, the estimation of these four models, including pricing and hedging performance, is considered. The estimation uses time series of forward rates in the market. Given a time series of term structures, the parameters of the four models are estimated using an unscented Kalman filter (UKF). The results show that the random fields models have better estimation results than the non-random-fields models, with more accurate in-sample and out-of-sample pricing and better hedging performance. The multi-curve models also outperform the single-curve models. In addition, it is shown theoretically and empirically that the random fields models have the advantage that it is unnecessary to determine the number of factors in advance or to re-calibrate. The multi-curve random fields LIBOR market model thus combines the advantages of both the multi-curve framework and the random fields setting.
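For reference, the Black caplet formula to which the abstract says the model's closed-form prices reduce is the standard market convention (the notation here is ours, not taken from the dissertation):

```latex
% Black's formula for a caplet on the forward LIBOR rate F for the accrual
% period [T, T+\tau], with strike K, Black volatility \sigma, and discount
% factor P(0, T+\tau); \Phi is the standard normal CDF.
\mathrm{Cpl}(0) = \tau\, P(0, T+\tau)\left[\, F\,\Phi(d_1) - K\,\Phi(d_2) \,\right],
\qquad
d_{1,2} = \frac{\ln(F/K) \pm \tfrac{1}{2}\sigma^2 T}{\sigma\sqrt{T}}
```

In a multi-curve setting, F is generated from the forward (projection) curve while P(0, T+τ) comes from the separate discounting curve, which is the distinction the abstract draws.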
Ph.D. in Applied Mathematics, December 2012
- Title
- ELECTROSPUN COLLAGEN/SILK TISSUE ENGINEERING SCAFFOLDS: FIBER FABRICATION, POST-TREATMENT OPTIMIZATION, AND APPLICATION IN NEURAL DIFFERENTIATION OF STEM CELLS
- Creator
- Zhu, Bofan
- Date
- 2017, 2017-05
- Description
-
Biocompatible scaffolds mimicking the locally aligned fibrous structure of native extracellular matrix (ECM) are in high demand in tissue engineering. In this thesis research, unidirectionally aligned fibers were generated via a home-built electrospinning system. Collagen type I, a major ECM component, was chosen for this study because it supports cell proliferation and promotes neuroectodermal commitment in stem cell differentiation. Synthetic dragline silk proteins, biopolymers with remarkable tensile strength and superior elasticity, were also used as a model material. Good alignment, controllable fiber size and morphology, and a desirable fiber deposition density were achieved through optimization of the solution and electrospinning parameters. Incorporating silk proteins into collagen was found to significantly enhance the mechanical properties and stability of the electrospun fibers. Glutaraldehyde (GA) vapor post-treatment was demonstrated to be a simple and effective way to tune the properties of collagen/silk fibers without changing their chemical composition. With 6-12 hours of GA treatment, electrospun collagen/silk fibers were not only biocompatible but could also effectively induce the polarization and neural commitment of stem cells, which were optimal on collagen-rich fibers due to the unique combination of biochemical and biophysical cues imposed on the cells. Taken together, electrospun collagen-rich composite fibers are mechanically strong and stable and provide excellent cell adhesion. The unidirectionally aligned fibers can accelerate neural differentiation of stem cells, representing a promising therapy for neural tissue degenerative diseases and nerve injuries.
Ph.D. in Chemistry, May 2017
- Title
- AUTOMATED PROGRESS CONTROL USING LASER SCANNING TECHNOLOGY
- Creator
- Zhang, Chengyi
- Date
- 2013, 2013-05
- Description
-
Assessing progress in different construction activities at the end of every payment period is time-consuming and requires specialized personnel employed by the contractor and the owner. Automatic progress control requiring a minimum amount of human involvement could reduce the time spent on this activity, the number of personnel used, the cost involved, and disagreements between contractor and owner, and add to the overall efficiency of project management. Attempts have been made in the past to resolve this issue using image processing and other techniques, but the results have not been satisfactory. A new attempt was made to set up a system that can assess progress with minimum human input, and the results are presented here. The experiments made use of laser scanning technology and were conducted both under laboratory conditions and on construction sites. The initial results under laboratory conditions appear promising, but there are still obstacles to surmount. The system is robust and accurate in laboratory conditions and constitutes a proof of concept. Improvements were made to accelerate the registration of multiple scans, to reduce noise in the data, to recognize objects of irregular shape, and to assess the practicality and economic feasibility of such a system when applied on real construction sites.
Keywords: construction scheduling, progress control, laser scanning
Ph.D. in Civil Engineering, May 2013
- Title
- SCALABLE RESOURCE MANAGEMENT SYSTEM SOFTWARE FOR EXTREME-SCALE DISTRIBUTED SYSTEMS
- Creator
- Wang, Ke
- Date
- 2015, 2015-07
- Description
-
Distributed systems are growing exponentially in computing capacity. On the high-performance computing (HPC) side, supercomputers are predicted to reach exascale, with billion-way parallelism, around the end of this decade. Scientific applications running on supercomputers are becoming more diverse, including traditional large-scale HPC jobs, small-scale HPC ensemble runs, and fine-grained many-task computing (MTC) workloads. Similar challenges are cropping up in cloud computing, as data centers host an ever-growing number of servers, exceeding many of the top HPC systems in production today. The applications commonly found in the cloud are ushering in the era of big data, resulting in billions of tasks that involve processing increasingly large amounts of data. However, the resource management system (RMS) software of distributed systems is still designed around the decades-old centralized paradigm, which is far from satisfying the ever-growing performance and scalability needs at extreme scales, due to the limited capacity of a centralized server. This huge gap between processing capacity and performance needs has driven us to develop next-generation RMSs that are orders of magnitude more scalable. In this dissertation, we first devise a general system software taxonomy to explore the design choices of system software, and propose that key-value stores can serve as a building block. We then design distributed RMSs on top of key-value stores. We propose a fully distributed architecture and a data-aware work-stealing technique for MTC resource management, and develop the SimMatrix simulator to explore the distributed designs, which informs the real implementation of the MATRIX task execution framework. We also propose a partition-based architecture and resource-sharing techniques for HPC resource management, and implement them by building the Slurm++ workload manager and the SimSlurm++ simulator.
We study the distributed designs through real systems of up to thousands of nodes, and through simulations of up to millions of nodes. Results show that the distributed paradigm has significant advantages over the centralized one. We envision that the contributions of this dissertation will be both evolutionary and revolutionary to the extreme-scale computing community, and will lead to a wealth of follow-on research and innovation toward tomorrow's extreme-scale systems.
Ph.D. in Computer Science, July 2015
- Title
- INCORPORATING REACTIVE POWER MARKET INTO THE DAY-AHEAD ELECTRICITY MARKET
- Creator
- Al Ghamdi, Mohammed
- Date
- 2012-05-29, 2012-07
- Description
-
The research work presented in this thesis proposes incorporating the reactive power market into the day-ahead electricity market in order to compensate generation companies (GENCOs) and independent power producers (IPPs) for providing additional reactive power support, which varies on an hourly basis with the load demand, the transmission network configuration, and any contingencies that might occur. This proposal would minimize the total payment burden on the independent system operator (ISO) related to reactive power dispatch. The proposed model achieves the main objective of an ISO in a competitive electricity market: to obtain the required reactive power support from generators at minimum cost while ensuring the secure operation of the power system. In this research, the reactive power price is a bidding-based price submitted by the GENCOs and IPPs to the ISO during the day-ahead market. The proposal takes into account both the technical and economic aspects associated with active and reactive power dispatch in the context of the new operating paradigms in competitive electricity markets. Security Constrained Unit Commitment (SCUC) based on AC power flow modeling is used as the engine for clearing the day-ahead electricity market, based on the information provided by the market participants. This framework would provide appropriate reactive power support from service providers at minimum cost, while ensuring the secure and reliable operation of the electrical power system. The PQ capability curves of the generating units are modeled to ensure the practicality of the SCUC solutions obtained. This proposal would be an essential step toward a fair electricity market, while increasing the security of the power system and reducing transmission congestion.
Also, it would pave the road for various renewable energy resources since the penetration of renewable energy resources would impact the commitment of the generating units. This would impact the available reactive power reserve margin and security of the network. In addition, incorporating the reactive power market into the day-ahead market would provide a clear signal for optimal private investment in the reactive power capacity. The framework that has been developed is general in nature and can be used for any electricity market structure.
Ph.D. in Electrical Engineering, July 2012
- Title
- INVESTIGATION OF NIOBIUM SURFACE STRUCTURE AND COMPOSITION FOR IMPROVEMENT OF SUPERCONDUCTING RADIO-FREQUENCY CAVITIES
- Creator
- Trenikhina, Yulia
- Date
- 2014, 2014-12
- Description
-
Nano-scale investigation of the intrinsic properties of the niobium near-surface is key to controlling the performance of niobium superconducting radio-frequency (SRF) cavities. The mechanisms responsible for the performance limitations, and their empirical remedies, need to be justified in order to reproducibly fabricate SRF cavities with the desired characteristics. The high field Q-slope and the mechanism behind its cure (120°C mild vacuum bake) were investigated by comparing samples cut out of cavities from high- and low-dissipation regions. Material evolution during the nitrogen treatment, which mitigates the medium field Q-slope, was characterized using coupon samples as well as samples cut out of a nitrogen-treated cavity. The niobium near-surface state after several typical and novel cavity treatments was also evaluated. Various TEM techniques, SEM, XPS, AES, and XRD were used for the structural and chemical characterization of the niobium near-surface. The combination of thermometry and temperature-dependent structural comparison of cavity cutouts with different dissipation characteristics revealed the precipitation of niobium hydrides to be the cause of the medium and high field Q-slopes. The step-by-step effect of the nitrogen treatment on the niobium surface was studied by analytical and structural characterization of the cavity cutout and of niobium samples subjected to the treatment. Low-concentration nitrogen doping is proposed to explain the benefit of the nitrogen treatment. Chemical characterization of niobium samples before and after various surface treatments (electropolishing (EP), 800°C bake, hydrofluoric acid (HF) rinsing) revealed differences that can help uncover the microscopic effects behind these treatments as well as possible sources of surface contamination.
Ph.D. in Physics, December 2014
- Title
- EXPLOITING NETWORK CODING IN DIFFERENT WIRELESS NETWORKS
- Creator
- Guo, Bin
- Date
- 2012-07-06, 2012-07
- Description
-
Wireless communication networks have been incorporated into our daily life and provide convenience anytime and anywhere. However, the wireless medium is unreliable and unpredictable, and current wireless networks suffer from low throughput and low reliability. Network coding, an alternative approach, has attracted increasing interest and has emerged as an important technology in wireless networks: it can provide significant throughput improvements and a high degree of robustness. This dissertation is built on the theory of network coding and designs different network coding protocols for varied wireless networks. The first part of this dissertation proposes a novel coding-aware routing protocol for wireless mesh networks. In particular, a generalized coding condition is formally established to identify coding opportunities. Based on this analysis, a novel routing metric, FORM (Free-ride Optimal Routing Metric), and the corresponding routing protocol are developed with the objective of exploiting coding opportunities and maximizing the benefit of the “free ride” in order to reduce the total number of transmissions and consequently increase network throughput. The results show that the proposed protocol achieves significant throughput gains over existing approaches. The second part of this dissertation exploits network coding in wireless cooperative networks. First, a Decode-and-Forward Network Coded (DFNC) protocol is proposed for multi-user cooperative communication systems. In particular, DFNC provides an efficient construction method for the coding coefficients and a novel decoding algorithm that combines network coding and channel coding. DFNC exploits both temporal and spatial diversity over multiple channels by allowing all users to generate redundant network-coded packets in a distributed manner, fully exploring the redundancy provided by network coding to realize error correction.
Theoretical analysis and simulation results demonstrate that DFNC outperforms other transmission schemes in terms of Symbol Error Rate (SER) and achieves a higher diversity order. Second, the idea of DFNC is extended to Modified-DFNC (M-DFNC) for a more practical scenario in which not all users are able to dedicate their resources to assisting others. The throughput analysis shows that M-DFNC outperforms the conventional cooperative protocol in the low-SNR regime, implying that an adaptive cooperation scheme should be adopted to optimize performance. The simulation results validate the theoretical analysis.
Ph.D. in Electrical Engineering, July 2012
- Title
- SUSTAINABLE MULTILINGUAL COMMUNICATION: MANAGING MULTILINGUAL CONTENT USING FREE AND OPEN SOURCE CONTENT MANAGEMENT SYSTEMS
- Creator
- Kelsey, Todd
- Date
- 2011-05-03, 2011-05
- Description
-
Multilingual content management systems, combined with streamlined processes and inexpensive organizational tools, make it possible for educators, non-profit entities, and individuals with limited resources to develop sustainable and accessible multilingual Web sites. The research included a review of what has been done in the theory and practice of designing Web sites for multilingual audiences. On the basis of that review, a series of sustainable multilingual Web sites was created, and a series of approaches and systems was tested, including MediaWiki, Plone, Drupal, Joomla, PHPMyFAQ, Blogger, Google Docs, and Google Sites. There was also a case study on “Social CMS”, which refers to emergent social networks such as Facebook. The case studies are reported and conclude with high-level recommendations that form a roadmap for sustainable multilingual Web site development.
Ph.D. in Technical Communication, May 2011
- Title
- THERMAL RESISTANCE OF SALMONELLA ENTERICA AND ESCHERICHIA COLI O157:H7 IN PEANUT BUTTER
- Creator
- He, Yingshu
- Date
- 2014, 2014-05
- Description
-
Salmonella enterica is a frequent food contaminant and the leading cause of foodborne bacterial illnesses in the United States. Our study demonstrated that a 5-strain S. enterica cocktail displayed increased heat resistance in peanut butter of low water activity (aw). Significant differences (P < 0.05) were found between the survival rates of Salmonella enterica and Escherichia coli O157:H7 in peanut butter with different formulations and water activities. High carbohydrate content in peanut butter and low incubation temperature resulted in higher levels of bacterial survival during storage but lower levels of bacterial resistance to heat treatment. Furthermore, we also compared the relative heat resistance of three individual strains of S. enterica, representing serotypes Typhimurium, Enteritidis, and Tennessee, and of the 3-strain cocktail treated at both 90°C and 126°C in two peanut butter formulations with varied fat and carbohydrate contents and adjusted water activities (aw from 0.2 to 0.8). When treated at 90°C, increased water activity in peanut butter significantly (P < 0.05) reduced the heat resistance of desiccation-stressed S. enterica cells. Differences in heat resistance were also detected among the three S. enterica serotypes and between the two peanut butter formulations. When treated at 126°C, the differences in bacterial heat resistance among serotypes and adjusted water activities were less notable (P > 0.05). Based on the Weibull model, an average of 52 to 132 min was required to achieve a 5-log reduction of the 3-strain cocktail at 90°C in peanut butter with an aw of 0.2. When aw was increased to 0.6, only 23-27 min was required to achieve the same 5-log reduction. At an aw of 0.8, S. enterica could be completely killed in less than 10 min in peanut butter with a fat content of 48.49%. Using scanning electron microscopy, we observed minor morphological changes of S. enterica cells during the desiccation and rehydration processes in peanut oil, which was used as a surrogate for peanut butter. Results from this study collectively suggest that water activity plays a critical role in determining S. enterica heat resistance in peanut butter. The variability among the heat resistances of different S. enterica serotypes in different peanut butter formulations should also be taken into consideration when developing and validating effective intervention and mitigation strategies for peanut butter production.
Ph.D. in Biology, May 2014
- Title
- LONG-TERM AEROBIC AND ANAEROBIC TRANSFORMATIONS OF ORGANIC MATTER IN ANAEROBICALLY DIGESTED BIOSOLIDS
- Creator
- Lukicheva, Irina
- Date
- 2012-12-05, 2012-12
- Description
-
Long-term anaerobic storage of biosolids in a lagoon-type system as a post-treatment to anaerobic digestion is a proven process for further pathogen reduction to produce Class A biosolids. At the same time, the final biosolids product can develop odors during storage and handling, limiting the flexibility of biosolids utilization. The goal of this research was to study the properties of biosolids over different aging times to determine the stability of the final product with respect to its odor potential. Field lagoons of the Metropolitan Water Reclamation District of Greater Chicago were sampled to estimate the spatial and temporal variations in physical-chemical properties and biological stability indicators, namely total solids, volatile solids, pH, electric conductivity, total Kjeldahl nitrogen, ammonia-N, nitrite/nitrate-N, accumulated oxygen uptake in the 20-hour respirometric test, soluble protein concentration, and headspace concentrations of volatile sulfur compounds. The sampling campaign was performed in October 2009. Two types of lagoons were assessed in this study: high-solids lagoons, loaded with sludge that was previously anaerobically digested and dewatered by centrifuge, and low-solids lagoons, loaded with sludge that was previously digested but not dewatered. The analysis of the collected data suggested that in the high-solids lagoons the surface-layer biosolids (depth above 0.15 m) undergo long-term aerobic oxidation, resulting in a higher degree of final product stabilization. The subsurface layers (depth below 0.15 m) are subjected to an anaerobic environment whose conditions allow only the initial rapid organic matter degradation, approximately within the first year, followed by very slow degradation. In addition, microbiological analyses using Fluorescent In Situ Hybridization did not indicate active microbial communities in aged biosolids.
The performance of the low-solids lagoons in reducing the biodegradability parameters was shown to be similar to that of the high-solids lagoons. The low-solids lagoons were also shown to perform a dewatering function, raising the solids content of the digested sludge from an initial 2-3% TS to up to 16% TS. Although the lagoon-aged biosolids were found to be stable in comparison with other products, such as composts, further aerobic processes taking place after the lagoons, such as air-drying and stockpiling, could induce renewed biological activity. This could potentially result in odor formation from the air-dried final product. For these reasons, more research is required on the mechanisms promoting further product degradation after lagoon aging.
Ph.D. in Environmental Engineering, December 2012
- Title
- ADVANCING DESIGN SIZING AND PERFORMANCE OPTIMIZATION METHODS FOR BUILDING INTEGRATED THERMAL AND ELECTRICAL ENERGY GENERATION SYSTEMS
- Creator
- Zakrzewski, Thomas
- Date
- 2017, 2017-07
- Description
-
Combined electrical and thermal energy systems (i.e., cogeneration systems) will play an integral role in future energy supplies because they can yield higher overall fuel utilization and efficiency, and thus produce fewer greenhouse gas emissions, than traditionally separate systems. However, methods for both design sizing and performance optimization of cogeneration systems in commercial buildings lag behind the tremendous advancements that have been made in building performance simulation. Therefore, the overall goal of this research is to develop and apply novel cogeneration system modeling techniques for optimizing the design sizing and dispatch of generation sets so as to reduce energy use, energy costs, and greenhouse gas emissions. This research is divided into four main objectives: (1) generalizing the cogeneration performance of lean-burn natural gas spark-ignition reciprocating engines; (2) developing a new Design and Optimization of Combined Heat and Power (DOCHP) systems optimization tool for improving the design sizing of building-integrated and grid-tied CHP systems; (3) demonstrating the utility of the DOCHP tool with several practical applications; and (4) integrating on-site intermittent renewable energy systems into the DOCHP tool to analyze micro-grid applications. This research leverages recent developments in multiple areas of building and system simulation. DOCHP advances design sizing and performance optimization methods for building-integrated thermal and electrical energy generation systems through the application of an evolutionary genetic algorithm capable of solving non-linear optimization problems with discrete constraints while considering non-linear part-load generation set performance curves.
Ph.D. in Civil Engineering, July 2017
- Title
- DEVELOPMENT OF AN IMPLICITLY COUPLED ELECTROMECHANICAL AND ELECTROMAGNETIC TRANSIENTS SIMULATOR FOR POWER SYSTEMS
- Creator
- Abhyankar, Shrirang
- Date
- 2011-11, 2011-11
- Description
-
The simulation of electrical power system dynamic behavior is done using transient stability (TS) simulators and electromagnetic transient (EMT) simulators. A transient stability simulator, running at large time steps, is used for studying relatively slow dynamics, e.g., electromechanical interactions among generators, and can be used for simulating large-scale power systems. In contrast, an electromagnetic transient simulator models the same components in finer detail and uses a smaller time step for studying fast dynamics, e.g., electromagnetic interactions among power electronics devices. Simulating large-scale power systems with an electromagnetic transient simulator is computationally inefficient due to the small time step involved. A hybrid simulator interfaces the TS and EMT simulators, which run at different time steps. By modeling the bulk of the large-scale power system in a transient stability simulator and a small portion of the system in an electromagnetic transient simulator, the fast dynamics of the smaller area can be studied in detail while providing a global picture of the slower dynamics for the rest of the power system. In existing hybrid simulation interaction protocols, the two simulators run independently, exchanging solutions at regular intervals. However, the exchanged data is accepted without any evaluation, so errors may be introduced. While such an explicit approach may be a good strategy for systems in steady state or with slow variations, it is not an optimal or robust strategy when voltages and currents vary rapidly, as in a voltage collapse scenario. This research work proposes an implicitly coupled solution approach for combined transient stability and electromagnetic transient simulation.
To combine the two sets of equations with their different time steps, and to ensure that the TS and EMT solutions are consistent, the TS equations and the coupled-in-time EMT equations are solved simultaneously: while computing a single time step of the TS equations, several time steps of the EMT equations are calculated at the same time. Along with the implicitly coupled solution approach, this research also proposes using a three-phase representation of the TS network instead of the positive-sequence balanced representation used in existing transient stability simulators. Furthermore, a parallel implementation of the three-phase transient stability simulator and of the implicitly coupled electromechanical and electromagnetic transients simulator, using the high-performance computing library PETSc, is presented. Results of experimentation with different reordering strategies, linear solution schemes, and preconditioners are discussed for both the sequential and parallel implementations.
Ph.D. in Electrical Engineering, December 2011