Search results
(81 - 100 of 1,019)
- Title
- EMPIRICALLY KEYING PERSONALITY MEASURES TO MITIGATE FAKING EFFECTS AND IMPROVE VALIDITY: A MONTE CARLO INVESTIGATION
- Creator
- Tawney, Mark Ward
- Date
- 2012-12-05, 2012-12
- Description
Personality-type measures should be viable tools to use for selection. They have incremental validity over cognitive measures, and they add this incremental validity while decreasing adverse impact (Hough, 1998; Ones, Viswesvaran & Schmidt, 1993; Ones & Viswesvaran, 1998a). However, personality measures are susceptible to faking; individuals instructed to fake on personality measures are able to increase their scores (Barrick & Mount, 1996; Ellingson, Sackett & Hough, 1999; Hough, Eaton, Dunnette, Kamp, & McCloy, 1990). Further, personality measures often reveal less than optimal validity estimates, as research continually finds meta-analytic coefficients near .2 (e.g., Morgeson, Campion, Dipboye, Hollenbeck, Murphy, & Schmitt, 2007). Some researchers have suggested that these two problems are linked, as faking on personality measures may reduce their ability to predict job performance (e.g., Tett & Christiansen, 2007). Empirically keyed instruments traditionally enhance prediction and have been found to mitigate the effects of faking (Kluger, Reilly & Russell, 1991; Scott & Sinar, 2011). Building on empirical keying methods recently suggested for personality measures (e.g., Tawney & Mead, in prep), this dissertation investigates empirical keying both as a means to mitigate faking effects and as a means to increase the validity of personality-type measures. A Monte Carlo methodology is used because of the difficulty of obtaining accurate measures of faking; it allows faking to be investigated under controlled and known parameters, permitting more robust conclusions than prior faking research.
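As a rough illustration of the empirical keying idea described above (a minimal sketch; all item responses, criterion values, and function names below are hypothetical, not taken from the dissertation): each item is weighted by its observed correlation with the criterion in a development sample, and a respondent's keyed score is the weighted sum of item responses.

```python
# Hypothetical sketch of criterion-based empirical keying.
# All data below are made up for illustration only.

def pearson(x, y):
    """Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Development sample: rows = respondents, columns = item responses (1-5 scale).
items = [
    [5, 1, 4],
    [4, 2, 5],
    [2, 4, 2],
    [1, 5, 1],
]
criterion = [0.9, 0.8, 0.3, 0.1]  # e.g., job-performance ratings

# Empirical key: one weight per item, signed by its criterion correlation,
# so items that predict the criterion negatively are reverse-weighted.
weights = [pearson([row[j] for row in items], criterion)
           for j in range(len(items[0]))]

def keyed_score(responses):
    """Empirically keyed score for a new respondent's item responses."""
    return sum(w * r for w, r in zip(weights, responses))
```

In a faking context, the appeal of such keys is that weights are anchored to the criterion rather than to transparent item content, which is the property the dissertation evaluates under simulation.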
Ph.D. in Psychology, December 2012
- Title
- TOWARDS THE EXACT CALCULATIONS OF THE FREE ENERGY FOR ENTANGLED SEMIFLEXIBLE POLYMER CHAIN
- Creator
- Pilyugina, Ekaterina
- Date
- 2015, 2015-05
- Description
This work consists of two separate projects unified by the goal of extending the Discrete Slip-Link Model (DSM), which has been successfully developed in this group to predict the rheological behavior of entangled flexible polymers, to new applications. The first project applied the DSM to dielectric relaxation in order to simultaneously predict linear rheology and dielectric relaxation experiments on entangled polyisoprenes. Linear monodisperse, linear bidisperse, and star-branched monodisperse systems were studied. All cases save one are well described: dilute long chains in a sea of short chains can be predicted rheologically, but the dielectric relaxation data show a reduction in the relaxation time of the long chains greater than that predicted by either the DSM or the expected Rouse motion. The second project focused on deriving the exact free energy expression for semiflexible chains in the presence of entanglements, in order to implement the DSM for semiflexible polymers. The special cases of chains with one, two, and three strands are examined. The results for one and two strands were additionally applied to the buckling instability problem. It is believed that in the two-dimensional case the critical buckling force is increased by thermal fluctuations relative to classical Euler buckling; how the critical buckling force is influenced by thermal fluctuations in three dimensions, however, has remained unclear. Some research groups calculate the critical buckling force approximately and conclude that, in contrast to the 2D case, in 3D the force is decreased by thermal fluctuations. In this work the critical buckling force for a semiflexible chain under compression was calculated exactly. It was shown that thermal fluctuations significantly increase the critical force over the classical Euler buckling force in both two and three dimensions.
Ph.D. in Chemical Engineering, May 2015
- Title
- URBAN SPRAWL AND SUSTAINABLE DEVELOPMENT IN CHINA
- Creator
- Wang, Xiaoxiao
- Date
- 2012-07-11, 2012-07
- Description
Compared to the rich literature on urban sprawl in Western cities, relatively little is known of the driving factors, processes, and future trends of urban sprawl in China. This research analyzes the socio-economic forces behind two parts of urban sprawl in China, urban decentralization (the creation of development zones and new towns) and urban renewal (infrastructural changes to existing urban fabrics), and reveals two basic characteristics of Chinese urban sprawl: a) de-densification; and b) expansion of urbanized (built-up) areas. It uses the term "urban sprawl" to consider the reasons behind urban land-use changes and urban pattern transformations at a regional level. It begins with definitions of sprawl in Western and Eastern countries, followed by an analysis of the social, political, and cultural factors of sprawl. Three case studies focus on three urban centers in China: Beijing, Shanghai, and Guangzhou. A further component is SPSS-based analysis of urban sprawl and sustainable development indices for the top 15 urban regions in China over a 10-year period. The research identifies causes of urban sprawl in China: a) the changing residential preferences of some residents, who are willing to move out of the core; and b) overcrowded, deteriorated, and old-fashioned structures in central cities becoming targets for demolition in the pursuit of a new era of modernity, prosperity, and renaissance. It then argues that: a) uneven land reform is the key to understanding Chinese-style urban sprawl and the necessary condition for the paradox posed by development zones and urbanized villages; b) China's urban sprawl is driven by both market and government forces; and c) a series of new conditions shape urban sprawl in China, including rising private automobile ownership, rising demand for space and changing residential preferences, local public policy, and the real-estate industry. The research provides a comprehensive definition of "urban sprawl" in China, identifies the patterns of urban sprawl and growth in three urban regions (Beijing, Shanghai, and Guangzhou), and illustrates concepts and possible alternative strategies for green urban growth and change in China. Finally, it offers suggestions on how to effectively control urban sprawl in China, as well as a pathway to achieving sustainable development.
Ph.D. in Architecture, July 2012
- Title
- INJECTION LOCKING BASED ULTRA LOW POWER RADIO FREQUENCY INTEGRATED CIRCUITS
- Creator
- Zhu, Qiang
- Date
- 2012-05-31, 2012-07
- Description
Recent advances in radio frequency integrated circuit (RFIC) technology enable various innovative and versatile applications through ultra-low-power wireless links such as mesh sensor networks, personal area networks (PANs), and semi-active RFID. This thesis introduces energy-efficient demodulator and transceiver designs for wireless communications. At the receiver front end, an ultra-low-power BPSK demodulator based on injection-locked oscillators (ILOs) is introduced. Two second-harmonic ILOs are employed to convert BPSK signals to ASK signals, which are then demodulated to baseband by an envelope detector. For sub-GHz applications, the ILOs are implemented using ring oscillators to allow compact chip area and ultra-low power dissipation. Bit error rate (BER) analysis of this demodulator indicates erroneous polarity flipping of demodulated bits due to phase noise of the ILO. The prototype chip, fabricated in a 65 nm CMOS technology, consumes 228 μW of power and occupies 0.014 mm² of die area. Measurement results demonstrate demodulation of a 750 MHz, 5 Mb/s differential BPSK signal with a sensitivity of -43 dBm. The theoretical BER analysis was verified: the erroneous flipping was observed in measurement, with a probability close to the prediction. Next, an innovative injection-locking-based transceiver architecture for ultra-low-power operation is proposed. It applies the ILO-based BPSK demodulator on the receiver side. The oscillating signal of one receiver ILO is also injected into a transmitter ILO for accurate carrier generation; thus the local frequency synthesis circuit, which consumes a considerable portion of the power in a traditional transceiver, is not required. This design is implemented in a 45 nm CMOS SOI technology. Measurement results indicate that the transceiver achieves downlink demodulation of a -35 dBm BPSK signal at a 5 Mb/s data rate and uplink transmission of a -23 dBm ASK signal at a 1 Mb/s data rate, with 0.93 mA current consumption from a 1 V power supply.
Ph.D. in Electrical Engineering, July 2012
- Title
- A MULTI-CURVE LIBOR MARKET MODEL WITH UNCERTAINTIES DESCRIBED BY RANDOM FIELDS
- Creator
- Xu, Shengqiang
- Date
- 2012-12-19, 2012-12
- Description
The LIBOR (London Interbank Offered Rate) market model has been widely used as an industry-standard model for interest rate modeling and interest rate derivatives pricing. In this thesis, a multi-curve LIBOR market model with uncertainty described by random fields, called the multi-curve random fields LIBOR market model (MRFLMM), is proposed and investigated. First, the LIBOR market model is reviewed and closed-form formulas for pricing caplets and swaptions are provided. The model is then extended to the case where the uncertainty terms are modeled as random fields, and the corresponding closed-form formulas for pricing caplets and swaptions are derived; this new model is called the random fields LIBOR market model (RFLMM). Second, local volatility and stochastic volatility models are combined with the RFLMM to explain the volatility skews and smiles observed in the market. Closed-form volatility formulas are derived via a lognormal mixture model in the local volatility case, while an approximation scheme for the stochastic volatility case is obtained by a stochastic Taylor expansion method. This work is further extended to a multi-curve framework, where the curves generating future forward rates and the curve discounting cash flows are modeled distinctly but jointly. The multi-curve methodology was recently introduced by some pioneers to explain the inconsistency of interest rates after the 2008 credit crunch; both the LIBOR market model and the RFLMM above belong to the single-curve framework. Third, analogously to the single-curve case, the multi-curve random fields LIBOR market model is derived, and caplets and swaptions are priced with closed-form formulas that reduce exactly to Black's formulas. Meanwhile, local volatility and stochastic volatility models are also combined with the multi-curve LIBOR market model to explain the volatility skews and smiles in the market. Fourth, the calibration of the above models is considered. Taking a two-curve setting as an example, four models are compared: the single-curve LIBOR market model, the single-curve RFLMM, the two-curve LIBOR market model, and the two-curve RFLMM. The calibration is based on spot market data from one trading day: the four models are calibrated to the European cap volatility surface and swaption volatilities, given specified parameterized forms of the correlation and instantaneous volatility. The calibration results show that the random fields models capture the volatility smiles better than the non-random fields models and have smaller pricing errors. Moreover, the multi-curve models perform better than the single-curve models, especially during and after the credit crunch. Finally, the estimation of these four models, including pricing and hedging performance, is considered. The estimation uses time series of forward rates in the market: given a time series of the term structure, the parameters of the four models are estimated using the unscented Kalman filter (UKF). The results show that the random fields models yield better estimates than the non-random fields models, with more accurate in-sample and out-of-sample pricing and better hedging performance. The multi-curve models also outperform the single-curve models. In addition, it is shown theoretically and empirically that the random fields models have the advantage that the number of factors need not be determined in advance and re-calibration is not needed. The multi-curve random fields LIBOR market model thus combines the advantages of the multi-curve framework and the random fields setting.
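For reference, the Black caplet formula to which the closed-form prices reduce can be stated in its standard textbook form. Writing $F = F(0; T, T+\delta)$ for the forward LIBOR rate, $K$ for the strike, $\delta$ for the accrual period, $\sigma$ for the Black volatility, $P(0, T+\delta)$ for the discount bond, and $\Phi$ for the standard normal CDF:

$$
\mathrm{Cpl}(0) = \delta\, P(0, T+\delta)\,\bigl[F\,\Phi(d_1) - K\,\Phi(d_2)\bigr],
\qquad
d_{1,2} = \frac{\ln(F/K) \pm \tfrac{1}{2}\sigma^2 T}{\sigma\sqrt{T}}.
$$

In the multi-curve setting described above, the forward rate and the discount factor are taken from distinct (but jointly modeled) curves.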
Ph.D. in Applied Mathematics, December 2012
- Title
- ELECTROSPUN COLLAGEN/SILK TISSUE ENGINEERING SCAFFOLDS: FIBER FABRICATION, POST-TREATMENT OPTIMIZATION, AND APPLICATION IN NEURAL DIFFERENTIATION OF STEM CELLS
- Creator
- Zhu, Bofan
- Date
- 2017, 2017-05
- Description
Biocompatible scaffolds mimicking the locally aligned fibrous structure of native extracellular matrix (ECM) are in high demand in tissue engineering. In this thesis research, unidirectionally aligned fibers were generated via a home-built electrospinning system. Collagen type I, a major ECM component, was chosen because it supports cell proliferation and promotes neuroectodermal commitment in stem cell differentiation. Synthetic dragline silk proteins, biopolymers with remarkable tensile strength and superior elasticity, were also used as a model material. Good alignment, controllable fiber size and morphology, and a desirable deposition density of fibers were achieved by optimizing the solution and electrospinning parameters. Incorporating silk proteins into collagen was found to significantly enhance the mechanical properties and stability of the electrospun fibers. Glutaraldehyde (GA) vapor post-treatment was demonstrated to be a simple and effective way to tune the properties of collagen/silk fibers without changing their chemical composition. With 6-12 hours of GA treatment, electrospun collagen/silk fibers were not only biocompatible but could also effectively induce the polarization and neural commitment of stem cells, which were optimal on collagen-rich fibers due to the unique combination of biochemical and biophysical cues imposed on the cells. Taken together, electrospun collagen-rich composite fibers are mechanically strong and stable and provide excellent cell adhesion. The unidirectionally aligned fibers can accelerate neural differentiation of stem cells, representing a promising therapy for neural tissue degenerative diseases and nerve injuries.
Ph.D. in Chemistry, May 2017
- Title
- AUTOMATED PROGRESS CONTROL USING LASER SCANNING TECHNOLOGY
- Creator
- Zhang, Chengyi
- Date
- 2013, 2013-05
- Description
Assessing progress in different construction activities at the end of every payment period is time consuming and requires specialized personnel employed by the contractor and the owner. Automatic progress control requiring a minimum of human involvement could reduce the time spent on this activity, the number of personnel used, the cost involved, and disagreements between contractor and owner, and add to the overall efficiency of project management. Attempts have been made in the past to resolve this issue using image processing and other techniques, but the results have not been satisfactory. A new attempt was made to set up a system that can assess progress with minimal human input, and the results are presented here. The experiments made use of laser scanning technology and were conducted both under laboratory conditions and on construction sites. The initial laboratory results appear promising, but there are still obstacles to surmount: the system is robust and accurate under laboratory conditions and constitutes a proof of concept. Improvements were made to accelerate the registration of multiple scans, reduce noise in the data, recognize objects of irregular shape, and assess the practicality and economic feasibility of applying such a system on real construction sites. Keywords: construction scheduling, progress control, laser scanning
Ph.D. in Civil Engineering, May 2013
- Title
- SCALABLE RESOURCE MANAGEMENT SYSTEM SOFTWARE FOR EXTREME-SCALE DISTRIBUTED SYSTEMS
- Creator
- Wang, Ke
- Date
- 2015, 2015-07
- Description
Distributed systems are growing exponentially in computing capacity. On the high-performance computing (HPC) side, supercomputers are predicted to reach exascale, with billion-way parallelism, around the end of this decade. Scientific applications running on supercomputers are becoming more diverse, including traditional large-scale HPC jobs, small-scale HPC ensemble runs, and fine-grained many-task computing (MTC) workloads. Similar challenges are cropping up in cloud computing, as data centers host an ever-growing number of servers, exceeding many top HPC systems in production today. The applications commonly found in the cloud are ushering in the era of big data, resulting in billions of tasks that involve processing increasingly large amounts of data. However, the resource management system (RMS) software of distributed systems is still designed around the decades-old centralized paradigm, which is far from satisfying the ever-growing performance and scalability needs at extreme scales, due to the limited capacity of a centralized server. This huge gap between processing capacity and performance needs has driven us to develop next-generation RMSs that are orders of magnitude more scalable. In this dissertation, we first devise a general system software taxonomy to explore the design choices of system software, and propose that key-value stores could serve as a building block. We then design distributed RMSs on top of key-value stores. We propose a fully distributed architecture and a data-aware work stealing technique for MTC resource management, and develop the SimMatrix simulator to explore the distributed designs, which informs the real implementation of the MATRIX task execution framework. We also propose a partition-based architecture and resource sharing techniques for HPC resource management, and implement them in the Slurm++ workload manager and the SimSlurm++ simulator. We study the distributed designs through real systems of up to thousands of nodes, and through simulations of up to millions of nodes. Results show that the distributed paradigm has significant advantages over the centralized one. We envision that the contributions of this dissertation will be both evolutionary and revolutionary to the extreme-scale computing community, and will lead to a plethora of follow-on research and innovations toward tomorrow's extreme-scale systems.
Ph.D. in Computer Science, July 2015
- Title
- CO2 CAPTURE AND HYDROGEN PRODUCTION IN SORBENT ENHANCED WATER-GAS SHIFT (SEWGS) PROCESS WITH REGENERABLE SOLID SORBENT
- Creator
- Zarghami Khanehsar, Shahin
- Date
- 2015, 2015-07
- Description
Carbon dioxide emission from fossil fuel combustion and its impact on global warming is one of today's most critical environmental issues. Coal, a main source of energy production, is the most CO2-intensive fossil fuel. Advanced power generation processes that use gasification technology, such as the Integrated Gasification Combined Cycle (IGCC), offer higher efficiency and are among the leading contenders for power generation in the 21st century. In an IGCC process, because of the high pressure, the carbon dioxide in the fuel gas is at a higher concentration and can be captured and sequestered at lower cost. Utilization of regenerable MgO-based sorbents has been shown to be an effective method for capturing CO2 from gasification-based processes at elevated temperatures and pressures (i.e., p > 20 atm and 330 °C < T < 450 °C). A low-cost MgO-based sorbent can be prepared through modification of natural dolomite. The reactivity of the sorbent over carbonation/regeneration cycles has a significant impact on the economics of the proposed regenerable process. Although the sorbent can be regenerated in successive cycles, its reactivity and capacity gradually decline during cycling. It is therefore crucial to develop a better understanding of the key parameters affecting the reactivity of the sorbent through the cyclic carbonation/regeneration process. In this work, a systematic study of the sorbent preparation parameters (i.e., calcination temperature, calcination duration, calcination temperature ramp, potassium concentration, impregnation duration, drying temperature, re-calcination temperature, and re-calcination duration) was conducted to understand the effect of each parameter on the overall capacity and reactivity of the sorbent. The concentration of the potassium additive (a carbonation reaction promoter) has the most significant effect on the reactivity of the sorbent, and the optimum K/Mg molar ratio appears to be in the range of 0.1-0.16. The reactivity of the sorbent toward carbon dioxide at various operating conditions (i.e., temperature, CO2 concentration, and steam concentration) was experimentally evaluated. The presence of steam significantly improves the reactivity of the sorbent, which is attributed to the formation of a more favorable pore structure as well as the existence of a parallel carbonation reaction pathway involving the formation of a transient MgO.H2O* compound. The optimum carbonation reaction temperature in the presence of steam is around 380 °C. The effect of cycling on the CO2 capture capacity of the MgO-based sorbent was also experimentally investigated. Series of carbonation/regeneration cycles (up to 25) were carried out in a dispersed-bed reactor to determine the effect of various variables on the long-term durability of the sorbent. The gradual loss of CO2 sorption capacity appears to be mainly due to loss of potassium (the carbonation reaction promoter) during cycling. Durability of the sorbents improves in the presence of steam, likely due to favorable changes in the pore structure of the sorbents. A kinetic model was developed to fit the reactivity curves obtained from the dispersed-bed tests at different operating conditions, as needed to predict the sorbent/catalyst performance in the regenerative process. Model parameters were defined and discussed for each of the operating conditions, as well as for the dispersed-bed cyclic tests. Furthermore, the thermal behavior and the kinetics of partial decomposition of dolomite were studied in a dispersed-bed reactor to improve the reactivity of the sorbent. The microstructure and nature of the solid products were found to depend strongly on the CO2 partial pressure near the reacting interface and on the decomposition temperature. A significant increase in the rate of the dolomite decomposition reaction was found in the presence of steam. Steam improves the kinetics of decomposition, modifies the radial distribution of the pores, and improves the connectivity of the pores inside the dolomite particles, which decreases the diffusion resistance of the produced carbon dioxide inside the particle. A shrinking-core model with variable product-layer diffusivity was used to fit the experimental data and determine the kinetic parameters of the dolomite decomposition reaction. The results indicate that transport of CO2 across the reacting interface in the porous particle was the main limiting factor in the decomposition reaction at the experimental conditions investigated. A lab-scale high-pressure/high-temperature packed-bed reactor was used to evaluate the performance of the sorbent in a simultaneous water-gas shift reaction and sorbent carbonation environment. It was shown that the CO2 in coal gas can be removed by regenerable MgO-based sorbents at temperatures around 350 °C, and that CO2 removal shifts the WGS reaction to enhance hydrogen production. Therefore, sorbent-enhanced water-gas shift (SEWGS) can yield much higher hydrogen production without lowering the temperature, leading to higher overall process efficiency.
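As a reference point for the shrinking-core fitting mentioned above: in the standard textbook form of the model (constant product-layer diffusivity and a spherical particle; the dissertation uses a variable-diffusivity variant), when product-layer diffusion controls, time $t$ and conversion $X$ are related by

$$
\frac{t}{\tau} = 1 - 3(1 - X)^{2/3} + 2(1 - X),
\qquad
\tau = \frac{\rho_B R^2}{6\, b\, D_e\, C_A},
$$

where $\tau$ is the time for complete conversion, $\rho_B$ the molar density of the solid reactant, $R$ the particle radius, $D_e$ the effective diffusivity through the product layer, $C_A$ the bulk gas concentration, and $b$ the stoichiometric coefficient. Letting $D_e$ vary with conversion generalizes this expression to the variable-diffusivity case used in this work.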
Ph.D. in Chemical Engineering, July 2015
- Title
- INCORPORATING REACTIVE POWER MARKET INTO THE DAY-AHEAD ELECTRICITY MARKET
- Creator
- Al Ghamdi, Mohammed
- Date
- 2012-05-29, 2012-07
- Description
The research presented in this thesis proposes incorporating the reactive power market into the day-ahead electricity market in order to compensate generation companies (GENCOs) and independent power producers (IPPs) for providing additional reactive power support, which varies on an hourly basis with the load demand, the transmission network configuration, and any contingencies that might occur. This proposal would minimize the independent system operator's (ISO's) total payment burden related to reactive power dispatch. The proposed model achieves the main objective of an ISO in a competitive electricity market: to provide the required reactive power support from generators at minimum cost while ensuring the secure operation of the power system. In this research, the reactive power price is a bidding-based price submitted by the GENCOs and IPPs to the ISO during the day-ahead market. The proposal takes into account both the technical and the economic aspects of active and reactive power dispatch in the context of the new operating paradigms of competitive electricity markets. Security Constrained Unit Commitment (SCUC) based on AC power flow modeling is used as the engine for clearing the day-ahead electricity market from the information provided by the market participants. This framework would provide appropriate reactive power support from service providers at minimum cost while ensuring the secure and reliable operation of the electrical power system. The PQ capability curves of the generating units are modeled to ensure the practicality of the SCUC solutions obtained. This proposal would be an essential step toward a fair electricity market, while increasing the security of the power system and reducing transmission congestion. It would also pave the way for various renewable energy resources, since their penetration would impact the commitment of the generating units, and hence the available reactive power reserve margin and the security of the network. In addition, incorporating the reactive power market into the day-ahead market would provide a clear signal for optimal private investment in reactive power capacity. The framework developed is general in nature and can be used with any electricity market structure.
Ph.D. in Electrical Engineering, July 2012
- Title
- INVESTIGATION OF NIOBIUM SURFACE STRUCTURE AND COMPOSITION FOR IMPROVEMENT OF SUPERCONDUCTING RADIO-FREQUENCY CAVITIES
- Creator
- Trenikhina, Yulia
- Date
- 2014, 2014-12
- Description
Nano-scale investigation of the intrinsic properties of the niobium near-surface is key to controlling the performance of niobium superconducting radio-frequency (SRF) cavities. The mechanisms responsible for performance limitations, and their empirical remedies, need to be understood in order to reproducibly fabricate SRF cavities with the desired characteristics. The high field Q-slope and the mechanism behind its cure (120 °C mild vacuum bake) were investigated by comparing samples cut out of cavities with high- and low-dissipation regions. Material evolution during nitrogen treatment was characterized using coupon samples as well as samples cut out of a nitrogen-treated cavity. The state of the niobium near-surface after several typical and novel cavity treatments was evaluated. Various TEM techniques, SEM, XPS, AES, and XRD were used for the structural and chemical characterization of the niobium near-surface. A combination of thermometry and temperature-dependent structural comparison of cavity cutouts with different dissipation characteristics revealed precipitation of niobium hydrides to be the reason for the medium and high field Q-slopes. The step-by-step effect of the nitrogen treatment on the niobium surface was studied by analytical and structural characterization of the cavity cutout and of niobium samples subjected to the treatment. Low-concentration nitrogen doping is proposed to explain the benefit of nitrogen treatment. Chemical characterization of niobium samples before and after various surface processing steps (electropolishing (EP), 800 °C bake, hydrofluoric acid (HF) rinsing) showed differences that can help reveal the microscopic effects behind these treatments, as well as possible sources of surface contamination.
Ph.D. in Physics, December 2014
- Title
- EXPLOITING NETWORK CODING IN DIFFERENT WIRELESS NETWORKS
- Creator
- Guo, Bin
- Date
- 2012-07-06, 2012-07
- Description
Wireless communication networks have been incorporated into our daily life and provide convenience anytime and anywhere. However, the wireless medium is unreliable and unpredictable, and current wireless networks suffer from low throughput, low reliability, and related problems. Network coding, an alternative approach, has attracted growing interest and has emerged as an important technology in wireless networks: it can provide significant potential throughput improvements and a high degree of robustness. This dissertation is built on the theory of network coding and designs different network coding protocols for varied wireless networks. The first part proposes a novel coding-aware routing protocol for wireless mesh networks. In particular, a generalized coding condition is formally established to identify coding opportunities. Based on this analysis, a novel routing metric, FORM (Free-ride Optimal Routing Metric), and the corresponding routing protocol are developed with the objective of exploiting coding opportunities and maximizing the benefit of the "free ride", in order to reduce the total number of transmissions and consequently increase network throughput. The results show that the proposed protocol achieves significant throughput gains over existing approaches. The second part exploits network coding in wireless cooperative networks. First, a Decode-and-Forward Network Coded (DFNC) protocol is proposed for multi-user cooperative communication systems. DFNC develops an efficient construction method for the coding coefficients and a novel decoding algorithm that combines network coding and channel coding. DFNC exploits both temporal and spatial diversity through multiple channels by allowing all users to generate redundant network-coded packets in a distributed manner, fully exploiting the redundancy provided by network coding to realize error correction. Theoretical analysis and simulation results demonstrate that DFNC outperforms other transmission schemes in terms of symbol error rate (SER) and achieves a higher diversity order. Second, the idea of DFNC is extended to Modified-DFNC (M-DFNC) for a more practical scenario in which not all users are able to dedicate their resources to assisting others. The throughput analysis shows that M-DFNC outperforms the conventional cooperative protocol in the low-SNR regime, implying that an adaptive cooperation system should be adopted to optimize performance. The simulation results validate the theoretical analysis.
Ph.D. in Electrical Engineering, July 2012
- Title
- SUSTAINABLE MULTILINGUAL COMMUNICATION: MANAGING MULTILINGUAL CONTENT USING FREE AND OPEN SOURCE CONTENT MANAGEMENT SYSTEMS
- Creator
- Kelsey, Todd
- Date
- 2011-05-03, 2011-05
- Description
-
Multilingual content management systems, combined with streamlined processes and inexpensive organizational tools, make it possible for educators, non-profit entities, and individuals with limited resources to develop sustainable and accessible multilingual Web sites. The research included a review of what has been done in the theory and practice of designing Web sites for multilingual audiences. On the basis of that review, a series of sustainable multilingual Web sites was created, and a series of approaches and systems was tested, including MediaWiki, Plone, Drupal, Joomla, PHPMyFAQ, Blogger, Google Docs, and Google Sites. A case study was also conducted on “Social CMS,” which refers to emergent social networks such as Facebook. The case studies are reported and conclude with high-level recommendations that form a roadmap for sustainable multilingual Web site development.
Ph.D. in Technical Communication, May 2011
- Title
- THERMAL RESISTANCE OF SALMONELLA ENTERICA AND ESCHERICHIA COLI O157:H7 IN PEANUT BUTTER
- Creator
- He, Yingshu
- Date
- 2014, 2014-05
- Description
-
Salmonella enterica is a frequent food contaminant and the leading cause of foodborne bacterial illnesses in the United States. Our study demonstrated that a 5-strain S. enterica cocktail displayed increased heat resistance in peanut butter of low water activity (aw). Significant differences (P < 0.05) were found between the survival rates of S. enterica and Escherichia coli O157:H7 in peanut butter with different formulations and water activities. High carbohydrate content in peanut butter and low incubation temperature resulted in higher levels of bacterial survival during storage but lower levels of bacterial resistance to heat treatment. Furthermore, we also compared the relative heat resistance of three individual strains of S. enterica, representing serotypes Typhimurium, Enteritidis, and Tennessee, and of the 3-strain cocktail, treated at both 90°C and 126°C in two peanut butter formulations with varied fat and carbohydrate contents and adjusted water activities (aw from 0.2 to 0.8). When treated at 90°C, increased water activity in peanut butter significantly (P < 0.05) reduced the heat resistance of desiccation-stressed S. enterica cells. Differences in heat resistance were also detected among the three S. enterica serotypes and between the two peanut butter formulations. When treated at 126°C, the differences in bacterial heat resistance among serotypes and adjusted water activities were less notable (P > 0.05). Based on the Weibull model, an average of 52 to 132 min was required to achieve a 5-log reduction of the 3-strain cocktail at 90°C in peanut butter with an aw of 0.2. When aw was increased to 0.6, the same 5-log reduction required only 23-27 min. At an aw of 0.8, S. enterica could be completely killed in less than 10 min in peanut butter with a fat content of 48.49%. Using scanning electron microscopy, we observed minor morphological changes of S. enterica cells during desiccation and rehydration in peanut oil, which was used as a surrogate for peanut butter. Results from this study collectively suggest that water activity plays a critical role in determining S. enterica heat resistance in peanut butter. The variability in heat resistance among different S. enterica serotypes in different peanut butter formulations should also be taken into consideration when developing and validating effective intervention and mitigation strategies in peanut butter production.
Ph.D. in Biology, May 2014
- Title
- LONG-TERM AEROBIC AND ANAEROBIC TRANSFORMATIONS OF ORGANIC MATTER IN ANAEROBICALLY DIGESTED BIOSOLIDS
- Creator
- Lukicheva, Irina
- Date
- 2012-12-05, 2012-12
- Description
-
Long-term anaerobic storage of biosolids in a lagoon-type system as a post-treatment to anaerobic digestion is a proven process for further pathogen reduction to produce Class A biosolids. At the same time, the final biosolids product could develop odors during storage and handling, limiting the flexibility of biosolids utilization. The goal of this research was to study the properties of biosolids under different aging times in order to determine the stability of the final product with respect to its odor potential. Field lagoons of the Metropolitan Water Reclamation District of Greater Chicago were sampled to estimate the spatial and temporal variations in physical-chemical properties and biological stability indicators, namely total solids, volatile solids, pH, electric conductivity, total Kjeldahl nitrogen, ammonia-N, nitrite/nitrate-N, accumulated oxygen uptake in the 20-hour respirometric test, soluble protein concentration, and headspace concentrations of volatile sulfur compounds. The sampling campaign was performed in October 2009. Two types of lagoons were assessed in this study: high-solids lagoons, which are loaded with sludge that was previously anaerobically digested and dewatered in centrifuges, and low-solids lagoons, which are loaded with sludge that was previously digested but not dewatered. The analysis of the collected data suggested that in the high-solids lagoons, the surface-layer biosolids (depth above 0.15 m) undergo long-term aerobic oxidation, resulting in a higher degree of final product stabilization. The subsurface layers (depth below 0.15 m) are subjected to an anaerobic environment where conditions allow only the initial rapid organic matter degradation, approximately within the first year, followed by very slow degradation. In addition, microbiological analyses using fluorescence in situ hybridization did not indicate active microbial communities in aged biosolids.
The performance of low-solids lagoons in reducing the biodegradability parameters was shown to be similar to that of the high-solids lagoons. Low-solids lagoons were also shown to perform a dewatering function, increasing the solids content of the digested sludge from an initial 2-3% TS to up to 16% TS. Although the lagoon-aged biosolids were found to be stable in comparison with other products, such as composts, further aerobic processes taking place after the lagoons, such as air-drying and stockpiling, could induce renewed biological activity. This could potentially result in odor formation from the air-dried final product. For these reasons, more research is required on the mechanisms promoting further product degradation after lagoon aging.
Ph.D. in Environmental Engineering, December 2012
- Title
- ADVANCING DESIGN SIZING AND PERFORMANCE OPTIMIZATION METHODS FOR BUILDING INTEGRATED THERMAL AND ELECTRICAL ENERGY GENERATION SYSTEMS
- Creator
- Zakrzewski, Thomas
- Date
- 2017, 2017-07
- Description
-
Combined electrical and thermal energy systems (i.e., cogeneration systems) will play an integral role in future energy supplies because they can yield higher overall fuel utilization and efficiency, and thus produce fewer greenhouse gas emissions, than traditionally separate systems. However, methods for the design sizing and performance optimization of cogeneration systems in commercial buildings lag behind the tremendous advancements that have been made in building performance simulation. Therefore, the overall goal of this research is to develop and apply novel cogeneration system modeling techniques for optimizing the design sizing and dispatch of generation sets to reduce energy use, energy costs, and greenhouse gas emissions. The research is divided into four main objectives: (1) generalizing the cogeneration performance of lean-burn natural gas spark-ignition reciprocating engines; (2) developing a new Design and Optimization of Combined Heat and Power (DOCHP) systems optimization tool for improving the design sizing of building-integrated, grid-tied CHP systems; (3) demonstrating the utility of the DOCHP tool with several practical applications; and (4) integrating on-site intermittent renewable energy systems into the DOCHP tool to analyze micro-grid applications. This research leverages recent developments in multiple areas of building and system simulation methods. DOCHP advances design sizing and performance optimization methods for building-integrated thermal and electrical energy generation systems through the application of an evolutionary, artificial-intelligence-based genetic algorithm able to solve non-linear optimization problems with discrete constraints while considering non-linear part-load generation-set performance curves.
Ph.D. in Civil Engineering, July 2017
- Title
- DEVELOPMENT OF AN IMPLICITLY COUPLED ELECTROMECHANICAL AND ELECTROMAGNETIC TRANSIENTS SIMULATOR FOR POWER SYSTEMS
- Creator
- Abhyankar, Shrirang
- Date
- 2011-11, 2011-11
- Description
-
The simulation of electrical power system dynamic behavior is done using transient stability (TS) simulators and electromagnetic transient (EMT) simulators. A transient stability simulator, running at large time steps, is used for studying relatively slow dynamics, e.g., electromechanical interactions among generators, and can be used for simulating large-scale power systems. In contrast, an electromagnetic transient simulator models the same components in finer detail and uses a smaller time step for studying fast dynamics, e.g., electromagnetic interactions among power electronics devices. Simulating large-scale power systems with an electromagnetic transient simulator is computationally inefficient due to the small time step involved. A hybrid simulator interfaces the TS and EMT simulators, which run at different time steps. By modeling the bulk of the large-scale power system in a transient stability simulator and a small portion of the system in an electromagnetic transient simulator, the fast dynamics of the smaller area can be studied in detail while providing a global picture of the slower dynamics of the rest of the power system. In the existing hybrid simulation interaction protocols, the two simulators run independently, exchanging solutions at regular intervals. However, the exchanged data are accepted without any evaluation, so errors may be introduced. While such an explicit approach may be a good strategy for systems in steady state or with slow variations, it is not an optimal or robust strategy when the voltages and currents vary rapidly, as in a voltage collapse scenario. This research work proposes an implicitly coupled solution approach for the combined transient stability and electromagnetic transient simulation.
To combine the two sets of equations with their different time steps, and to ensure that the TS and EMT solutions are consistent, the TS equations and the coupled-in-time EMT equations are solved simultaneously. While computing a single time step of the TS equations, a simultaneous calculation of several time steps of the EMT equations is proposed. Along with the implicitly coupled solution approach, this research work also proposes using a three-phase representation of the TS network instead of the positive-sequence balanced representation used in existing transient stability simulators. Furthermore, a parallel implementation of the three-phase transient stability simulator and the implicitly coupled electromechanical and electromagnetic transients simulator, using the high-performance computing library PETSc, is presented. Results of experimentation with different reordering strategies, linear solution schemes, and preconditioners are discussed for both the sequential and parallel implementations.
Ph.D. in Electrical Engineering, December 2011
- Title
- IMPROVED SPATIAL-TEMPORAL RECONSTRUCTION FOR CARDIAC AND RESPIRATORY GATED SPECT
- Creator
- Qi, Wenyuan
- Date
- 2014, 2014-12
- Description
-
Myocardial perfusion single photon emission computed tomography (SPECT) is an important imaging technique for evaluating coronary artery disease. It can provide information on both myocardial perfusion and ventricular function. However, SPECT images suffer from both cardiac and respiratory motion blur. To reduce this motion degradation, cardiac and respiratory gated SPECT imaging is used. In gated SPECT imaging, because of the lowered counts, the gated images are noisier than ungated ones. Spatiotemporal (4D) processing is often used to reduce the noise level in gated images. In this thesis, we aim to investigate spatial and temporal processing techniques for improving image quality in cardiac and respiratory gated SPECT imaging. First, we will investigate a piecewise spatial smoothing prior based on total variation (TV) in 4D cardiac SPECT image reconstruction. In previous studies, it was found that spatial smoothing could adversely affect the accuracy of 4D reconstruction in cardiac gated SPECT when temporal smoothing was applied, even though it could suppress the noise level. Our goal is to explore whether a piecewise spatial smoothing prior can improve image accuracy while reducing noise. Toward this goal, we will compare TV-based piecewise spatial smoothing with quadratic spatial smoothing in simulated imaging, in which we will evaluate lesion detectability. Clinical data will also be used to compare the results as a preliminary test. Motion-compensated temporal smoothing is known to play a key role in 4D cardiac gated SPECT reconstruction. Next, we will investigate whether better motion estimation could further improve the accuracy of reconstructed images. We will consider two different motion estimation models, along with the known motion, in simulated experiments. The motion estimation methods are classic optical flow estimation (OFE) and a periodic motion estimation method.
We will evaluate the reconstructions from the different motion models using several numerical quantification metrics. Furthermore, we will demonstrate reconstruction with the two motion estimation models using clinical acquisitions. Respiratory motion is known to cause motion blur in SPECT image reconstruction, and respiratory gated SPECT imaging can be effective in combating its effect. We will develop reconstruction techniques for respiratory gated SPECT, considering two reconstruction schemes. The first scheme is a post motion-compensated reconstruction, in which images at different respiratory phases are reconstructed separately and afterwards averaged over all respiratory gates by motion compensation. The second scheme is a model-based motion-compensated reconstruction approach, in which one reference gate is used to describe the acquisition data of all respiratory gates. Due to irregular respiratory motion, the data acquisition in each respiratory gate is not uniformly distributed among the acquisition angles, which would lead to limited-angle artifacts. To correct such artifacts, we propose an angle compensation method in the reconstruction. To deal with both cardiac and respiratory motion, we will investigate a 4D reconstruction approach for dual cardiac-respiratory gated SPECT reconstruction. This approach can accommodate the data acquired simultaneously from different cardiac and respiratory gates, and it can exploit the correlation in the signal component among both the cardiac and respiratory phases. Both simulated experiments and clinical reconstructions will be used to evaluate this reconstruction approach. Because of the radiation risk of myocardial perfusion imaging (MPI) scans, there is an urgent need to lower the radiation dose used in SPECT. However, a lower radiation dose leads to noisier reconstruction, which is even more serious in gated SPECT. We will explore the potential of using 4D reconstruction to lower the dose in dual cardiac-respiratory gated SPECT.
Ph.D. in Electrical and Computer Engineering, December 2014
- Title
- ROLE OF EXTRACELLULAR MATRIX IN CELLULAR BEHAVIOR AND TISSUE FUNCTION
- Creator
- Sridharan, Indumathi
- Date
- 2012-04-22, 2012-05
- Description
-
Matrix-dictated control of stem cell differentiation and tissue status is of considerable interest to cell biologists and tissue engineers. To create suitable biological scaffolds for tissue engineering and cell therapeutics, it is essential to understand matrix-mediated specification of cell lineage. Our study examines the role of matrix properties in cellular behavior and tissue mechanics. To this end, we studied the effect of collagen type I on stem cell differentiation and its mechanical properties within a live tissue. We altered the properties of collagen type I by incorporating carbon nanotubes (CNTs). The resulting collagen-carbon nanotube (collagen-CNT) composite material was stiffer, with thicker fibers and a longer D-period. We find that the enhanced mechanical and structural properties of collagen-CNT allow rapid and efficient derivation of neural progenitors from human decidua parietalis placental stem cells (hdpPSC). Both the structure and the stiffness of the matrix are important determinants of the neural differentiation rate. Strikingly, the collagen-CNT matrix, unlike collagen, imposes the neural fate by an alternate mechanism that is independent of beta-1 integrin and beta-catenin. The study demonstrates the sensitivity of stem cells to subtle changes in the matrix and the utilization of a novel biocomposite material for efficient and directed differentiation of stem cells. Investigation of connective tissue disorders has led to an understanding of the important role played by collagen; so far, however, native collagen fibers within an intact tissue have not been examined. In this study, we employed a unique approach, histochemical-staining-guided high-resolution elasticity mapping, to study collagen and smooth muscle in fresh vaginal wall connective tissue. The comparative study of tissues collected from healthy pre-menopausal (pre-M) and post-menopausal (post-M) women suggests that during menopause, collagen’s structure and elasticity are subtly altered.
The systematic analysis enables detection of minute changes in collagen in non-fatal conditions such as pelvic organ prolapse and other genitourinary disorders, where the initial symptoms are subtle and multivariate and where early diagnosis will allow non-invasive interventions and reduce the incidence of surgical correction for these common disorders.
Ph.D. in Molecular Biochemistry and Biophysics, May 2012
- Title
- THE EXAMINATION OF EFFORT TESTS: IDENTIFYING AN EFFICIENT APPROACH TO THE ASSESSMENT OF MALINGERING
- Creator
- Van De Kreeke, Diana
- Date
- 2013, 2013-05
- Description
-
Malingering is an important issue in neuropsychology. A person can malinger both cognitive and psychological symptoms, and it is important for a clinician to assess for this possibility because malingering invalidates test findings. Several embedded and stand-alone effort tests exist for the purpose of malingering classification. This study assessed the effectiveness of embedded cognitive effort measures as compared to stand-alone effort measures. Additionally, the effectiveness of a smaller set of measures versus a larger set was analyzed for both cognitive and psychological measures. The likelihood of a person malingering both cognitive and psychological symptoms was assessed. Lastly, exploratory analyses were conducted to assess for differences between malingerers and non-malingerers. It was discovered that the California Verbal Learning Test-Second Edition-Forced Choice and the Victoria Symptom Validity Test were poor estimators of malingering classification. Therefore, the cognitive effort measures included in the analyses were the Reliable Digit Span, Rey 15-Item Test, Word Memory Test, and Test of Memory Malingering. Psychological measures included the F and FBS indices from the Minnesota Multiphasic Personality Inventory-2 and the Negative Impression Management and Malingering Index scales from the Personality Assessment Inventory. Findings revealed that stand-alone tests add a significant amount of variance to malingering classification over and above embedded measures in a cognitive test battery. The most effective set of cognitive effort tests included the Reliable Digit Span, Rey 15-Item Test, and Word Memory Test. The Test of Memory Malingering did not add significant additional variance to the classification of malingering. Results showed that a person is not likely to malinger both cognitive and psychological symptoms.
Lastly, the F and Negative Impression Management indices were just as effective at classifying malingering as when the FBS and Malingering Index scales were also used. Future research should further assess the actual sensitivities of the California Verbal Learning Test-Second Edition-Forced Choice and the Victoria Symptom Validity Test. Research should also assess whether different cut-off scores for the Test of Memory Malingering lead to increased efficiency of the measure for malingering classification.
Ph.D. in Psychology, May 2013