Search results
(1,341 - 1,360 of 4,483)
- Title
- ADAPTIVE QUASI-MONTE CARLO CUBATURE
- Creator
- Jimenez Rugama, Lluis Antoni
- Date
- 2016, 2016-12
- Description
In some definite integral problems the analytical solution is either unknown or hard to compute. As an alternative, one can approximate the solution with numerical methods that estimate the value of the integral. However, for high-dimensional integrals many techniques suffer from the curse of dimensionality. This can be avoided by using quasi-Monte Carlo methods, which do not suffer from this phenomenon. Section 2.2 describes digital sequences and rank-1 lattice node sequences, two of the most common point sets used in quasi-Monte Carlo. If one uses quasi-Monte Carlo, there is still another problem to address: how many points are needed to estimate the integral within a particular absolute error tolerance. In this dissertation, we propose two automatic cubatures based on digital sequences and rank-1 lattice node sequences for high-dimensional problems. These new algorithms are constructed in Chapter 3, and the user-specified absolute error tolerance is guaranteed to be satisfied for a specific set of integrands. In Chapter 4 we define a new estimator that satisfies a generalized tolerance function and includes a relative error tolerance option. An important property of quasi-Monte Carlo methods is that they are effective when the function has low effective dimension. In [1], Sobol' defined the global sensitivity indices, which measure what part of the variance is explained by each dimension. We can use these indices to measure the effective dimensionality of a function. In Chapter 5 we extend our digital sequences cubature to estimate first-order and total-effect Sobol' indices.
Ph.D. in Applied Mathematics, December 2016
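As a rough illustration of the kind of adaptive quasi-Monte Carlo cubature described above (not the thesis's guaranteed algorithm, whose stopping rule bounds the error via coefficients of the integrand), a naive doubling loop over SciPy's scrambled Sobol' points can be sketched; the successive-estimate stopping rule below is only a stand-in, and the function name and parameters are illustrative.

```python
# Naive adaptive QMC sketch: double the number of scrambled Sobol' points
# until successive estimates agree within an absolute tolerance. The genuine
# algorithms guarantee the tolerance analytically; this stopping rule does not.
import numpy as np
from scipy.stats import qmc

def adaptive_qmc(f, d, abs_tol=1e-4, m_max=20):
    sobol = qmc.Sobol(d, scramble=True)
    n = 2**10
    est_prev = f(sobol.random(n)).mean()      # pilot estimate from n points
    for _ in range(m_max):
        est_new = f(sobol.random(n)).mean()   # next block, keeping totals powers of 2
        est = 0.5 * (est_prev + est_new)      # combined estimate over 2n points
        if abs(est - est_prev) < abs_tol:
            return est
        est_prev, n = est, 2 * n
    raise RuntimeError("tolerance not met within budget")

# Example: integrate f(x) = sum_j x_j^2 over [0,1]^5 (true value 5/3).
print(adaptive_qmc(lambda x: (x**2).sum(axis=1), d=5))
```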
- Title
- THERMAL INACTIVATION OF SALMONELLA AGONA IN LOW-MOISTURE FOOD SYSTEMS AS INFLUENCED BY WATER ACTIVITY
- Creator
- Jin, Yuqiao
- Date
- 2016, 2016-07
- Description
Salmonella can survive in low-moisture, high-protein and high-fat foods for several years. Despite nationwide recalls for Salmonella in low-moisture products, information on the survival of Salmonella during high-protein and high-fat food processing is limited. This project evaluated the thermal inactivation kinetics of Salmonella enterica serovar Agona 447967 in a high-protein and a high-fat matrix using a defined matrix composition, varying water activities, and varying process conditions. A high-protein matrix, composed of a 60:6:25 weight ratio of flour:oil:protein, and a high-fat matrix, composed of a 60:25:6 weight ratio of flour:oil:protein, were studied. Each matrix was inoculated with Salmonella enterica serovar Agona 447967 at water activities of 0.5 and 0.9. Samples were packed in aluminum test cells and heat treated over a range of temperatures and time intervals. Survival of Salmonella Agona was detected on trypticase soy agar with 0.6% yeast extract. The average z-values for the high-protein matrix at water activities (aw) of 0.5 and 0.9 were 9.01 ºC and 7.51 ºC, respectively. The average z-values for the high-fat matrix were 11.91 ºC at aw 0.5 and 7.08 ºC at aw 0.9. Results showed that the z-value at aw 0.5 was significantly different from the z-value at aw 0.9 (p < 0.05) in both the high-protein and high-fat matrices. Critical process factors associated with pathogen destruction were identified during the thermal treatments in this project. Results indicated that a correlation existed between temperature and water activity that must be accounted for when predicting inactivation of Salmonella enterica in these model matrices under dynamic process conditions.
M.S. in Food Process Engineering, July 2016
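For context, the z-value reported above is defined through the standard log-linear thermal-death model; the relations below are textbook definitions, not derivations from the thesis data.

```latex
% Textbook thermal-death relations: D_T is the decimal-reduction time at
% temperature T, and z is the temperature increase that reduces D tenfold.
\[
  \log_{10}\frac{N(t)}{N_0} = -\frac{t}{D_T},
  \qquad
  z = \frac{T_2 - T_1}{\log_{10} D_{T_1} - \log_{10} D_{T_2}} .
\]
% With z = 9.01 C (high-protein matrix, a_w = 0.5), raising the treatment
% temperature by 9.01 C shortens the time per 1-log10 reduction tenfold.
```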
- Title
- BELIEFS AND CONTEXTUAL MEDIATORS AND MODERATORS OF DISCRETIONARY WORKPLACE BEHAVIOR
- Creator
- Raad, Jason H.
- Date
- 2014, 2014-07
- Description
The Theory of Planned Behavior (TPB) has been successfully used to link attitudes, subjective norms, and perceived behavioral control to the enactment of various behaviors in numerous situations; however, the TPB is not frequently used in organizational settings. Similarly, contextual factors may represent important moderating and mediating effects that have not been fully explored in prior TPB research. The current study employs the TPB in a healthcare setting to assess the use of Outcome Measures (OMs) by practicing clinicians. Two contextual mediators and one contextual moderator were added to the standard TPB framework in an attempt to better explain the enactment of discretionary workplace behavior. Results suggest that TPB components are related to the discretionary use of Outcome Measures in clinical practice; however, results also suggest that the hypothesized relationships between TPB factors may diverge significantly from those proposed in the original theory. Implications, limitations, and future directions are also discussed.
Ph.D. in Psychology, July 2014
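As an illustration of how a contextual moderator of this kind is typically tested, a moderated-regression sketch follows; the variable names, data file, and model specification are hypothetical, not the thesis's actual measures or analysis.

```python
# Illustrative moderated regression in the TPB style: intention regressed on
# attitude, subjective norm, perceived behavioral control (PBC), and a
# contextual moderator, with an attitude x moderator interaction term.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("tpb_survey.csv")  # hypothetical survey data
model = smf.ols(
    "intention ~ attitude * context_moderator + subjective_norm + pbc",
    data=df,
).fit()
print(model.summary())  # a significant interaction term indicates moderation
```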
- Title
- GUARANTEED ADAPTIVE MONTE CARLO METHODS FOR ESTIMATING MEANS OF RANDOM VARIABLES
- Creator
- Jiang, Lan
- Date
- 2016, 2016-05
- Description
Monte Carlo is a versatile computational method that may be used to approximate the means, μ, of random variables, Y, whose distributions are not known explicitly. This thesis investigates how to reliably construct fixed-width confidence intervals for μ with some prescribed absolute error tolerance, ε_a, relative error tolerance, ε_r, or some generalized error criterion. To facilitate this, it is assumed that the kurtosis, κ, of the random variable, Y, does not exceed a user-specified bound, κ_max. The key idea is to confidently estimate the variance of Y by applying Cantelli's Inequality. A Berry-Esseen Inequality makes it possible to determine the sample size required to construct such a confidence interval. When relative error is involved, this requires an iterative process. This idea for computing μ = E(Y) can be used to develop a numerical integration method by writing the integral as μ = E(f(X)) = ∫_{ℝᵈ} f(x)ρ(x) dx, where X is a d-dimensional random vector with probability density function ρ. A similar idea is used to develop an algorithm for computing p = E(Y) where Y is a Bernoulli random variable. All of the algorithms have been implemented in the Guaranteed Automatic Integration Library (GAIL).
Ph.D. in Applied Mathematics, May 2016
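A minimal two-stage sketch in the spirit of the approach follows; it is not the guaranteed algorithm: the plain CLT sizing step below is a naive stand-in for the Cantelli and Berry-Esseen bounds that give the thesis its guarantee, and the function name and parameters are illustrative.

```python
# Two-stage Monte Carlo sketch: a pilot sample yields an inflated variance
# estimate, which then sizes the main sample for a fixed-width interval.
import numpy as np

def two_stage_mc(rand_y, abs_tol, n_pilot=2**10, inflation=1.2, z=2.576):
    pilot = rand_y(n_pilot)
    sigma_up = inflation * pilot.std(ddof=1)           # conservative sigma bound
    n_main = int(np.ceil((z * sigma_up / abs_tol) ** 2))  # CLT-based sample size
    return rand_y(n_main).mean()

# Example: estimate E[Y] for Y = exp(Z), Z ~ N(0,1); true mean is e^{1/2}.
rng = np.random.default_rng(0)
print(two_stage_mc(lambda n: np.exp(rng.standard_normal(n)), abs_tol=0.01))
```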
- Title
- RISK SHIFTING, MANAGER SENTIMENT AND NEW INVESTMENT EFFICIENCY IN MANAGED FUTURES
- Creator
- Jiang, Cheng
- Date
- 2017, 2017-05
- Description
This dissertation focuses on a subset of hedge funds, Commodity Trading Advisors (CTAs), which has grown over the past 35 years and is distinguished by its diversification benefit to traditional asset classes. I study the risk-taking, market timing, and market capacity of this type of hedge fund. I study the volatility of an extensive sample of live and defunct Commodity Trading Advisor funds from 1994 to 2013. Utilizing gross-of-fee returns, I document significant mean reversion in volatility in the time series of CTA funds. I further examine the impact of performance on volatility shifts, and find consistent evidence of risk tournament behavior, especially when the CTA industry is performing well. Moreover, the risk shifting of CTA managers depends upon both relative and absolute fund performance. The practice of this conditional risk shifting has benefited the fund managers at the cost of fund investors. I estimate the average benefit to the manager's income and the average cost to the investor's Sharpe ratio. My findings provide the first comprehensive evidence on the risk strategy of CTA funds, suggesting that managerial career concerns do not eliminate the moral hazard problem in the CTA space. The asymmetric nature of performance-based compensation in hedge funds produces a strong incentive for risk shifting, but empirical research presents mixed evidence of risk-seeking behavior. The change in risk can also be driven by reasons other than incentive fees. I introduce a behavioral regime-switching model of fund manager sentiment in which Bayesian learning is used to update beliefs about the market environment in an effort to predict future performance and anticipate market moves. I use a subset of hedge funds in the managed futures industry between 1994 and 2014 and find that the risk-taking behavior of fund managers is influenced by human emotions, but in two distinctly different ways. Capital flows to hedge funds have well-known price pressure and smart money effects. This paper studies the impact of capital flows on CTA future performance. It has been observed in both mutual funds and hedge funds that managers scale their existing holdings up or down using new capital inflows rather than trading new positions. This strategy generates positive returns for the funds due to the price pressure effect. It is of interest whether this effect also exists in the managed futures space. I use a vector autoregression (VAR) to evaluate a system of two variables: capital inflow and future performance. If the relationship is negative, one possible reason could be market impact that erodes the profit generated by price pressure. Therefore, I implement a market impact test that investigates market capacity in terms of the Sharpe ratio and the t-statistics of alpha.
Ph.D. in Management Science, May 2017
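A hedged sketch of the two-variable VAR described above, using statsmodels; the column names and data file are hypothetical stand-ins for the thesis's dataset.

```python
# Two-variable VAR of capital inflow and subsequent fund performance, with a
# Granger-causality test probing the flow -> return direction.
import pandas as pd
from statsmodels.tsa.api import VAR

df = pd.read_csv("cta_panel.csv", parse_dates=["month"], index_col="month")
model = VAR(df[["capital_inflow", "fund_return"]])  # two endogenous series
res = model.fit(maxlags=12, ic="aic")               # lag order chosen by AIC
print(res.test_causality("fund_return", ["capital_inflow"]).summary())
```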
- Title
- SIMULATION OF CENTERLINE DEFECT CLOSURE IN OPEN DIE FORGING
- Creator
- Zhou, Jie
- Date
- 2012-11-11, 2012-12
- Description
The open die forging technique is mainly used to achieve a desired product shape and to refine the product's mechanical quality. Large ingots tend to have internal defects such as shrinkage cavities and porosity, which must be closed during the primary forging stage to ensure sound internal quality of forged parts. In this work, the finite element method was used to simulate the open die forging process, varying the process parameters that most profoundly affect void closure behavior. Numerical models were constructed in FORGE 2011® using practical forging parameters and material rheological data obtained experimentally from Gleeble compression tests. Forging variables including die design, operational practice, and boundary conditions were studied thoroughly. Parameters such as die width, die geometry, die overlap, and reduction amount per pass were studied, with close attention paid to the specific mechanical properties of H13 steel so that this study can be applied to solve real-world problems. The temperature gradient and the friction condition between billet and die were also investigated. Physical experimental validation was carried out with a miniature billet sample. The experimental results were compared with the simulations and showed good agreement between the two, giving confidence in the simulation results. Based on the simulation results, optimum forging parameters were proposed to ensure full closure of internal defects.
M.S. in Materials Science and Engineering, December 2012
- Title
- ANALYSIS OF THE APPLICATION OF THE LIAR MACHINE TO THE Q-ARY PATHOLOGICAL LIAR GAME WITH A FOCUS ON LOWER DISCREPANCY BOUNDS
- Creator
- Williamson, James W
- Date
- 2011-12-12, 2011-12
- Description
The binary pathological liar game, as described by Ellis and Yan in [Ellis and Yan, 2004], is a variation of the original liar game, as described by Berlekamp, Rényi, and Ulam in [Berlekamp, 1964], [Rényi, 1961], and [Ulam, 1976]. This two-person questioner/responder game is played for n rounds over a set M of messages. The game begins with the responder selecting a message from the set M. Each round the questioner partitions the messages into two distinct subsets. The responder selects one subset, and elements not in the selected subset each accumulate a lie. Elements accumulating more than e lies are eliminated. The questioner wins the original game provided that after the completion of n rounds there is at most one surviving message. The questioner wins the pathological game provided there is at least one surviving message. The focus here is to generalize the pathological game from two subsets to q subsets, with a focus on providing a winning condition for the questioner. The q-ary variant of the pathological liar game has been studied, with first results in [Ellis and Nyman, 2009]. We let the number of rounds the game is played go to infinity, with e a linear fraction of n, and present an upper bound on the number of messages required by the questioner to win the q-ary pathological liar game. The liar machine and linear machine, as discussed by Cooper and Ellis in [Cooper and Ellis, 2010], have been adapted to fit this generalization and are used to track the approximate progression of the game. We provide an upper bound on the initial number of chips by bounding the discrepancy between the actual progression of the game and the approximate progression of the game as described by the linear and liar machines, respectively. A similar upper bound can be found in [Tietzer, 2011], with different elements in the argument. Using methods similar to those found in [Cooper and Ellis, 2010], we provide a partial-order argument to show that the winning-condition bound for one response strategy by the questioner transfers to all possible response strategies.
M.S. in Applied Mathematics, December 2011
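A toy simulation of the round mechanics described above may help fix ideas; both players move uniformly at random here, purely for illustration (the thesis analyzes adversarial strategies via the liar machine, which this sketch does not implement).

```python
# Toy q-ary pathological liar game: messages accumulate lies when they fall
# outside the responder's chosen subset; more than e lies eliminates them.
import random

def play_round(lies, q, e, rng):
    part = {m: rng.randrange(q) for m in lies}  # random q-way partition
    chosen = rng.randrange(q)                   # responder's chosen subset
    survivors = {}
    for m, c in lies.items():
        c += part[m] != chosen                  # outside the subset: +1 lie
        if c <= e:
            survivors[m] = c
    return survivors

def questioner_wins_pathological(M, n, q, e, seed=0):
    rng = random.Random(seed)
    lies = {m: 0 for m in range(M)}             # lie count per message
    for _ in range(n):
        lies = play_round(lies, q, e, rng)
    return len(lies) >= 1                       # pathological win: a survivor

print(questioner_wins_pathological(M=64, n=20, q=3, e=2))
```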
- Title
- NETWORK CONGESTION / RESOURCE ALLOCATION GAME
- Creator
- Shin, Junghwan
- Date
- 2013, 2013-12
- Description
We first consider the K-user (player) resource allocation problem when the resources or strategies are associated with homogeneous functions. Further, we consider the K-user (player) matroid resource allocation problem satisfying the specified requirements of the users, which are maximal independent sets of a matroid. The objective is to choose strategies so as to minimize the average maximum cost incurred by a user, where the cost of a strategy is the sum of the costs of the elements comprising the strategy. For k-commodity networks with heterogeneous latency functions, we consider the price of anarchy (PoA) in multi-commodity selfish routing problems where the latency function of an edge has a heterogeneous dependency on the flow commodities, i.e., when the delay depends on the flows of the individual commodities rather than on the aggregate flow. We then consider the PoA in multi-commodity atomic flows under the same heterogeneous latency model. Lastly, we show improved bounds on the price of anarchy for uniform latency functions, where each edge of the network has the same delay function. We prove bounds on the price of anarchy for the above functions. Our bounds illustrate how the PoA depends on θ and the coefficients g_ij. At the end, we consider security aspects of network routing in a game-theoretic framework where an attacker has the ability to intrude into edges of the network, while the goal of the designer is to choose routing paths.
Ph.D. in Computer Science, December 2013
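For reference, the price of anarchy used throughout is the standard worst-case ratio of equilibrium to optimal social cost; the definition below is background, not a result of the thesis.

```latex
% Standard PoA definition: worst Nash-equilibrium cost over optimal cost,
% with C the total latency of a flow f and f* the minimum-cost flow.
\[
  \mathrm{PoA} = \max_{f \in \mathrm{NE}} \frac{C(f)}{C(f^{*})},
  \qquad
  C(f) = \sum_{e \in E} f_e\,\ell_e(f_e),
\]
% where \ell_e is the latency function of edge e; in the heterogeneous
% setting above, \ell_e(f_e) becomes \ell_e(f_e^1,\dots,f_e^k), a function
% of the individual commodity flows.
```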
- Title
- The Relationship Between Default and Volatility and Its Impact on Counterparty Credit Risk
- Creator
- Yang, Jiarui
- Date
- 2012-07-16, 2012-07
- Description
This thesis presents a unified framework for studying the impact of the correlation between interest rate volatility and counterparty default probability on the credit risk of collateralized interest-rate derivative contracts. A defaultable term structure model is proposed in which the default risk is correlated with interest rate volatility. In particular, an existence and uniqueness theorem for this model is proved. The pricing formula for credit derivatives under the proposed model is derived, and the stochastic interest rate model and credit model are calibrated together. Finally, given all the parameters calibrated by the unscented Kalman filter, a sensitivity analysis of the impact of the correlation between interest rate volatility and a counterparty's default probability on the credit risk of collateralized interest-rate derivative contracts is presented.
Ph.D. in Applied Mathematics, July 2012
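The abstract does not spell out the model; purely as an illustration of the correlation channel it studies, a generic form with correlated volatility and default-intensity drivers might look as follows (all symbols here are hypothetical notation, not the thesis's model).

```latex
% Illustrative dynamics only: a stochastic-volatility short rate r_t paired
% with a default intensity lambda_t whose shocks correlate with volatility.
\begin{align*}
  dr_t       &= \kappa(\theta - r_t)\,dt + \sqrt{v_t}\,dW_t^{(1)},\\
  dv_t       &= \alpha(\bar v - v_t)\,dt + \xi\sqrt{v_t}\,dW_t^{(2)},\\
  d\lambda_t &= \beta(\bar\lambda - \lambda_t)\,dt + \eta\sqrt{\lambda_t}\,dW_t^{(3)},
  \qquad d\langle W^{(2)}, W^{(3)}\rangle_t = \rho\,dt,
\end{align*}
% where rho is the volatility-default correlation whose effect on the
% credit risk of collateralized contracts the sensitivity analysis examines.
```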
- Title
- FPGA IMPLEMENTATION OF ULTRASONIC FLAW DETECTION ALGORITHM BASED ON SUPPORT VECTOR MACHINE CLASSIFICATION
- Creator
- Jiang, Yiyue
- Date
- 2016, 2016-12
- Description
In this study, a Support Vector Machine (SVM) classification method for analyzing ultrasound signals is implemented on an FPGA based on the Xilinx Zynq SoC. The SVM processor aims at classifying A-scan data obtained by an ultrasonic sensor. To reduce development time, hardware/software co-design tools such as Xilinx System Generator and Vivado were used. The SVM kernel function is implemented with DSP slices and block RAMs. An Advanced eXtensible Interface (AXI) bridges the ARM core and the FPGA fabric for more convenient communication. The main objective of this study is to achieve robust detection of ultrasonic flaw echoes in real time using an SVM algorithm. The implementation shows that the architecture can be realized on a Xilinx ZedBoard FPGA. It runs at a 100 MHz clock frequency and can compute the SVM classification for 1024 feature-space points in under 0.02 ms.
M.S. in Electrical Engineering, December 2016
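The computation such a core accelerates is the standard SVM decision function, f(x) = sign(Σᵢ αᵢyᵢK(xᵢ, x) + b); a software reference model follows (not the FPGA design, and with hypothetical trained parameters).

```python
# Software reference for the SVM kernel evaluation mapped onto DSP slices
# and block RAMs in hardware, here with an RBF kernel.
import numpy as np

def svm_decision(x, sv, alpha_y, b, gamma=0.5):
    # RBF kernel of the input against every stored support vector.
    k = np.exp(-gamma * ((sv - x) ** 2).sum(axis=1))
    return np.sign(alpha_y @ k + b)

# Hypothetical trained parameters for a 2-feature A-scan descriptor.
sv = np.array([[0.1, 0.9], [0.8, 0.2]])   # support vectors
alpha_y = np.array([1.3, -0.7])           # alpha_i * y_i
print(svm_decision(np.array([0.2, 0.8]), sv, alpha_y, b=0.05))
```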
- Title
- APPROXIMATION OF STOCHASTIC DIFFERENTIAL EQUATIONS WITH NON-GAUSSIAN NOISE AND APPLICATION TO A VOLATILITY MODEL
- Creator
- Jianhua, Wang
- Date
- 2015, 2015-05
- Description
In recent decades, stochastic processes with non-Gaussian noise have been widely utilized in financial models. The α-stable Lévy motion, one type of non-Gaussian noise process, provides robust data fits and event simulations in the financial world. Due to its heavy tails and path-jump properties, α-stable Lévy motion modeling has become extremely popular among financial decision makers and risk hedgers. The α-stable Lévy motion, however, usually has neither a closed-form probability density function nor higher moments, which raises implementation obstacles. We exhibited distributions of α-stable random variables for different parameter values. In contrast to the Gaussian distribution, the α-stable distribution illustrated heavy tails and skewed shapes under various parameters. We analyzed jump behaviors along with calculating tail probabilities. We exploited a scenario simulation method to solve stochastic differential equations with α-stable Lévy motions. In addition to the Euler scheme, we derived two numerical schemes of strong convergence order 1.0 via the Wagner-Platen expansion. After we executed the schemes on the Merton jump-diffusion model, we approximately verified the convergence order of the schemes. We successfully applied the derived schemes to simulate a sophisticated stochastic volatility model with skewed α-stable Lévy motions. With the approximated underlying asset process, we priced a European call option and visualized the implied volatility curve. As a result, we concluded that the logarithm of the underlying asset follows a skewed distribution rather than a symmetric one.
M.S. in Applied Mathematics, May 2015
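For illustration of the simulation task (the order-1.0 Wagner-Platen schemes themselves are not reproduced here), a plain Euler step driven by α-stable increments, which scale as dt^{1/α}, can be sketched with SciPy's levy_stable; the drift and diffusion coefficients below are illustrative.

```python
# Euler-Maruyama sketch for geometric-type dynamics driven by alpha-stable
# Levy noise; stable increments over a step dt scale as dt**(1/alpha).
import numpy as np
from scipy.stats import levy_stable

def euler_stable(x0, mu, sigma, alpha, beta, T, n, seed=0):
    rng = np.random.default_rng(seed)
    dt, x = T / n, x0
    for _ in range(n):
        dL = dt ** (1 / alpha) * levy_stable.rvs(alpha, beta, random_state=rng)
        x += mu * x * dt + sigma * x * dL   # drift plus stable-noise term
    return x

# Example: skewed (beta = -0.5), heavy-tailed (alpha = 1.7) driver.
print(euler_stable(100.0, mu=0.05, sigma=0.2, alpha=1.7, beta=-0.5, T=1.0, n=252))
```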
- Title
- THE EVALUATION OF THERMAL INACTIVATION OF COXIELLA BURNETII NINE MILE PHASE II IN SKIM MILK BY INTEGRATED CELL CULTURE-POLYMERASE CHAIN REACTION (ICC-PCR) ASSAY
- Creator
- Zheng, Jiaojie
- Date
- 2014, 2014-07
- Description
Coxiella burnetii (C. burnetii) is an obligate intracellular bacterium that replicates exclusively in an acidified, lysosome-like vacuole, which makes C. burnetii more difficult to analyze than bacteria that can grow on regular liquid media. An Integrated Cell Culture-Polymerase Chain Reaction (ICC-PCR) assay has been developed as a potential alternative to animal bioassays for evaluating C. burnetii inactivation in milk. This thesis research demonstrates the usefulness of this assay for evaluating C. burnetii inactivation in skim milk and compares the results with those found for whole milk, which were obtained by another researcher. Before the thermal studies, the thermal kinetics of heating skim milk in glass vials and the polymerase chain reaction (PCR) detection limit were determined. For thermal treatments, Ultra High Temperature (U.H.T.) skim milk containing C. burnetii at ~7.2 log10 genome equivalents/ml (ge/ml) was treated in submerged vials at 60 °C, 62 °C and 64 °C for various times. After serial dilution of the milk to 10^-6, triplicate Vero cell monolayers were infected at each dilution level for 48 hours, followed by 9 days of incubation after inoculum removal and addition of fresh RPMI + 1% FBS media. Infected cells were freeze-thawed, followed by deoxyribonucleic acid (DNA) extraction and real-time PCR (RT-PCR) for the C. burnetii IS1111a gene. C. burnetii in a sample was considered viable if the Day 9 post-infection (p.i.) level increased by ≥0.5 log10 C. burnetii ge/ml over the most concentrated Day 0 p.i. sample. The numbers of positive wells from each dilution were used to calculate the remaining viable C. burnetii/ml by the MPN method. The thermal kinetics profile for heating the skim milk showed that the come-up and cool-down times would not adversely affect the thermal treatment at 60 °C and 62 °C. The qPCR could detect the propagation of C. burnetii in skim milk containing as few as 120 C. burnetii ge/ml. The ICC-PCR assay demonstrated that the thermal inactivation of C. burnetii in skim milk was faster than in whole milk at 62 °C and 64 °C. For the 62 °C treatment, infectious C. burnetii in skim milk was reduced by 1.3 log10 ge/ml at 10 minutes and was no longer infectious after 20 minutes, whereas C. burnetii in whole milk had no obvious reduction after 10 minutes, a 3.7 log10 ge/ml reduction after 20 minutes, and was no longer infectious after 26 minutes. After 6 minutes of treatment at 64 °C, infectious C. burnetii was reduced by 6.2 log10 ge/ml for skim milk vs. 3.8 log10 ge/ml for whole milk, with complete inactivation after 9 minutes for both milk types. This ICC-PCR assay is a specific and sensitive method to detect the inactivation of C. burnetii in skim milk, allows differentiation of the thermal inactivation kinetics of different types of milk, and may be useful for the evaluation of thermal and novel non-thermal processes for C. burnetii inactivation in milk.
M.S. in Food Safety and Technology, July 2014
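The reductions quoted above are plain base-10 log ratios of genome-equivalent concentrations; a trivial helper makes the bookkeeping explicit (the numbers below echo the 62 °C / 10 min skim-milk result, not new data).

```python
# Log-reduction bookkeeping: reductions are reported as log10(N0 / Nt)
# in genome equivalents per ml (ge/ml).
import math

def log10_reduction(n0_ge_per_ml, nt_ge_per_ml):
    return math.log10(n0_ge_per_ml / nt_ge_per_ml)

# A drop from ~10**7.2 to ~10**5.9 ge/ml is a 1.3-log10 reduction.
print(round(log10_reduction(10**7.2, 10**5.9), 1))
```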
- Title
- STUDIES ON CONNECTIVE AND NEUROLOGICAL TISSUES IN RELATION TO DISEASE
- Creator
- Madhurapantula, Rama Sashank
- Date
- 2015, 2015-12
- Description
The structure of connective tissue is of great importance for the homeostasis of the cells present within it. Pathologies leading to changes in the structure of the extracellular matrix (ECM), in particular collagen, have been shown to play a pivotal role in the progression of various diseases. Similarly, changes in the structure of specific elements in neurological tissues, such as myelin, have been shown to elicit adverse responses to injury. This thesis explores two main aspects: 1) the structural changes brought about to type I collagen by high sugar concentrations, similar to those found in diabetic patients, and 2) possible effects of traumatic brain injury (TBI) on the structure of neurons in rat brains. Specific changes in the structure and packing of collagens in various tissues could be potential therapeutic targets to control the progression of related diseases. However, the nature, specificity and relevance of these changes at a molecular level are largely unknown and have been explored only sparsely. The result of non-enzymatic glycosylation, i.e. glycation, is the formation of sugar-mediated crosslinks within the native structure of type I collagen. The chemistry behind these crosslinks, also known as Advanced Glycation Endproducts (AGEs), has been known for decades. However, the exact locations or regions of high propensity for the formation of these crosslinks within the packing structure of collagen are largely unknown. The results presented in this thesis inform on the location of possible crosslinks using the principle of Multiple Isomorphous Replacement (MIR), and correlate the effects of crosslinks with the structural and functional sites present on the D-periodic arrangement of collagen into fibrils. An extension to this is the study of the effects of povidone-iodine on the packing structure of collagen. Iodine is used as a common disinfectant in surgery and first aid. Prolonged treatment with iodine is detrimental to the structure of collagen underlying the wound site (surgical or otherwise). This is particularly important in large surface-area wounds, as seen in open-heart, hip and joint replacement surgeries and amputations. Diabetic patients are more prone to injuries to limb extremities, and a common procedure to stop infections from spreading to the rest of the body is amputation of the limb and constant treatment with low doses of iodine immediately following surgery for a certain length of time. The results presented in this thesis demonstrate specific disintegration of collagen fibrils in rat tail tendons from a short iodine treatment. This is detrimental to cellular activity, more so in processes like wound healing. TBI results in the loss of neurological control and/or function of the various parts of the body governed by the injured region. The results presented herein inform and support the finding that neuroplasticity, in the hemisphere opposite to the one where injury was delivered, compensates for the functional deficits resulting from TBI. The data presented here can be used in developing rehabilitation regimens for TBI patients on a case-by-case basis to restore most of the functional deficits observed, and also as a factor in predicting the onset of secondary neurological disorders (for instance amyloid-related pathologies) at a later stage in life.
Ph.D. in Biology, December 2015
- Title
- THE SINGLE BUILDING AS THE URBAN CATALYST
- Creator
- La Serna, Matias S.
- Date
- 2012-03-28, 2012-05
- Description
An identified strip of land on Chicago's South Side has left an unmistakably large void within the grid of the city. Current city plans call for single-use and low-density spaces to eventually fill the enormous void bounded by State Street to the east and Federal Street to the west. Resisting the current pattern of architectural and urban segregation, this alternative proposes an ambitious plan to fill an entire block with a select and diverse range of program to invigorate a depleted urban area while simultaneously creating an identifiable architectural landmark. The sudden interruption of single-use occupation reclaims the architectural potential of a site burdened by its troubled past and serves as the catalyst to stimulate ambitious and diverse urban growth. Necessarily occupying the entire site for the urban development of the city, the building is faced with the challenge of expanding to fill the tremendous void imposed by the grid with as few program elements as possible, all the while preserving the richness of urban overlaps otherwise afforded in tighter urban settings. The result is a single building that is mindful of the independent needs of its occupants while simultaneously creating and maximizing shared spaces within the overlaps, generating program opportunities and interactions not otherwise afforded in a system of architectural fragmentation.
M.S. in Architecture, May 2012
- Title
- Thermoelectric Power Systems and the Energy-Water Nexus
- Creator
- Walker, Michael Edward
- Date
- 2012-04-26, 2012-05
- Description
The goal of this thesis is the development of a comprehensive methodology to evaluate the total cost of water use in the recirculating cooling loops of thermoelectric power plants. This methodology expands upon the work presented in the literature to improve estimates of the economic impact of condenser fouling. The methods developed in this thesis are incorporated into a user-friendly Combined Cost Model (CCM) interface that will allow future researchers, students and plant personnel to perform the same comparative analyses presented herein. The objective of this thesis is the application of the CCM to determine the economic viability of using treated municipal wastewater (MWW) to replace freshwater for cooling in power plants with recirculating cooling systems. To accomplish this objective, a set of case study evaluations is included to (1) evaluate the sensitivity of the economic impact of fouling to condenser design and operation, (2) determine the cost of treated MWW use in pulverized coal power plants, and (3) compare the relative cost of degraded water use in advanced power systems such as IGCC and oxy-combustion. The results of these evaluations show that current freshwater prices do not provide an economic incentive to switch to treated MWW. However, the results indicate that the breakeven differential price of freshwater, at which the total costs of using freshwater and treated MWW are equal, is only 0.52 $/1000 gal (USD 2009). In addition, the use of treated MWW for cooling is shown to be a more economical alternative than dry air cooling technology (DACT) for the conservation of freshwater resources. Cost-to-conservation estimates for treated MWW use are 1.1 $/1000 gal, in contrast to 5.6 $/1000 gal for DACT. This thesis also presents a novel, hybrid coal conversion concept, the dry gasification oxy-combustion (DGOC) power cycle. This process is similar to oxy-combustion in that it maintains a concentrated CO2 flue stream and does not utilize a complex separation step. However, coal conversion and sulfur removal are performed within a gasification unit. It is estimated to achieve CCS goals with a higher efficiency than the leading alternative strategies.
Ph.D. in Chemical Engineering, May 2012
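Schematically, the breakeven figure quoted above solves a cost-equality condition of the following form; the CCM's actual cost components are not reproduced here, and the symbols are illustrative.

```latex
% Breakeven condition behind the 0.52 $/1000 gal figure: the freshwater
% price differential at which total cooling-water costs coincide.
\[
  C^{\mathrm{FW}}_{\mathrm{total}}\!\big(p^{*}_{\mathrm{FW}}\big)
  = C^{\mathrm{MWW}}_{\mathrm{total}},
  \qquad
  \Delta p^{*} = p^{*}_{\mathrm{FW}} - p_{\mathrm{MWW}}
  \approx 0.52~\$/1000~\mathrm{gal}.
\]
```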
- Title
- METHODOLOGY FOR VEHICLE EMISSION IMPACTS ANALYSIS FROM SIGNAL TIMING OPTIMIZATION OF AN URBAN STREET NETWORK
- Creator
- Lu, Pu
- Date
- 2017, 2017-05
- Description
The pace of urban street capacity expansion is much slower than the growth of vehicle travel, leading to severe traffic congestion. Expanding capacity to mitigate congestion is infeasible in many cases due to high cost and space restrictions, so improving the efficient use of the available capacity becomes the solution. Traffic signal optimization is one of the most widely used means of efficient capacity utilization. Concurrent with traffic signal optimization, smoother traffic operations, in terms of reasonably higher speeds and reduced traffic delay, will in turn change vehicle emissions. This research aims to quantify the changes in vehicle emissions resulting from traffic signal optimization by introducing a new methodology for quantifying network-wide vehicle emissions, with a real-world application to the Chicago urban network for validation. The proposed methodology considers undersaturated and oversaturated traffic conditions and urban street segments with varying speeds, for different types of vehicles and pollutants, by hour of the day and location within the network. It begins with information collection and research through a review of existing methods for urban street network vehicle emission estimation, intersection vehicle emission evaluation, and running vehicle emission modeling. The proposed methodology focuses on three elements: estimation of emissions from vehicles stopped at intersections, estimation of emissions from vehicles cruising along segments, and analysis of network-wide vehicle emissions and changes in overall network vehicle emissions by time of day and by area. Major steps of the methodology's application included the use of the Chicago TRANSIMS model implementing optimized signal timing plans to obtain refined traffic volumes at intersections and on segments, increased vehicle operating speeds, changed green splits, and vehicle compositions for all intersections and segments in the urban street network; the application of an intersection vehicle emission model for stopped vehicles and a segment vehicle emission model for vehicles cruising on segments; and the network-wide analysis of vehicle emission changes by vehicle type and pollutant type over a 24-hour period within an urban street network. The proposed methodology for intersection vehicle emission estimation was successfully applied to a dense urban street network in Chicago for each approach per cycle, and then extended to intersections across the hours of the day to analyze the impacts of traffic changes at intersections on exhaust changes. In order to develop the network vehicle emission analysis method, it is essential to evaluate segment vehicle emissions. This is achieved by using the concept of vehicle specific power, which is used to estimate emissions of cruising vehicles in light of vehicle speeds and speed changes, and hence to analyze changes in segment vehicle emissions caused by traffic volume changes derived from signal timing optimization. The decreased number of vehicles stopped at intersections under optimized signal timing reduces intersection emissions, and hence overall network vehicle emissions. In addition to the reductions at intersections, the increased speed of vehicles on segments could further reduce emissions on segments.
Ph.D. in Civil Engineering, May 2017
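Vehicle specific power, mentioned above as the basis for the cruising-emission estimates, has a commonly cited light-duty form due to Jiménez-Palacios; the coefficients below are those generic defaults, not the thesis's calibrated values.

```python
# Vehicle specific power (VSP, kW/ton) in the common light-duty form;
# emission rates in MOVES-style models are binned by VSP.
def vsp_kw_per_ton(v, a, grade=0.0):
    """v: speed (m/s), a: acceleration (m/s^2), grade: rise/run (small-angle)."""
    return v * (1.1 * a + 9.81 * grade + 0.132) + 0.000302 * v ** 3

# Example: cruising at 50 km/h (~13.9 m/s) on flat ground, no acceleration.
print(round(vsp_kw_per_ton(13.9, 0.0), 2))
```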
- Title
- TOWARD THE DEVELOPMENT OF USABILITY GUIDELINES FOR SINGLE-WINDOW WEB INTERFACES
- Creator
- Maciukenas, James
- Date
- 2013, 2013-05
- Description
Since the early 1990s, usability research has guided the development of web interfaces used to interact with content available on the Internet. Following these guidelines has resulted in web pages that are quite similar in many characteristics and are identified here as Conventional Web Interfaces (CWIs). An emergent genre of web interface, the Single Window Interface (SWI), differs in many ways from CWIs. Most importantly, SWIs differ from CWIs in the type of tasks expected of their users and in the visual strategies used to facilitate these tasks. Namely, SWIs facilitate open-ended discovery tasks by using strong visual cues to convey meta-information to the user and encourage both the exploration and perusal of content. This dissertation demonstrates that the differences between SWIs and CWIs require revisiting current usability guidelines in order to determine how to guide future development of SWIs. If SWI visual strategies can be shown to be effective in conveying meta-information qualities to users, the groundwork will be prepared for future research investigating the effectiveness of these strategies in facilitating open-ended exploration and discovery within SWIs. These efforts will lead to more useful experiences for users of SWIs and inform the fields of technical communication as well as human-computer interaction and usability research, to name just a few of the affected fields of study.
Ph.D. in Technical Communication, May 2013
- Title
- INVESTIGATION INTO USE OF GEARLESS PMSG-BASED WIND FARM FOR GRID SUPPORT
- Creator
- Cui, Yinan
- Date
- 2011-12-05, 2011-12
- Description
Wind energy has become the world's fastest growing energy source, as environmental concerns have focused attention on the generation of electricity from clean and renewable sources. New capacity from wind turbines has been growing fast since 2004, with installed capacity reaching 196,630 MW worldwide in 2010, and 2011 is also expected to see good growth. Reliability and quality of the electrical power supply are of great importance for all grids. A well-designed wind-turbine power source can help balance the unpredictable power changes caused by the load side of the grid (owing to the meteorological nature of long-lasting winds at sea, offshore wind turbines are more stable in their power production). Alongside growing R&D efforts, the PMSG-based direct-drive wind turbine generator has become a trend in the industry; its full-scale back-to-back converters can achieve fast power-factor tuning, which directly offers the option of generating a certain amount of reactive power to help solve short-term voltage stability problems in the local grid, and a desired quantity of active power for mitigating frequency oscillations in the system. Topics in this thesis include: (1) analysis of the wind turbine generator model equipped with full-scale back-to-back converters, for which control schemes are proposed; (2) a low-voltage ride-through test of a wind farm with the modeled wind turbine generator integrated into a finite 3-bus system, provided for further short-term voltage stability studies; (3) a comparison of the voltage-drop response between an offshore wind farm with no reactive power support and one with automatic reactive power support, in an 8-bus system connected with an HVAC submarine cable; and (4) a study of dynamic active power compensation from the wind farm for improving frequency stability when a large disturbance is introduced. Keywords: PMSG, full-scale converters, reactive power, short-term voltage stability, active power, dynamic compensation
M.S. in Electrical Engineering, December 2011
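As background to the reactive-support option described above (standard converter capability math, not the thesis's control scheme): a full-scale converter rated at S can trade active for reactive power only inside the circle S² = P² + Q², so the reactive headroom available for voltage support shrinks as active output rises.

```python
# Reactive headroom of a converter: Q_max = sqrt(S^2 - P^2), clipped at zero
# when the converter is at full active-power output.
import math

def reactive_headroom(s_rating_mva, p_output_mw):
    return math.sqrt(max(s_rating_mva**2 - p_output_mw**2, 0.0))

# E.g., a 2.5 MVA converter exporting 2.0 MW can still inject up to 1.5 MVAr.
print(reactive_headroom(2.5, 2.0))
```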
- Title
- DESIGN AND IMPLEMENTATION OF A POWER ASSISTED DRIVETRAIN FOR A WHEELCHAIR
- Creator
- Hou, Ruoyu
- Date
- 2012-04-06, 2012-05
- Description
Over the last two decades, the number of people who have difficulty walking and need wheelchairs has been increasing, due to an aging population caused by a low birth rate and advances in medical treatment. Based on a recent survey, the power assisted wheelchair is the newest entry in the commercial wheelchair market. The power assisted wheelchair offers users an opportunity for physical activity, but it is often too expensive for customers. This has motivated the design of more advanced and economical power assisted drivetrain systems for wheelchairs. In this thesis, a novel controller has been designed. Instead of using a torque sensor to measure and amplify human force, the proposed controller uses two infrared sensors to trigger the two motors. Using this information, in addition to information from a motion sensor that detects road angle variation, an appropriate torque command is generated. The drivetrain requires an embedded controller with not only strong I/O control but also high-speed signal processing for real-time control. Therefore, a DSP (Digital Signal Processor) that integrates a flexible multiple-PWM signal generator to drive the two motors and interfaces with two Hall sensors for motor position and speed feedback is considered one of the strongest candidate controllers for the power assisted drivetrain implementation. This thesis has two main contributions: a) it presents a novel power assisted motor control strategy, including six-step motor control, an Environmental Adaptive control method, and a Push-Go control method; and b) it develops an embedded controller, not only on the test bench but also on the wheelchair, to realize this control strategy. The designed controller is low cost and compact.
M.S. in Electrical Engineering, May 2012
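The "six-step motor control" mentioned above is conventional trapezoidal BLDC commutation driven by the Hall sensors; the lookup below shows one conventional Hall-state-to-phase assignment, which is illustrative (the actual mapping depends on motor wiring).

```python
# Six-step (trapezoidal) commutation lookup: each valid Hall state selects
# which inverter legs drive high ('+'), low ('-'), or float ('0').
SIX_STEP = {
    # hall (A, B, C) : (phase U, phase V, phase W)
    (1, 0, 1): ('+', '-', '0'),
    (1, 0, 0): ('+', '0', '-'),
    (1, 1, 0): ('0', '+', '-'),
    (0, 1, 0): ('-', '+', '0'),
    (0, 1, 1): ('-', '0', '+'),
    (0, 0, 1): ('0', '-', '+'),
}

def commutate(hall_a, hall_b, hall_c):
    return SIX_STEP[(hall_a, hall_b, hall_c)]  # next inverter leg states

print(commutate(1, 0, 1))
```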
- Title
- INFILL HOUSE – HANOI, VIETNAM
- Creator
- Luu, Dung Q
- Date
- 2017, 2017-05
- Description
In 1986, the 'Economic Reform' brought significant economic success to Vietnam. Cities such as Hanoi, Ho Chi Minh City, and Danang expanded enormously, and building activity increased to accommodate population growth and housing demand. Rapidly rising incomes allowed middle-class and upper-class families to pursue their dreams of owning a private home. However, most housing projects were built without any city guidelines and lacked thoughtful design. [5] Because of high land prices and valuable frontage for business uses, most new private buildings and houses, 3 to 5 stories tall, were built to maximize their footprint on very long and narrow frontage properties. Many of these infill houses were constructed; however, they had limited daylight and poor natural ventilation. [4] For my thesis, I have studied typologies of the Vietnamese infill house. The study analyzes four types of infill sites based on different site access. In response to the analysis, six house schemes were developed on two of the types of long and narrow infill sites in the high-density area of Hanoi, Vietnam. The design investigates different site strategies and applies suitable building techniques to create viable living spaces that improve natural daylight and ventilation.
M.S. in Architecture, May 2017