Search results
(8,121 - 8,140 of 10,083)
Pages
- Title
- The Impact of a Parent Education Program on Parent Expectations for Children with Autism Spectrum Disorder
- Creator
- James, Megan
- Date
- 2019
- Description
-
The purpose of this study was to determine whether parent psychoeducation about Autism Spectrum Disorder (ASD) impacts parents' short-term and/or long-term expectations for their children with ASD. Research has demonstrated that a parent's engagement in their child's therapy impacts the effectiveness of that intervention. Given the importance of parents' engagement in therapy, it is clinically relevant to identify the variables that may influence a parent's level of engagement. Working from the framework of Bandura's expectation theory, the ability to assess and intervene on expectations would likely have a positive influence on parental engagement. Participants attending parent psychoeducation and parents on a waitlist for parent psychoeducation completed expectations questionnaires before and after the program. Results of a 2x2 (group x time) mixed factorial ANOVA did not support the hypothesis that expectations changed over time as a function of completing parent psychoeducation. These results are discussed in terms of potential adjustments to the current parent psychoeducation curriculum and additional variables that may be of interest for increasing parent expectations.
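As an illustration of the analysis named above (not the study's actual data), the 2x2 (group x time) interaction in a mixed factorial design can be summarized as the difference between the two groups' pre-to-post change scores; all numbers below are hypothetical:

```python
# Illustrative sketch: the group-x-time interaction reduces to a contrast
# between the treatment group's change score and the waitlist group's.

def interaction_contrast(tx_pre, tx_post, wl_pre, wl_post):
    """Gain-score contrast: (treatment change) - (waitlist change)."""
    mean = lambda xs: sum(xs) / len(xs)
    tx_change = mean(tx_post) - mean(tx_pre)
    wl_change = mean(wl_post) - mean(wl_pre)
    return tx_change - wl_change

# Hypothetical expectation scores (higher = more optimistic expectations).
tx_pre, tx_post = [3.0, 3.2, 2.8], [3.1, 3.3, 2.9]
wl_pre, wl_post = [3.0, 2.9, 3.1], [3.0, 3.0, 3.1]
print(interaction_contrast(tx_pre, tx_post, wl_pre, wl_post))
```

A contrast near zero, as here, corresponds to the null result the abstract reports: the groups' expectations changed by about the same amount over time.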
- Title
- DAMAGE ASSESSMENT OF CIVIL STRUCTURES AFTER NATURAL DISASTERS USING DEEP LEARNING AND SATELLITE IMAGERY
- Creator
- Jones, Scott F
- Date
- 2019
- Description
-
Since 1980, millions of people have been harmed by natural disasters that have cost communities across the world over three trillion dollars. After a natural disaster has occurred, the creation of maps that identify the damage to buildings and infrastructure is imperative. Currently, many organizations perform this task manually, using pre- and post-disaster images and well-trained professionals to determine the degree and extent of damage. This manual task can take days to complete. I propose to perform this task automatically using post-disaster satellite imagery. I used a pre-trained neural network, SegNet, and replaced its last layer with a simple damage classification scheme. This final layer was then re-trained on cropped segments of the disaster's satellite imagery to identify civil structures that had been damaged. The data were obtained from a publicly accessible source, the Copernicus EMS system, which provided three-channel (RGB) reference and damage grading maps used to create the ground-truth images and the images of the damaged terrain. The resulting network was 85% accurate at labelling the pixels in an image of the disaster from Typhoon Haiyan. The test results show that it is possible to create these maps quickly and efficiently.
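The 85% figure above is a per-pixel labelling accuracy; a minimal sketch of that metric (class names here are invented, not the thesis's actual label set):

```python
# Per-pixel accuracy between a predicted damage map and the ground truth.

def pixel_accuracy(pred, truth):
    """Fraction of pixels whose predicted class matches the ground truth."""
    flat_p = [c for row in pred for c in row]
    flat_t = [c for row in truth for c in row]
    assert len(flat_p) == len(flat_t), "maps must have the same shape"
    hits = sum(p == t for p, t in zip(flat_p, flat_t))
    return hits / len(flat_t)

truth = [["intact", "damaged"], ["damaged", "damaged"]]
pred  = [["intact", "damaged"], ["intact",  "damaged"]]
print(pixel_accuracy(pred, truth))  # 3 of 4 pixels agree -> 0.75
```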
- Title
- ACCELERATING I/O USING DATA LABELS: A CONTENTION-AWARE, MULTI-TIERED, SCALABLE, AND DISTRIBUTED I/O PLATFORM
- Creator
- Kougkas, Antonios
- Date
- 2019
- Description
-
Parallel file systems (PFS) have been the dominant storage solution in High-Performance Computing (HPC) for several years. However, as we move towards the exascale era, PFS face several limitations, such as scalability, complexity, metadata management, data synchronization, and access latency, which can seriously affect storage performance. These challenges, along with the unprecedented data explosion, have accentuated the research conundrum known as the I/O bottleneck. Moreover, the extreme computing scale that exascale machines promise brings forward another important limitation of the existing I/O path: multiple large scientific applications will be accessing shared storage resources at the same time and thus competing for them. This phenomenon is known as cross-application I/O interference and is one of the most challenging performance degradation factors, even at today's petascale. To address some of the above issues, modern system designs have introduced a new memory and storage hierarchy, filled with novel special hardware technologies, that aims to ease the I/O bottleneck. However, software for management, I/O scheduling, and efficient data movement in this new, complicated landscape of multi-tiered I/O infrastructure is limited at best. The added complexity of data access using buffering resources needs to be addressed and is of the utmost priority for several scientific sites and communities.
This study makes steps towards I/O acceleration in HPC by proposing: a) a new subsystem for I/O convergence between HPC and BigData storage ecosystems; b) a new subsystem equipped with several advanced I/O buffering techniques for the deep memory and storage hierarchy; c) a new subsystem that implements several I/O scheduling algorithms to prevent the negative effects of I/O contention; and d) a new storage system that relies on a novel abstract notion of a data label, which allows the I/O system to provide storage flexibility, versatility, agility, and malleability. The proposed work has been evaluated, and the results suggest that substantial improvements in I/O performance have been achieved.
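The core idea of a "data label" can be caricatured as a level of indirection between a request and the tier that serves it. The toy sketch below is invented for illustration (tier names and the policy are assumptions, and the dissertation's system is far richer), but it shows the indirection:

```python
# Toy sketch: each I/O request carries a label, and a policy table maps
# labels to storage tiers. Labels, tiers, and policy are all hypothetical.

TIER_POLICY = {
    "burst":    "nvme_buffer",   # latency-sensitive checkpoint bursts
    "shared":   "burst_buffer",  # data shared across applications
    "archival": "pfs",           # cold data to the parallel file system
}

def place(label, default="pfs"):
    """Resolve a request's label to a storage tier."""
    return TIER_POLICY.get(label, default)

requests = [("ckpt-001", "burst"), ("mesh-042", "archival"), ("tmp-7", "scratch")]
placements = {name: place(label) for name, label in requests}
print(placements)
```

Because placement is decided by the label rather than by a fixed mount point, the runtime is free to retarget tiers (the "malleability" the abstract mentions) without touching application code.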
- Title
- Superior Reversible Hydrogen Storage of the LiBH4 + MgH2 System Enabled by High-Energy Ball Milling with In-Situ Aerosol Spraying
- Creator
- Ding, Zhao
- Date
- 2019
- Description
-
The prospect of the LiBH4 + MgH2 mixture has been limited by its sluggish kinetics, despite its theoretically excellent hydrogen storage capacity. We have designed a novel process, termed BMAS, consisting of high-energy ball milling of MgH2 at ambient temperature along with aerosol spraying of LiBH4 dissolved in tetrahydrofuran (THF) solution, to improve the thermodynamic and kinetic performance of LiBH4 + MgH2 hydrogen storage materials. Through this BMAS process, we have demonstrated for the first time that the reaction between LiBH4 and MgH2 can take place near ambient temperature, and that the in-situ formation of LiH and MgB2 during BMAS is achieved through a new reaction pathway in which nano-LiBH4 first decomposes to Li2B12H12 and the newly formed Li2B12H12 then reacts with MgH2 to form LiH and MgB2. Using the newly designed automated BMAS apparatus, we have successfully produced a BMAS mixture containing 1 mole of MgH2 + 0.5 mole of LiBH4, i.e., with 25% LiBH4 in the mixture for the stoichiometric reaction. The BMAS powder with 25% LiBH4 can release and absorb ~5.7 wt.% H2 at 265 °C, the highest value ever reported for the LiBH4 + MgH2 system at temperatures ≤ 265 °C. It is found that this unusually high reversible hydrogen storage is accomplished through two parallel reaction pathways. In one, nano-LiBH4 first decomposes to form Li2B12H12 and H2, and Li2B12H12 then reacts with MgH2 to form MgB2, LiH, and H2. In the other, nano-MgH2 first decomposes to form Mg and H2, and Mg then reacts with LiBH4 to form MgB2, LiH, and H2. These reaction pathways become possible because of the presence of nano-LiBH4 and nano-MgH2 and their intimate mixing, enabled by the BMAS process. We have also revealed that the solid-state dehydrogenation kinetics of the BMAS powder with 25% LiBH4 at 265 °C is nucleation-and-growth controlled.
The rate-limiting step for dehydrogenation via the two parallel reaction pathways has been identified, through examination of the elementary reactions, as the nucleation and growth of the reaction products LiH and MgB2. Given the significantly improved hydrogen storage capacity for the LiBH4 + MgH2 system obtained via BMAS, an investigation into increasing the LiBH4 content in the BMAS powder from 25% to 50% was performed. It is shown that Mg(BH4)2 can be produced during the BMAS process and contributes to H2 release at temperatures ≤ 265 °C. Three parallel H2 release mechanisms have been identified in the BMAS powder: (i) nano-LiBH4 decomposes to form Li2B12H12 and H2, and Li2B12H12 then reacts with MgH2 to form MgB2, LiH, and H2; (ii) nano-Mg(BH4)2 decomposes to form MgH2, B, and H2; and (iii) nano-MgH2 decomposes to Mg and H2. Together these three mechanisms result in 4.11 wt.% H2 release in the solid state at temperatures ≤ 265 °C. Furthermore, the predicted property of Fe3B of absorbing more H2 than it releases is confirmed experimentally for the first time in this study. Various models have been identified to describe the kinetics of solid-state dehydrogenation of the BMAS powder with 50% LiBH4 at 265 °C over increasing cycles. Additionally, the geometries of the solid particles involved in the dehydrogenation have also been estimated.
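For context on the ~5.7 wt.% figure: the stoichiometric reaction for this system, 2 LiBH4 + MgH2 → 2 LiH + MgB2 + 4 H2, has a well-known theoretical capacity of about 11.5 wt.% H2, which a quick check from standard atomic masses reproduces:

```python
# Back-of-the-envelope theoretical capacity of the LiBH4 + MgH2 system
# via 2 LiBH4 + MgH2 -> 2 LiH + MgB2 + 4 H2 (standard atomic masses, g/mol).

M = {"Li": 6.941, "B": 10.811, "H": 1.008, "Mg": 24.305}

LiBH4 = M["Li"] + M["B"] + 4 * M["H"]   # ~21.78 g/mol
MgH2  = M["Mg"] + 2 * M["H"]            # ~26.32 g/mol
H2    = 2 * M["H"]

reactants = 2 * LiBH4 + MgH2            # mass per formula unit of reaction
released  = 4 * H2                      # 4 mol H2 released per formula unit

wt_pct = 100 * released / reactants
print(f"theoretical capacity: {wt_pct:.1f} wt.% H2")  # ~11.5 wt.%
```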
- Title
- DEEP LEARNING IN ENGINEERING MECHANICS: WAVE PROPAGATION AND DYNAMICS IMPLEMENTATIONS
- Creator
- Finol Berrueta, David
- Date
- 2019
- Description
-
With the advent of Artificial Intelligence research in the 1960s, the need for intelligent systems that are able to truly comprehend the physical world around them became relevant. Significant milestones in machine learning and, in particular, deep learning during the past decade have led to advanced data-driven models that are able to approximate complex functions from pure observations. When applied to physics-based scenarios, the vast majority of these models rely on statistical and optimization constructs, leaving minimal room in their development for the physics-driven frameworks that the more traditional engineering and science fields have been developing for centuries. Those traditional engineering fields, such as mechanics, have instead evolved a different set of modeling tools that are mostly based on physics-driven assumptions and equations, typically aided by statistical tools for uncertainty handling. Deep learning models can provide significant implementation advantages over traditional engineering modeling tools in commercial systems at current economies of scale, but they tend to lack the strong reliability their counterparts naturally allow. The work presented in this thesis assesses the potential of deep learning tools, such as Convolutional Neural Networks and Long Short-Term Memory networks, as data-driven models in engineering mechanics, with a major focus on vibration problems. In particular, two implementation cases are presented: a data-driven surrogate model for a phononic eigenvalue problem, and a physics-learning model in a rigid-body dynamics scenario. Through these applications, this work shows that select deep learning architectures can appropriately approximate complex functions found in engineering mechanics from a system's time history or state and can generalize to meet set expectations outside their training domains.
In spatio-temporal systems, it is also shown that local learning windows along space and time can improve model reliability in approximation and generalization performance.
- Title
- Toward a Comprehensive Atomistic View of Bacterial Outer Membrane Remodeling and Antimicrobial Peptide Susceptibility
- Creator
- Rice, Amy
- Date
- 2019
- Description
-
The cell membrane is arguably one of the most important and complex features of a cell, serving to demarcate “self” from “environment” and selectively controlling the flow of material in and out of the cell. Bacterial cell membranes are of additional interest because they serve as the first point of contact for antibiotic drugs and other antimicrobial agents. In this work, I use atomistic molecular dynamics simulations to study factors that alter bacterial susceptibility to antimicrobial agents and their environment, with the goal of better understanding how bacteria are able to modulate their sensitivity. First, I present results from a series of simulations of antimicrobial peptides (AMPs) interacting with phospholipid bilayers to elucidate the observed interaction differences between arginine- and lysine-containing AMPs. Simulations show that the extensive interaction with arginine is due in part to arginine's strong atomic charge distribution, rather than being purely an effect of its greater hydrogen bond capacity. These results highlight the role of charge and hydrogen bond strength in peptide bilayer insertion and offer potential insights for designing more potent analogues in the future. Next, the effects of bacterial lipopolysaccharide (LPS) modifications are examined, first to study how three key modifications observed in S. enterica affect bilayer properties, and then to examine the role these modifications play in AMP resistance. We hypothesize that defects at the crystalline/liquid-ordered boundary facilitate LL-37 intercalation into the outer membrane, whereas LPS modification protects against this process through already increased crystallinity and packing efficiency. These results further improve our understanding of outer membrane chemical properties and help elucidate how outer membrane modification systems are able to alter bacterial virulence and susceptibility.
Lastly, I investigate the effects of ion type and phosphate charge on four distinct LPS types. Simulations show that bilayer properties are highly influenced by the choice of cation type, ion parameterization, and phosphate group charges. Additional free energy perturbation simulations predict that the protonated LPS state should dominate at physiological pH, in contrast to the deprotonated state utilized by many LPS force fields. Overall, these results reveal inaccuracies in existing LPS force fields and establish guidelines to better reproduce experimental LPS membrane properties.
- Title
- CYBER PHYSICAL SYSTEM WITH COUPLED NETWORKS: SECURITY AND PRIVACY
- Creator
- Zhao, Jing
- Date
- 2019
- Description
-
With the development of cyber physical systems, people and electronic devices are connected via various networks. In many scenarios, different networks are strongly coupled with each other; for example, the power grid is strongly coupled with the communication network in the smart grid. On the one hand, such coupling brings benefits such as improved efficiency and quick response to system service exceptions. On the other hand, the coupling of different networks also brings security and privacy problems. In this thesis we study two scenarios: the secure coupling of visual connection with short-range pairwise communication, and the privacy-aware coupling of the smart home with the smart grid. For the first scenario, we propose SCsec, a secure screen-camera communication system that achieves secure one-way communication. The throughput of SCsec is comparable to that of current screen communication systems. For the second scenario, we propose a novel randomized battery load hiding algorithm which ensures differential privacy for smart homes with smart meters.
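To make the second scenario concrete, a common differentially private masking step adds Laplace noise to each metered reading and clips the result to a physically plausible range. The sketch below is a generic illustration of that idea, not the thesis's algorithm; the epsilon, sensitivity, and load values are assumptions:

```python
# Illustrative (not the thesis's) load-hiding step: Laplace noise plus a
# non-negativity clip on each reported reading.
import math
import random

def laplace(scale, rng):
    """Sample Laplace(0, scale) via the inverse-CDF method."""
    u = rng.random() - 0.5
    sign = 1.0 if u >= 0 else -1.0
    return -scale * sign * math.log(1 - 2 * abs(u))

def mask_readings(readings, epsilon, sensitivity, seed=0):
    """Report noisy, non-negative loads; noise scale = sensitivity / epsilon."""
    rng = random.Random(seed)
    scale = sensitivity / epsilon
    return [max(0.0, r + laplace(scale, rng)) for r in readings]

true_load = [0.4, 0.4, 2.1, 0.3]   # kW, hypothetical household trace
masked = mask_readings(true_load, epsilon=0.5, sensitivity=1.0)
print(masked)
```

Note the clip makes this a sketch only: a battery-backed scheme like the thesis's must also respect charge/discharge limits, which plain clipping ignores.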
- Title
- TWO ESSAYS ON CORPORATE FINANCE AND CAPITAL MARKET
- Creator
- Zhao, Tianyu
- Date
- 2019
- Description
-
Two of the most important credit suppliers in the capital market of the United States are commercial banks and online peer-to-peer (P2P) lending platforms. Relaxing restrictions on interstate banking and intrastate branching brought more competition and better efficiency to the banking industry, which provides firms a much lower cost of raising external funds. We find that bank deregulation decreased the investment-cash flow sensitivity of firms, which explains the declining trend of investment-cash flow sensitivity since the late 1970s. With the development of the internet and transaction technology, online P2P lending platforms have become more and more important in the capital market. However, the borrowers on P2P platforms are private individuals with limited official information. We document that when a borrower's official information is limited, social capital, a factor containing region-based soft information, plays an important role in affecting online P2P lending activities.
- Title
- POWER PROFILING, ANALYSIS, LEARNING, AND MANAGEMENT FOR HIGH-PERFORMANCE COMPUTING
- Creator
- Wallace, Sean
- Date
- 2017, 2017-05
- Description
-
As the field of supercomputing continues its relentless push towards greater speeds and higher levels of parallelism, the power consumption of these large-scale systems is steadily transitioning from a burden to a serious problem. While the machines are highly scalable, the buildings, power supplies, etc. are not. Even the most power-efficient systems today consume one to two megawatts per petaflop/s. Multiplying that by 1,000 to reach the next generation of supercomputer (i.e., exascale), the power necessary just to turn the machine on is simply impractical. Thus, power has become a primary design constraint for future supercomputing system designs. As such, it has become a matter of paramount importance to understand exactly how current generation systems utilize power and what implications this has for future systems. As the saying goes, you can't manage what you don't measure. This work addresses several large hurdles in fully understanding the power consumption of current systems and making actionable decisions based on this understanding. First, by leveraging environmental data collected from runs of real leadership-class applications, we analyze power consumption and temperature as they pertain to scale on a production IBM Blue Gene/Q supercomputer. Then, through development of a new power monitoring library, MonEQ, we quantitatively studied how power is consumed in major portions of the system (e.g., CPU, memory, etc.) through profiling of microbenchmarks. Expanding on this, we then studied how scale and network topology affect power consumption for several well-known benchmarks. To increase the effectiveness of our power monitoring library, we extended it to work with many of the most common classes of hardware available in today's HPC landscape. In doing so, we provided an in-depth analysis of what data is obtainable, what the process of obtaining it is like, and how data from different systems compares.
Next, utilizing the knowledge gained from these experiences, we developed a new scheduling approach which, by utilizing power data, can effectively keep a production system's power consumption under a user-specified power cap without modification to the applications running on the system. Finally, we extend this scheduling approach beyond a single objective, so that the scheduler can optimize on multiple criteria instead of simply considering system utilization.
Ph.D. in Computer Science, May 2017
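The power-capped scheduling idea can be sketched as a greedy admission pass: launch queued jobs in order only while their estimated power draws fit under the cap. This is a simplified illustration, not the dissertation's scheduler; job names and wattages are hypothetical:

```python
# Greedy power-cap admission: a job is launched only if its estimated
# draw fits in the remaining power budget; jobs that do not fit stay
# queued rather than blocking later, smaller jobs (a form of backfill).

def schedule_under_cap(queue, running_power, cap):
    """Return the names of queued jobs to launch without exceeding the cap."""
    launched, budget = [], cap - running_power
    for name, est_watts in queue:
        if est_watts <= budget:
            launched.append(name)
            budget -= est_watts
    return launched

queue = [("lammps", 400), ("qmcpack", 900), ("nekbone", 250)]
print(schedule_under_cap(queue, running_power=500, cap=1200))  # ['lammps', 'nekbone']
```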
- Title
- UNDERSTANDING REACTION MECHANISMS AND CONTROLLING REACTIVE SURFACE SPECIES DURING ATOMIC LAYER DEPOSITION OF METAL CHALCOGENIDES
- Creator
- Weimer, Matthew S.
- Date
- 2016, 2016-07
- Description
-
Atomic layer deposition (ALD) is a thin film growth technique that relies on self-limiting reactions between vapor precursors and a surface. Significant progress has been made in the scope of materials grown by ALD, enabled by precursor development and investigation of surface reaction mechanisms, but further progress is still necessary. This dissertation has two portions. The first is the development of organometallic compounds to enable new material growth by ALD. The second is the development of X-ray absorption spectroscopy (XAS) tools and techniques for observing the metal coordination environment during ALD. One class of materials that can be difficult to make is doped materials, whose composition determines the film properties. One example is vanadium-doped indium sulfide, which was proposed as an intermediate-bandgap solar absorbing material that can absorb multiple ranges of light. Two new precursors were developed for the growth of indium sulfide and vanadium sulfide. The indium sulfide material grown has shown superior film photocurrent qualities, and doping of vanadium into indium sulfide with fine control over atomic placement was achieved. Films were grown that showed secondary absorptions aligned with theoretical calculations. The flexibility of the V(III) precursor is shown by a short study on the control of the difficult vanadium-oxygen system in the deposition of stoichiometric oxides. To illustrate how XAS can be used in conjunction with other techniques, tin dioxide growth from a stannylene precursor and hydrogen peroxide was studied. From ex situ XAS measurements, the nucleation, growth, and termination reaction mechanisms were discovered. Next, a mobile ALD reactor allowed in situ XAS measurements to be performed on “flat” substrates for the first time.
Specifically, the local coordination environment and surface reactions were followed while erbium was doped into alumina, and the coordination environment of manganese was investigated in the growth of manganese-doped zinc oxide, with the ability to probe in-plane versus out-of-plane bonds. These unique in situ XAS experiments allow for greater understanding of metal and non-metal precursors as they interact with surfaces. This enhanced understanding enables new precursor development, which leads to better control over surface chemistry and new materials.
Ph.D. in Chemistry, July 2016
- Title
- DISTRIBUTED VIDEO CODING FOR RESOURCE CONSTRAINED VIDEO APPLICATIONS
- Creator
- Liu, Wenhui
- Date
- 2014, 2014-05
- Description
-
Video coding technology has played a key role in the explosion of the current multimedia society, with increasing resolution and quality. Such big success is largely built on the conventional video coding paradigm, where motion estimation and compensation are performed at the encoder. This asymmetry in complexity is well-suited for applications where the video sequence is encoded once and decoded many times. However, some newly emerging applications, such as wireless video surveillance, wireless PC cameras, and multimedia sensor networks, require low-complexity encoding while possibly affording high-complexity decoding. Therefore, a challenging problem that emerges with this new type of visual communication system is how to achieve low-complexity encoding video compression while maintaining good coding efficiency. Distributed video coding (DVC) provides low-complexity encoding solutions for video communication with limited computational power or energy constraints. In DVC, the source video information is independently encoded at lightweight encoders. At the decoder, all the received bitstreams are jointly decoded by exploiting the statistical dependencies between them. In this way, motion estimation and its computational complexity are shifted from the encoder to the decoder. However, DVC also has its own restrictions. Its low coding efficiency remains a challenging issue compared to conventional video coding. Although DVC is robust to channel loss due to its intrinsic feature of independent encoders and a joint decoder, its error resiliency for medium to large transmission errors is weak. In this dissertation, the previously proposed low-complexity DVC (LC-DVC) architecture is first introduced. After that, continued work is presented to further improve the quality of the side information (SI).
The proposed method is called spatio-temporal joint bilateral upsampling (STJBU) based SI generation, where the geometric closeness of pixels and their photometric similarity are exploited to reduce noise while preserving edge information. Moreover, a distributed multiple description coding (DMDC) scheme is proposed by combining multiple description (MD) coding with LC-DVC to improve its error resiliency. All the proposed schemes are described in detail, and rate-distortion analyses are presented in this dissertation. All these features make LC-DVC a great solution for resource-constrained applications.
Ph.D. in Electrical Engineering, May 2014
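The joint bilateral principle behind STJBU, combining geometric closeness with photometric similarity, can be shown in one dimension. This is a generic joint bilateral filter sketch, not the thesis's STJBU method; the signals and sigmas are invented:

```python
# 1-D joint bilateral filter: each output sample is a weighted average
# whose weights multiply a spatial Gaussian (geometric closeness) by a
# range Gaussian on a guide signal (photometric similarity).
import math

def joint_bilateral(signal, guide, radius=2, sigma_s=1.0, sigma_r=0.1):
    out = []
    for i in range(len(signal)):
        num = den = 0.0
        for j in range(max(0, i - radius), min(len(signal), i + radius + 1)):
            w = (math.exp(-((i - j) ** 2) / (2 * sigma_s ** 2)) *
                 math.exp(-((guide[i] - guide[j]) ** 2) / (2 * sigma_r ** 2)))
            num += w * signal[j]
            den += w
        out.append(num / den)
    return out

# Noise is smoothed within each flat region, but the guide's edge between
# samples 2 and 3 is preserved rather than blurred across.
signal = [0.1, 0.0, 0.05, 1.0, 0.95, 1.05]
guide  = [0.0, 0.0, 0.0,  1.0, 1.0,  1.0]
print(joint_bilateral(signal, guide))
```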
- Title
- THE RELATIONSHIPS OF BIOLOGICAL, PSYCHOLOGICAL, AND SOCIAL FACTORS TO PERCEIVED SEIZURE SEVERITY IN INDIVIDUALS WITH TEMPORAL LOBE EPILEPSY
- Creator
- Fong, Wing Man
- Date
- 2012-12-17, 2012-12
- Description
-
Epilepsy is a chronic neurological condition that can cause negative consequences in all areas of life for people with the condition. Research evidence has led the goal of treatment to shift gradually from targeting only the reduction of seizure frequency to also decreasing individuals' perception of seizure severity. While several studies have suggested that psychological and social factors, in addition to epilepsy-related biological factors, can impact seizure severity perception, this area has received much less attention. Therefore, this study examined individuals' perception of seizure severity using a biopsychosocial framework, which takes into account biological, psychological, and social factors, to develop a comprehensive understanding of the perception of seizure severity. Fifty-seven individuals from a neurology clinic at a tertiary medical center with a primary diagnosis of localization-related temporal lobe epilepsy participated in this study. They were interviewed for demographic information and were assessed using the Quality of Life in Epilepsy-89, the Beck Depression Inventory, the Stressful Life Events Scale, and the Liverpool Seizure Severity Scale. A multiple regression analysis indicated that the number of antiepileptic drugs, perceived memory function, the number of stressful life events, and social support were all significant predictors of perceived seizure severity. The current findings support a biopsychosocial approach to understanding the perception of seizure severity among people with epilepsy. Clinical implications of these findings are discussed.
M.S. in Psychology, December 2012
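As a scaled-down illustration of the regression analysis above (the study fit four predictors jointly; here a single predictor is fit by ordinary least squares, and all data values are invented):

```python
# Single-predictor OLS: slope and intercept minimizing squared error.

def ols_fit(x, y):
    """Return (slope, intercept) of the least-squares line."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    slope = sxy / sxx
    return slope, my - slope * mx

# e.g. number of stressful life events vs. a perceived-severity score
events   = [0, 1, 2, 3, 4, 5]
severity = [10, 13, 14, 18, 19, 23]
slope, intercept = ols_fit(events, severity)
print(round(slope, 3), round(intercept, 3))
```

A positive slope here would correspond to the reported finding that more stressful life events predict higher perceived seizure severity.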
- Title
- POWER ANALYSIS AND CONTROL OPTIMIZATION OF A HIGH GAIN RESONANT DC-DC CONVERTER
- Creator
- Wu, Haowen
- Date
- 2017, 2017-05
- Description
-
DC-DC converter applications require high-power, high-efficiency performance, especially in automotive and renewable energy applications. A desirable converter should have a fast response and a stable output under different conditions. This thesis studies a novel high-gain DC-DC resonant converter and its control. Based on circuit analysis and operation evaluation, feed-forward and feedback control methods are proposed in order to achieve stability under a variety of changes. Moreover, the circuit topology is investigated with a view to improving efficiency. Simulations in PSIM, PLECS, and MATLAB Simulink have been carried out to validate the controller design and topology development. The simulation results are consistent with the theoretical analysis, and the overall high efficiency at the nominal operating point is further increased.
M.S. in Electrical Engineering, May 2017
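The combination of feed-forward and feedback control mentioned above can be sketched on a toy plant: a feed-forward term supplies the nominal input for the target output, and a PI feedback term corrects residual error. All gains and the first-order plant model below are illustrative assumptions, not the converter's actual dynamics:

```python
# Toy feed-forward + PI feedback loop on a discrete first-order plant
# with unit DC gain (tau * dy/dt = u - y), so u = setpoint is the
# feed-forward input that holds y at the setpoint in steady state.

def simulate(setpoint=12.0, steps=500, dt=1e-3):
    y, integ = 0.0, 0.0
    kp, ki, tau = 2.0, 40.0, 0.02
    ff = setpoint                        # feed-forward: nominal input
    for _ in range(steps):
        err = setpoint - y
        integ += err * dt
        u = ff + kp * err + ki * integ   # feed-forward + PI feedback
        y += dt * (u - y) / tau          # first-order plant update
    return y

print(simulate())  # settles close to the 12.0 setpoint
```

The feed-forward term does the bulk of the work immediately, which is why such schemes give faster responses than feedback alone; the PI term then removes the remaining offset.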
- Title
- A METHODOLOGY FOR MAJOR BUS ROUTE RESCHEDULING TO IMPROVE TRANSIT PERFORMANCE
- Creator
- Lu, Xi
- Date
- 2012-08-01, 2012-07
- Description
-
In a period of growing transit operating deficits, increasing attention and concern are being directed at both the decreasing productivity of transit systems in general and the broad differences in measured service performance compiled for various transit systems. The purpose of this study is to assist public transit managers by providing them with a mathematical technique to analyze the efficiency with which service is produced and the effectiveness with which it is consumed. By providing an analysis model that does not require an extensive amount of data, it gives operators a tool for decision-making about profit and market growth. Thus, transit managers will be able to assess the performance of the agency and identify areas where improvements can realistically be achieved. Using the urban operations of the Chicago Transit Authority, a transit performance model is developed to analyze the efficiency and effectiveness of the city's bus system. The existing system comprises 144 bus routes whose start and end points, dispatching frequency, and ridership change between weekdays and weekends and between peak and off-peak hours. On many of these routes, vehicles are used inefficiently and ineffectively: some routes carry only a few passengers in off-peak hours, yet on other routes through the Central Business District (CBD) area there are often not enough buses at peak hour. Also, some routes cannot provide enough on-time arrival service, and special groups such as the elderly and handicapped do not receive enough service in certain areas. The goal of this case research is to understand the transit performance of the Chicago Transit Authority and to select the optimal improvement projects to maximize performance in terms of economy and social equity.
M.S. in Civil Engineering, July 2012
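The efficiency/effectiveness split used above has a standard operational form: efficiency relates service produced to inputs, and effectiveness relates service consumed to service produced. A minimal sketch with hypothetical route figures (not CTA data):

```python
# Hypothetical per-route performance indicators: efficiency (output per
# input) versus effectiveness (consumption per output).

def route_metrics(cost, vehicle_hours, vehicle_miles, passengers):
    return {
        "cost_efficiency": vehicle_miles / cost,             # miles per dollar
        "service_effectiveness": passengers / vehicle_hours, # riders per veh-hour
        "utilization": passengers / vehicle_miles,           # riders per veh-mile
    }

peak = route_metrics(cost=5200.0, vehicle_hours=80.0,
                     vehicle_miles=960.0, passengers=3400)
offpeak = route_metrics(cost=3100.0, vehicle_hours=60.0,
                        vehicle_miles=720.0, passengers=620)
print(peak["service_effectiveness"], offpeak["service_effectiveness"])
```

Comparing such ratios across routes and periods is what flags the mismatches the abstract describes: near-empty off-peak runs and under-served peak CBD routes.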
- Title
- Structural Uncertainty Analysis of Nuclear Reactor Core Load Pads
- Creator
- Wozniak, Nicholas
- Date
- 2019
- Description
-
In fast spectrum nuclear reactors, reactivity is directly related to the capability of the reactor to sustain a fission chain reaction for...
Show moreIn fast spectrum nuclear reactors, reactivity is directly related to the capability of the reactor to sustain a fission chain reaction for power production. Historically, mechanical/structural analysis and design have been driven primarily by deterministic methods. However, reactivity is extremely sensitive to the location of the fuel within the reactor; which is subject to uncertainties. This makes deterministic models unstable and can allow manufacturing errors to contribute to uncertainties in analysis, resulting in potential safety concerns and incorrect reactor lifetime prediction. One potential means to address this challenge is the use of stochastic analysis. A framework is presented which introduces uncertainty analysis through the use of Monte Carlo Simulation. Latin Hypercube Sampling is used to reduce the number of sample runs and the computational effort and storage space requirements for the results. Geometric parameters such as the gaps at the load pad contact points, the location of the above core load pad (ACLP), and even temperature gradient profiles, that are important to the design of nuclear reactors are varied, and their effects on the overall performance are studied through sensitivity analysis. The main focus was to quantify the effects of the variation of these parameters directly on the variation of the contact forces and deformations of the fuel assemblies which house and control the movement of the fuel. Based on the results of the sensitivity study, this study found that the ACLP location has the largest effect on contact forces. And as such, any uncertainty in this parameter results in a rather large variation in the intensity of the contact force. Furthermore, specific recommendations are given to help control these variations as well as for further investigations on other parameters that may be significant to the design of fuel assemblies.
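As a rough illustration of the sampling step described in this abstract, the sketch below draws a small Latin Hypercube design over three hypothetical geometric parameters. The parameter names and ranges are placeholders, not values from the study; only the technique (stratified sampling via SciPy's quasi-Monte Carlo module) matches the text.

```python
import numpy as np
from scipy.stats import qmc

# Hypothetical parameters (illustrative only): load-pad gap (mm),
# ACLP axial location (mm), peak temperature gradient (K/m).
lower = np.array([0.5, 1000.0, 50.0])
upper = np.array([1.5, 1100.0, 150.0])

sampler = qmc.LatinHypercube(d=3, seed=0)
unit_samples = sampler.random(n=100)             # 100 points in [0, 1)^3
samples = qmc.scale(unit_samples, lower, upper)  # map to parameter ranges

# Each row is one input set for a reactor-model run; the stratification
# covers the space with far fewer runs than plain Monte Carlo.
assert samples.shape == (100, 3)
```

Because each of the 100 strata per dimension contains exactly one point, sensitivity estimates stabilize with far fewer model evaluations than independent random sampling would need.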
- Title
- SITUATING FANS' SOCIAL MEDIA PRACTICES IN THE CONTEXT OF FAN STUDIES
- Creator
- Rao, Xi
- Date
- 2015, 2015-05
- Description
-
This paper explores how TV fans use social media. By reviewing academic perspectives on fandom of TV shows and on social media over the past several decades, this paper discusses the way audiences react to TV shows using the affordances of social media. At the beginning, this article traces the image, activities, and meaning of fandom. Fans were portrayed as deviant creatures until the late 1980s. This image changed when scholars looked more closely at fans and their creative works and realized that fans productively generated creative products based on the given materials, products that reflected the social experience of their creators. Fandom was soon understood as a way for fans to resist the dominant culture in society. Later on, social media offered fans an immediate, widespread platform to contribute to the TV conversation. With this platform, ordinary fans became visible. Unlike productive fans, these ordinary fans are less active in making new meanings of the TV program. Interestingly, a large number of TV audiences use social media during the broadcast, participating in the discussion of shows. This evidences the convergence of the traditional medium, TV, and the new medium, social media. It is also found that comments from TV viewers on social media, in turn, influence the content of the TV show. Rather than merely spectators, fans are a part of TV production. As a special case, gender is discussed from the perspective of fandom. Female writers dominate the fan-fiction world, and studying the producers and products of fan fiction may provide valuable insights into gender dilemmas in the real world. At the end of this essay, I summarize how social media affect the participation of TV fans and discuss remaining questions worthy of future research. Since fandom has become ubiquitous in the social media age, it is possible to explore fandom as a way to better understand our daily life.
M.S. in Information Architecture, May 2015
- Title
- TRANSMISSION RESERVE DESIGN IN ELECTRICITY MARKETS CONSIDERING RAMPING CONSTRAINTS AND LOAD UNCERTAINTY
- Creator
- Xiao, Xuli
- Date
- 2016, 2016-12
- Description
-
With the increasing penetration of renewable energy, uncertainty challenges ISOs to keep the power balance in real time. As ramping issues draw public attention, many ISOs have instituted flexible ramping products to ensure ramping reserve on the generation side. However, not all ramping reserves are deliverable when a transmission line is already congested. In the real-time market, if an uncertain load estimate for the peak time t+10 minutes is known in advance, SCUC/SCED can spare transmission reserve by changing the dispatch at time t subject to additional uncertain-load constraints. To spare transmission reserve under uncertainty, this research proposes an uncertain load estimation that generates an estimated uncertain load and uncertainty constraints at t+10 minutes in SCUC/SCED: with the help of a stochastic optimization model, uncertainties are quantified as a random actual load y and used in a modified stochastic model for undeliverable ramping reserve issues; once the optimal total system generation x is obtained and treated as an estimated uncertain load, uncertainty constraints are added at t+10 minutes in SCUC/SCED to obtain a secure dispatch at t. Transmission ramping reserve is thus ensured by a change in dispatch at t. Numerical results show that this design improves the economics and scalability of power systems, and the scalability analysis shows that it works for power systems of any scale with multiple local peak loads.
M.S. in Electrical Engineering, December 2016
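The core idea of carrying reserve against an estimated future load can be sketched with a toy two-generator economic dispatch. All costs, limits, and loads below are invented for illustration; the actual SCUC/SCED formulation in the thesis is far richer (network constraints, unit commitment, stochastic load):

```python
import numpy as np
from scipy.optimize import linprog

# Toy data: G1 is cheap but slow-ramping, G2 is expensive but flexible.
cost = [20.0, 50.0, 0.0, 0.0]   # $/MWh on p1, p2; ramp headroom r1, r2 is free
load_now = 120.0                 # MW to serve at time t
load_est = 170.0                 # estimated uncertain load at t+10 min
# Variables x = [p1, p2, r1, r2].
A_eq = [[1.0, 1.0, 0.0, 0.0]]                       # p1 + p2 = load_now
b_eq = [load_now]
A_ub = [[1.0, 0.0, 1.0, 0.0],                       # p1 + r1 <= Pmax1
        [0.0, 1.0, 0.0, 1.0],                       # p2 + r2 <= Pmax2
        [-1.0, -1.0, -1.0, -1.0]]                   # p1+r1+p2+r2 >= load_est
b_ub = [100.0, 100.0, -load_est]
bounds = [(0, 100), (0, 100), (0, 10), (0, 40)]     # r_i capped by 10-min ramp

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
# Without the load_est row, the cheap unit G1 would run at 100 MW; with it,
# G1 is backed off to 90 MW so the system can still ramp to 170 MW.
print(res.x[:2])  # dispatch at t: [90., 30.]
```

The binding future-load constraint forces a costlier dispatch at t, which is exactly the "spare reserve by changing the dispatch at t" mechanism the abstract describes, here in generation-only form.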
- Title
- IN-PACKAGE DECONTAMINATION OF SALMONELLA MONTEVIDEO ON BLACK PEPPER USING PULSED LIGHT TECHNOLOGY
- Creator
- Xu, Xinran
- Date
- 2016, 2016-07
- Description
-
This study evaluated the efficacy of three pulsed light systems (1.8, 3, and 100 Hz) for inactivation of Salmonella Montevideo on the surface of black pepper. The impact of frequency, treatment time, and distance from the lamp on the rate of inactivation was studied. A reduction of 4.21 log10 CFU/g was achieved with a 35 s treatment at 16.6 cm from the lamp at 1.8 Hz. Similarly, reductions of 4.72 and 4.70 log10 CFU/g were obtained at 14.1 cm after 35 s treatments at 3 Hz and 100 Hz, respectively. Temperature measurements of packaged and unpackaged black pepper indicated that the packaging material absorbed a small amount of energy. Measurements of surface and internal temperature suggested that the temperature below the surface increased far less than the surface temperature. For instance, the surface temperature of the black pepper reached up to 200 °C, as measured by an infrared camera, within 30 to 40 s of treatment, whereas the temperature 1 mm below the surface reached up to 100 °C, depending on the distance from the lamp. The color of the black pepper did not change after pulsed light treatment (p > 0.05). These results suggest that pulsed light can potentially be used to inactivate pathogens on the black pepper surface without adversely affecting its quality.
M.S. in Food Safety and Technology, July 2016
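If one assumes simple log-linear (first-order) inactivation kinetics, which the abstract does not claim, the reported reductions translate into a decimal reduction time (D-value). The helpers below are an illustrative calculation only:

```python
import math

def log_reduction(n0_cfu_per_g: float, n_cfu_per_g: float) -> float:
    """Log10 reduction between initial and surviving counts."""
    return math.log10(n0_cfu_per_g / n_cfu_per_g)

def d_value(treatment_time_s: float, reduction_log10: float) -> float:
    """Decimal reduction time: seconds per 1-log10 kill, assuming
    log-linear survivor curves."""
    return treatment_time_s / reduction_log10

# Figures from the abstract: 4.21 log10 reduction after a 35 s treatment.
print(round(d_value(35.0, 4.21), 2))  # → 8.31 s per decade under this assumption
```

A D-value near 8.3 s would put a 5-log reduction at roughly 42 s for the 1.8 Hz system, but only if the kinetics really stay log-linear, which pulsed-light survivor curves often do not.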
- Title
- Functionalized Polyoxovanadates: Synthesis, Characterization and Properties
- Creator
- Zheng, Yujuan
- Date
- 2012-04-30, 2011-12
- Description
-
The compound [NaV(IV)6O6{(OCH2CH2)2NCH2CH2OH}6]Cl·H2O, designated C0 in this report, has an Anderson-type structure and contains a single nitrogen atom in place of a single oxygen atom in each of the six vanadium-oxygen octahedra. Attached to each nitrogen are two alkoxide arms (-O-C-C-) which terminate on separate oxygen atoms of the octahedra. A third ethoxy group (-CH2-CH2-OH) dangles freely from each nitrogen atom. The functionalized core [NaV(IV)6O6{(OCH2CH2)2NCH2CH2OH}6]+ is generated in situ and is cationic with a charge of +1; chloride is the counteranion. Each vanadium addenda atom is in the +4 oxidation state, and the six unpaired electrons give rise to interesting magnetic properties. C0 is obtained by reaction of a vanadium oxide source with triethanolamine. The purpose of the research described here was to adapt the synthetic protocol associated with triethanolamine to other similar ligands. Ligand choice was motivated by the broader goal of this research group to form (extended) hybrid electronic and magnetic materials by chemically bonding POM cores to chemically interesting ligands, such as dyes or other POM cores. In this regard, the original triethanolamine derivative has a number of shortcomings; chief among them is the inaccessibility of the alcoholic functional-group chemistry, due to its low pH, that otherwise might have been used to form linkages through the dangling alkoxy arms. Additionally, linkages containing paraffinic carbon (-C-C-), which is not a good conduit of electronic interaction, are chemically undesirable where the synthetic goal is hybrid structures with interesting electronic and magnetic properties. The results associated with three ligands, diethanolamine, N,N'-Bis(2-hydroxyethyl)ethylenediamine, and N,N-Bis(2-hydroxyethyl)ethylenediamine, are described herein.
Diethanolamine was chosen because, based on a structural comparison to triethanolamine, it was expected to produce a structure having a hydrogen atom in place of the dangling ethoxy arms, which was indeed the result. We considered this structure highly desirable because of the potential lability of the hydrogen atom (-N-H), which would allow us to functionalize the core directly. The resulting compounds were characterized by single-crystal X-ray diffraction, elemental analysis, mass spectrometry, thermogravimetric analysis, FT-IR and UV-Vis spectroscopy, manganometric titration, temperature-dependent magnetic susceptibility measurements, and cyclic voltammetry. The results of the cyclic voltammetric investigation are consistent with the labile hydrogen atom (-N-H).
M.S. in Chemistry, December 2011
- Title
- MEASUREMENT OF θ13 IN THE DOUBLE CHOOZ EXPERIMENT
- Creator
- Yang, Guang
- Date
- 2016, 2016-12
- Description
-
Neutrino oscillation has been established for over a decade. The mixing angle θ13 is one of the parameters that is most difficult to measure due to its small value. Currently, reactor antineutrino experiments provide the best knowledge of θ13, using the electron antineutrino disappearance phenomenon; their most compelling advantage is the high intensity of the reactor antineutrino rate. The Double Chooz experiment, located on the border of France and Belgium, is such an experiment, and it aims to provide one of the most precise θ13 measurements in the world. Double Chooz has a single-detector phase and a double-detector phase. For the single-detector phase, the limit on the θ13 sensitivity comes mostly from the reactor flux; however, the uncertainty on the reactor flux is highly suppressed in the double-detector phase. The oscillation analyses for the two phases follow different strategies but need similar inputs, including background estimation, detection systematics evaluation, energy reconstruction, and so on. The Double Chooz detectors are filled with gadolinium (Gd) doped liquid scintillator and use the inverse beta decay (IBD) signal, so for each phase there are two independent θ13 measurements based on different neutron capture channels (Gd or hydrogen). Multiple oscillation analyses are performed to provide the best θ13 results. In addition to the θ13 measurement, Double Chooz is also an excellent playground for diverse physics research. For example, a 252Cf calibration source study has been done to understand the spontaneous decay of this radioactive source. Further, Double Chooz also has the ability to search for sterile neutrinos in a certain mass region. Moreover, some new physics ideas can be tested in Double Chooz. In this thesis, the detailed methods used to provide a precise θ13 measurement are described and the other physics topics are introduced.
Ph.D. in Physics, December 2016
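For context, the electron antineutrino disappearance this abstract relies on is governed, in the standard two-flavor approximation used by reactor experiments, by the survival probability

```latex
P_{\bar{\nu}_e \to \bar{\nu}_e} \;\approx\; 1 - \sin^2(2\theta_{13})\,\sin^2\!\left(\frac{\Delta m^2_{31}\,L}{4E}\right)
```

where L is the reactor-to-detector baseline and E the antineutrino energy. Because the disappearance amplitude sin²(2θ13) is small, a high antineutrino rate and tight control of the flux normalization (the point of the two-detector phase) are essential.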