Search results
(8,341 - 8,360 of 10,083)
- Title
- THE STRUCTURAL AND MAGNETIC STABILITY OF SELECT FERROUS HEUSLER SYSTEMS
- Creator
- Hasier, John J.
- Date
- 2017, 2017-05
- Description
Heusler-based functional or smart materials are a deep well of solutions to future energy, heat transport, and mechanization problems. The half-metallic ferromagnetic nature of these crystalline intermetallic compounds is the source of their extraordinary properties. The loss of this magnetic ordering limits the range of application temperatures, making knowledge of the Curie point of these novel materials essential for understanding their limitations. High-throughput continuous wavelet transform spectrum analysis of magnetic balance data, generated on a custom-modified Setaram Setsys Evolution 16/18 Differential Scanning Calorimeter/Differential Thermal Analyzer with simultaneous Thermogravimetric Analyzer, was performed on select Fe-, Co-, and Mn-based Heusler compounds. The phase stability of Co-Fe-Si compounds is explored in relation to the high-Curie-temperature Co2FeSi and Fe2CoSi compounds via generation of equilibrium ternary isothermal phase diagrams at 1160 °C and 800 °C, to enable greater control of the microstructure for future thermomechanically processed bulk smart device fabrication.
Ph.D. in Materials Science and Engineering
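As a much-simplified stand-in for the wavelet-based Curie-point extraction the abstract describes (the curve shape, Curie temperature, and transition width below are invented, not values from the thesis), the Curie point shows up as the temperature where a magnetization-versus-temperature trace drops most steeply:

```python
import numpy as np

# Toy sketch: simulated magnetization trace with a smooth ferro->para
# transition at a hypothetical Curie point Tc = 1100 K.
T = np.linspace(300.0, 1300.0, 1001)         # temperature grid, 1 K steps
Tc = 1100.0                                  # hypothetical Curie point (K)
M = 0.5 * (1.0 - np.tanh((T - Tc) / 15.0))   # normalized magnetization

# The Curie point is where the magnetization drops most steeply; a CWT-based
# analysis automates finding this feature robustly in noisy measured data.
dMdT = np.gradient(M, T)
T_curie_est = T[np.argmin(dMdT)]
print(T_curie_est)   # -> 1100.0
```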
- Title
- ANALYSIS OF MECHANICAL NOISE GENERATION IN WIND TURBINE DRIVE TRAIN
- Creator
- Patel, Hirenkumar J.
- Date
- 2012-05-01
- Description
The research work presented here is part of a project, funded by the U.S. Department of Energy, to study mechanical noise generated by a wind turbine drive train. In our study, a Viryd 8 kW wind turbine drive train test bed located at the Illinois Institute of Technology was used. Various wind speeds and turbulence levels could be simulated using a computer program that controls the test bed. Acoustic measurements were carried out using a single microphone and a microphone array. The microphone array was used to localize noise sources on the drive train. Various beamforming algorithms such as FDBF, DAMAS2, CLSC, DAS, and TIDY were used to study the noise sources. Qualification experiments using synthetic sources showed that the "Clean based on spatial coherence" beamforming algorithm localizes noise sources very accurately for narrowband frequency analysis, and TIDY was found to work best for broadband analysis. The resolution of the beamform maps improved for higher frequencies of interest (>700 Hz). The continuously variable planetary (CVP) gearbox, a proprietary gearbox by Viryd, was used in the drive train to optimize the generator rotational speed. An interesting trend was observed in the active power generated at wind speeds greater than 10 m/s, where the power does not increase significantly as it is regulated at 6000 watts. The CVP speed ratio, the ratio of input to output rotational speed of the CVP, was found to exhibit a similar effect once the wind speed reaches 10 m/s. Vibrations of the drive train test bed were studied using accelerometers. It was observed that the test bed was vibrating at a fundamental frequency of 120 Hz, with harmonics of decreasing amplitude at 240 and 360 Hz. Vibrations in all degrees of motion were found to occur at similar frequencies. Acoustic beamforming using a microphone array showed that the test bed was a dominant noise source at the same frequencies.
Initially the entire test bed was covered by a plexiglass casing for safety reasons. It was found that the casing affected the microphone array measurements, as the noise produced by the components had no direct path to the array. Almost all the measured noise reached the array through the gaps between the casing and the stretcher holding it, which led to spurious microphone array results. As a result, the experiments were conducted without the casing. It was discovered after the experiments that the casing affects not only the path of the sound but also its amplitude. The components of the drive train, namely the gearbox, brake, CVP, and generator, were found to emit sound at various discrete frequencies ranging from 165 to 3885 Hz. They were also found to emit broadband noise, with the gearbox and generator being the most dominant sources. We were able to separate each noise source on a complex wind turbine drive train that contributed to the mechanical noise generation.
M.S. in Mechanical and Aerospace Engineering, May 2012
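As a minimal illustration of the delay-and-sum (DAS) idea named among the algorithms above (the array geometry, analysis frequency, and source angle here are invented assumptions, not the study's setup), a narrowband DAS scan over a uniform linear array peaks at the source direction:

```python
import numpy as np

# Minimal narrowband delay-and-sum (DAS) beamformer sketch for a uniform
# linear microphone array; all parameters are illustrative.
c, f = 343.0, 2000.0              # speed of sound (m/s), analysis freq (Hz)
M, d = 8, 0.05                    # 8 microphones spaced 5 cm apart
pos = np.arange(M) * d            # microphone x-coordinates (m)

def steering(theta_deg):
    """Far-field steering vector for a source at theta (0 deg = broadside)."""
    tau = pos * np.sin(np.radians(theta_deg)) / c   # inter-mic delays (s)
    return np.exp(-2j * np.pi * f * tau)

# One simulated frequency-domain snapshot from a source at +30 degrees.
snapshot = steering(30.0)

# Scan candidate angles: DAS output power is maximal at the true direction
# (spacing is below half a wavelength here, so there is no aliased peak).
angles = np.arange(-90, 91)
power = np.abs(np.array([steering(a).conj() @ snapshot for a in angles])) ** 2
print(int(angles[np.argmax(power)]))   # -> 30
```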
- Title
- FROM FIREPLACE TO STEAM: DOMESTIC HEATING TECHNOLOGY IN NORTHEASTERN UNITED STATES, 1840-1890
- Creator
- Morais, Caroline
- Date
- 2012-12-07
- Description
Why study nineteenth-century domestic heating technology? Besides its popular appeal and utilitarian value, domestic heating technology is one of the most significant yet least explored subjects in American history. American processes of industrialization, manufacturing, and transportation are well known; however, the impact of technological changes on the home is less familiar. Understanding past everyday lives is crucial to recognizing the processes of adjusting to new technologies, particularly those technologies essential to today's American lives that have been overlooked. This dissertation examines the shift in domestic heating modes in the Northeastern United States between 1840 and 1890. After carefully reviewing the literature on the subject of nineteenth-century heating technology, I asked myself why the domestic setting has received little scholarly or historical attention in comparison to industrial and commercial settings. The answer lies in the fact that, traditionally, historians have been more interested in public events than in those within the private environment. The significance of domestic heating technology has also been reduced due to divisions between scholarly fields and disciplines. Also, the interpretation of artifacts has been more the field of anthropologists than that of historians. Thus, few studies have narrowed their focus to a specific topic of technology and the differences in form, function, and cultural settings of its development. Investigating a historically obsolete technology and understanding the way people used it can be challenging. Domestic heating technology has advanced, and attitudes towards it have changed over time. It has been difficult to find physical evidence of early examples in the form in which people originally used them daily. Additionally, few people took the time to record their everyday-life interactions with the equipment, and actual models of the old technology are hard to find.
Mid-nineteenth-century American household heating apparatuses are a clear example of that. The inefficiency of systems prior to central heating challenged inventors and manufacturers to search for and invest in more convenient and economical options to improve the quality of life. With the development of household heating technology, people gradually abandoned fireplaces and stoves and adopted furnaces and central heating as their primary source of heating. My goal was to explore the evolving meaning of domestic heating systems as a technological symbol. By analyzing the changing responses from one technological development to the other, I was able to identify some main points that made appropriate domestic heating and ventilation a necessity for both the comfort and health of Americans who lived in the period studied. I chose the five decades between 1840 and 1890 because most of the modern conveniences were introduced into American homes for the first time during those years, and for their significance to the country's technological history. The Northeastern region was chosen as the geographical focus because the dispersion of knowledge began there, especially knowledge in heating and ventilation technology and apparatus manufacturing. The sources for the study included domestic advice manuals, architectural pattern books, engineering and architectural periodicals, patent records, manufacturers' and dealers' sales catalogs, and census schedules. The technological development of heating and ventilation systems culminated with the advent of central heating, which currently represents the technology of domestic heating methods. In the design of American homes, central heating systems have superseded previous apparatuses such as fireplaces and stoves; the latter have become an option rather than a necessity of a comfortable and convenient domestic life. This dissertation is a brief study of that moment of transition.
My intention was to expand on basic assumptions about the technological development of the American home, not to challenge them. There has already been a considerable amount of attention given to the study of the American home and home life. Therefore, I see my research as an addition to the growing knowledge of the history of American domestic technology and the people and innovations that enabled its development.
Ph.D. in Architecture, December 2012
- Title
- BOND STRENGTH COMPARISONS OF DIFFERENT SUBSTRATE PREPARATIONS AND VARIOUS BONDING AGENTS USED IN CONCRETE OVERLAYS AND ENLARGEMENTS
- Creator
- Eberhardt, Keith
- Date
- 2017-05
- Description
The bond interface line of concrete overlays and enlargements has been the focus of engineers, contractors, and manufacturers for many years. Many products and procedures have been developed to help the contractor and engineering community achieve the highest bond strength of the repair material to the host material, or substrate, to provide a quality, long-lasting repair. It is well known that the difference between a successful overlay or enlargement and one that fails can be directly linked to the preparation of the bonding surface. The objective of this study is to compare the effects that concrete removal techniques, or surface preparations, in conjunction with bonding conditions and agents, have on achieving the best direct tensile and guillotine test results. A unique step of this research is the use of an overlay material with the same mix design as the base slab. Most overlay materials are stronger and do not use the same coarse aggregate and cement. By making the substrate and the overlay concrete the same material, a true test of the bond can be accomplished, as the core samples should fail in both the substrate and the overlay. This study consisted of four concrete slabs that were poured and allowed to cure naturally for a full year. Afterwards, a different method was used on each slab to remove up to two and one-half inches of the substrate concrete to prepare for the overlay. Removal methods were hydrodemolition, pneumatic impact hammers, abrasive blasting, and a control slab that received a light broom finish. Each panel was then divided into four sections, and a different surface condition or bonding agent was applied to the substrate just prior to the placement of the overlay. The surface conditions and agents used were dry, saturated surface-dry (SSD), sand/cement slurry, and an epoxy and cementitious material.
After the overlay was placed, the overlaid test panels were allowed to remain in place and cure naturally for an additional year before test samples were taken. Results showed that any impact blow force to the surface yielded the worst results, by almost 50%, for both direct tensile and guillotine tests. Even a change in bonding agents could not overcome the damage to the surface of the base slab. These results support years of similar test reporting. There was no increase in test results from using a bonding agent on any of the prepared slabs; the results were similar to, and in some cases less than, those of a surface that was dry or SSD. The highest and most consistent results came from surfaces that were either dry or SSD. Results indicate that a dry surface prepared with either abrasive blasting or hydrodemolition may yield the most consistent results, as all other bonding conditions and agents are subject to difficulty in measuring the application accurately and can be highly susceptible to evaporation rates and variations in multiple mixing operations. A well-prepared, clean, dry surface will yield the greatest and most consistent failure results and is the easiest to monitor and duplicate in field conditions.
M.S. in Civil Engineering, May 2017
- Title
- THEORETICAL ANALYSIS OF REAL-TIME SCHEDULING ON RESOURCES WITH PERFORMANCE DEGRADATION AND PERIODIC REJUVENATION
- Creator
- Hua, Xiayu
- Date
- 2017-07
- Description
In 1973, Liu and Layland [81] published their seminal paper on schedulability analysis of real-time systems for both EDF and RM schedulers. In this work, they provide schedulability conditions and utilization bounds for the EDF and RM scheduling algorithms, respectively. In the following four decades, scheduling algorithms, utilization bounds, and schedulability analyses for real-time tasks have been studied intensively. Most of this research relies on the strong assumption that the performance of a computing resource does not change during its lifetime. Unfortunately, for many long-standing real-time systems, such as data acquisition systems (DAQ) [74, 99], deep-space exploration programs [120, 119], and SCADA systems for power, water, and other national infrastructures [121, 26], computational resources suffer notable performance degradation after a long, continuous execution period [61]. To overcome performance degradation in long-standing systems, countermeasures, also called system rejuvenation approaches in the literature [123, 61, 126], were introduced and studied in depth over the last two decades. Rejuvenation approaches recover system performance when invoked and hence benefit most long-standing applications [30, 102, 11, 12, 39]. However, for applications with real-time requirements, the system downtime caused by the rejuvenation process, along with the decreasing performance during the system's available time, makes existing real-time scheduling theories difficult to apply directly. To address this problem, this thesis studies the schedulability of a real-time task set running on long-standing computing systems that suffer performance degradation and use a rejuvenation mechanism to recover. Our first study in the thesis focuses on a simpler resource model, the periodic resource model, which considers only periodic rejuvenations.
We introduce a method, Periodic Resource Integration, to combine multiple periodic resources into a single equivalent periodic resource, and provide the schedulability analysis for real-time tasks based on the combined periodic resource. By integrating multiple periodic resources into one, existing real-time scheduling research on a single periodic resource can be applied directly to multiple periodic resources. In our second study, we extend the periodic resource model to a new resource model, the P2-resource model, to characterize resources with both performance degradation and periodic rejuvenation. We formally define the P2-resource and analyze the schedulability of real-time task sets on a P2-resource. In particular, we first analyze the resource supply of a given P2-resource and provide its supply bound and linear supply bound functions. We then develop the schedulability conditions for a task set running on a P2-resource under the EDF and RM scheduling algorithms, respectively. We further derive utilization bounds for both EDF and RM for schedulability test purposes. With the P2-resource model and the schedulability analysis on a single P2-resource, we further extend our work to multiple P2-resources. In this research, we 1) analyze the schedulability of a real-time task set on multiple P2-resources under a fixed-priority scheduling algorithm, 2) introduce the GP-RM-P2 algorithm, and 3) provide the utilization bound for this algorithm. Simulation results show that in most cases the sufficient bounds we provide are tight. As rejuvenation technology keeps advancing, many systems are now able to perform rejuvenation in different system layers. To accommodate these advances, we study the schedulability conditions of a real-time task set on a single P2-resource with both cold and warm rejuvenations.
We introduce a new resource model, the P2-resource with dual-level rejuvenation, i.e., the P2D-resource, to accommodate this new feature. We first study the supply bound and the linear supply bound of a given P2D-resource. We then study the sufficient utilization bounds for both the RM and EDF scheduling algorithms.
Ph.D. in Computer Science, July 2017
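For context, the classical Liu and Layland sufficient utilization test that this line of work generalizes can be checked in a few lines (the task set below is illustrative, not from the thesis):

```python
# Liu & Layland (1973) sufficient utilization test for Rate-Monotonic
# scheduling of independent periodic tasks.
def rm_utilization_test(tasks):
    """tasks: list of (execution time C, period T). Sufficient, not necessary."""
    n = len(tasks)
    u = sum(c / t for c, t in tasks)
    bound = n * (2 ** (1 / n) - 1)    # approaches ln 2 ~ 0.693 as n grows
    return u, bound, u <= bound

u, bound, ok = rm_utilization_test([(1, 4), (1, 5), (2, 10)])
print(round(u, 2), ok)   # -> 0.65 True  (bound for n=3 is ~0.78)
```

Passing the test guarantees RM schedulability on an ideal, non-degrading resource; the thesis's contribution is deriving analogous bounds when the resource degrades and is periodically rejuvenated.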
- Title
- THE PERSPECTIVE GRID: MUSEUM AND PARK FOR THE SOUTH LOOP
- Creator
- Idrovo Orellana, Santiago Javier
- Date
- 2015-12
- Description
The diverse and infinite forces that are shaping cities are luckily unknown. The bigness of these forces may have neither a beginning nor an end. The questions and the answers are always late. Cities are always running behind, trying to cope and survive through the economic condition. The speeds of the fluids are always mutating, transforming, and taking different shapes. We are constantly proposing and putting perception into practice within the endless spiral of the present. Our human behavior and perceptions are being built by others and by different matters. History, in the end, is the only parameter on which we can rely. Our mastery of knowledge and technics keeps holding the population and transforming the way we live by processing and turning resources into elements of survival. The 21st century is characterized by the transformation of matter and energy into technological tools that we unconsciously rely on, while they are also being misused and manipulated, giving inconsistent answers. The thesis travels through different subjects in an attempt to construct a general perspective that will define the decisions on the urban programmatic plans and direct them into educational and cultural performances. Due to economic forces and projected demographic growth, the near south region of Chicago is looking for a transformation and a direction of travel. Through the general perspective, the proposal for the region first studies how to connect cultural collective open spaces injected into a particular area of the South Loop. The area is mainly influenced by industrial infrastructure that can be replaced or reprogrammed according to its considered value. The study will explore the cultural connection from the neighborhood to the Museum Campus as a catalyst for future interventions in the zone. Secondly, the general perspective will go more specifically into the Museum Campus and the appropriation of the abandoned McCormick Lakeside Center, an infrastructure located in a zone where ecological systems and cultural concern are the roots of its creation. The proposal will address the parameters to reorient the building into a filtered cultural source and a natural connection infrastructure.
M.S. in Architecture, December 2015
- Title
- NEW DIRECTIONS IN POST-EARTHQUAKE FIRE HAZARD ANALYSIS WITH APPLICATIONS TO MIDWESTERN UNITED STATES
- Creator
- Farshadmanesh, Pegah
- Date
- 2017-05
- Description
Post-earthquake fire ignition (PEFI) can lead to severe structural damage following an earthquake. Estimating the risk of such ignitions in buildings and identifying methods to abate it are essential steps in an overall effort to mitigate the impact of post-earthquake fires in urban areas. While several models have been developed for areas with available historical PEFI data, such as the Western United States, no such models have been developed for areas with little or no data specific to post-earthquake fires. Examples of such areas are seismic regions in the Midwestern United States. The lack of PEFI data for these areas is due to the fact that, at the time of several significant earthquakes in the early nineteenth century, most earthquake-stricken communities were rural. With the growth of urban areas in the region, a need exists for a methodology that can be effectively used to estimate PEFI risk when little or no historical data is available. In this research, it was found that models for PEFI risk estimation may indeed be developed using available data on ignitions under normal conditions as a basis and then applying a modification factor to account for the effects of future earthquakes. This modification factor depends on the characteristics of the region in terms of seismic activity and the type and distribution of buildings and their potential for promoting ignitions. The term "normal condition ignition" (NCI) refers to an ignition that occurs due to everyday activities and routine operations in a building. In a residential building, such activities include, for example, operating heating units and burners, cooking, and mechanical malfunction of appliances. In this research, it was found that four factors specifically affect PEFI risk and can be used to develop models for risk estimation.
These are (1) spatial characteristics, such as the geographic concentration of particular building types; (2) ignitability characteristics, such as the sources of ignition in a particular building type; (3) earthquake characteristics, such as the peak ground acceleration; and (4) temporal characteristics, such as the time of the earthquake and seasonality. Accordingly, models for estimating the risk of post-earthquake fire ignition occurrence are developed. These models are tested, and the model parameters calibrated, using information from areas for which both NCI and PEFI data are available (such as the Western United States). To illustrate the applicability of the models developed and proposed in this study, St. Louis City is considered. This is a major urban area vulnerable to potential future seismic activity because of its proximity to the New Madrid Fault Zone. Using the NCI data for this area, PEFI risk values are estimated based on probable future seismic activity in the region. The results are presented in terms of the estimated annual risk of post-earthquake fires for the area, specifically for residential buildings (such as single- or multifamily dwellings). The study further discusses the significance of PEFI models and their limitations, and provides suggestions for the future continuation of the research.
Ph.D. in Civil Engineering, May 2017
- Title
- ASSESSMENT OF STRUCTURAL INTEGRITY AND SEISMIC RETROFIT OF MASONRY BRIDGES USING MICROPILES
- Creator
- Cakir, Ferit
- Date
- 2011-07
- Description
Masonry arch bridges are regarded as the oldest examples of engineered structures in the world, and they reflect previous civilizations with their various sizes, styles, and spans. The preservation of these structures is receiving a great deal of attention in the structural engineering community, and as such, the restoration, strengthening, and reinforcement of historical masonry bridges have become a challenge for civil engineers. At present, the most important problems facing masonry arch bridges are heavy traffic loads and destructive natural disasters. These bridges were constructed centuries ago and addressed the load-carrying problems of their time. With the passing of time, traffic changed and many natural disasters occurred. Correspondingly, loads on masonry arch bridges have increased and traffic on the bridges has become more intense. However, the bridges keep up their initial performance, which shows the complexity of their structural behavior. This study provides background information about masonry arch bridges and their components. It also helps us better understand the construction materials, structural properties, and structural behavior of masonry arch bridges. In light of this background information, this study presents an overview of seismic retrofit for masonry arch bridges and a comprehensive study of the type, mechanism of failure, and structural integrity of masonry arch bridges.
M.S. in Civil Engineering, July 2011
- Title
- CLONING AND CHARACTERIZATION OF EXON EDITED HOTSPOT 1 ROD REGION DYSTROPHIN PROTEINS
- Creator
- Mahajan, Aayushi
- Date
- 2013-12
- Description
Duchenne muscular dystrophy (DMD), one of the most common fatal genetic diseases, is caused by the absence of the dystrophin protein. This protein is coded by the largest gene in the human genome, which has 79 exons and spans 2.4 Mbp. DMD affects 1 in 3,600 boys, and the average life expectancy of patients is 25 years. The most common type of defect leading to DMD is a large exonic deletion that juxtaposes remaining exons of incompatible phases, causing a frameshift. Inevitably this introduces a nonsense mutation (i.e., a stop codon) in the frame-shifted exons and thus protein truncation. A related but milder condition, Becker muscular dystrophy (BMD), results when deletions are in-frame and so allow the production of some, albeit modified, dystrophin protein. While there is no treatment available for DMD, recent clinical trials involving exon skipping suggest it will be a highly promising treatment option. Exon skipping therapy is a strategy that uses antisense oligonucleotide analogs (AONs) to induce skipping of additional exons during mRNA maturation and thus restore the reading frame. This allows dystrophin translation to continue to its normal C-terminus, albeit with an internal deletion edit corresponding to both the original deletion and the newly skipped exon; the resulting protein expression converts the severe form, DMD, to the milder form, BMD. Frameshift-causing deletions require exons that begin and end in different phases, and DMD deletions are thus clustered where such exons are located, in two so-called "hotspots" in the gene. The first of these occurs in the region from exons 11-22 and is known as Hotspot 1. In many cases, DMD defects that are amenable to exon-skip repair can be repaired in two or more alternative fashions by skipping alternative exons.
It is thought that the differences in severity of BMD, which can range from nearly as debilitating as DMD to nearly benign, are at least in part related to the nature of the defect and its impact on the protein's structure. Thus we are producing five different exon-edited proteins from the Hotspot 1 rod region and assessing their structure and stability as compared to unskipped, fully functional dystrophin, to determine which edits hold the most promise of producing a well-formed and stable edit.
M.S. in Biology, December 2013
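The reading-frame arithmetic behind exon skipping is simple to sketch (the exon lengths below are hypothetical, not actual DMD exon sizes): a deletion, together with any additionally skipped exons, preserves the frame exactly when the total removed coding length is a multiple of three:

```python
# Reading-frame check underlying exon-skipping repair: the remaining exons
# splice back in frame iff the total removed coding length is divisible by 3.
# Exon lengths here are hypothetical, not real DMD exon sizes.
def is_in_frame(exon_lengths, removed):
    """removed: 0-based indices of deleted and/or skipped exons."""
    return sum(exon_lengths[i] for i in removed) % 3 == 0

lengths = [186, 102, 93, 88, 151]       # hypothetical exon lengths (nt)
print(is_in_frame(lengths, [1, 2]))     # 102 + 93 = 195 -> True (in frame)
print(is_in_frame(lengths, [3]))        # 88 -> False (frameshift)
```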
- Title
- MECHANISMS OF FOAMING, EFFECTS, PREVENTION, AND CONTROL IN ANAEROBIC DIGESTION
- Creator
- Subramanian, Bhargavi
- Date
- 2015-05
- Description
-
Anaerobic digestion (AD) is an essential step to generate energy in the form of biogas from waste. Foaming during AD (AD foaming) is...
Show moreAnaerobic digestion (AD) is an essential step to generate energy in the form of biogas from waste. Foaming during AD (AD foaming) is widespread phenomenon and leads to deterioration of the AD process and operation. In extreme conditions, AD foaming poses a significant safety risk and considerable economic impacts. It is, therefore, necessary to understand the fundamentals of AD foaming to develop effective strategies that can help minimize and prevent the foaming impacts. Several aspects of AD foaming have attracted considerable research attention, however, the focus has been mainly on site specific causes and prevention. The work leading to this thesis was aimed to provide a better understanding of the AD foaming problem, to identify the underlying mechanisms, causes and contributors of foaming and to come up with foam management strategies for full-scale plants. Full-scale cylindrical digester investigations did not identify non-biological factors such as organic loading rate (OLR), mixing, and primary to waste activated sludge (PS:WAS) solids ratio as primary causes of foaming, but foam-causing filaments such as G. amarae and M. parvicella were determined to be primary causes. No foaming was observed over the duration of the study, indicating absence of a primary foaming cause even though the suspected contributors to AD foaming were present. In the case of full-scale egg-shaped digesters (ESD), foaming and foam collapse events were observed over the duration of the study over both during filamentous foaming and non-foaming seasons, indicating that the primary foaming cause requires the contributors to be present. The results of this study demonstrate that ESDs foamed due to high mixing and G. amarae counts above the threshold level (log #6 intersections/mg VSS) in mixed liquor. 
In both types of digesters, total solids and temperature profiles showed that reducing the mixing frequency did not significantly impact digester performance or the homogeneity of the digester contents. Hence, optimizing mixing intensity could be an effective strategy in addition to reducing the primary cause, the foam-causing filaments.
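The decision rule summarized above (the ESDs foamed when G. amarae counts exceeded the threshold of log 6 intersections/mg VSS and mixing was high) can be sketched as a simple check. The function name and the exact rule below are illustrative assumptions, not the study's published criterion.

```python
# Hypothetical helper illustrating the cause-plus-contributor logic described
# in the abstract: foaming requires the filament count above threshold (the
# primary cause) together with high mixing (a contributor). Illustrative only.
def foaming_risk(log_filament_count: float, high_mixing: bool,
                 threshold: float = 6.0) -> str:
    """Classify foaming risk from filament count (log intersections/mg VSS)."""
    if log_filament_count > threshold and high_mixing:
        return "high"      # primary cause and contributor both present
    if log_filament_count > threshold:
        return "elevated"  # cause present, contributor absent
    return "low"

print(foaming_risk(6.4, True))   # high
print(foaming_risk(6.4, False))  # elevated
print(foaming_risk(5.2, True))   # low
```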
Ph.D. in Environmental Engineering, May 2015
Show less
- Title
- DESIGN AND ANALYSIS OF DATAPATH CIRCUITS USING MULTI-GATE TRANSISTORS
- Creator
- Garcia Martin, Martin
- Date
- 2015, 2015-07
- Description
-
Multi-Gate Field-Effect Transistors are transistors with more than one gate that allow continuation of Moore's Law and performance increases...
Show moreMulti-Gate Field-Effect Transistors are transistors with more than one gate that allow continuation of Moore's Law and performance increases for CMOS transistors. The introduction of multi-gate devices has been a turning point for the semiconductor industry in facilitating the transition from planar to 3D structures. Intel first introduced commercial products using 3D structures (called Tri-Gate transistors) in late 2011 with Ivy Bridge CPUs using 22nm processes. Significant performance gains have been reported; i.e., a 37% performance increase at low voltage and 50% power reduction. Multi-gate transistors based on 3D structures can vary greatly in their configuration and architectures, leading to ambiguity in their design. It is necessary to investigate the performance of datapath circuits when multi-gate and independent-gate devices replace the conventional planar transistors. Therefore, the key objective of this work has been to analyze these transistors' performance and to design new datapath circuits to leverage the inherent qualities of multi-gate transistor structures. Multiple-gate devices can be modeled using the BSIM-CMG (Common Multi-Gate) and BSIM-IMG (Independent Multi-Gate) compact models from the University of California, Berkeley Device Group. In this research, both device types have been characterized for a variety of parameters to study their basic properties and functionality and to build a foundation for improved circuit designs. In particular, BSIM-CMG devices have been compared with CMOS planar technology, demonstrating significant advantages in all design metrics, while the BSIM-IMG devices have been used to design new gates and improve datapath designs. In the first part of this study, essential logic gates, i.e. Inverter, NAND and NOR, have been implemented using BSIM-CMG devices. After being analyzed and compared with the CMOS technology, a 32% reduction in dynamic power consumption and an 82% reduction in leakage current have been obtained.
For a comprehensive look at full adder designs, several novel adder architectures have been implemented, including ultra-low-power and minimum-number-of-transistors (10T) designs. The analysis of these implementations shows a 54% dynamic power reduction, 98% static current reduction and 26% delay reduction. These results lead to a 68% improvement in the Power-Delay product compared with the 32nm CMOS planar technology. To investigate dynamic logic circuits with multi-gate transistors, two recent dynamic circuit techniques have been implemented with novel enhancements to reduce the leakage current. Data Driven Dynamic Logic (D3L) and Split-Path Data Driven Dynamic Logic (SPD3L) have been used to analyze the dynamic logic circuits, resulting in 11% reduced dynamic power, 52% reduced leakage current and 33% reduced delay. The second part of this study deals with the independent-gate devices. Using the BSIM-IMG model, new XOR/XNOR logic gate designs are introduced for implementing novel low-power adders. With these new adder architectures, the average improvement in dynamic power is 8% and the designs are 6% faster. Furthermore, a new design technique is proposed combining the possible modes (Short Gate-SG, Low Power-LP, Independent Gate-IG) that the BSIM-IMG provides. Using this novel mixed design, the Power-Delay product is improved on average 7.2% and 54%, compared to the Short-Gate (SG) and Low-Power (LP) modes, respectively. The properties of the BSIM-IMG logic have also been applied to improve the dynamic logic designs. The Domino and SPD3L design techniques have been implemented, and enhancements such as merging the pull-up transistors have been proposed for sleep and power-gating techniques. With these enhancements, the dynamic power is reduced 13% on average and the designs are 18% faster. The trade-off is an 8% increase in leakage current.
Another major contribution of this work has been the development of shell script files for generating a custom toolbox for datapath designs with multi-gate and independent-gate transistors.
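The Power-Delay product comparisons quoted throughout the abstract can be made concrete with a small sketch. The helper names and sample numbers below are illustrative, not the dissertation's measured data; the example only shows how a 54% power cut and a 26% delay cut compound into roughly a 66% PDP gain.

```python
# Minimal sketch of the Power-Delay-product metric used to compare the
# designs above. The numeric values are illustrative placeholders.
def power_delay_product(dynamic_power_w: float, delay_s: float) -> float:
    """PDP = average dynamic power x propagation delay (energy per operation)."""
    return dynamic_power_w * delay_s

def improvement(baseline: float, new: float) -> float:
    """Fractional improvement of `new` over `baseline` (0.66 -> 66%)."""
    return (baseline - new) / baseline

# e.g. a design that cuts power 54% and delay 26% improves PDP by ~66%:
base = power_delay_product(1.0, 1.0)
opt = power_delay_product(1.0 - 0.54, 1.0 - 0.26)
print(round(improvement(base, opt), 3))  # 0.66
```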
Ph.D. in Electrical and Computer Engineering, July 2015
Show less
- Title
- EMBEDDED SYSTEM DESIGN FOR TRAFFIC SIGN RECOGNITION USING MACHINE LEARNING ALGORITHMS
- Creator
- Han, Yan
- Date
- 2016, 2016-12
- Description
-
The traffic sign recognition system, an important component of an intelligent vehicle system, has been an active research area and has...
Show moreThe traffic sign recognition (TSR) system, an important component of an intelligent vehicle system, has been an active research area that has been investigated vigorously in the last decade. It is an important step toward introducing intelligent vehicles into the current road transportation systems. Based on image processing and machine learning technologies, TSR systems are being developed cautiously by many manufacturers and have been set up on vehicles as part of driving assistance systems in recent years. Traffic signs are designed and placed in locations to be easily identified from their surroundings by human eyes. Hence, an intelligent system that can identify these signs as "good" as a human needs to address many challenges. Here, "good" can be interpreted as accurate and fast. Therefore, developing a reliable, real-time and robust TSR system is the main motivation for this dissertation. Multiple TSR system approaches based on computer vision and machine learning technologies are introduced, and they are implemented on different hardware platforms. The proposed TSR algorithms comprise two parts: sign detection based on color and shape analysis, and sign classification based on machine learning technologies including nearest neighbor search, support vector machines and deep neural networks. Target hardware platforms include the Xilinx ZedBoard FPGA and the NVIDIA Jetson TX1, which provides GPU acceleration. Overall, based on a well-known benchmark suite, 96% detection accuracy is achieved while executing at 1.6 frames per second on the GPU board.
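Of the classifiers listed, nearest neighbor search is the simplest to illustrate. Below is a minimal 1-NN sketch over stand-in 2-D feature vectors; real TSR pipelines classify image feature descriptors, and the points and labels here are assumptions for illustration only.

```python
import math

# Minimal 1-nearest-neighbor classifier: return the label of the training
# sample closest (in Euclidean distance) to the query feature vector.
def nearest_neighbor(query, training):
    """`training` is a list of (feature_vector, label) pairs."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(training, key=lambda t: dist(query, t[0]))[1]

# stand-in 2-D features for three hypothetical sign classes:
signs = [((0.9, 0.1), "stop"), ((0.1, 0.9), "yield"), ((0.5, 0.5), "speed-limit")]
print(nearest_neighbor((0.8, 0.2), signs))  # stop
```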
Ph.D. in Computer Engineering, December 2016
Show less
- Title
- A NANO-STRUCTURED CERAMIC/POLYMER COMPOSITE FILM FOR ELECTRONIC INTERCONNECTIONS
- Creator
- Harwath, Frank
- Date
- 2016, 2016-05
- Description
-
Separable electrical interconnections are a ubiquitous part of modern life and for technical reasons are currently based on the use of gold....
Show moreSeparable electrical interconnections are a ubiquitous part of modern life and for technical reasons are currently based on the use of gold. Since gold is a commodity and subject to significant price fluctuations, there is a need for separable interconnects not based on gold. Polymer/ceramic films were produced from various polymer precursors with loadings of multi-wall nanotubes (MWNT) and inert fillers. A variety of application methods were employed, with the best success achieved by means of a modified doctor blade. Pyrolysis was conducted in an inert atmosphere at 1 bar at a range of temperatures in a tube furnace. Pyrolysis was also conducted using a fiber laser. The modulus of the film is estimated to be 71.8 MPa with an ultimate tensile strength of 179 MPa, based on hardness tests and anisotropic crack dimensions which developed as a result of uniaxial stress induced during application of the precursor. Uniaxial stress improved film adhesion regardless of filler type or level. Modification of film characteristics after pyrolysis was attempted using spark plasma sintering (SPS). Electrical testing displayed a percolation threshold above loadings of 1% (wt) of MWNTs, where there is a significant drop in electrical resistivity. Further reductions in contact resistance were demonstrated up to 2% loading of MWNTs. The level of contact resistance achieved (<10) for a separable contact, in conjunction with a gold-plated contact representative of most electronic connectors, indicates that an acceptable level of contact resistance may be achieved using these materials. Characterization of the film using attenuated total reflectance (ATR), x-ray diffraction (XRD), x-ray photoelectron spectroscopy (XPS), and Raman spectroscopy points to a morphology which is dominated by crystallites joined by regions of aliphatic carbon chains. Work function measurements were consistent with highly ordered pyrolytic graphite (HOPG).
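The resistivity drop above roughly 1 wt% MWNT loading is the classic percolation signature, often modeled as resistivity proportional to (p - pc)^(-t) above the threshold. In the sketch below, pc, the exponent t, and the matrix resistivity are illustrative assumptions rather than fitted values from this work.

```python
# Sketch of the percolation scaling commonly used to describe a resistivity
# drop above a conductive-filler threshold. All parameters are illustrative.
def resistivity(p_wt_pct, rho0=1.0, pc=1.0, t=2.0, rho_matrix=1e12):
    """Composite resistivity vs filler loading p (wt%)."""
    if p_wt_pct <= pc:
        return rho_matrix            # below threshold: matrix-dominated
    return rho0 * (p_wt_pct - pc) ** (-t)

# resistivity falls by orders of magnitude just above the threshold and
# keeps falling as loading increases toward 2 wt%:
print(resistivity(0.5) > resistivity(1.5))  # True
print(resistivity(1.5) > resistivity(2.0))  # True
```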
Ph.D. in Materials Science and Engineering, May 2016
Show less
- Title
- EARLY CHILDHOOD RISK FACTORS FOR EXECUTIVE DYSFUNCTION IN A SAMPLE OF SCHOOL-AGED CHILDREN
- Creator
- Grahovec, Morgan Carey
- Date
- 2014, 2014-12
- Description
-
The purpose of the present study was to explore whether early childhood factors influence executive function scores, as determined by...
Show moreThe purpose of the present study was to explore whether early childhood factors influence executive function scores, as determined by objective neuropsychological tests and subjective parent and teacher ratings, in a diverse sample of school-age children. Data were collected longitudinally over four different visits that corresponded to stages of childhood development (7.76 months, 20 months, 38 months, and 7 years of age). The independent variables examined in the present study included environmental, sociodemographic, and neuropsychological data from the first three time points. At Time 1, the independent variables were SES at Time 1, infant birth weight, maternal body mass index, parental stress at Time 1, and the psychomotor development score at Time 1. At Time 2 and Time 3, the independent variables were SES, parental stress, the sleep problems composite, the DSM-IV ADHD composite, and the psychomotor development index score. Results indicated that, overall, maternal pre-pregnancy body mass index, the psychomotor development index score, and family socioeconomic status were the only significant predictors from the first three time points of variance in the Time 4 executive functions as measured by the neuropsychological assessments. The findings showed an inverse relationship between maternal BMI and neuropsychological executive functions, indicating that as BMI increased, executive functioning decreased. A positive relationship between family SES and neuropsychological EF was found, indicating that children from higher SES families performed better on measures of executive functions, as expected. Similarly, a positive relationship was found between psychomotor functions at Time 3 and executive functions at Time 4, which was also in the expected direction. In contrast, subjective parent stress and the DSM-IV ADHD scores were the only significant predictors of the Time 4 executive functions as measured by the parent and teacher ratings.
An inverse relationship between parent stress and executive functions was found at all three initial time points, revealing that parents who experience more subjective stress also have children with lower executive functions per parent and teacher report at age 7. A positive relationship was shown between the DSM-IV ADHD composite at Time 3 and the parent/teacher composite score of executive functions at Time 4. This means that children with low executive functions per the parent/teacher composite at Time 4 also had parents and teachers who endorsed greater ADHD symptomatology at Time 3. Subjective parent stress was particularly notable because it was the only independent variable for either of the two dependent variables that was significant across all three of the initial time points. The longitudinal design of this study allowed for the confirmation of the hypothesis that there are significant variables in early childhood that are associated with executive functions later in life. This knowledge has important implications because the more that is understood about executive functioning in children, the more it will be possible to provide meaningful interventions that can maximize a child’s development of this critical skill set.
Ph.D. in Psychology, December 2014
Show less
- Title
- AN OVERVIEW OF THE APPLICATION OF FIBER REINFORCED POLYMER COMPOSITES IN STRUCTURAL REHABILITATION
- Creator
- Elhassan Abdelrahman, Aymen Mohamed
- Date
- 2014, 2014-05
- Description
-
Seismic bridge design practice started in the United States in the mid-1970s. Bridges designed according to pre-1970 design codes may be...
Show moreSeismic bridge design practice started in the United States in the mid-1970s. Bridges designed according to pre-1970 design codes may be seismically vulnerable. This vulnerability is generally due to inadequate reinforcement detailing, which may compromise the strength and ductility of bridge piers, columns and bents. The lack of adequate strength and ductility may lead to bridge failure under seismic loads. Several rehabilitation methods have been used to retrofit deficient columns and piers so that their performance under potential seismic loads can be improved. These methods include the use of: (1) concrete jackets; (2) steel jackets; and (3) fiber reinforced polymer composites. The latter method has attracted bridge engineers in recent years because of its ease of application and its versatility for use in bents and columns with non-circular cross sections. The objective of this thesis is to provide a summary of current practice in using this type of material in bridge retrofit applications. The thesis presents an overview of fiber reinforced polymer (FRP) composite materials' behavior, properties, and composition. Furthermore, the behavior of reinforced concrete columns confined with FRP jackets is reviewed. Available design methods used for determining the FRP jacket thickness and other design properties needed for application in seismic retrofitting of reinforced concrete columns are also reviewed, presented and discussed.
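For circular columns, FRP design guides such as ACI 440.2R relate the jacket to its confining pressure via f_l = 2 n t_f E_f eps_fe / D. A minimal sketch follows, with illustrative CFRP numbers rather than values from the thesis.

```python
# Sketch of the standard confinement-pressure relation for a circular column
# wrapped with an FRP jacket (the form used in guides such as ACI 440.2R).
# The example values below are illustrative assumptions.
def frp_confinement_pressure(n_plies, t_ply_mm, E_frp_mpa, eps_fe, D_mm):
    """Lateral confining pressure (MPa) from an FRP jacket on a circular column."""
    return 2.0 * n_plies * t_ply_mm * E_frp_mpa * eps_fe / D_mm

# e.g. 2 plies of 0.165 mm CFRP (E = 230 GPa, effective strain 0.004) on a
# 500 mm diameter column:
print(round(frp_confinement_pressure(2, 0.165, 230000.0, 0.004, 500.0), 3))  # 1.214
```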
M.S. in Civil Engineering, May 2014
Show less
- Title
- FRACTURE TOUGHNESS EVALUATION OF FIVE MICROALLOYED PLATE STEELS FOR WIND TOWER APPLICATIONS
- Creator
- Gaisina, Vladilena
- Date
- 2015, 2015-05
- Description
-
Five microalloyed plate steels were evaluated for impact and fracture toughness. Results at room and low temperatures were compared. These...
Show moreFive microalloyed plate steels were evaluated for impact and fracture toughness. Results at room and low temperatures were compared. These steels are commonly ordered to meet ASTM A572/A709 Grade 50 or EN 10025-2 Grade S355 requirements in the normalized condition for wind tower and other structural applications. One of the five steels was in the normalized condition, while the rest were left as-rolled. Furthermore, the effects of carbon content and alloy additions are investigated, with niobium and vanadium as the principal strengthening elements. Impact toughness testing using Charpy V-notch specimens was performed to determine the ductile-to-brittle transition temperature (DBTT). Fracture toughness was measured using the J-integral method. The critical fracture energy Jc was converted to the critical plane strain stress intensity factor KIc if validity requirements were met. Compact tension samples of each steel were tested at room temperature and at -40°C. The results of the room temperature tests are compared to those obtained in previous work. The steels with niobium demonstrate significantly lower DBTT than the vanadium steels. At room temperature, fracture toughness performance is comparable for the low carbon grades. Among the medium carbon steels, the vanadium steel fares slightly better than the as-rolled niobium steel, with the normalized niobium steel offering the highest room temperature fracture toughness of the three. Overall, the lower carbon content is noted to provide a significant increase in toughness for a modest trade-off in tensile strength.
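The Jc-to-KIc conversion mentioned above follows the standard plane-strain relation KIc = sqrt(Jc * E / (1 - nu^2)), as given in ASTM E1820. A minimal sketch, with illustrative elastic constants for a structural steel rather than the dissertation's data:

```python
import math

# Standard plane-strain conversion from critical J to K_Ic (ASTM E1820 form).
# E and nu defaults are typical for structural steel, used here as assumptions.
def k_ic_from_jc(jc_kj_per_m2, E_gpa=200.0, nu=0.3):
    """Convert critical J (kJ/m^2) to K_Ic (MPa*sqrt(m)) for plane strain."""
    jc = jc_kj_per_m2 * 1e3          # kJ/m^2 -> J/m^2 (= Pa*m)
    E = E_gpa * 1e9                  # GPa -> Pa
    return math.sqrt(jc * E / (1.0 - nu ** 2)) / 1e6  # Pa*sqrt(m) -> MPa*sqrt(m)

print(round(k_ic_from_jc(100.0), 1))  # ~148 MPa*sqrt(m) for Jc = 100 kJ/m^2
```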
M.S. in Materials Science and Engineering, May 2015
Show less
- Title
- CYCLIC THERMAL TREATMENT
- Creator
- Gu, Sijie
- Date
- 2015, 2015-12
- Description
-
Cyclic thermal treatment has the potential to improve energy efficiency of thermal processing. It has been shown that in some cases, the...
Show moreCyclic thermal treatment has the potential to improve the energy efficiency of thermal processing. It has been shown that in some cases productivity was enhanced by cyclic thermal treatment. To investigate the effect of cyclic thermal treatment, copper-nickel (Cu-Ni) interdiffusion couples were first studied. When the Cu-Ni interdiffusion couples showed positive results, cyclic thermal treatment was applied to pack carburization and gas carburization of steel. The Cu-Ni interdiffusion couples were annealed with different time-temperature profiles for 5 days. Three types of time-temperature profile were used: isothermal, symmetric cyclic, and asymmetric cyclic. After thermal treatment, concentration-distance profiles were measured. Based on the concentration-distance profiles, the interdiffusion coefficients for the different time-temperature profiles were calculated. The diffusion couple with a ramp rate of 1°C/min had a higher interdiffusion coefficient than the diffusion couple annealed isothermally at the equivalent temperature, 863°C, which means that cyclic thermal treatment has the effect of accelerating diffusion. When the ramp rate was 5°C/min, the interdiffusion coefficients were higher than that of the diffusion couple annealed isothermally at the maximum temperature. However, when the ramp rate was increased to 10°C/min, the diffusion coefficient decreased to almost the same value as the interdiffusion coefficient of the diffusion couple at the equivalent temperature. After achieving a promising result for the Cu-Ni diffusion couples, we expanded the cyclic thermal treatment to carburizing. The temperature range for cyclic pack carburization was 850°C to 950°C. Increasing the cyclic ramp rate resulted in an increase in the case depth. Due to the setup of the pack carburization, the maximum cooling rate achievable was 5°C/min.
In order to reach a higher ramp rate, an induction heating gas carburization system was set up. The temperature range for the cyclic induction gas carburization was 850°C to 950°C. For cyclic induction gas carburization, the case depth increased with increasing ramp rate. The sample induction gas carburized at a ramp rate of 20°C/min had a deeper case depth than the sample induction gas carburized isothermally at 904.4°C, the equivalent temperature. The first test showed that the sample induction gas carburized at a ramp rate of 20°C/min had a deeper case depth than the sample induction gas carburized isothermally at 950°C. From this we draw the conclusion that cyclic induction gas carburization can achieve a deeper case depth than isothermal induction gas carburization at the equivalent temperature.
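The "equivalent temperature" comparisons above rest on the Arrhenius relation D = D0 * exp(-Q / (R * T)) and the usual sqrt(D * t) scaling of diffusion depth. A minimal sketch, where D0 and Q are illustrative values roughly in the range quoted for carbon diffusion in austenite, not the dissertation's fitted parameters:

```python
import math

R = 8.314  # gas constant, J/(mol*K)

# Arrhenius diffusivity; D0 and Q are illustrative assumptions.
def diffusivity(T_c, D0=2.0e-5, Q=1.4e5):
    """Diffusion coefficient (m^2/s) at temperature T_c (deg C)."""
    return D0 * math.exp(-Q / (R * (T_c + 273.15)))

def case_depth(T_c, t_s):
    """Characteristic diffusion depth sqrt(D*t), in meters."""
    return math.sqrt(diffusivity(T_c) * t_s)

# diffusion, and hence case depth, is strongly temperature-dependent:
print(diffusivity(950.0) > diffusivity(850.0))  # True
```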
Ph.D. in Materials Science and Engineering, December 2015
Show less
- Title
- POTENTIAL EXPOSURE TO SUBSTANCES IN POLYMER COMPOSITES USED AS FOOD PACKAGING MATERIALS
- Creator
- Shah, Saloni S.
- Date
- 2021
- Description
-
In the food manufacturing, preservation, supply, and distribution chain, packaging plays a critical role. The fundamental goal of any...
Show moreIn the food manufacturing, preservation, supply, and distribution chain, packaging plays a critical role. The fundamental goal of any packaging method is to keep food contained and protected. There is an increasing demand for natural and "fresh-like" foods that are less processed and have a longer shelf life, necessitating a variety of packing strategies. With increasing demand, the biggest developments in the field of packaging technology have been innovative food packaging approaches, such as active packaging, intelligent packaging, and bioactive packaging, which involve deliberate contact with the food or its surroundings and affect consumer health. Several research studies in the past few years have shown that nanocomposite materials significantly improve the strength, barrier characteristics, antimicrobial capabilities, and heat and cold stability of food packaging materials, but various studies have reported that these composites might be a source of engineered nanomaterials in the human diet or environment. It has also been reported in numerous studies that nanomaterials can migrate into the food during long-term storage. These studies use food simulants like acetic acid and water to mimic the food matrix. However, they raise issues regarding how ingredients in real foods could affect exposure. This research focuses on the migration of silver (Ag) ions into food matrices such as commercial beverages and on determining whether the ingredients present in commercial food and beverages influence the migration process. For the study, polymer composite films and dogbones were made. Polymer composite films with 0.2%, 1%, and 5% silver zeolite concentration in polylactic acid (PLA) were produced, and different media like water, Domino sugar, and Squirt were stored in packages manufactured from this material under accelerated room-temperature conditions.
Polymer composite dogbones were made with low-density polyethylene (LDPE) and polypropylene (PP) with 1.25% and 2.51% of graphene and graphite. Further, these materials were characterized with the help of thermogravimetric analysis (TGA), Fourier transform infrared spectroscopy-attenuated total reflection (FTIR-ATR), and inductively coupled plasma mass spectrometry (ICP-MS). The hypothesis of this study was that, when polymer composites are employed in packaging applications, food and beverage components may impact dietary exposure to these particles, and that the use of food simulants may underpredict the quantity of migration in some cases.
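Migration studies of this kind are often interpreted with the short-time Fickian estimate Mt/A = 2 * c0 * sqrt(D * t / pi) (Crank's solution for migration from a semi-infinite film). A minimal sketch under assumed, illustrative parameter values, not this study's results:

```python
import math

# Short-time Fickian migration from a polymer film into contacting food:
# mass migrated per unit contact area, for a film with initial migrant
# concentration c0 and migrant diffusivity D. Parameters are illustrative.
def migrated_mass_per_area(c0_kg_m3, D_m2_s, t_s):
    """Mass migrated per unit area (kg/m^2) after time t (short-time limit)."""
    return 2.0 * c0_kg_m3 * math.sqrt(D_m2_s * t_s / math.pi)

# migration grows with the square root of storage time:
m1 = migrated_mass_per_area(1.0, 1e-14, 86400)      # 1 day
m4 = migrated_mass_per_area(1.0, 1e-14, 4 * 86400)  # 4 days
print(round(m4 / m1, 3))  # 2.0  (sqrt(4) = 2)
```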
Show less
- Title
- Two Essays on Corporate Finance
- Creator
- Wang, Bo
- Date
- 2021
- Description
-
This dissertation is comprised of two essays on finance. In the first chapter, I investigate whether and to what extent unionization would...
Show moreThis dissertation is comprised of two essays on finance. In the first chapter, I investigate whether and to what extent unionization influences the compensation of non-executive employees. In the second chapter, I explore how social capital impacts regional innovation performance by private firms. In the first chapter, I examine the effects of unionization on stock options granted to non-executive employees. Adopting a regression discontinuity design, I find that employees receive more stock options after union election wins. The positive association is more pronounced when unions have more bargaining power and when free-riding problems are less severe. Further, I provide evidence that employees receive more stock options when CEOs are entrenched. Finally, I show that stock options provide risk-taking incentives to non-executive employees. This work provides a potential explanation for the union wage premium puzzle: unions utilize stock options to increase non-executive employees' total compensation. In the second chapter, I investigate whether and to what extent social capital may affect regional innovation by private firms in the U.S. I document that regional social capital is positively associated with the quantity, quality, and novelty of county-level innovation by private firms. This effect is more prominent in regions with a lower supply of financial capital. My findings further suggest that social capital is complementary to investment in research and development. Using a Spatial Durbin Model, I report that regional social capital has significant spillover effects in boosting the innovation of neighboring counties.
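The sharp regression-discontinuity idea used in the first essay, comparing outcomes just above and below the union-vote cutoff, can be sketched with plain least squares. The data and the 50% cutoff below are synthetic illustrations, not the chapter's sample.

```python
# Sharp RDD sketch: fit a line on each side of the cutoff and take the jump
# in the fitted outcome at the cutoff as the treatment effect. Pure Python.
def ols_line(xs, ys):
    """Least-squares intercept and slope for y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def rdd_effect(running, outcome, cutoff=0.5):
    """Jump in the fitted outcome at the cutoff (sharp RDD estimate)."""
    left = [(x, y) for x, y in zip(running, outcome) if x < cutoff]
    right = [(x, y) for x, y in zip(running, outcome) if x >= cutoff]
    aL, bL = ols_line(*zip(*left))
    aR, bR = ols_line(*zip(*right))
    return (aR + bR * cutoff) - (aL + bL * cutoff)

# synthetic vote shares and option grants with a jump of +2 at 50%:
votes = [0.30, 0.40, 0.45, 0.55, 0.60, 0.70]
grants = [1.0, 1.1, 1.15, 3.25, 3.3, 3.4]
print(round(rdd_effect(votes, grants), 2))  # 2.0
```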
Show less
- Title
- IRREGULAR GROWTH AND INTERFACIAL EFFECT IN THIN FILM MULTILAYER STRUCTURES FOR USES IN PHOTOCATHODE APPLICATIONS
- Creator
- Lee, ZhengRong
- Date
- 2021
- Description
-
Improving photocathode performance by increasing the electron density while lowering the angular spread of emitted electrons can improve...
Show moreImproving photocathode performance by increasing the electron density while lowering the angular spread of emitted electrons can improve particle accelerator performance, expanding the reach of both fundamental and applied science. Materials science expertise is needed to design new photocathodes with these desired properties. Nemeth et al. determined that a multilayered photocathode structure consisting of MgO/Ag/MgO could be engineered for higher brightness and lower dispersion [Nemeth et al., Phys. Rev. Lett. 104, 046801 (2010)]. The dispersion of the surface bands impacts the angular spread of the emitted beam, and the model predicted that the bands could be tuned by precisely controlling the layer thicknesses of the multilayer structure. We synthesized and probed this MgO/Ag/MgO system experimentally. We measured the work function, emittance, and quantum efficiency of multilayer photocathodes with different MgO layer thicknesses to compare with theoretical predictions. We observed that although the general trend was as predicted, the measurements and the model were not in exact agreement [Velasquez et al., Appl. Surf. Sci. 360, 762 (2016)]. In this work, we have undertaken a study of the electronic structure of the interfaces to explore how these observed deviations may have originated. It is possible that the fabrication process leads to non-ideal interfaces compared to those constructed in the simulations. To study how the fabrication affects the interfaces, hard X-ray photoemission spectroscopy (HAXPES) was used to probe the chemistry of the buried interfaces within the thin film multilayer structure of Ag and MgO. In these multilayer structures, we observed that the silver layers were predominantly metallic. A small high binding energy (ΔE = 0.69 eV) peak was also observed in the Ag 3d core level in the samples.
This peak is shifted in the opposite direction of the binding energy shift in silver oxides, suggesting that this peak is not due to the formation of silver oxides at the interfaces with the MgO. Two possible explanations for the origin of this peak are charge transfer at the interface from the Ag to the oxide monolayer or the formation of silver nanoparticles during the growth process. Based upon simple depth profiling analysis, we postulate the former is the more likely explanation. In addition, the O 1s and Mg 1s core levels indicated the presence of Mg(OH)2. The MgO layers react with H2O in the vacuum chamber or in the inert gas used as a buffer during sample transfer. Since the theory predicts a strong dependence upon the number of MgO layers surrounding the Ag, the formation of Mg(OH)2 likely contributes to the non-ideal behavior, even given the similarity in electronic structure between MgO (a large band gap insulator) and Mg(OH)2. The speed at which this reaction occurs would significantly limit the lifetime and the utility of the MgO/Ag multilayer photocathodes. In order to custom engineer multilayer photocathodes, complete control over the growth process will be needed to ensure that the ideal surfaces are formed. Using non-reactive materials would greatly increase the lifetime of the engineered photocathodes.
Show less