Search results
(101 - 120 of 996)
Pages
- Title
- SUSTAINABLE MULTILINGUAL COMMUNICATION: MANAGING MULTILINGUAL CONTENT USING FREE AND OPEN SOURCE CONTENT MANAGEMENT SYSTEMS
- Creator
- Kelsey, Todd
- Date
- 2011-05-03, 2011-05
- Description
-
Multilingual content management systems, combined with streamlined processes and inexpensive organizational tools, make it possible for educators, non-profit entities and individuals with limited resources to develop sustainable and accessible multilingual Web sites. The research included a review of what’s been done in the theory and practice of designing Web sites for multilingual audiences. On the basis of that review, a series of sustainable multilingual Web sites were created, and a series of approaches and systems were tested, including MediaWiki, Plone, Drupal, Joomla, PHPMyFAQ, Blogger, Google Docs and Google Sites. There was also a case study on “Social CMS”, which refers to emergent social networks such as Facebook. The case studies are reported on, and conclude with high-level recommendations that form a roadmap for sustainable multilingual Web site development.
Ph.D. in Technical Communication, May 2011
- Title
- THERMAL RESISTANCE OF SALMONELLA ENTERICA AND ESCHERICHIA COLI O157:H7 IN PEANUT BUTTER
- Creator
- He, Yingshu
- Date
- 2014, 2014-05
- Description
-
Salmonella enterica is a frequent food contaminant and the leading cause of foodborne bacterial illnesses in the United States. Our study demonstrated that a 5-strain S. enterica cocktail displayed increased heat resistance in peanut butter of low water activity (aw). Significant differences (P < 0.05) were found between the survival rates of Salmonella enterica and Escherichia coli O157:H7 in peanut butter with different formulations and water activities. High carbohydrate content in peanut butter and low incubation temperature resulted in higher levels of bacterial survival during storage but lower levels of bacterial resistance to heat treatment. Furthermore, we also compared the relative heat resistance of three individual strains of S. enterica representing serotypes Typhimurium, Enteritidis and Tennessee and the 3-strain cocktail treated at both 90°C and 126°C in two different peanut butter formulations with varied fat and carbohydrate contents and adjusted water activities (aw from 0.2 to 0.8). When treated at 90°C, increased water activity in peanut butter significantly (P < 0.05) reduced the heat resistance of desiccation-stressed S. enterica cells. Differences in heat resistance were also detected among the three S. enterica serotypes and between the two peanut butter formulations. When treated at 126°C, the differences in bacterial heat resistance among serotypes and adjusted water activities were less notable (P > 0.05). Based on the Weibull model, an average of 52 to 132 min was required to achieve a 5-log reduction of the 3-strain cocktail at 90°C in peanut butter with an aw of 0.2. When aw was increased to 0.6, achieving the same 5-log reduction required only 23-27 min. At aw of 0.8, S. enterica could be completely killed in less than 10 min in peanut butter with a fat content of 48.49%. Using scanning electron microscopy, we observed minor morphological changes of S. enterica cells during desiccation and rehydration processes in peanut oil, which was used as a surrogate for peanut butter. Results from this study collectively suggest that water activity plays a critical role in determining S. enterica heat resistance in peanut butter. The variability that exists among the heat resistance of different S. enterica serotypes in different peanut butter formulations should also be taken into consideration for developing and validating effective intervention and mitigation strategies in peanut butter production.
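The Weibull-model time-to-5-log-reduction figures quoted above can be reproduced in outline. A minimal sketch, assuming Mafart's parameterization log10(N/N0) = −(t/δ)^p; the parameter values below are hypothetical, not the fitted values from the study:

```python
def time_for_log_reduction(delta_min, p, n_logs=5.0):
    """Minutes to reach an n-log reduction under the Weibull survival model
    log10(N/N0) = -(t / delta)**p (Mafart form, assumed here).
    Setting log10(N/N0) = -n_logs and solving gives t = delta * n_logs**(1/p)."""
    return delta_min * n_logs ** (1.0 / p)

# Hypothetical shape/scale values illustrating the reported trend:
t_dry = time_for_log_reduction(delta_min=40.0, p=0.8)   # low a_w: long treatment
t_moist = time_for_log_reduction(delta_min=8.0, p=1.2)  # higher a_w: much shorter
```

Fitting δ and p to survival curves at each water activity, as done in the study, then reduces the "how long must we heat" question to this one-line formula.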
Ph.D. in Biology, May 2014
- Title
- LONG-TERM AEROBIC AND ANAEROBIC TRANSFORMATIONS OF ORGANIC MATTER IN ANAEROBICALLY DIGESTED BIOSOLIDS
- Creator
- Lukicheva, Irina
- Date
- 2012-12-05, 2012-12
- Description
-
Long-term anaerobic storage of biosolids in a lagoon-type system as a post-treatment to anaerobic digestion is a proven process for further pathogen reduction to produce Class A biosolids. At the same time, the final biosolids product could develop odors during storage and handling, limiting the flexibility of biosolids utilization. The goal of this research was to study properties of biosolids under different lengths of aging time to determine the stability of the final product for its odor potential. Field lagoons of the Metropolitan Water Reclamation District of Greater Chicago were sampled to estimate the spatial and temporal variations in the physical-chemical properties and biological stability indicators, namely, total solids, volatile solids, pH, electric conductivity, total Kjeldahl nitrogen, ammonia-N, nitrite/nitrate-N, accumulated oxygen uptake for the 20-hour respirometric test, soluble protein concentration and headspace concentrations of volatile sulfur compounds. The sampling campaign was performed in October 2009. Two types of lagoons were assessed in this study: high-solids lagoons that are loaded with sludge that was previously anaerobically digested and dewatered on centrifuges, and low-solids lagoons that are loaded with sludge that was previously digested but not dewatered. The analysis of the collected data suggested that for the high-solids lagoons the surface-layer biosolids (depth above 0.15 m) undergo long-term aerobic oxidation resulting in a higher degree of final product stabilization. The subsurface layers (depth below 0.15 m) are subjected to an anaerobic environment where the conditions allow only the initial rapid organic matter degradation, approximately within the first year, followed by very slow degradation. In addition, microbiological analyses using Fluorescent in situ Hybridization did not indicate active microbial communities in aged biosolids.
The performance of low-solids lagoons in the reduction of the biodegradability parameters was shown to be similar to that of the high-solids lagoons. Low-solids lagoons were shown to perform a dewatering function, reducing moisture in the digested sludge from an initial 2-3% TS to up to 16% TS. Although the lagoon-aged biosolids were found to be stable in comparison with other products, such as composts, further aerobic processes taking place after the lagoons, such as air-drying and stock-piling, could induce further active biological activity. This could potentially result in odor formation from the air-dried final product. For these reasons, more research is required on the mechanisms promoting further product degradation after lagoon aging.
Ph.D. in Environmental Engineering, December 2012
- Title
- ADVANCING DESIGN SIZING AND PERFORMANCE OPTIMIZATION METHODS FOR BUILDING INTEGRATED THERMAL AND ELECTRICAL ENERGY GENERATION SYSTEMS
- Creator
- Zakrzewski, Thomas
- Date
- 2017, 2017-07
- Description
-
Combined electrical and thermal energy systems (i.e., cogeneration systems) will play an integral role in future energy supplies because they can yield higher overall system fuel utilization and efficiency, and thus produce fewer greenhouse gas emissions, than traditionally separate systems. However, methods for both design sizing and performance optimization for cogeneration systems and commercial buildings lag behind the tremendous advancements that have been made in building performance simulation methods. Therefore, the overall goal of this research is to develop and apply novel cogeneration system modeling techniques for optimizing design sizing and dispatch of generation sets that reduce energy use, energy costs, and greenhouse gas emissions. This research is divided into four main research objectives: (1) generalizing cogeneration performance of lean burn natural gas spark ignition reciprocating engines, (2) developing a new Design and Optimization of Combined Heat and Power (DOCHP) systems optimization tool for improving design sizing of building-integrated and grid-tied CHP systems, (3) demonstrating the utility of the DOCHP tool with several practical applications, and (4) integrating on-site intermittent renewable energy systems into the DOCHP tool to analyze micro-grid applications. This research leverages recent developments in multiple areas of building and system simulation methods. DOCHP advances design sizing and performance optimization methods for building-integrated thermal and electrical energy generation systems through the application of an evolutionary artificial intelligence-based genetic algorithm and its ability to solve non-linear optimization problems with discrete constraints while considering non-linear part-load generation set performance curves.
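The genetic-algorithm sizing idea can be illustrated on a toy problem. Everything below, the candidate sizes, cost coefficients, and part-load efficiency curve, is an assumed stand-in for illustration, not the DOCHP model:

```python
import random

random.seed(7)

# Toy problem: pick a discrete CHP generator size (kW) minimizing an annual
# cost with a non-linear part-load efficiency penalty (all numbers assumed).
SIZES = [50, 100, 150, 200, 250, 300]
DEMAND_KW = 180.0

def annual_cost(size_kw):
    capital = 200.0 * size_kw                        # annualized $/kW (assumed)
    load_fraction = min(DEMAND_KW / size_kw, 1.0)
    efficiency = 0.35 * load_fraction ** 0.3         # non-linear part-load curve
    fuel = 8760.0 * min(DEMAND_KW, size_kw) * 0.03 / efficiency
    grid = max(DEMAND_KW - size_kw, 0.0) * 8760.0 * 0.15  # unmet demand from grid
    return capital + fuel + grid

def evolve(generations=30, pop_size=12):
    """Tiny elitist GA over the discrete size choices."""
    pop = [random.choice(SIZES) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=annual_cost)
        survivors = pop[: pop_size // 2]             # elitist selection
        children = [random.choice(SIZES) if random.random() < 0.2  # mutation
                    else random.choice(survivors)                  # copy a survivor
                    for _ in range(pop_size - len(survivors))]
        pop = survivors + children
    return min(pop, key=annual_cost)
```

A real CHP sizing problem adds hourly load profiles, multiple generation sets and dispatch decisions, but the discrete-constraint, non-linear-cost structure that motivates a GA is already visible in this sketch.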
Ph.D. in Civil Engineering, July 2017
- Title
- DEVELOPMENT OF AN IMPLICITLY COUPLED ELECTROMECHANICAL AND ELECTROMAGNETIC TRANSIENTS SIMULATOR FOR POWER SYSTEMS
- Creator
- Abhyankar, Shrirang
- Date
- 2011-11, 2011-11
- Description
-
The simulation of electrical power system dynamic behavior is done using transient stability simulators (TS) and electromagnetic transient simulators (EMT). A transient stability simulator, running at large time steps, is used for studying relatively slower dynamics, e.g. electromechanical interactions among generators, and can be used for simulating large-scale power systems. In contrast, an electromagnetic transient simulator models the same components in finer detail and uses a smaller time step for studying fast dynamics, e.g. electromagnetic interactions among power electronics devices. Simulating large-scale power systems with an electromagnetic transient simulator is computationally inefficient due to the small time step size involved. A hybrid simulator attempts to interface the TS and EMT simulators, which run at different time steps. By modeling the bulk of the large-scale power system in a transient stability simulator and a small portion of the system in an electromagnetic transient simulator, the fast dynamics of the smaller area can be studied in detail, while providing a global picture of the slower dynamics for the rest of the power system. In the existing hybrid simulation interaction protocols, the two simulators run independently, exchanging solutions at regular intervals. However, the exchanged data is accepted without any evaluation, so errors may be introduced. While such an explicit approach may be a good strategy for systems in steady state or having slow variations, it is not an optimal or robust strategy if the voltages and currents are varying rapidly, as in the case of a voltage collapse scenario. This research work proposes an implicitly coupled solution approach for the combined transient stability and electromagnetic transient simulation.
To combine the two sets of equations with their different time steps, and ensure that the TS and EMT solutions are consistent, the TS equations and the coupled-in-time EMT equations are solved simultaneously. While computing a single time step of the TS equations, a simultaneous calculation of several time steps of the EMT equations is proposed. Along with the implicitly coupled solution approach, this research work also proposes to use a three-phase representation of the TS network instead of the positive-sequence balanced representation used in existing transient stability simulators. Furthermore, a parallel implementation of the three-phase transient stability simulator and the implicitly coupled electromechanical and electromagnetic transients simulator, using the high performance computing library PETSc, is presented. Results of experimentation with different reordering strategies, linear solution schemes, and preconditioners are discussed for both sequential and parallel implementations.
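The implicitly coupled idea, iterating one slow step together with all of its nested fast sub-steps until both sets of backward-Euler equations hold simultaneously, can be sketched on a toy two-timescale ODE pair. The equations and coefficients below are illustrative stand-ins, not the TS/EMT models:

```python
def coupled_step(X0, y0, H, n_sub, tol=1e-12, max_iter=200):
    """Advance slow state X (dX/dt = -X + y) one step of size H together with
    n_sub backward-Euler sub-steps of fast state y (dy/dt = 10*(X - y)),
    iterating until the coupled implicit equations are mutually consistent."""
    h = H / n_sub
    X, ys = X0, [y0] * n_sub
    for _ in range(max_iter):
        # fast sub-steps, backward Euler: y_k * (1 + 10h) = y_{k-1} + 10h * X
        y_prev, new_ys = y0, []
        for _ in range(n_sub):
            y_prev = (y_prev + 10.0 * h * X) / (1.0 + 10.0 * h)
            new_ys.append(y_prev)
        # slow step, backward Euler using the fast solution at the step end
        X_new = (X0 + H * new_ys[-1]) / (1.0 + H)
        if abs(X_new - X) < tol and all(abs(a - b) < tol for a, b in zip(new_ys, ys)):
            return X_new, new_ys
        X, ys = X_new, new_ys
    return X, ys
```

Unlike the explicit exchange protocols criticized above, no stale boundary values survive: on return, the slow equation and every fast sub-step equation are satisfied by the same pair of solutions.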
Ph.D. in Electrical Engineering, December 2011
- Title
- IMPROVED SPATIAL-TEMPORAL RECONSTRUCTION FOR CARDIAC AND RESPIRATORY GATED SPECT
- Creator
- Qi, Wenyuan
- Date
- 2014, 2014-12
- Description
-
Myocardial perfusion single photon emission computed tomography (SPECT) is an important imaging technique for evaluating coronary artery disease. It can provide information on both myocardial perfusion and ventricular function. However, SPECT images suffer from both cardiac and respiratory motion blur. In order to reduce the motion degradation, cardiac and respiratory gated SPECT imaging is used. In gated SPECT imaging, due to the lowered counts, the gated images will be noisier than the ungated ones. Spatiotemporal (4D) processing is often used to reduce the noise level in gated images. In this thesis, we aim to investigate spatial and temporal processing techniques for improving the quality of cardiac and respiratory gated SPECT imaging. First, we will investigate a piecewise spatial smoothing prior based on total variation (TV) in 4D cardiac SPECT image reconstruction. In previous studies, it was found that spatial smoothing could adversely affect the accuracy of 4D reconstruction in cardiac gated SPECT when temporal smoothing was applied, even though it could suppress the noise level. Our goal is to explore whether a piecewise spatial smoothing prior will improve the image accuracy while reducing the noise. Toward this goal, we will compare TV-based piecewise spatial smoothing with quadratic spatial smoothing in simulated imaging, in which we will evaluate the lesion detectability. Clinical data will also be used to compare the results as a preliminary test. Motion-compensated temporal smoothing is known to play a key role in 4D cardiac gated SPECT reconstruction. Next, we will investigate whether better motion estimation could further improve the accuracy of reconstructed images. We will consider two different motion estimation models and the known motion in simulated experiments. The motion estimation methods are the classic optical flow estimation (OFE) and a periodic motion estimation method.
We will evaluate the reconstructions from different motion models using several numerical quantification metrics. Furthermore, we will demonstrate reconstruction with the two motion estimation models using clinical acquisitions. Respiratory motion is known to cause motion blur in SPECT image reconstruction, and respiratory gated SPECT imaging can be effective in combating its effect. We will develop reconstruction techniques for respiratory gated SPECT. We will consider two reconstruction schemes for respiratory gated SPECT. The first scheme is a post motion-compensated reconstruction, in which images at different respiratory phases are reconstructed separately, and afterwards are averaged over all the respiratory gates by motion compensation. The second scheme is a model-based motion-compensated reconstruction approach, in which one reference gate is used to describe the acquisition data of all the respiratory gates. Due to irregular respiratory motion, the data acquisition in each respiratory gate is not uniformly distributed among the acquisition angles, which would lead to limited-angle artifacts. To correct such artifacts, we propose an angle compensation method in the reconstruction. In order to deal with both cardiac and respiratory motion, we will investigate a 4D reconstruction approach for dual cardiac-respiratory gated SPECT reconstruction. This approach can accommodate the acquired data simultaneously from different cardiac and respiratory gates. It can exploit the correlation in the signal component among both the cardiac and respiratory phases. Both simulated experiments and clinical reconstruction will be used for evaluating this reconstruction approach. Due to the radiation risk of myocardial perfusion imaging (MPI) scans, there is an urgent need to lower the radiation dose used in SPECT. However, a lower radiation dose will lead to noisier reconstruction, which is even more serious in gated SPECT.
We will explore the potential of using 4D reconstruction for lowering the dose in dual cardiac-respiratory gated SPECT.
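The contrast between a TV-based prior and quadratic smoothing, central to the first study above, can be seen in one dimension. A minimal sketch with made-up signals:

```python
def tv_penalty(u):
    """Total-variation roughness: sum of absolute neighbor differences."""
    return sum(abs(a - b) for a, b in zip(u[1:], u[:-1]))

def quad_penalty(u):
    """Quadratic roughness: sum of squared neighbor differences."""
    return sum((a - b) ** 2 for a, b in zip(u[1:], u[:-1]))

step = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]   # sharp edge
ramp = [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]   # smooth ramp with the same endpoints
# TV charges both signals the same amount (total rise 1), so it tolerates
# sharp edges; the quadratic penalty is five times smaller for the ramp,
# so it pulls reconstructions toward blurred transitions.
```

This is why a piecewise (TV-type) prior can suppress noise in flat regions while leaving myocardial boundaries sharp, whereas quadratic spatial smoothing tends to blur them.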
Ph.D. in Electrical and Computer Engineering, December 2014
- Title
- ROLE OF EXTRACELLULAR MATRIX IN CELLULAR BEHAVIOR AND TISSUE FUNCTION
- Creator
- Sridharan, Indumathi
- Date
- 2012-04-22, 2012-05
- Description
-
Matrix-dictated control of stem cell differentiation and tissue status are of considerable interest to cell biologists and tissue engineers. To create suitable biological scaffolds for tissue engineering and cell therapeutics, it is essential to understand the matrix-mediated specification of cell lineage. Our study examines the role of matrix properties in cellular behavior and tissue mechanics. To this end, we studied the effect of collagen type I on stem cell differentiation and its mechanical properties within a live tissue. We altered the properties of collagen type I by incorporating carbon nanotubes (CNT). The collagen-carbon nanotube (collagen-CNT) composite material was stiffer, with thicker fibers and a longer D-period. We find that the enhanced mechanical and structural properties of collagen-CNT allow for rapid and efficient derivation of neural progenitors from human decidua parietalis placental stem cells (hdpPSC). Both structure and stiffness of the matrix are important determinants of the neural differentiation rate. Strikingly, the collagen-CNT matrix, unlike collagen, imposes the neural fate by an alternate mechanism that is independent of beta-1 integrin and beta-catenin. The study demonstrates the sensitivity of stem cells to subtle changes in the matrix and the utilization of a novel biocomposite material for efficient and directed differentiation of stem cells. Investigation of connective tissue disorders has led to the understanding of the important role played by collagen. So far, native collagen fibers within an intact tissue have not been examined. In this study, we employed a unique approach, histochemical staining guided high-resolution elasticity mapping, to study collagen and smooth muscle in fresh vaginal wall connective tissue. The comparative study of tissues collected from healthy pre-menopausal (pre-M) and post-menopausal (post-M) women suggests that during menopause, collagen’s structure and elasticity are subtly altered.
The systematic analysis enables detection of minute changes in collagen in non-fatal conditions such as pelvic organ prolapse and other genitourinary disorders, where the initial symptoms are subtle and multivariate and where early diagnosis will allow non-invasive interventions and reduce the incidence of surgical correction for these common disorders.
Ph.D. in Molecular Biochemistry and Biophysics, May 2012
- Title
- Load Redistribution Attacks and Protection Strategy Design in Electric Power Systems
- Creator
- Yuan, Yanling
- Date
- 2012-04-27, 2012-05
- Description
-
Electric power systems have evolved over the past century into the largest and most complex cyber-physical systems. With the development of the Smart Grid, cyber security has become an area of growing concern. The false data injection attack has recently attracted wide research interest. This thesis develops the concept of the load redistribution (LR) attack, a special type of false data injection attack. The physical and economic impact of LR attacks is quantitatively analyzed in this thesis. Since LR attacks can successfully bypass bad data detection and manipulate the state estimation outcome, security constrained economic dispatch (SCED) based on the false state estimation would lead the system into a non-optimal or insecure operating state. Based on the consequence analysis, two different attacking goals are differentiated from the adversary’s perspective, i.e., the immediate attacking goal and the delayed attacking goal. For the immediate attacking goal, a max-min attacker-defender model is proposed to identify the most damaging immediate LR attack. Two different algorithms are used to solve this bi-level optimization problem. For the delayed attacking goal, a tri-level model is proposed to identify the most damaging delayed LR attack. This thesis also studies the economic impact of LR attacks on power market operation. A convex model is proposed under the mechanism of virtual bidding to compute the optimal injection of an LR attack, which gains the most profit from the attacker’s perspective. The quantitative analysis of LR attacks provides in-depth insight into effective attack prevention with a limited protection resource budget. This thesis proposes a new protection strategy, which intends to mitigate the most damaging effects of LR attacks on power system operation. The criterion for determining effective protections against the most damaging LR attack, considering the existence of stochastic measurement errors, is deduced.
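The max-min attacker-defender structure can be illustrated by brute force on a toy instance; the measurement names, damage values, and budgets below are invented for illustration only and carry none of the network physics of the thesis model:

```python
from itertools import combinations

# Hypothetical damage (arbitrary units) if each measurement is falsified.
DAMAGE = {"m1": 5.0, "m2": 3.0, "m3": 8.0, "m4": 2.0}

def worst_attack(protected, k=2):
    """Inner max: the attacker falsifies up to k unprotected measurements."""
    open_meas = [m for m in DAMAGE if m not in protected]
    return max(sum(DAMAGE[m] for m in combo)
               for r in range(k + 1)
               for combo in combinations(open_meas, r))

def best_defense(budget=1):
    """Outer min: the defender protects `budget` measurements so as to
    minimize the worst-case damage (the max-min value)."""
    return min(combinations(DAMAGE, budget),
               key=lambda plan: worst_attack(set(plan)))
```

Here the defender's best single protection is the highest-damage measurement, capping the attacker at the next two; in a real grid the bi-level problem is far larger and is solved with the specialized algorithms the thesis describes rather than enumeration.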
Ph.D. in Electrical Engineering, May 2012
- Title
- IRON INCORPORATION INTO FERROELECTRIC LEAD TITANATE
- Creator
- Ganegoda, Hasitha
- Date
- 2012-11-27, 2012-12
- Description
-
Incorporation of iron into ferroelectric lead titanate with the ABO3 perovskite structure is widely utilized to fabricate materials with both ferroelectric and ferromagnetic order. These solid solutions exhibit room-temperature ferromagnetic properties at iron concentrations as low as 1 mole%. Iron (Fe3+) is highly compatible with titanium (Ti4+) in ionic radius but incompatible in valence state. Magnetoelectric coupling has been observed in 50 mole% Fe-substituted lead titanate. The molecular mixing of precursor materials in wet chemical synthesis such as sol-gel has an advantage over conventional solid state sintering in achieving higher solubility of the substituent. A series of Pb(FexTi1−x)O3−δ solid solutions in the composition range x = 0 to 1 has been synthesized using a sol-gel route, followed by a moderate-temperature (700 °C) calcination. The structure and properties of the samples were characterized using x-ray diffraction (XRD), x-ray absorption spectroscopy (XAS), scanning electron microscopy (SEM), energy dispersive spectroscopy (EDX), dielectric spectroscopy, Raman spectroscopy, and magnetic measurements. SEM results indicated a gradual change in sample morphology, from individual particles to nano-aggregates with increasing Fe concentration. EDX confirmed a uniform distribution of dopants for Fe concentrations x ≤ 0.3 and Fe concentrations close to the nominal composition, with the possibility of lead loss. XRD results indicated a rapid reduction of tetragonal distortion upon Fe substitution, an Fe solubility limit of 10 mole%, and PbFe12O19 impurity phase formation at compositions x ≥ 0.4. XAS clearly indicated that Ti prefers a 6-coordinated distorted oxygen octahedron while Fe is surrounded by 5 oxygens situated at a similar distance. Other measurements suggested that ferroelectric and ferromagnetic ordering may arise in Ti-rich and Fe-rich phases, respectively. None of the samples below x = 0.2 was found to be ferromagnetic at room temperature.
Ph.D. in Physics, December 2012
- Title
- MODELING THE PERFORMANCE OF A SOLAR CHIMNEY WITH BUILT-IN PHASE CHANGE MATERIALS TO IMPROVE NATURAL VENTILATION
- Creator
- Khorraminejad, Aysan
- Date
- 2014, 2014-12
- Description
-
The use of solar chimneys to improve natural ventilation in buildings has been shown to yield energy savings while maintaining thermal comfort in some climate zones. However, one disadvantage is that the ability to control passive ventilation with solar chimneys is often limited by temporal variations of building function and diurnal weather patterns. The unique thermal properties of phase change materials (PCMs) may provide opportunities to manage some of the limitations of solar chimney strategies in buildings through the timed storage and release of thermal energy, which can aid in minimizing daytime heat gains and increasing the effectiveness of nighttime flushing by natural ventilation. This work explores the potential of PCMs combined with solar chimneys to support the passive ventilation of buildings using whole building energy simulation. Results are intended to inform architects and engineers on best practices for incorporating PCMs in buildings with solar chimneys. A model of a prototype office building designed to ASHRAE Standard 90.1 was developed in EnergyPlus and investigated in five U.S. climate zones. A total of 1023 energy simulations were performed to address the following research questions: (1) In what U.S. climate zone is the use of PCMs most appropriate? (2) Can PCMs improve night flushing ventilation when used with a solar chimney? (3) What are the optimal locations for installation of PCMs in building and solar chimney construction in order to provide optimal cooling load reductions? and (4) How do PCM properties influence cooling load reductions? Results demonstrate that the solidification and melting process of PCMs can indeed be used to prolong natural ventilation for night flushing in the evening and nighttime, particularly in climate zones with large diurnal outdoor temperature fluctuations.
Results also suggest the following: (1) the optimal location for installation of PCMs in most climates is within the structure of south-facing envelope assemblies; (2) the PCM melting temperature should be equal to or greater than the zone set point temperature; (3) thicker PCMs can yield slightly more cooling energy savings than thinner PCMs; and (4) the thermal conductivity of PCMs does not appear to have a substantial influence on performance.
Ph.D. in Architecture, December 2014
- Title
- FUNCTION APPROXIMATION WITH KERNEL METHODS
- Creator
- Zhou, Xuan
- Date
- 2015, 2015-12
- Description
-
This dissertation studies the problem of approximating functions of d variables in a separable Banach space F_d. In particular, we are interested in convergence and tractability results in the worst case setting and in the average case setting. The symmetric positive definite kernel in both settings is of the product form K_d(x, t) := ∏_{ℓ=1}^{d} (1 − α_ℓ² + α_ℓ² K_{γ_ℓ}(x_ℓ, t_ℓ)) for all x, t ∈ R^d. The kernel K_d generalizes the anisotropic Gaussian kernel, whose tractability properties have been established in the literature. For a fixed d, we study rates of convergence, which indicate how quickly approximation errors decay. Since rates of convergence can deteriorate quickly as d increases, it is desirable to have dimension-independent convergence rates, which corresponds to the concept of strong polynomial tractability. We present sufficient conditions on the sequences {α_ℓ} and {γ_ℓ} under which strong polynomial tractability holds for function approximation problems in F_d. Numerical examples are presented to support the theory, and guaranteed automatic algorithms are provided to solve the function approximation problem in a straightforward and efficient way.
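The product kernel above is straightforward to evaluate; a minimal sketch, assuming a Gaussian factor K_γ(a, b) = exp(−γ²(a − b)²), in line with the anisotropic Gaussian kernel the abstract says K_d generalizes:

```python
import math

def product_kernel(x, t, alpha, gamma):
    """K_d(x, t) = prod over l of (1 - alpha_l**2 + alpha_l**2 * K_l(x_l, t_l)),
    with the factor K_l(a, b) = exp(-(gamma_l * (a - b))**2) assumed here."""
    value = 1.0
    for x_l, t_l, a_l, g_l in zip(x, t, alpha, gamma):
        k_l = math.exp(-(g_l * (x_l - t_l)) ** 2)
        value *= 1.0 - a_l ** 2 + a_l ** 2 * k_l
    return value
```

Setting every α_ℓ = 1 recovers the anisotropic Gaussian kernel, while small α_ℓ makes coordinate ℓ nearly inactive (its factor stays close to 1), which is the kind of decay the sufficient conditions on {α_ℓ} and {γ_ℓ} exploit.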
Ph.D. in Applied Mathematics, December 2015
- Title
- A NOVEL METHOD FOR THE IMPLEMENTATION OF STRUCTURAL CONTACT IN FINITE ELEMENT METHODS OFFERING SIMPLIFIED TREATMENT OF ENERGY DISSIPATION
- Creator
- Grudzinski, James John
- Date
- 2012-04-24, 2012-05
- Description
-
A novel method for implementing contact/impact in an implicit finite element formulation is presented. The method uses the idea of buoyancy to enforce the normal contact constraint and a velocity-dependent force to model energy dissipation. Upon contact (penetration), a normal force equal to the depth of penetration times a target weight density (different from and much larger than the actual material weight density) creates a normal pressure on the contacting body. In addition to the buoyancy force, the penetrating surface area is subjected to a drag-like force that acts in a direction opposite the velocity vector of the penetrating node of the contacting body. This rate dependence is broken up into components tangential and normal to the target surface. The normal component of the drag performs two functions. First, it provides an energy-absorbing mechanism, similar to a coefficient of restitution, for modeling non-conservative systems. Secondly, it can provide damping (analogous to mathematical damping), which can aid in solution convergence. The tangential component of the damping force serves the function of modeling friction in a simplified manner. The method applies contact forces in the manner of external forces and as such lends itself well to simplified contact detection schemes which rely on functional representation of bodies. The method is described and demonstrated through several examples, including a comparison to experimental data.
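The force recipe described above can be sketched directly; the function signature, coefficient names, and numbers are illustrative assumptions, not the dissertation's formulation:

```python
def contact_forces(penetration, target_weight_density, area,
                   v_normal, v_tangent, c_normal, c_tangent):
    """Buoyancy-style contact sketch: the normal force is penetration depth
    times an artificially large target weight density, acting over the
    penetrating area; drag-like terms oppose the node's velocity components."""
    f_normal = target_weight_density * penetration * area  # enforces contact
    f_drag_normal = -c_normal * v_normal     # restitution-like dissipation/damping
    f_drag_tangent = -c_tangent * v_tangent  # simplified friction
    return f_normal, f_drag_normal, f_drag_tangent
```

Because these enter the equations of motion as ordinary external forces, the scheme pairs naturally with the simplified, function-based contact detection the abstract mentions.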
Ph.D. in Mechanical and Aerospace Engineering, May 2012
- Title
- THE PALAIS GARNIER: TOWARD AN ARCHITECTURE OF DANCE AND MUSIC IN XIX CENTURY FRANCE
- Creator
- Vranas Olsen, Cynthia
- Date
- 2017, 2017-05
- Description
-
As a doctoral candidate, this writer has focused on the Romantic Movement of 19th century France, where the arts mutually influenced each...
Show moreAs a doctoral candidate, this writer has focused on the Romantic Movement of 19th century France, where the arts mutually influenced each other, just as in ancient Greece and Rome. This research highlights the interrelationships of the Arts to Architecture by examining artists such as Hugo in drama, Gautier in literature, Delacroix in painting, Chopin and Adam in music, Noverre in dance and Garnier in architecture. Although Noverre’s specialty was dance, he believed that theater buildings were first monuments to the arts. For Garnier, 100 years later, the Arts and Architecture were not separate identities. They complemented and enhanced one another. Both books, Observations sur la Construction d’une Nouvelle Salle de l’Opera by Jean-Georges Noverre and Le Theatre by Charles Garnier, were publications translated extensively from French to the English language as primary sources. In designing a theater building, the architect is responsible for synthesizing and translating myriad information in order to create an appropriate and responsive language. An invaluable knowledge of technical considerations - set design, lighting, backstage equipment, sightlines, acoustics and facilities for performers - is critical, as is balancing the client’s concerns and the user/spectator’s needs. As performance expresses a cultural instinct for communicative involvement, this writer believes that a theater building, itself, is a reflection of the arts. It is art, therefore, that inextricably shapes the programs and designs of our theater buildings. The Palais Garnier, completed in 1875 for the Paris Opéra and its associated Paris Opéra Ballet, is thought to mirror the cultural complexity of the age. Before this time, the Paris Opéra was housed in the Salle Le Peletier, a temporary building with inadequate facilities that did not reflect the existing, lavish, highly skilled art that was performed. 
Garnier’s travels to ancient sites in Greece and Rome resulted in his classic book, Le Théâtre. Through writing, he immersed himself in the study of the physical qualities of the theater, and that study eventually became his architecture. The form and character of the theater buildings studied reflect the changing social structures, technology and patronage that sponsored them. The Palais Garnier reflects a unique intersection of the arts, along with the political, social and economic influences of its time. Musicians turned their attention toward the expression of intense feelings influenced by poetry and the visual arts. The Palais Garnier, with its grandeur and opulence, became its own “performance”. The multi-disciplinary nature of architecture and the study of the interrelationships of the arts can inform creativity. Like the timeless, majestic Palais Garnier of 19th century Paris, new architectural structures can become their own profound, monumental and memorable “performances” for future generations.
Ph.D. in Architecture, May 2017
- Title
- ENABLING TOOLS FOR SINGLE CELL ANALYSIS
- Creator
- Li, Zhaoxia
- Date
- 2011-07, 2011-07
- Description
-
Cells are the basic functional units of life. Cell function is mediated by proteins and genes, whose distribution and expression levels depend strongly on the microenvironment. In the native environment, individual cells behave differently but communicate with surrounding cells. It is therefore imperative to investigate proteins and genes at the single-cell level in their native environment. The current representative single-cell analysis methods, fluorescent techniques, are the most direct tools for studying a single cell. The commonly used methods to measure protein and gene expression levels in single cells are based on fluorescence labeling, such as fluorescence-activated cell sorting and live-cell microscopy. They become more powerful when combined with microfluidic devices. The disadvantages of these methods are: (1) their limited sensitivity does not allow the detection of low-abundance proteins and genes; (2) they are unable to detect cell-to-cell differences within a population; (3) the cell-sorting-based method lacks spatial resolution, since cells must be isolated from their natural environment for analysis. To tackle these challenges, we established atomic force microscopy (AFM) based approaches for in-situ gene and protein analysis on a targeted single live cell. The methods provide spatial and quantitative information about cells in their native culture environment and are sensitive enough to detect low-abundance proteins and genes. In this thesis work, we developed a novel immunofluorescence-assisted affinity mapping (IF-AM) method, in which immunofluorescence provides the guidance to locate a desired type of cell in a cell community, and affinity mapping then quantifies the local protein density at high spatial resolution.
Because it directly assesses proteins of individual cells, the IF-AM method is shown to be a sensitive tool for resolving subtle differences in the local expression of membrane proteins, even at low abundance. In follow-up work, we improved the accuracy of protein quantification by adopting a separation-work-based calculation rather than the previously used maximum-adhesion-force-based calculation, and established a practical model to analyze the data systematically. We applied the methods to investigate the membrane proteins TRA-1-81 and E-cadherin on human embryonic stem cells. The heterogeneous distribution of TRA-1-81 and the homogeneous distribution of E-cadherin, together with quantitative measurement of local protein abundance, provided comprehensive information for understanding how hES cells maintain stemness during proliferation and initiate differentiation. An mRNA retrieval method was also developed to perform gene expression analysis on a single cell of a desired type in a cell community. This was achieved by using a functionalized AFM tip as a bait to bind and retrieve mRNA from a desired single cell, followed by sensitive Quantitative Polymerase Chain Reaction (Q-PCR) analysis. The extraction of mRNA from live cells was performed with negligible damage to the cells. The method established here enabled gene expression analysis of individual live cells at their original sites without disrupting the cell context. Thus, the gene expression of a target cell and its surrounding cells can be analyzed in parallel, yielding concrete data for understanding the behavior of one cell in concert with that of the surrounding cells in the same or a different cell population. The method was successfully applied in the study of side-population cells in ovarian cancer cells.
The methods developed in this thesis are versatile, and can be broadly applied to the study of different membrane proteins and genes of various cell types.
Ph.D. in Chemistry, July 2011
- Title
- PEGYLATION OF FIBRONECTIN AND ITS FUNCTIONAL DOMAINS: EFFECT ON STABILITY AND BIOLOGICAL ACTIVITY
- Creator
- Zhang, Chen
- Date
- 2013, 2013-07
- Description
-
Delayed wound healing in many chronic wounds has been linked to the lack of extracellular matrix (ECM) support and the degradation of fibronectin (FN) by an abnormally high protease level. The ECM is important in wound healing because it provides physical and chemical cues that direct tissue growth and development. FN is a key ECM protein that attracts and binds different molecules and cells and thereby supports biological responses associated with wound healing. The goal of my study is twofold: (1) to create an ECM analogue based on a composite of polyethylene glycol (PEG) hydrogels and FN binding domains and (2) to stabilize FN against proteolytic degradation by conjugating it to PEG. To address the first goal, I used Michael addition chemistry to covalently link the cell-binding domain of FN, III9−10, to PEG diacrylate and cross-linked the conjugate to PEG hydrogels. The conjugation of PEG to III9−10 was through cysteines in the affinity tag Glutathione S-Transferase (GST). The conjugate of GST-III9−10 and PEG was characterized by: (i) circular dichroism studies to determine secondary structure, (ii) sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE) to determine molecular weight, and (iii) Ellman's reagent to determine the efficiency of PEG conjugation to GST-III9−10. The conjugate of PEG and GST-III9−10 had secondary structure comparable to GST-III9−10. SDS-PAGE studies showed that up to three PEG molecules were attached to one GST-III9−10 molecule. The efficiency of PEG conjugation was greater than 90%, and conjugation occurred within 30 minutes of PEG diacrylate addition. Adhesion assays were used as a metric of biological activity. These assays demonstrated that, on a molar basis, cell adhesion and spreading were significantly higher on PEG hydrogels with GST-III9−10 than on those with the commonly used arginine-glycine-aspartic acid (RGD) peptide. Scaffold rigidity is an important biological cue that influences cell responses.
However, a comparative study of rigidity on GST-III9−10- and RGD-functionalized PEG hydrogels demonstrated that differences in rigidity could not account for the differences in cell adhesion and spreading between the two. Thus, as ECM analogues, GST-III9−10-functionalized hydrogels provide more robust biochemical cues than RGD-functionalized hydrogels. Studies of PEG hydrogel composites with mixtures of III9−10 and a fibronectin binding domain demonstrated that the biological responses of adhesion, spreading and extracellular matrix assembly could be controlled by varying the ratio of these two domains and the rigidity of the PEG hydrogels. FN was stabilized against proteolytic degradation by covalently attaching PEG to it, i.e., by PEGylating it. FN was first isolated from human plasma by gelatin affinity chromatography and then PEGylated using two methods. The first method PEGylates human plasma fibronectin (HPFN) at cysteine residues with 3.4 kDa PEG diacrylate. The second method PEGylates HPFN at surface lysine residues with 2-10 kDa PEG succinimidyl carboxymethyl esters. Cysteine PEGylation of HPFN was carried out first because the cysteines are concentrated in the amino-terminus of HPFN, which leaves two-thirds of the molecule, including the cell-binding domain, unperturbed. PEGylation of HPFN on cysteines resulted in a molecule that supported cell adhesion, spreading, focal adhesion formation and cell migration in a manner comparable to native HPFN. Moreover, PEGylated HPFN was incorporated into the ECM in a similar manner to native HPFN when present in the culture media but not when coated on a surface, indicating that PEGylation on cysteines modified some biological activity of HPFN. Additionally, HPFN PEGylated in this manner could not bind denatured collagen or gelatin; the gelatin-binding domain is at the site of cysteine PEGylation.
The second approach for PEGylation of HPFN was through lysine residues on the surface of the protein. This approach was used because it targets different amino acid residues in FN. The length of PEG and the extent of PEGylation have been reported to influence the biological activity of proteins, so in the second approach both were varied. HPFN completely PEGylated on lysine residues was significantly more proteolytically stable than native HPFN but had reduced cell attachment and spreading. Furthermore, cell spreading and attachment on surfaces conjugated with this lysine-PEGylated HPFN decreased with increasing PEG length. Partially PEGylated HPFN was synthesized by masking the cell- and gelatin-binding domains during PEGylation. The partially PEGylated HPFN supported cell adhesion and spreading in a similar manner to native HPFN and was more proteolytically stable. For the partially PEGylated HPFN, the size of the PEG polymer had no significant influence on the attachment and spreading of cells. These studies are the first attempt by any laboratory to stabilize FN against proteolytic degradation while retaining activity, and they show the feasibility of this approach as a potential therapeutic strategy. The work presented here shows a two-pronged approach by which the problem of ECM degradation and deficiency in chronic wound healing can be addressed. The first approach addresses ECM deficiency through a scaffold design methodology. The novelty of the scaffold approach is that it uses the cell-binding domain of FN instead of the often-used RGD peptide. I demonstrate that a PEG hydrogel with the cell-binding domain produces a more robust biological response in cells than a PEG hydrogel with the RGD peptide. I also demonstrate that varying different functional domains of fibronectin can be used to controllably stimulate multiple biological responses.
The second approach demonstrates a method by which FN, a key ECM protein, is stabilized against proteolytic degradation without perturbing its activity. These studies of creating PEG-FN conjugates are the first of their kind. Collectively, the data that I present in this thesis will lead to novel therapeutic methods for treating chronic wounds.
Ph.D. in Chemical Engineering, December 2012
- Title
- A STEP TOWARD SUPPORTING LONG-RUNNING APPLICATIONS WITH REAL-TIME CONSTRAINTS ON HYBRID CLOUDS
- Creator
- Wu, Hao
- Date
- 2017, 2017-05
- Description
-
The advancement of computer and network technology has brought the world into a new cloud computing era. The "pay-as-you-go" business model and service-oriented models give users access to "unlimited" resources on demand and free them from infrastructure maintenance and software upgrades. Cloud services are currently among the top-ranked high-growth areas in computing and are seeing accelerating enterprise adoption, with the worldwide market predicted to reach more than $270b in 2020. According to Google, currently more than 95% of web services are deployed on the cloud. Many different types of applications are deployed on computer clouds. However, due to inherent performance uncertainty within computer clouds, applications with real-time and high QoS constraints still operate on traditional computer systems and are not able to benefit from elastic computer clouds. This thesis focuses on both theoretical analysis and real-system implementation of the problem of guaranteeing a real-time application's deadline requirement while minimizing its execution cost on hybrid clouds. Four major problems have been addressed toward moving applications with real-time constraints onto hybrid computer clouds. 1) A minimal slack time and minimal distance (MSMD) scheduling algorithm is developed to minimize the resources needed to guarantee an application's end-to-end deadline requirement using computer clouds. 2) A VM Instance Hour Minimization (IHM) algorithm is developed to reduce the application's execution cost for given schedules. The proposed IHM algorithm can be integrated with common scheduling algorithms used in the literature. In addition, we also evaluated the feasibility of utilizing spot instances to further reduce the application's execution cost without sacrificing QoS guarantees. 3)
A reference model for virtual machine launching overhead is developed to predict both system utilization and timing overhead during the VM launching process. 4) A hybrid cloud management tool that integrates the developed algorithms and reference model is developed to support long-running applications with real-time constraints on hybrid clouds.
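The slack-time notion behind algorithms like MSMD can be illustrated with a least-slack-first dispatcher. This is a hypothetical simplification for intuition only, not the dissertation's MSMD algorithm, and the task names and numbers are invented:

```python
# Illustrative least-slack-first task selection (a toy sketch, not MSMD).
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    deadline: float      # absolute deadline (seconds)
    remaining: float     # remaining execution time (seconds)

def slack(task: Task, now: float) -> float:
    """Slack = time left until the deadline minus work still required."""
    return task.deadline - now - task.remaining

def pick_next(tasks, now):
    """Dispatch the most urgent task: the one with minimal slack."""
    return min(tasks, key=lambda t: slack(t, now))

tasks = [Task("video", deadline=10.0, remaining=3.0),
         Task("batch", deadline=30.0, remaining=5.0)]
chosen = pick_next(tasks, now=0.0)   # "video": slack 7.0 vs. batch's 25.0
```

A real scheduler would combine such urgency metrics with placement decisions (the "minimal distance" part) and resource-cost terms.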
Ph.D. in Computer Science, May 2017
- Title
- PERFORMANCE ANALYSIS AND OPTIMIZATION OF LARGE-SCALE SCIENTIFIC APPLICATIONS
- Creator
- Wu, Jingjin
- Date
- 2013, 2013-07
- Description
-
Scientific applications are critical for solving complex problems in many areas of research and often require a large amount of computing resources in terms of both runtime and memory. Massively parallel supercomputers with ever-increasing computing power are being built to satisfy the needs of large-scale scientific applications. With the advent of the petascale era, there is a widening gap between the computing power of supercomputers and the parallel scalability of many applications. To take full advantage of the massive parallelism of supercomputers, it is indispensable to improve the scalability of large-scale scientific applications through performance analysis and optimization. This thesis work is motivated by cell-based AMR (Adaptive Mesh Refinement) cosmology simulations, in particular the Adaptive Refinement Tree (ART) application. Performance analysis is performed to identify its scaling bottleneck, a performance emulator is designed for efficient evaluation of different load-balancing schemes, and topology mapping strategies are explored for performance improvements. More importantly, the exploration of topology mapping mechanisms leads to a generic methodology for network- and multicore-aware topology mapping, and a set of efficient mapping algorithms for popular topologies. These have been implemented in a topology mapping library, TOPOMap, which can be used to support MPI topology functions.
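The objective such mapping strategies optimize is often stated as "hop-bytes": communication volume weighted by network distance. The miniature below is purely illustrative — the communication matrix, node names, mesh coordinates, and brute-force search are invented assumptions, not TOPOMap's actual heuristics:

```python
# Toy hop-bytes minimization on a 2x2 mesh (illustrative only, not TOPOMap).
import itertools

def hop_bytes(mapping, comm, coords):
    """Sum over rank pairs of (bytes exchanged) * (Manhattan hops on the mesh)."""
    total = 0
    for (a, b), vol in comm.items():
        xa, ya = coords[mapping[a]]
        xb, yb = coords[mapping[b]]
        total += vol * (abs(xa - xb) + abs(ya - yb))
    return total

def best_mapping(comm, coords):
    """Exhaustive search works only for tiny cases; real mappers use heuristics."""
    ranks = sorted({r for pair in comm for r in pair})
    nodes = list(coords)
    best = min(itertools.permutations(nodes, len(ranks)),
               key=lambda p: hop_bytes(dict(zip(ranks, p)), comm, coords))
    return dict(zip(ranks, best))

comm = {(0, 1): 100, (1, 2): 100, (0, 2): 1}   # ranks 0-1 and 1-2 talk heavily
coords = {"A": (0, 0), "B": (0, 1), "C": (1, 1), "D": (1, 0)}
m = best_mapping(comm, coords)   # heavy pairs end up on adjacent mesh nodes
```

The optimum places the two heavy pairs one hop apart, so the total cost is 100 + 100 + 2 = 202 hop-bytes; any mapping separating a heavy pair costs at least 100 more.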
Ph.D. in Computer Science, July 2013
- Title
- NEURAL ADAPTIVE CONTROL STRATEGY FOR HYBRID ELECTRIC VEHICLES WITH PARALLEL POWERTRAIN
- Creator
- Gurkaynak, Yusuf
- Date
- 2011-04-20, 2011-05
- Description
-
In a hybrid electric vehicle (HEV) with a parallel powertrain, the system can be controlled by splitting the required power between the electric propulsion machine and the internal combustion engine (ICE) to meet specific goals related to fuel consumption, efficiency, performance, and/or emissions. This power-splitting scenario, which is central to hybridization, is in fact the control strategy, or energy management, of the hybrid vehicle. Performance of the system depends on the control strategy, which needs to be robust, stable, reliable, and independent of uncertainties. This Ph.D. research is focused on model-based control strategies, which are proposed for parallel hybrid powertrains and show significant advantages in performance and fuel economy. If a model-based control strategy is used to develop the hybrid power management algorithm, the model data must be highly accurate for proper control; this type of management method is therefore parameter sensitive. Implementing system identification features into the algorithm reduces this sensitivity: the proposed controller algorithm learns the existing component parameters while operating. Furthermore, combining the base controller with an online tuner, which simultaneously optimizes the controller for current conditions, improves the performance of the power management. In addition, this Ph.D. thesis presents a novel neural adaptive equivalent consumption minimization strategy (ECMS) and applies it to a representative hybrid sport utility vehicle (SUV) with a parallel powertrain. The ECMS is a model-based optimal control strategy based on minimizing both fuel consumption and battery charge usage by introducing an equivalence coefficient between them. Proper operation of the controller depends on the accuracy of the model and on the correct selection of the equivalence coefficient. In this Ph.D.
thesis, specific neural network structures are proposed both for coefficient selection via drive-cycle recognition and for precise model building via system identification. This thesis also presents a novel fast solution method for the ECMS algorithm in real-time applications.
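The core ECMS computation — converting battery power into an equivalent fuel rate via the equivalence coefficient, then choosing the split that minimizes the total — can be sketched as follows. The affine engine model, 35% efficiency, and all numbers are invented placeholders, not the thesis's neural adaptive formulation:

```python
# Toy ECMS power split (hypothetical fuel/battery models, illustration only).
def ecms_split(p_req, s=2.5, q_lhv=42.5e6, grid=101):
    """Pick the ICE power minimizing the equivalent fuel rate.

    p_req : requested power (W); the battery supplies p_req - p_ice
    s     : equivalence coefficient weighting battery energy against fuel
    q_lhv : lower heating value of fuel (J/kg)
    """
    def fuel_rate(p_ice):
        # Crude engine model: small idle penalty plus 35%-efficient conversion.
        return 0.0 if p_ice <= 0 else 1e-4 + p_ice / (0.35 * q_lhv)
    best = min((fuel_rate(p_ice) + s * (p_req - p_ice) / q_lhv, p_ice)
               for p_ice in (p_req * k / (grid - 1) for k in range(grid)))
    return best[1]   # optimal engine power (W)
```

With a small equivalence coefficient, battery energy is "cheap" and the split goes all-electric; raising it above the engine's marginal cost shifts the optimum to the engine, which is exactly the behavior the coefficient selection must manage.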
Ph.D. in Electrical Engineering, May 2011
- Title
- KILOMETER-SPACED GNSS ARRAY FOR IONOSPHERIC IRREGULARITY MONITORING
- Creator
- Su, Yang
- Date
- 2017, 2017-05
- Description
-
This dissertation presents automated, systematic data collection, processing, and analysis methods for studying the spatial-temporal properties of Global Navigation Satellite Systems (GNSS) scintillations produced by ionospheric irregularities at high latitudes, using a closely spaced multi-receiver array deployed in the northern auroral zone. The main contributions include 1) automated scintillation monitoring, 2) estimation of drift and anisotropy of the irregularities, 3) error analysis of the drift estimates, and 4) a multi-instrument study of the ionosphere. A radio wave propagating through the ionosphere, which consists of ionized plasma, may suffer rapid signal amplitude and/or phase fluctuations known as scintillation. Caused by non-uniform structures in the ionosphere, intense scintillation can lead to GNSS navigation and high-frequency (HF) communication failures. With specialized GNSS receivers, scintillation can be studied to better understand the structure and dynamics of ionospheric irregularities, which can be parameterized by altitude, drift motion, anisotropy of shape, horizontal spatial extent, and their time evolution. To study the structuring and motion of ionospheric irregularities at the sub-kilometer scale sizes that produce L-band scintillation, a closely spaced GNSS array has been established in the auroral zone at Poker Flat Research Range, Alaska. In addition to routinely collecting low-rate scintillation statistics, the array database provides 100 Hz power and phase data for each channel at the L1/L2C frequencies. In this work, a survey of the seasonal and hourly dependence of L1 scintillation events over the course of a year is discussed. To efficiently and systematically study scintillation events, an automated low-rate scintillation detection routine is established and performed for each day by screening the phase scintillation index.
The spaced-receiver technique is applied to cross-correlated phase and power measurements from the GNSS receivers. Horizontal drift velocities and anisotropy ellipses derived from the estimated parameters are shown for several detected events, demonstrating the possibility of routinely quantifying ionospheric irregularities by their drift and anisotropy. Error analysis on the estimated properties is performed to further evaluate the estimation quality. Uncertainties are quantified by ensemble simulation of noise on the phase signals, carried through the spaced-receiver linear system to the observations; the resulting covariances are then propagated to uncertainties on the drifts. A case study of a single scintillating satellite observed by the array is used to demonstrate the uncertainty estimation process. The distributed array is used in coordination with other measuring techniques such as incoherent scatter radar and optical all-sky imagers. The scintillations are correlated with auroral activity, based on all-sky camera images. Measurements and uncertainty estimates over a 30-minute period are compared to those from a collocated incoherent scatter radar and show good agreement in horizontal drift speed and direction during periods of scintillation, for cases where the characteristic velocity is less than the drift velocity. The methods demonstrated are extensible to other zones and to other GNSS arrays of varying size, receiver number, ground distribution, and transmitter frequency.
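A minimal version of the spaced-receiver idea — recovering drift from the time lag that best aligns two receivers' phase records — might look like the sketch below. The synthetic sine-wave "phase" signal, 100 m baseline, and integer-sample delay are invented for illustration; the dissertation's full analysis also estimates anisotropy and characteristic velocity:

```python
# Two-receiver drift estimate from cross-correlation lag (illustrative sketch).
import numpy as np

def drift_speed(sig_a, sig_b, baseline_m, fs_hz):
    """Estimate drift speed from the lag maximizing the cross-correlation
    of two receivers' demeaned phase time series."""
    a = sig_a - np.mean(sig_a)
    b = sig_b - np.mean(sig_b)
    xc = np.correlate(b, a, mode="full")
    lag = (np.argmax(xc) - (len(a) - 1)) / fs_hz   # seconds b lags behind a
    return baseline_m / lag if lag != 0 else np.inf

fs = 100.0                               # 100 Hz phase data, as on the array
t = np.arange(0, 10, 1 / fs)
pattern = np.sin(2 * np.pi * 0.5 * t)    # stand-in for a drifting phase pattern
sig_a = pattern
sig_b = np.roll(pattern, 20)             # same pattern arrives 0.2 s later
v = drift_speed(sig_a, sig_b, baseline_m=100.0, fs_hz=fs)   # ~500 m/s
```

Real data requires detrending, windowing, and the full multi-baseline linear system to separate true drift from random (characteristic) motion, which is where the ensemble uncertainty analysis enters.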
Ph.D. in Mechanical and Aerospace Engineering, May 2017
- Title
- NETWORK CODING BASED COOPERATIVE PEER-TO-PEER REPAIR IN WIRELESS NETWORKS
- Creator
- Liu, Yu
- Date
- 2012-07-11, 2012-07
- Description
-
Multimedia Broadcast/Multicast Service (MBMS) in cellular networks has emerged as a promising distribution model for rich content, in which a batch of content is broadcast to a large number of peers simultaneously. However, ensuring efficient error-free delivery in such a scenario is a challenge: packet loss is inevitable due to the time-varying nature of wireless transmission, and the server can be overwhelmed by floods of individual retransmission requests from peers. Cooperative Peer-to-Peer (P2P) information repair has been proposed to mitigate packet loss among peers during Base Station (BS) broadcast by allowing peers to exchange information among themselves, rather than asking the BS to rebroadcast the lost packets. Network coding, a fairly recent transmission paradigm offering potential network throughput improvement and high reliability, has been widely recognized as a promising information dissemination approach for wireless networks. In this research, we study network coding based cooperative P2P information repair in wireless networks. We first propose our initial work, a connected dominating set (CDS) based P2P information repair (PPIR) protocol with network coding, which utilizes clustering to minimize the total repair latency and to alleviate the congestion and burden on the BS's downlink channels. Then the decision-making problem for P2P repair with densely distributed nodes is studied and two approaches are provided. Next, the NC-based P2P information repair protocol with a tunable parameter (NC-PIRTP), which evolved from the PPIR protocol, is proposed to further reduce transmission collisions and total repair latency. Finally, P2P information repair in mobile network environments at pedestrian speeds is studied, and three efficient protocols suited to different specific cases are illustrated.
Extensive simulation results are provided for performance evaluation and comparison, demonstrating the effectiveness and efficiency of our proposed protocols in terms of total repair latency. Furthermore, an analytical model is developed, from which theoretical results are derived. These results validate our protocol models and provide useful design guidelines for the cooperative P2P information repair problem in wireless networks.
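The basic benefit of network coding for cooperative repair — one coded transmission repairing several peers at once — can be shown with a two-packet XOR example. The packets and loss pattern are hypothetical; the dissertation's PPIR/NC-PIRTP protocols add clustering, scheduling, and collision avoidance on top of this idea:

```python
# XOR-coded cooperative repair: one broadcast fixes two different losses.
def xor(a: bytes, b: bytes) -> bytes:
    """Bytewise XOR of two equal-length packets."""
    return bytes(x ^ y for x, y in zip(a, b))

p1, p2 = b"packet-one", b"packet-two"

# Peer A missed p2, peer B missed p1; peer C received both.
coded = xor(p1, p2)                 # C broadcasts a single coded packet
recovered_at_A = xor(coded, p1)     # A combines with its copy of p1 -> p2
recovered_at_B = xor(coded, p2)     # B combines with its copy of p2 -> p1
```

Without coding, C would need two separate retransmissions; with coding, one transmission serves both peers, which is the latency saving the protocols exploit at scale.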
Ph.D. in Electrical Engineering, July 2012