Search results
(1,001 - 1,020 of 1,076)
Pages
- Title
- Financialization in the Structured Products Market
- Creator
- Zhu, Lizi
- Date
- 2023
- Description
-
This dissertation studies financialization in the structured products market, which has been undergoing a major transformation in recent years. The market used to serve mainly institutional investors, but as trading platforms powered by fintech companies have emerged, more and more banks have begun to compete in it. The average trade size has also declined significantly, making the market increasingly accessible to retail investors. What factors facilitate the development of this market? What are the economic incentives of issuers and investors? How do issuers compete? What does the future hold for this market? The main findings of this dissertation are that structured products provide utility to retail investors; that as risk aversion increases, an investor increasingly prefers structured products to traditional asset classes; that issuers develop three sources of competitive advantage to act as satisficers; and that the rise of fintech and improvements in financial education are key to opening this market to retail investors.
- Title
- Multimodal Learning and Generation Toward a Multisensory and Creative AI System
- Creator
- Zhu, Ye
- Date
- 2023
- Description
-
We perceive and communicate with the world in a multisensory manner: different information sources are processed and interpreted in sophisticated ways by separate parts of the human brain to constitute a complex, yet harmonious and unified, intelligent system. To endow machines with true intelligence, multimodal machine learning, which incorporates data from modalities including vision, audio, and text, has become an increasingly popular research area with emerging technical advances in recent years. In the context of multimodal learning, the creativity to generate and synthesize novel and meaningful data is a critical criterion for assessing machine intelligence.

As a step toward a multisensory and creative AI system, this thesis studies the problem of multimodal generation from multiple perspectives. First, we analyze different data modalities comprehensively, comparing their natures, their semantics, and the corresponding mainstream technical designs. We then investigate three multimodal generation scenarios, namely text generation from visual data, audio generation from visual data, and visual generation from textual data, with diverse approaches that give an overview of the field. For text generation from visual data, we study a novel multimodal task in which a model must summarize a given video with textual descriptions under the challenging condition that the video can only be partially seen. We propose to supplement the missing visual information via dialogue interaction and introduce the QA-Cooperative network with a dynamic dialogue history update learning mechanism to tackle this challenge. For audio generation from visual data, we present a new multimodal task that aims to generate music for a given silent dance video clip. Unlike most existing conditional music generation works, which generate specific types of mono-instrumental sound using symbolic audio representations (e.g., MIDI) and rely heavily on pre-defined musical synthesizers, we generate dance music in complex styles (e.g., pop, breaking) by employing a Vector-Quantized (VQ) audio representation in our proposed Dance2Music-GAN (D2M-GAN) framework. For visual generation from textual data, we tackle a key desideratum in conditional synthesis: achieving high correspondence between the conditioning input and the generated output of a state-of-the-art generative model, the Diffusion Probabilistic Model. Most existing methods learn such relationships implicitly by incorporating the prior into the variational lower bound during training; in this work, we take a different route and explicitly enhance input-output connections by maximizing their mutual information, achieved through our proposed Conditional Discrete Contrastive Diffusion (CDCD) framework. For each direction, we conduct extensive experiments on multiple multimodal datasets and demonstrate that all of our proposed frameworks effectively and substantially improve task performance in their respective contexts.
- Title
- UTILIZING BACTERIAL INTERACTIONS TO CONTROL PATHOGENIC BIOFILM FORMATION
- Creator
- Fang, Kuili
- Date
- 2020
- Description
-
Many chronic infections involve bacterial biofilms, which are difficult to eliminate with conventional antibiotic treatments. Biofilm formation results from dynamic intra- and inter-species interactions; however, the nature of the molecular interactions between bacteria in multi-species biofilms is not well understood compared with those in mono-species biofilms. The first project (Chapter 3) investigated the ability of probiotic Escherichia coli Nissle 1917 (EcN) to outcompete biofilm formation by pathogens including enterohemorrhagic E. coli (EHEC), Pseudomonas aeruginosa, Staphylococcus aureus, and S. epidermidis. In dual-species biofilms, EcN reduced the EHEC biofilm population 14-fold compared with EHEC mono-species biofilms; the reduction was 1,100-fold for S. aureus and 8,300-fold for S. epidermidis, although EcN did not inhibit P. aeruginosa biofilms. In contrast, commensal E. coli showed no inhibitory effect toward the other bacterial biofilms. We identified that EcN secretes DegP, a bifunctional (protease and chaperone) periplasmic protein, outside the cell and thereby controls other biofilms. Although all three E. coli strains tested in this study expressed degP, only EcN secreted DegP extracellularly. Deleting degP abolished the ability of EcN to inhibit EHEC biofilms, and purified DegP directly repressed EHEC biofilm formation. Hence, probiotic E. coli outcompetes pathogenic biofilms via extracellular DegP activity during dual-species biofilm formation.

Enterohemorrhagic E. coli O157:H7 (EHEC) is a pathogen that causes outbreaks of hemorrhagic colitis. Conventional antibiotic treatment is not recommended for EHEC infection because antibiotics trigger Shiga toxin production by EHEC and aggravate hemolytic-uremic syndrome. EHEC biofilm formation is closely associated with its virulence expression. Previously, we identified that probiotic E. coli Nissle 1917 (EcN) secretes DegP, inhibiting EHEC biofilm formation in dual culture. DegP is a serine protease with both proteolytic and chaperone functions that binds to outer membrane proteins (OMPs) of target cells; however, its extracellular function is unclear. We hypothesized that binding of DegP to EHEC OMPs inhibits EHEC biofilm formation by affecting adhesion or by changing biofilm-related gene regulation. We constructed EHEC mutants lacking ompA, ompC, or ompF, individually and in combination, and assessed their biofilm formation in co-culture with DegP-secreting EcN or with purified DegP added. Double deletion of ompA and ompC decreased single-species EHEC biofilm formation and made the mutant more susceptible to DegP inhibition (about 25-fold, versus about 10-fold for wild-type EHEC biofilm), indicating that OmpA and OmpC contribute more to EHEC biofilm formation than OmpF and may sequester DegP's inhibitory activity. On the other hand, DegP S210A, a mutant lacking protease function, still inhibited wild-type EHEC biofilm, indicating that DegP's biofilm inhibition does not derive from its protease activity. Additionally, EHEC transcription profiles in the presence of DegP showed that DegP up-regulated cellulose production genes (csgD and bcsA) and motility genes (flhD, qseB), all of which were involved in EHEC biofilm inhibition, and down-regulated the Shiga toxin 2 virulence gene (stx2). In addition, DegP promoted EHEC cellulose production and motility, consistent with the transcription profiles; Shiga toxin 2 production will be tested further. This study reveals a new function of EcN-secreted DegP in controlling biofilms and points toward an alternative strategy for controlling biofilm-related infections.

Biofilm formation by the foodborne pathogen Listeria monocytogenes renders these cells highly resistant to current sanitation methods, and probiotics may be a promising approach to efficiently inhibiting Listeria biofilms. In the Chapter 5 study, three Leuconostoc mesenteroides strains of lactic acid bacteria isolated from kimchi were shown to be effective probiotics for inhibiting Listeria biofilm formation. Biofilms of two L. monocytogenes serotypes, 1/2a (ATCC15313) and 4b (ATCC19115), in dual-species culture with each probiotic strain were decreased more than 40-fold compared with single-species Listeria biofilms; for instance, L. monocytogenes ATCC19115 was reduced from 5.4 × 10^6 CFU/cm^2 in single-species biofilms to 1.1 × 10^5 CFU/cm^2 in dual-species biofilms. Notably, one of the strains, L. mesenteroides W51, produced the highest Listeria biofilm inhibition without affecting the growth of L. monocytogenes. Cell-free supernatant from the L. mesenteroides W51 culture, containing large protein molecules (>30 kDa), also inhibited Listeria biofilms. These data indicate that Leuconostoc probiotics can be used to repress L. monocytogenes biofilm contamination on surfaces in food processing facilities.
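The fold reductions quoted in the Listeria study follow directly from the CFU/cm^2 counts; a minimal sketch of the arithmetic, using the ATCC19115 figures from the abstract:

```python
def fold_reduction(single_species_cfu: float, dual_species_cfu: float) -> float:
    """Fold reduction in biofilm population when co-cultured with a probiotic."""
    return single_species_cfu / dual_species_cfu

# L. monocytogenes ATCC19115: 5.4e6 CFU/cm^2 alone vs. 1.1e5 CFU/cm^2
# in dual-species culture with the probiotic strain
fold = fold_reduction(5.4e6, 1.1e5)
print(round(fold, 1))  # 49.1, consistent with the "more than 40-fold" reported
```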
- Title
- A MICROFLUIDIC INTESTINAL-MICROBIOTA PLATFORM TO STUDY DRUG METABOLISM
- Creator
- Wang, Chengyao
- Date
- 2020
- Description
-
The intestine is the primary site where orally administered drugs are metabolized, absorbed, and distributed. The trillions of bacteria that inhabit the intestine influence health and regulate important biochemical factors, such as the activity of enzymes pertinent to drug metabolism. However, this has not been systematically studied, partly because of the challenge of recapitulating the unique and complex intestinal microenvironment, which features (1) the presence of both mammalian and microbial cells and (2) a partitioned oxygenation profile, from the anaerobic lumen to the richly vascularized, oxygenated subepithelial mucosa. This thesis reports the development of a microfluidic device that integrates a membrane synthesized from collagen, a key element of the mucosal basal lamina, with a precisely controlled, partitioned oxygen environment. The device enabled excellent cell viability and long-term function. More importantly, it enabled the coculture of intestinal epithelial cells with aerobic and anaerobic bacteria in the partitioned oxygen environment. These experiments, on the one hand, allowed measurement of cellular oxygen consumption rate under perfusion, which can be used to study microbial regulation of oxidative metabolism in epithelial cells. On the other hand, the device allowed a systematic examination of the role of different gut bacterial strains in regulating factors important in drug metabolism, namely transporters and phase I enzymes. Our studies highlighted the importance of direct communication between intestinal cells and gut bacteria, with the major finding that species-specific differences exist in the regulation of drug metabolism. This work will be useful for (1) discovering novel regulators of drug-metabolizing enzymes, (2) developing new pharmacokinetic models, and (3) advancing precision medicine models for patients.
- Title
- Investigation in the Uncertainty of Chassis Dynamometer Testing for the Energy Characterization of Conventional, Electric and Automated Vehicles
- Creator
- Di Russo, Miriam
- Date
- 2023
- Description
-
For conventional and electric vehicles tested in a standard chassis dynamometer environment, precise regulations exist for evaluating energy performance. However, the regulations include no requirements on the confidence value to associate with the results. As vehicles become more and more efficient to meet stricter regulatory mandates on emissions and on fuel and energy consumption, traditional testing methods may become insufficient to validate these improvements and may need revision. Without information about the accuracy of those procedures, however, adjustments and improvements are not possible, since no frame of reference exists. For connected and automated vehicles there are no standard testing procedures, and researchers are still determining whether current evaluation methods can be extended to test intelligent technologies and which metrics best represent their performance. For these vehicles it is even more important to determine the uncertainty associated with the experimental methods and how it propagates to the final results. The work presented in this dissertation develops a systematic framework for evaluating the uncertainty associated with the energy performance of conventional, electric, and automated vehicles. The framework is based on a known statistical method to determine the uncertainty associated with the different stages and processes involved in experimental testing and to evaluate how the accuracy of each parameter affects the final results. The results demonstrate that the framework can be successfully applied to existing testing methods, provides a trustworthy accuracy value to associate with energy performance results, and can be extended to connected and automated vehicle testing to evaluate how novel experimental methods affect the accuracy and confidence of the outputs. The framework can easily be implemented in an existing laboratory environment to incorporate uncertainty evaluation into the results analyzed at the end of each test, providing a reference for researchers to evaluate the actual benefits of new algorithms and optimization methods and to understand margins for improvement, and for regulators to assess which parameters to enforce to ensure compliance and projected benefits.
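The abstract names only "a known statistical method" for uncertainty evaluation; a common choice for such frameworks is first-order (GUM-style) propagation, in which relative uncertainties of independent inputs in a product model add in quadrature. A minimal sketch, with all numbers illustrative rather than taken from the dissertation:

```python
import math

def combined_relative_uncertainty(rel_uncertainties):
    """First-order propagation for a product/quotient measurement model:
    relative standard uncertainties of independent inputs add in quadrature."""
    return math.sqrt(sum(u ** 2 for u in rel_uncertainties))

# Illustrative dynamometer example: energy E = P * t, with a 3% relative
# uncertainty on measured power and 4% on elapsed time.
u_E = combined_relative_uncertainty([0.03, 0.04])
print(u_E)  # 0.05, i.e. 5% combined relative uncertainty
```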
- Title
- Using Niobium surface encapsulation and Rhenium to enhance the coherence of superconducting devices
- Creator
- Crisa, Francesco
- Date
- 2024
- Description
-
In recent decades, the scientific community has grappled with escalating complexity, necessitating a more advanced tool capable of tackling increasingly intricate simulations beyond the capabilities of classical computers. This tool, known as a quantum computer, features processors composed of individual units termed qubits. While various methods exist for constructing qubits, superconducting circuits have emerged as a leading approach owing to their parallels with semiconductor technology.

In recent years, significant strides have been made in optimizing the geometry and design of qubits. However, the current bottleneck in the performance of superconducting qubits is the presence of defects and impurities in the materials used. Niobium, owing to desirable properties such as a high critical temperature and low kinetic inductance, is the most prevalent superconducting material. Nonetheless, it is encumbered by a relatively thick oxide layer (approximately 5 nm) exhibiting three distinct oxidation states: NbO, NbO$_2$, and Nb$_2$O$_5$. The primary challenge with niobium is the multitude of defects localized within the highly disordered Nb$_2$O$_5$ layer and at the interfaces between the different oxides. In this study, I present an encapsulation strategy aimed at restraining surface oxide growth by depositing a thin layer (5 to 10 nm) of another material in vacuum atop the Nb thin film. This approach exploits the superconducting proximity effect and was successfully employed in the development of Josephson junction devices on Nb during the 1980s.

In the past two years, tantalum and titanium nitride have emerged as promising alternative materials, with breakthrough qubit publications showcasing coherence times five to ten times those achieved with Nb. The focus here is on the fabrication and RF testing of Nb-based qubits with Ta and Au capping layers. With Ta capping, we have achieved a best (not average) T1 decay time of nearly 600 µs, more than a factor of 10 improvement over bare Nb. This establishes the capping layer approach as a significant new direction for the development of superconducting qubits. Concurrently with exploring materials for encapsulation strategies, it is imperative to identify materials conducive to enhancing the performance of superconducting qubits in their own right. Ideal candidates should exhibit a thin, low-loss surface oxide and establish a clean interface with the substrate, thereby minimizing defects and potential sources of loss. Rhenium, characterized by an extremely thin surface oxide (less than 1 nm) and nearly perfect crystal structure alignment with commonly used substrates such as sapphire, emerges as a promising material platform poised to elevate the performance of superconducting qubits.
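T1 in the abstract above is the exponential relaxation time constant of the qubit's excited-state population. A small sketch of what the reported factor-of-10 improvement means in practice; the ~60 µs bare-Nb figure is inferred from the stated ratio, not quoted in the abstract:

```python
import math

def excited_population(t_us: float, t1_us: float) -> float:
    """Excited-state population remaining after time t under exponential T1 decay."""
    return math.exp(-t_us / t1_us)

# Population remaining after a 100 us experiment:
# Ta-capped Nb, best case T1 ~ 600 us, vs. bare Nb assumed ~60 us (10x worse)
print(round(excited_population(100, 600), 3))  # 0.846
print(round(excited_population(100, 60), 3))   # 0.189
```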
- Title
- The Double-edged Sword of Executive Pay: How the CEO-TMT Pay Gap Influences Firm Performance
- Creator
- Haddadian Nekah, Pouya
- Date
- 2024
- Description
-
This study examines the relationship between the chief executive officer (CEO)-top management team (TMT) pay gap and subsequent firm performance. Drawing on tournament theory and equity theory, I argue that the effect of the CEO-TMT pay gap on subsequent firm performance is non-monotonic. Using data from 1995 to 2022 on S&P 1500 US firms, I document an inverted U-shaped relationship, such that an increase in the pay gap leads to an increase in firm performance up to a certain point, after which performance declines. Additionally, multilevel analyses reveal that this curvilinear relationship is moderated by attributes of the TMT and of the industry in which the firm competes. My findings show that firms with higher TMT gender diversity suffer less performance loss from wider pay gaps. Furthermore, when firm executives are paid more than industry norms, or when the firm has a long-tenured CEO, firm performance becomes less sensitive to larger CEO-TMT pay gaps. Lastly, when the firm competes in a masculine industry, firm performance is more negatively affected by larger CEO-TMT pay gaps. Contrary to my expectations, gender-diversity-friendly firm policies did not influence the relationship between the CEO-TMT pay gap and firm performance.
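An inverted U-shape of the kind described above is conventionally tested by adding a squared pay-gap term to the performance regression and checking that its coefficient is negative; the peak then sits at the turning point of the fitted quadratic. A hypothetical sketch with illustrative coefficients, not the dissertation's estimates:

```python
def inverted_u_turning_point(b1: float, b2: float) -> float:
    """For performance = b0 + b1*gap + b2*gap^2, an inverted U requires b2 < 0;
    performance peaks where the derivative b1 + 2*b2*gap equals zero."""
    if b2 >= 0:
        raise ValueError("b2 must be negative for an inverted-U relationship")
    return -b1 / (2 * b2)

# Illustrative fitted coefficients: performance rises with the pay gap,
# peaks at gap = 1.0, and declines thereafter.
peak_gap = inverted_u_turning_point(b1=2.0, b2=-1.0)
print(peak_gap)  # 1.0
```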
- Title
- Improving Localization Safety for Landmark-Based LiDAR Localization System
- Creator
- Chen, Yihe
- Date
- 2024
- Description
-
Autonomous ground robots have gained traction in various commercial applications, with established safety protocols covering subsystem reliability, control algorithm stability, path planning, and localization. This thesis focuses on the localizer, the critical component responsible for determining the vehicle's state (e.g., position and orientation); it assesses compliance with localization safety requirements and proposes methods for enhancing localization safety.

Within the robotics domain, diverse localizers are used, such as scan-matching techniques like the normal distribution transform (NDT), the iterative closest point (ICP) algorithm, probabilistic map methods, and semantic map-based localization. Notably, NDT is a widely adopted standalone laser localization method, prevalent in autonomous driving software such as the Autoware and Apollo platforms. In addition to these localizers, common state estimators include variants of the Kalman filter, particle filter-based estimators, and factor graph-based estimators. Evaluating localization performance typically involves quantifying the estimated state variance of these estimators. While various localizer options exist, this study focuses on those using extended Kalman filters and factor graph methods; unlike NDT and ICP, these approaches guarantee bounds on the estimated state uncertainty and have been extensively researched for integrity monitoring.

Common variance analysis, as applied to sensor readings and state estimators, has limitations: it focuses primarily on non-faulted scenarios under nominal conditions. This proves impractical for real-world scenarios and falls short for safety-critical applications such as autonomous vehicles (AVs). To overcome these limitations, this thesis uses a dedicated safety metric: integrity risk. Integrity risk assesses the reliability of a robot's sensory readings and localization algorithm performance under both faulted and non-faulted conditions. With a proven track record in aviation, integrity risk has recently been applied to robotics, particularly for evaluating the safety of lidar localization. Despite its significance, improving localization integrity through laser landmark manipulation remains underexplored; existing research on robot integrity risk focuses primarily on the vehicles themselves. To comprehensively understand the integrity risk of a lidar-based localization system, an exploration of lidar measurement fault modes is essential, a topic also covered in this thesis.

The primary contributions of this thesis are: a realistic error estimation method, along with a compensatory method, for state estimators in autonomous vehicles navigating with pole-shaped lidar landmark maps; a method for quantifying the risk associated with unmapped associations in urban environments, enhancing the realism of the values provided by the integrity risk estimator; and a novel approach to improving the localization integrity of autonomous vehicles equipped with lidar feature extractors in urban environments through minimal environmental modifications, mitigating the impact of unmapped association faults. Simulation and experimental results are presented and discussed to illustrate the impact of each method, providing further insight into their contributions to localization safety.
- Title
- DEVELOPMENT AND APPLICATION OF A NATIONALLY REPRESENTATIVE MODEL SET TO PREDICT THE IMPACTS OF CLIMATE CHANGE ON ENERGY CONSUMPTION AND INDOOR AIR QUALITY (IAQ) IN U.S. RESIDENCES
- Creator
- Fazli, Torkan
- Date
- 2020
- Description
-
Americans spend most of their time inside residences, where they are exposed to a number of pollutants of both indoor and outdoor origin. Residential buildings also account for over 20% of total primary energy consumption in the U.S. and a similar proportion of greenhouse gas emissions. Moreover, climate change is expected to affect building energy use and indoor air quality (IAQ) through both building design (i.e., via our societal responses to climate change) and building operation (i.e., via changing meteorological and ambient air quality conditions). The overarching objectives of this work are to develop a set of combined building energy and indoor air mass balance models that are generally representative of both the current (i.e., ~2010s) and future (i.e., ~2050s) U.S. residential building stock, and to apply them under both current and future climate scenarios to estimate the impacts of climate change and climate change policies on building energy use, IAQ, and the prevalence of chronic health hazards in U.S. homes. The developed model set includes over 4,000 individual building models with detailed characteristics of both building operation and indoor pollutant physics/chemistry, and is linked to a disability-adjusted life years (DALYs) approach for estimating chronic health outcomes associated with indoor pollutant exposure. The future building stock model incorporates predicted changes in meteorological conditions, ambient air quality, the U.S. housing stock, and population demographics. Using the model set, we predict that total site and source energy consumption for space conditioning in U.S. residences will decrease by ~37% and ~20%, respectively, by mid-century (~2050s) compared to 2012, driven by decreases in heating energy use across the building stock that exceed the coincident increases in cooling energy use in warmer climates. Indoor concentrations of most pollutants of ambient origin are expected to decrease, driven by predicted reductions in ambient concentrations due to tighter emissions controls, with the notable exception of ozone, which is expected to increase under future climate scenarios. This work provides the first known estimates of the potential magnitude of the impacts of expected climate change on building energy use, IAQ, and the prevalence of chronic health hazards in U.S. homes.
- Title
- Data-Driven Modeling for Advancing Near-Optimal Control of Water-Cooled Chillers
- Creator
- Salimian Rizi, Behzad
- Date
- 2023
- Description
-
Hydronic heating and cooling systems are among the most common types of heating and cooling systems installed in older existing buildings,...
Show moreHydronic heating and cooling systems are among the most common types of heating and cooling systems installed in older existing buildings, especially commercial buildings. The results of this study based on the Commercial Building Energy Consumption Survey (CBECS) indicates chillers account for providing cooling in more than half of the commercial office building floorspaces in the U.S. Therefore, to address the need of improving energy efficiency of chillers systems operation, research studies developed different models to investigate different chiller sequencing approaches. Engineering-based models and empirical models are among the popular approaches for developing prediction models. Engineering-based models utilize the physical principles to calculate the thermal dynamics and energy behaviors of the systems and require detailed system information, while the empirical models deploy machine learning algorithms to develop relationships between input and output data. The empirical models compared to the engineering-based approach are more practical in a system’s energy prediction because of accessibility to required data, superiority in model implementation and prediction accuracy. Moreover, selecting near accurate chiller prediction models for the chiller sequencing needs to consider the importance of each input variable and its contribution to the overall performance of a chiller system, as well as the ease of application and computational time. Among the empirical modeling methods, ensemble learning techniques overcome the instability of the learning algorithm as well as improve prediction accuracy and identify input variable importance. Ensemble models combine multiple individual models, often called base or weak models, to produce a more accurate and robust predictive model. Random Forest (RF) and Extra Gradient Boosting (XGBoost) models are considered as ensemble models which offer built-in mechanisms for assessing feature importance. 
These techniques work by measuring how much each feature contributes to the overall predictive performance of the ensemble.In the first objective of this work the frequency of hydronic cooling systems in the U.S. building stock for applying potential energy efficiency measures (EEMs) on chiller plants are explored. Results show that the central chillers inside the buildings are responsible for providing cooling for more than 50% of the commercial buildings with areas greater than 9,000 m2(~100,000 ft2). In addition, hydronic cooling systems contribute to the highest Energy Use Intensity (EUI) among other systems, with EUI of 410.0 kWh/m2 (130.0 kBtu/ft2). Therefore, the results of this objective support developing accurate prediction models to assess the chiller performance parameters as an implication for chiller sequencing control strategies in older existing buildings. The second objective of the dissertation is to evaluate the performance of chiller sequencing strategy for the existing water-cooled chiller plant in a high-rise commercial building and develop highly accurate RF chiller models to investigate and determine the input variables of greatest importance to chiller power consumption predictions. The results show that the average value of mean absolute percentage error (MAPE) and root mean squared error (RMSE) for all three RF chiller models are 5.3% and 30 kW, respectively, for the validation dataset, which confirms a good agreement between measured and predicted values. On the other hand, understanding prediction uncertainty is an important task to confidently reporting smaller savings estimates for different chiller sequencing control strategies. This study aims to quantify prediction uncertainty as a percentile for selecting an appropriate confidence level for chillers models which could lead to better prediction of the peak electricity load and participate in demand response programs more efficiently. 
The results show that by increasing the confidence level from 80% to 90%, the upper and lower bounds of the demand charge differ from the actual value by a factor of 3.3 and 1.7 times greater, respectively. Therefore, it proves the significance of selecting appropriate confidence levels for implementation of chiller sequencing strategy and demand response programs in commercial buildings. As the third objective of this study, the accuracy of these prediction models with respect to the preprocessing, selection of data, noise analysis, effect of chiller control system performance on the recorded data were investigated. Therefore, this study attempts to investigate the impacts of different data resolution, level of noise and data smoothing methods on the chiller power consumption and chiller COP prediction based on time-series Extra Gradient Boosting (XGBoost) models. The results of applying the smoothing methods indicate that the performance of chiller COP and the chiller power consumption models have improved by 2.8% and 4.8%, respectively. Overall, this study would guide the development of data-driven chiller power consumption and chiller COP prediction models in practice.
- Title
- Scalable Indexing and Search in High-End Computing Systems
- Creator
- Orhean, Alexandru Iulian
- Date
- 2023
- Description
-
Rapid advances in digital sensors, networks, storage, and computation, coupled with decreasing costs, are leading to the creation of huge collections of data. Increasing data volumes, particularly in science and engineering, have resulted in the widespread adoption of parallel and distributed file systems for storing and accessing data efficiently. However, as file system sizes and the amount of data “owned” by users grow, it is increasingly difficult to discover and locate data amongst the petabytes stored. While much research effort has focused on methods to efficiently store and process data, there has been relatively little focus on methods to efficiently explore, index, and search data using the same high-performance storage and compute systems. Users of large file systems either invest significant resources to implement specialized data catalogs for accessing and searching data, or resort to software tools that were not designed to exploit modern hardware. While it is now trivial to quickly discover websites among the billions accessible on the Internet, it remains surprisingly difficult for users to search for data on large-scale storage systems. We initially explored the prospect of using existing search engine building blocks (e.g., CLucene) to integrate search into a high-performance distributed file system (e.g., FusionFS) by proposing and building the FusionDex system, a distributed indexing and query model for unstructured data. We found indexing performance to be orders of magnitude slower than the theoretical speeds achievable with raw storage input and output, and sought to investigate a new clean-slate design for high-performance indexing and search. We proposed the SCANNS indexing framework to address the problem of efficiently indexing data in high-end systems, characterized by many-core architectures with multiple NUMA nodes and multiple PCIe NVMe storage devices.
We designed SCANNS as a single-node framework that can serve as a building block for implementing high-performance indexed search engines, with a software architecture that is scalable by design. The indexing pipeline is exposed and allows easy modification and tuning, enabling SCANNS to saturate storage, memory, and compute resources on different hardware. The proposed indexing framework uses a novel tokenizer and inverted index design to achieve substantial improvements in both indexing performance and search latency. Given the large amounts and variety of data found in scientific large-scale file systems, there is a clear need to bridge the gap between various data representations and to build a more uniform search space. ScienceSearch is a search infrastructure for scientific data that uses machine learning to automate the creation of metadata tags from different data sources, such as published papers, proposals, images, and file system structure. ScienceSearch is a production system deployed on a container service platform at NERSC and provides search over data obtained from NCEM. We conducted a performance evaluation of the ScienceSearch infrastructure focusing on scalability trends, in order to better understand the implications of performing search over an index built from the generated tags. Drawing on the insights gained from SCANNS and the performance evaluation of ScienceSearch, we explored the problems of efficiently building and searching persistent indexes that do not fit into main memory. The SCIPIS framework builds on SCANNS and further optimizes the inverted index design and indexing pipeline, exposing new tuning parameters that allow the user to further adapt the index to the characteristics of the input data. The proposed framework allows the user to quickly build a persistent index and to efficiently run TF-IDF queries over it.
We evaluated SCIPIS over three kinds of datasets (logs, scientific data, and file system metadata) and showed that it achieves high indexing and search performance and good scalability across all datasets.
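As a rough illustration of the core data structure named above (the mini-corpus and function names here are hypothetical, and SCANNS/SCIPIS are far more sophisticated), an inverted index with TF-IDF query scoring can be sketched as:

```python
import math
from collections import defaultdict

# Hypothetical mini-corpus standing in for indexed documents
docs = {
    "log1": "error disk failure disk",
    "log2": "error network timeout",
    "log3": "disk healthy",
}

# Build the inverted index: term -> {doc_id: term frequency}
index = defaultdict(dict)
for doc_id, text in docs.items():
    for term in text.split():
        index[term][doc_id] = index[term].get(doc_id, 0) + 1

def tfidf_query(query, index, n_docs):
    """Score documents by the summed TF-IDF of the query terms."""
    scores = defaultdict(float)
    for term in query.split():
        postings = index.get(term, {})
        if not postings:
            continue
        idf = math.log(n_docs / len(postings))  # rarer terms weigh more
        for doc_id, tf in postings.items():
            scores[doc_id] += tf * idf
    return sorted(scores.items(), key=lambda kv: -kv[1])

print(tfidf_query("disk error", index, len(docs)))  # "log1" ranks first
```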
- Title
- Computational Genomics of Human-Infecting Microsporidia Species from the Genus Encephalitozoon
- Creator
- Mascarenhas dos Santos, Anne Caroline
- Date
- 2023
- Description
-
Microsporidia are obligate intracellular pathogens classified as category B priority pathogens by the National Institute of Allergy and Infectious Diseases (NIAID), a division of the National Institutes of Health (NIH). Microsporidian species from the genus Encephalitozoon infect humans and can cause encephalitis, keratoconjunctivitis, or enteric diseases in both immunocompromised and immunocompetent individuals. The main treatment for disseminated microsporidiosis available in the United States is albendazole, an anthelmintic benzimidazole that is also used to treat fungal infections, but species from the Encephalitozoonidae have already shown signs of resistance against this drug. The Encephalitozoonidae harbor highly specialized pathogens with the smallest known eukaryotic genomes; Encephalitozoon cuniculi features a genome of only 2.9 Mbp coding for a proteome of roughly 2,000 proteins. Pathogens are in a perpetual race to adapt to host defenses. This adaptation is often driven by gene duplication, recombination, and/or mutation, and owing to the potentially disruptive nature of duplication and recombination, much of this evolution takes place outside conserved genomic loci. As such, genes involved in virulence and drug resistance in pathogens are often localized in the (sub)telomeres rather than in chromosome cores. The small and streamlined nature of microsporidian genomes makes them excellent candidates for investigating, from a genomic perspective, the adaptation of pathogens to host defenses, the evolution of their virulence, and the development of their drug resistance. However, microsporidian genomes are highly divergent at the DNA sequence level, and the ones sequenced so far are incomplete and lack the telomeres.
This high level of sequence divergence hinders standard sequence homology-based functional annotation, blurring our understanding of what these organisms are capable of metabolically. The gap in our knowledge of what is encoded in microsporidian telomeres could lead to an underestimation of their pathogenic capabilities. Therefore, deciphering the functions of unknown proteins in microsporidian genomes and unraveling the content of their telomeres is important to fully assess their potential for adaptability to host defenses and predisposition to drug resistance. Likewise, a better understanding of the genetic diversity in microsporidia will help assess the extent to which host-pathogen interactions are shaping the adaptation of these parasites to humans. As observed in the COVID-19 pandemic, genetic diversity can influence the speed at which pathogens adapt to host defenses and thus pose a major challenge to disease control. The development of strategies for controlling microsporidiosis outbreaks will likely benefit from the work performed in this thesis. As part of my PhD work, I investigated the virulence and host-adaptation capabilities of human-infecting microsporidia species from the genus Encephalitozoon with computational genomic approaches. This work included: 1) using structural homology to infer the functions of unknown proteins from the microsporidia proteome; 2) sequencing complete telomere-to-telomere genomes of three distinct Encephalitozoon spp. (E. cuniculi, E. hellem, and E. intestinalis) to determine the genetic makeup of their telomeres and better understand the extent of their diversity; and 3) assessing the intraspecies genetic diversity that exists between Encephalitozoon species.
- Title
- Eating disorder support group utilization: Associations with psychological health and eating disorder psychopathology among support group attendees
- Creator
- Murray, Matthew F.
- Date
- 2023
- Description
-
Individuals with eating disorders (EDs) report psychosocial impairments that may persist beyond ED symptom remission, suggesting a need to examine ED treatment-adjunctive services that foster psychosocial health. One promising resource is support groups, as evidence across medical and psychiatric illnesses shows associations between group utilization and wellbeing. However, virtually no literature has examined ED-specific support groups and psychosocial health, and it is also unknown how use of supportive services relates to ED symptoms. The present study examined associations between past-month ED support group attendance and participation frequency, psychosocial health indices, and ED symptoms. A total of 215 participants who attended weekly virtual clinician-moderated ED support groups completed measures of psychosocial health, internalized stigma of mental illness, psychosocial impairment from an ED, specific types of social support elicited in group, and ED psychopathology. Adjusting for past-month ED treatment, Benjamini-Hochberg-corrected partial correlation analyses indicated that more frequent attendance was negatively related to body dissatisfaction, purging, excessive exercise, and negative attitudes toward obesity, and positively related to social support. More frequent verbal and chat participation was positively related to emotional and informational support and social companionship. Chat participation was additionally negatively related to excessive exercise and negative attitudes toward obesity. Results suggest that utilizing and participating in clinician-moderated ED support groups could provide an outlet for ED symptom management and solicitation of social support. Findings highlight areas for further consideration in the delivery of, and future research on, ED support groups.
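The multiple-comparison correction named above is the standard Benjamini-Hochberg procedure; a minimal sketch (with hypothetical p-values, not the study's) is:

```python
def benjamini_hochberg(p_values, q=0.05):
    """Return the indices of hypotheses rejected at false discovery rate q."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    # Find the largest rank k with p_(k) <= (k/m) * q, then reject the k smallest
    threshold_rank = 0
    for rank, idx in enumerate(order, start=1):
        if p_values[idx] <= rank / m * q:
            threshold_rank = rank
    return sorted(order[:threshold_rank])

# Hypothetical p-values from several partial correlations
pvals = [0.001, 0.012, 0.034, 0.20, 0.45]
print(benjamini_hochberg(pvals, q=0.05))  # -> [0, 1]
```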
- Title
- Optimization methods and machine learning model for improved projection of energy market dynamics
- Creator
- Saafi, Mohamed Ali
- Date
- 2023
- Description
-
Since signing the legally binding Paris Agreement, governments have been striving to fulfill the decarbonization mission. To reduce carbon emissions from the transportation sector, countries around the world have created well-defined new energy vehicle development strategies that are further expanding into hydrogen vehicle technologies. In this study, we develop the Transportation Energy Analysis Model (TEAM) to investigate the impact of CO2 emissions policies on the future of the automotive industry. On the demand side, TEAM models consumer choice, considering the impacts of technology cost, energy cost, refueling/charging availability, and consumer travel patterns. On the supply side, the supply module simulates technology supply by the auto industry with the objective of maximizing industry profit under the constraints of government policies. We therefore apply different optimization methods to guarantee reaching the optimal automotive industry response each year up to 2050. From developing an upgraded differential evolution algorithm to applying response surface methodology to simplify the objective function, the goal is to enhance optimization performance and efficiency relative to the standard genetic algorithm. Moreover, we investigate TEAM's robustness through a sensitivity analysis that identifies the key parameters of the model. Finally, based on the key sensitive parameters that drive the automotive industry, we develop a neural network that learns the market penetration model and predicts market shares in a fraction of the time by bypassing the total cost of ownership analysis and profit optimization. The central motivating hypothesis of this thesis is that modern optimization and modeling methods can be applied to obtain a computationally efficient, industry-relevant model to predict optimal market sales shares for light-duty vehicle technologies.
Indeed, a robust market penetration model optimized with sophisticated methods is a crucial tool for automotive companies, as it quantifies consumer behavior and identifies the optimal way to maximize profits by highlighting the vehicle technologies in which they could invest. In this work, we show that TEAM reaches the global solution when optimizing not only industry profits but also optimized blends of alternative fuels such as synthetic fuels. The running time of the model has been substantially reduced, from hours using the genetic algorithm, to minutes using differential evolution, to milliseconds using the neural network.
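The differential evolution algorithm referred to above is a standard population-based optimizer; a minimal DE/rand/1/bin sketch on a toy objective (a quadratic stand-in, not TEAM's profit function; all parameter values are illustrative) is:

```python
import random

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9, iters=200, seed=1):
    """Minimal DE/rand/1/bin minimizer over box-constrained variables."""
    random.seed(seed)
    dim = len(bounds)
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    cost = [f(x) for x in pop]
    for _ in range(iters):
        for i in range(pop_size):
            # Mutation: combine three distinct random members (not i)
            a, b, c = random.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = random.randrange(dim)
            trial = []
            for j in range(dim):
                if random.random() < CR or j == j_rand:  # binomial crossover
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                else:
                    v = pop[i][j]
                lo, hi = bounds[j]
                trial.append(min(max(v, lo), hi))  # clip to bounds
            tc = f(trial)
            if tc <= cost[i]:  # greedy selection
                pop[i], cost[i] = trial, tc
    best = min(range(pop_size), key=lambda i: cost[i])
    return pop[best], cost[best]

# Minimum of the toy objective is at (3, -1)
x, fx = differential_evolution(lambda v: (v[0] - 3) ** 2 + (v[1] + 1) ** 2,
                               [(-10, 10), (-10, 10)])
print([round(c, 3) for c in x], round(fx, 6))
```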
- Title
- Development of validation guidelines for high pressure processing to inactivate pressure resistant and matrix-adapted Escherichia coli O157:H7, Salmonella spp. and Listeria monocytogenes in treated juices
- Creator
- Rolfe, Catherine
- Date
- 2020
- Description
-
The fruit and vegetable juice industry has shown a growing trend toward minimally processed juices. A frequent technology used in the functional juice division is cold pressure, which refers to the application of high pressure processing (HPP) at low temperatures as a mild treatment to inactivate foodborne pathogens instead of thermal pasteurization. HPP juice manufacturers are required to demonstrate a 5-log reduction of the pertinent microorganism to comply with FDA Juice HACCP. The effectiveness of HPP for pathogen inactivation depends on processing parameters, juice composition, and packaging, as well as the bacterial strains included in validation studies. Unlike for thermal pasteurization, there is currently no consensus on validation study approaches for bacterial strain selection or preparation, and no agreement on which HPP process parameters contribute to overall process efficacy. The purpose of this study was to develop validation guidelines for HPP inactivation and post-HPP recovery of pressure-resistant and matrix-adapted Escherichia coli O157:H7, Salmonella spp., and Listeria monocytogenes in juice systems. Ten strains of each microorganism were prepared under three growth conditions (neutral, cold-adapted, or acid-adapted) and assessed for barotolerance or sensitivity. Pressure-resistant and pressure-sensitive strains of each were used to evaluate HPP inactivation at increasing pressure levels (200–600 MPa) in two juice matrices (apple and orange). A 75-day shelf-life analysis was conducted on HPP-treated juices inoculated with acid-adapted resistant strains of each pathogen and examined for inactivation and recovery. Individual strains of E. coli O157:H7, Salmonella spp., and L. monocytogenes demonstrated significant (p < 0.05) differences in reduction levels in response to pressure treatment in high-acid environments. E. coli O157:H7 was the most barotolerant of the three microorganisms across multiple matrices.
Bacterial screening identified the pressure-resistant strains E. coli O157:H7 TW14359, Salmonella Cubana, and L. monocytogenes MAD328, and the pressure-sensitive strains E. coli O157:H7 SEA13B88, S. Anatum, and L. monocytogenes CDC. HPP inactivation in the juice matrices (apple and orange) confirmed acid adaptation as the most advantageous of the growth conditions. In the shelf-life analyses, HPP-treated juices reached the required 5-log reduction immediately following pressure treatment, after 24 h in cold storage, and after 4 days of cold storage for L. monocytogenes MAD328, S. Cubana, and E. coli O157:H7 TW14359, respectively. Recovery of L. monocytogenes in orange juice was observed with prolonged cold storage. These results suggest that the preferred inoculum preparation for HPP validation studies is the use of acid-adapted, pressure-resistant strains. At 586–600 MPa, critical inactivation (5-log reduction) was achieved during post-HPP cold storage, suggesting that sufficient HPP lethality is reached at elevated pressure levels with a subsequent cold holding duration.
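The 5-log reduction criterion used throughout is simply the base-10 logarithm of the ratio of viable counts before and after treatment; for example (with hypothetical plate counts, not the study's data):

```python
import math

def log_reduction(n_initial, n_final):
    """Log10 reduction in viable counts (e.g., CFU/mL) after treatment."""
    return math.log10(n_initial / n_final)

# Hypothetical counts before and after HPP treatment
print(log_reduction(2.0e7, 2.0e2))  # -> 5.0, meeting the 5-log criterion
```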
- Title
- Sense of Community and Virtual Community Among People with Autism Spectrum Conditions
- Creator
- Rafajko, Sean I
- Date
- 2020
- Description
-
Individuals with autism spectrum conditions (ASC) experience poorer quality of life (QOL) and psychological well-being. Sense of community (SOC) has been studied in the general population as well as in other disability populations and found to be associated with better QOL outcomes. However, SOC has never been examined quantitatively in the ASC population. Additionally, many communities exist online, and recent research shows that people may feel a sense of virtual community (SOVC), which may be particularly important to the ASC population: internet use is higher in this population, and people with ASC report positive experiences with online communication and relationships. The purpose of this study was to examine SOC and SOVC in the ASC population. A sample of 60 participants with ASC completed an online survey about their communities, SOC, SOVC, QOL, and psychological distress, and their results were compared with a sample of 60 general population participants (N = 120). Results indicated that people with ASC reported participating in a greater number of smaller relational communities than the general population sample. There were no significant differences between the ASC and general population samples in levels of SOC or SOVC, suggesting that the different relationship of the ASC group with their communities does not reduce the experience of SOC. SOC significantly contributed to QOL but not to psychological distress. The magnitude of the relationships of SOC and SOVC with QOL did not differ between those with ASC and those in the comparison sample. Findings from this study help frame the different ways in which people with ASC interact with their communities and inform individual- and community-level interventions.
- Title
- LOW-DOSE CARDIAC SPECT USING POST-FILTERING, DEEP LEARNING, AND MOTION CORRECTION
- Creator
- Song, Chao
- Date
- 2019
- Description
-
Single photon emission computed tomography (SPECT) is an important technique in use today for the detection and evaluation of coronary artery disease. Image quality in cardiac SPECT can be adversely affected by cardiac motion and respiratory motion, both of which can lead to motion blur and apparent non-uniformity of the heart wall. In this thesis, we investigate image de-noising algorithms and motion correction methods for improving image quality in cardiac SPECT at both standard and reduced dose. First, we investigate a spatiotemporal post-processing approach based on a non-local means (NLM) filter for suppressing the noise in cardiac-gated SPECT images. Since low-dose studies have gained increased attention in cardiac SPECT in recent years owing to the potential radiation risk, we then investigate a novel de-noising method for low-dose cardiac-gated SPECT using a three-dimensional residual convolutional neural network (CNN) to further improve image quality at reduced dose. Furthermore, we investigate respiratory motion correction to reduce the negative effect of respiratory-binned acquisitions and assess the benefit of this approach at both standard and reduced dose using simulated acquisitions. Inspired by the success of respiratory correction, we investigate the potential benefit of cardiac motion correction for improving the detectability of perfusion defects. Finally, to combine the benefits of the two types of motion correction, dual-gated data acquisitions are implemented, wherein the acquired list-mode data are binned into a number of intervals within the cardiac and respiratory cycles according to the electrocardiography (ECG) signal and the amplitude of the respiratory motion.
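The non-local means filter named above averages samples whose surrounding patches look similar, rather than merely nearby samples; a minimal 1D sketch (signal and parameters are illustrative, not the thesis's spatiotemporal implementation) is:

```python
import math

def nlm_1d(signal, patch=1, search=5, h=0.5):
    """1D non-local means: replace each sample by a weighted average of
    samples in a search window, weighted by patch similarity."""
    n = len(signal)
    out = []
    for i in range(n):
        num = den = 0.0
        for j in range(max(0, i - search), min(n, i + search + 1)):
            # Sum of squared differences between the patches around i and j
            d2 = 0.0
            for k in range(-patch, patch + 1):
                pi = min(max(i + k, 0), n - 1)  # clamp at the borders
                pj = min(max(j + k, 0), n - 1)
                d2 += (signal[pi] - signal[pj]) ** 2
            w = math.exp(-d2 / (h * h))  # similar patches get large weights
            num += w * signal[j]
            den += w
        out.append(num / den)
    return out

# A noisy step signal: noise is smoothed while the step edge is preserved
noisy = [0.0, 0.1, -0.1, 0.05, 1.0, 0.95, 1.1, 1.05]
print([round(v, 2) for v in nlm_1d(noisy)])
```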
- Title
- Predictive energy efficient control framework for connected and automated vehicles in heterogeneous traffic environments
- Creator
- Vellamattathil Baby, Tinu
- Date
- 2023
- Description
-
Within the automotive industry, there is significant emphasis on enhancing fuel efficiency and mobility and on reducing emissions. In this context, connected and automated vehicles (CAVs) represent a significant advancement, as they can optimize their acceleration patterns to improve fuel efficiency. However, when CAVs coexist with human-driven vehicles (HDVs) on the road, suboptimal conditions arise that adversely affect CAV performance. This research analyzes the automation capabilities of production vehicles to identify scenarios where their performance is suboptimal, and proposes a merge-aware modification of the adaptive cruise control (ACC) method for highway merging situations. The proposed algorithm addresses the issue of sudden gap and velocity changes relative to the preceding vehicle, thereby reducing substantial braking during merging events and improving energy efficiency. This research also presents a data-driven model for predicting the velocity and position of the preceding vehicle, as well as a robust model predictive control (MPC) strategy that optimizes fuel consumption while accounting for prediction inaccuracies. Another focus of this research is a novel suggestion-based control framework for interactive mixed traffic environments that leverages the emerging connectivity between vehicles and with infrastructure. It is based on MPC and optimizes the fuel efficiency of CAVs in heterogeneous or mixed traffic environments (i.e., including both CAVs and HDVs). In this suggestion-based control framework, the CAVs provide non-binding velocity and lane-change suggestions that the HDVs can follow, improving the fuel efficiency of both the CAVs and the HDVs. To achieve this, the host CAV must devise its own fuel-efficient control solution and determine the recommendations to convey to its preceding HDV.
It is assumed that the CAVs can communicate with the HDVs via Vehicle-to-Vehicle (V2V) communication, while Signal Phase and Timing (SPaT) information is accessed via Vehicle-to-Infrastructure (V2I) communication. These velocity suggestions remain constant for a predefined period, allowing the driver to adjust their speed accordingly. The suggestions are also non-binding, i.e., a driver can choose not to follow the suggested velocity. For this control framework to function, we present a velocity prediction model, based on experimental data, that captures the response of an HDV to different suggested velocities, and a robust approach to ensure collision avoidance. The velocity prediction's accuracy is validated against experimental data (from a table-top driving simulator), and the results are presented. At low CAV penetration, a CAV needs to provide suggestions to multiple surrounding HDVs, and incorporating the suggestions to all the HDVs as decision variables in the optimal control problem can be computationally expensive. Hence, a suggestion-based hierarchical energy-efficient control framework is also proposed, in which a CAV accounts for the interactive nature of the environment by jointly planning its own trajectory and evaluating the suggestions to the surrounding HDVs. Joint planning requires solving the problem in the joint state and action space, and this research develops a Monte Carlo Tree Search (MCTS)-based trajectory planning approach for the CAV. Since the joint action and state space grows exponentially with the number of agents and can be computationally expensive, an adaptive action space is proposed that prunes the action space of each agent so that actions resulting in unsafe trajectories are eliminated. The trajectory planning approach is followed by a low-level model predictive control (MPC)-based motion controller, which tracks the reference trajectory in an optimal fashion.
Simulation studies demonstrate the proposed control strategy’s efficacy compared to existing baseline methods.
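The action-space pruning described above can be illustrated with a simple rollout check: candidate accelerations whose predicted trajectory violates a safe following gap to the preceding vehicle are discarded. All names, parameter values, and the constant-speed-lead assumption below are hypothetical simplifications, not the dissertation's planner:

```python
def prune_actions(actions, ego_pos, ego_vel, lead_pos, lead_vel,
                  dt=0.5, horizon=6, min_gap=5.0, time_headway=1.5):
    """Keep only accelerations whose constant-acceleration rollout maintains
    a safe gap (min_gap + time_headway * ego speed) to a constant-speed lead."""
    safe = []
    for a in actions:
        x, v, lx = ego_pos, ego_vel, lead_pos
        ok = True
        for _ in range(horizon):
            v = max(0.0, v + a * dt)   # ego speed cannot go negative
            x += v * dt
            lx += lead_vel * dt
            if lx - x < min_gap + time_headway * v:
                ok = False             # gap violated: eliminate this action
                break
        if ok:
            safe.append(a)
    return safe

# Closing fast on a slower lead: only the braking actions survive
candidates = [-2.0, -1.0, 0.0, 1.0, 2.0]  # m/s^2
print(prune_actions(candidates, ego_pos=0.0, ego_vel=20.0,
                    lead_pos=45.0, lead_vel=15.0))  # -> [-2.0, -1.0]
```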
- Title
- Parking Demand Forecasting Using Asymmetric Discrete Choice Models with Applications
- Creator
- Zhang, Ji
- Date
- 2023
- Description
-
Using discrete choice models to forecast travelers' parking location choices has been a branch of parking demand research for many years. The most commonly used discrete choice models have fairly simple mathematical expressions, such as the probit and logit models. Applying simple models helps relieve the computational burden of parameter estimation in practice, but the cost is the unwanted properties of the classic models, such as the “symmetry property”, which we argue is often undesirable. To some extent, the symmetry property limits the shapes the choice probability curves can take, making model fitting technically less flexible. This study addresses the following question: “Can discrete choice models with the asymmetry property outperform classic models with the symmetry property in forecasting travelers' parking location choices?” The contributions of this study include: (1) providing a new perspective of using asymmetric discrete choice models to explain and forecast individuals' parking location choices; and (2) completing the travel demand forecasting process from choices of the destination zone centroid down to the parking location, enabling parking choice forecasting. This provides a generalized framework to calibrate and validate asymmetric discrete choice models, with field-observed, parking facility-specific arrival profile data integrated into a large-scale, high-fidelity regional travel demand model. Further, an experimental study is conducted to compare the performance of the proposed asymmetric discrete choice models within the parking demand forecasting framework. The results suggest that asymmetric discrete choice models outperform symmetric discrete choice models such as the logit models for individual parking choice modeling, owing largely to their greater flexibility in parameter fitting and training on the available dataset.
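The symmetry property at issue is that for links like the logit, F(-x) = 1 - F(x), so the choice probability curve approaches 0 and 1 at the same rate; an asymmetric link such as the complementary log-log does not satisfy this. A small numeric check (illustrative only; the dissertation's asymmetric models may use different links):

```python
import math

def logistic(x):
    """Symmetric logit link: satisfies F(-x) == 1 - F(x)."""
    return 1.0 / (1.0 + math.exp(-x))

def cloglog(x):
    """Asymmetric complementary log-log link: approaches 1 faster than 0."""
    return 1.0 - math.exp(-math.exp(x))

x = 1.2
print(logistic(-x) + logistic(x))  # symmetry: sums to exactly 1
print(cloglog(-x) + cloglog(x))    # asymmetry: does not sum to 1
```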
- Title
- Large-Signal Transient Stability and Control of Inverter-Based Resources
- Creator
- Wang, Duo
- Date
- 2024
- Description
-
Renewable generation, including solar photovoltaic (PV) systems, type 3 and 4 wind turbine generation systems (WTG), and battery energy storage systems (BESS), as well as high voltage direct current (HVDC) and flexible alternating current transmission system (FACTS) devices, is being connected to the bulk power system (BPS) at increasing penetration levels via power electronic (PE) converters as the interface. These are referred to as inverter-based resources (IBRs) at the transmission and sub-transmission levels, or distributed energy resources (DERs) at the distribution level. An IBR is almost entirely defined by its control algorithms and is more prone to instability under large disturbances because it lacks the intrinsic synchronizing characteristics and mechanical inertia of a conventional synchronous machine (SM) and typically has a smaller capacity. These considerations motivate this dissertation to study the large-signal transient stability and control of IBRs for reliable grid integration and rapid grid transformation. Among large-signal stability analysis methods, Lyapunov-based methods are the fundamental theory used to characterize stability issues with analytical solutions, although non-Lyapunov methods can also be very helpful. A main difficulty hindering the widespread adoption of Lyapunov stability analysis is finding a proper Lyapunov function candidate for a higher-dimensional nonlinear system. The Port-Hamiltonian (PH) nonlinear control theory is explored in this dissertation as a promising theoretical framework for addressing this challenge. A PH-based tracking and robust control method is proposed to facilitate the practical application of the PH framework in IBR controls.
In addition, since typical grid-forming (GFM) IBR control with a first-order low-pass filter (LPF) block usually involves a control saturation function for protection under abnormal operating conditions, which raises anti-windup issues in practical implementation, a PH-based bounded LPF (PH-BLPF) control is proposed to incorporate this behavior into the large-signal PH interconnection modeling framework while preserving robust tracking Lyapunov stability, with improved transient dynamic performance and stability margin. Moreover, specific real-world transient synchronization stability issues, such as large grid voltage fault disturbances, are studied. Finally, to meet recent emerging IBR grid code requirements, such as current magnitude limitation, grid support functions, and the fault recovery capability of GFM voltage source converters (GFM-VSCs), a virtual impedance-based current-limiting GFM control with enhanced transient stability and grid support is proposed.
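For reference, the Port-Hamiltonian structure invoked above is conventionally written in input-state-output form (generic notation, not specific to this dissertation's models):

```latex
\dot{x} = \big(J(x) - R(x)\big)\,\frac{\partial H}{\partial x}(x) + g(x)\,u,
\qquad
y = g(x)^{\top}\,\frac{\partial H}{\partial x}(x),
\qquad
\dot{H} = -\left(\frac{\partial H}{\partial x}\right)^{\!\top}\! R(x)\,\frac{\partial H}{\partial x} + y^{\top}u \;\le\; y^{\top}u,
```

where J(x) = -J(x)ᵀ is the interconnection matrix, R(x) ⪰ 0 the dissipation matrix, and H the Hamiltonian (stored energy). The energy balance on the right is why the framework eases Lyapunov analysis: H itself is a natural Lyapunov function candidate, since dissipation only removes energy and the supply rate yᵀu bounds what the input can add.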