Search results (9,881–9,900 of 9,967)
- Title: Defense-in-Depth for Cyber-Secure Network Architectures of Industrial Control Systems
- Creator: Arnold, David James
- Date: 2024
- Description: Digitization and modernization efforts have yielded greater efficiency, safety, and cost-savings for Industrial Control Systems (ICS). To achieve these gains, the Internet of Things (IoT) has become an integral component of network infrastructures. However, integrating embedded devices expands the network footprint and weakens cyberattack resilience. Additionally, legacy devices and improper security configurations are weak points for ICS networks. As a result, ICSs are a valuable target for hackers searching for monetary gains or planning to cause destruction and chaos. Furthermore, recent attacks demonstrate a heightened understanding of ICS network configurations within hacking communities. A Defense-in-Depth strategy is the solution to these threats, applying multiple security layers to detect, interrupt, and prevent cyber threats before they cause damage. Our solution detects threats by deploying an Enhanced Data Historian for Detecting Cyberattacks. By introducing Machine Learning (ML), we enhance cyberattack detection by fusing network traffic and sensor data. Two computing models are examined: 1) a distributed computing model and 2) a localized computing model. The distributed computing model is powered by Apache Spark, introducing redundancy for detecting cyberattacks. In contrast, the localized computing model relies on a network traffic visualization methodology for efficiently detecting cyberattacks with a Convolutional Neural Network. These applications are effective in detecting cyberattacks with nearly 100% accuracy. Next, we prevent eavesdropping by applying Homomorphic Encryption (HE) for Secure Computing. HE cryptosystems are a unique family of public key algorithms that permit operations on encrypted data without revealing the underlying information. Through the Microsoft SEAL implementation of the CKKS algorithm, we explored the challenges of introducing Homomorphic Encryption to real-world applications. Despite these challenges, we implemented two ML models: 1) a Neural Network and 2) Principal Component Analysis. Finally, we hinder attackers by integrating a Cyberattack Lockdown Network with Secure Ultrasonic Communication. When a cyberattack is detected, communication for safety-critical elements is redirected through an ultrasonic communication channel, establishing physical network segmentation from compromised devices. We present proof-of-concept work in transmitting video via ultrasonic communication over an aluminum rectangular bar. Within industrial environments, existing piping infrastructure presents an optimal solution for cost-effectively preventing eavesdropping. The effectiveness of these solutions is discussed within the scope of the nuclear industry.
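The abstract above notes that CKKS permits arithmetic on encrypted data without revealing the underlying values. Below is a minimal sketch of that property using the TenSEAL Python wrapper around Microsoft SEAL's CKKS scheme rather than the author's own pipeline; the parameters and sensor values are illustrative assumptions.

```python
import tenseal as ts

# CKKS context: the polynomial modulus degree and coefficient modulus sizes
# are illustrative choices, not the parameters used in the dissertation.
context = ts.context(
    ts.SCHEME_TYPE.CKKS,
    poly_modulus_degree=8192,
    coeff_mod_bit_sizes=[60, 40, 40, 60],
)
context.global_scale = 2 ** 40
context.generate_galois_keys()

# Encrypt a hypothetical sensor reading vector.
readings = [21.5, 22.1, 20.8, 23.4]
enc_readings = ts.ckks_vector(context, readings)

# Compute a weighted sum directly on the ciphertext (e.g., one neuron's
# pre-activation); the computation never sees the decrypted data.
weights = [0.25, 0.25, 0.25, 0.25]
enc_result = enc_readings.dot(weights)

# Only the key holder can decrypt; CKKS results are approximate.
print(enc_result.decrypt())  # roughly [21.95]
```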
- Title: Large Language Model Based Machine Learning Techniques for Fake News Detection
- Creator: Chen, Pin-Chien
- Date: 2024
- Description: With advanced technology, it’s widely recognized that everyone owns one or more personal devices. Consequently, people are evolving into content creators on social media or streaming platforms, sharing their personal ideas regardless of their education or expertise level. Distinguishing fake news is becoming increasingly crucial. However, recent research only presents comparisons of fake news detection between one or more models across different datasets. In this work, we applied Natural Language Processing (NLP) techniques with Naïve Bayes and DistilBERT machine learning methods, combining and augmenting four datasets. The results show that the balanced accuracy is higher than the average reported in recent studies. This suggests that our approach holds promise for improving fake news detection in the era of widespread content creation.
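As a concrete reference for the Naïve Bayes side of the comparison described above, here is a minimal baseline sketch; the dataset file and column names are placeholders rather than the four datasets combined in the thesis, and the DistilBERT fine-tuning step is omitted.

```python
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import balanced_accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB

# Hypothetical combined dataset with 'text' and binary 'label' (1 = fake).
df = pd.read_csv("combined_fake_news.csv")
X_train, X_test, y_train, y_test = train_test_split(
    df["text"], df["label"], test_size=0.2, stratify=df["label"], random_state=42
)

# TF-IDF features feed a multinomial Naive Bayes classifier.
vectorizer = TfidfVectorizer(stop_words="english", max_features=50_000)
clf = MultinomialNB()
clf.fit(vectorizer.fit_transform(X_train), y_train)

pred = clf.predict(vectorizer.transform(X_test))
print("balanced accuracy:", balanced_accuracy_score(y_test, pred))
```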
- Title: Investigation in the Uncertainty of Chassis Dynamometer Testing for the Energy Characterization of Conventional, Electric and Automated Vehicles
- Creator: Di Russo, Miriam
- Date: 2023
- Description: For conventional and electric vehicles tested in a standard chassis dynamometer environment, precise regulations exist for the evaluation of their energy performance. However, the regulations do not include requirements on the confidence value to associate with the results. As vehicles become more and more efficient to meet stricter regulatory mandates on emissions, fuel, and energy consumption, traditional testing methods may become insufficient to validate these improvements and may need revision. Without information about the accuracy associated with the results of those procedures, however, adjustments and improvements are not possible, since no frame of reference exists. For connected and automated vehicles, there are no standard testing procedures, and researchers are still in the process of determining whether current evaluation methods can be extended to test intelligent technologies and which metrics best represent their performance. For these vehicles it is even more important to determine the uncertainty associated with these experimental methods and how it propagates to the final results. The work presented in this dissertation focuses on the development of a systematic framework for the evaluation of the uncertainty associated with the energy performance of conventional, electric, and automated vehicles. The framework is based on a known statistical method to determine the uncertainty associated with the different stages and processes involved in the experimental testing and to evaluate how the accuracy of each parameter involved impacts the final results. The results demonstrate that the framework can be successfully applied to existing testing methods, provides a trustworthy value of accuracy to associate with the energy performance results, and can be easily extended to connected-automated vehicle testing to evaluate how novel experimental methods impact the accuracy and the confidence of the outputs. The framework can easily be implemented in an existing laboratory environment to incorporate the uncertainty evaluation among the results analyzed at the end of each test, and it provides a reference for researchers to evaluate the actual benefits of new algorithms and optimization methods and understand margins for improvement, and for regulators to assess which parameters to enforce to ensure compliance and projected benefits.
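The "known statistical method" for combining per-stage uncertainties is not named in the abstract; the standard first-order (GUM-style) propagation formula is the likely form, shown here as an assumption rather than the author's exact framework.

```latex
% Combined standard uncertainty of a result y = f(x_1, ..., x_N),
% assuming uncorrelated input quantities (first-order GUM propagation):
u_c(y) = \sqrt{\sum_{i=1}^{N} \left( \frac{\partial f}{\partial x_i} \right)^{2} u^{2}(x_i)}
```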
- Title: The Double-edged Sword of Executive Pay: How the CEO-TMT Pay Gap Influences Firm Performance
- Creator: Haddadian Nekah, Pouya
- Date: 2024
- Description: This study examines the relationship between the chief executive officer (CEO) and top management team (TMT) pay gap and consequent firm performance. Drawing on tournament theory and equity theory, I argue that the effect of the CEO-TMT pay gap on consequent firm performance is non-monotonic. Using data from 1995 to 2022 from S&P 1500 US firms, I explicate an inverted U-shaped relationship, such that an increase in the pay gap leads to an increase in firm performance up to a certain point, after which it declines. Additionally, multilevel analyses reveal that this curvilinear relationship is moderated by attributes of the TMT and the industry in which the firm competes. My findings show that firms with higher TMT gender diversity suffer lower performance loss due to wider pay gaps. Furthermore, when firm executives are paid more compared to industry norms, or when the firm has a long-tenured CEO, firm performance becomes less sensitive to larger CEO-TMT pay gaps. Lastly, when the firm competes in a masculine industry, firm performance is more negatively affected by larger CEO-TMT pay gaps. Contrary to my expectations, firm gender-diversity friendly policies failed to influence the CEO-TMT pay gap-firm performance relationship.
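The inverted U-shaped relationship described above is typically tested with a quadratic specification; a hedged sketch of such a model follows (variable names are illustrative, not the author's exact specification).

```latex
% Firm performance as a quadratic function of the CEO-TMT pay gap;
% an inverted U corresponds to \beta_1 > 0 and \beta_2 < 0, with the
% turning point at Gap* = -\beta_1 / (2\beta_2).
\mathrm{Performance}_{it} = \beta_0 + \beta_1\,\mathrm{PayGap}_{it}
  + \beta_2\,\mathrm{PayGap}_{it}^{2}
  + \gamma^{\top}\mathrm{Controls}_{it} + \varepsilon_{it}
```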
- Title: Using Niobium surface encapsulation and Rhenium to enhance the coherence of superconducting devices
- Creator: Crisa, Francesco
- Date: 2024
- Description: In recent decades, the scientific community has grappled with escalating complexity, necessitating a more advanced tool capable of tackling increasingly intricate simulations beyond the capabilities of classical computers. This tool, known as a quantum computer, features processors composed of individual units termed qubits. While various methods exist for constructing qubits, superconducting circuits have emerged as a leading approach, owing to their parallels with semiconductor technology. In recent years, significant strides have been made in optimizing the geometry and design of qubits. However, the current bottleneck in the performance of superconducting qubits lies in the presence of defects and impurities within the materials used. Niobium, owing to its desirable properties, such as high critical temperature and low kinetic inductance, stands out as the most prevalent superconducting material. Nonetheless, it is encumbered by a relatively thick oxide layer (approximately 5 nm) exhibiting three distinct oxidation states: NbO, NbO$_2$, and Nb$_2$O$_5$. The primary challenge with niobium lies in the multitude of defects localized within the highly disordered Nb$_2$O$_5$ layer and at the interfaces between the different oxides. In this study, I present an encapsulation strategy aimed at restraining surface oxide growth by depositing a thin layer (5 to 10 nm) of another material in vacuum atop the Nb thin film. This approach exploits the superconducting proximity effect, and it was successfully employed in the development of Josephson junction devices on Nb during the 1980s. In the past two years, tantalum and titanium nitride have emerged as promising alternative materials, with breakthrough qubit publications showcasing coherence times five to ten times superior to those achieved in Nb. The focus will be on the fabrication and RF testing of Nb-based qubits with Ta and Au capping layers. With Ta capping, we have achieved a best T1 (not average) decay time of nearly 600 µs, which is more than a factor of 10 improvement over bare Nb. This establishes the unique capping-layer approach as a significant new direction for the development of superconducting qubits. Concurrently with the exploration of materials for encapsulation strategies, identifying materials conducive to enhancing the performance of superconducting qubits is imperative. Ideal candidates should exhibit a thin, low-loss surface oxide and establish a clean interface with the substrate, thereby minimizing defects and potential sources of losses. Rhenium, characterized by an extremely thin surface oxide (less than 1 nm) and nearly perfect crystal structure alignment with commonly used substrates such as sapphire, emerges as a promising material platform poised to elevate the performance of superconducting qubits.
- Title: Improving Localization Safety for Landmark-Based LiDAR Localization System
- Creator: Chen, Yihe
- Date: 2024
- Description: Autonomous ground robots have gained traction in various commercial applications, with established safety protocols covering subsystem reliability, control algorithm stability, path planning, and localization. This thesis specifically delves into the localizer, a critical component responsible for determining the vehicle’s state (e.g., position and orientation), assessing compliance with localization safety requirements, and proposing methods for enhancing localization safety. Within the robotics domain, diverse localizers are utilized, such as scan-matching techniques like normal distribution transformations (NDT), the iterative closest point (ICP) algorithm, probabilistic map methods, and semantic map-based localization. Notably, NDT stands out as a widely adopted standalone laser localization method, prevalent in autonomous driving software such as the Autoware and Apollo platforms. In addition to the mentioned localizers, common state estimators include variants of the Kalman filter, particle filter-based, and factor graph-based estimators. The evaluation of localization performance typically involves quantifying the estimated state variance for these state estimators. While various localizer options exist, this study focuses on those utilizing extended Kalman filters and factor graph methods. Unlike methods such as the NDT and ICP algorithms, extended Kalman filter and factor graph based approaches guarantee bounding of the estimated state uncertainty and have been extensively researched for integrity monitoring. Common variance analysis, employed for sensor readings and state estimators, has limitations, primarily focusing on non-faulted scenarios under nominal conditions. This approach proves impractical for real-world scenarios and falls short for safety-critical applications like autonomous vehicles (AVs). To overcome these limitations, this thesis utilizes a dedicated safety metric: integrity risk. Integrity risk assesses the reliability of a robot’s sensory readings and localization algorithm performance under both faulted and non-faulted conditions. With a proven track record in aviation, integrity risk has recently been applied to robotics applications, particularly for evaluating the safety of lidar localization. Despite the significance of improving localization integrity risk through laser landmark manipulation, this remains an underexplored territory. Existing research on robot integrity risk primarily focuses on the vehicles themselves. To comprehensively understand the integrity risk of a lidar-based localization system, as addressed in this thesis, an exploration of lidar measurement fault modes is essential, a topic covered in this thesis. The primary contributions of this thesis include: a realistic error estimation method for state estimators in autonomous vehicles navigating using pole-shape lidar landmark maps, along with a compensatory method; a method for quantifying the risk associated with unmapped associations in urban environments, enhancing the realism of values provided by the integrity risk estimator; and a novel approach to improve the localization integrity of autonomous vehicles equipped with lidar feature extractors in urban environments through minimal environmental modifications, mitigating the impact of unmapped association faults. Simulation results and experimental results are presented and discussed to illustrate the impact of each method, providing further insights into their contributions to localization safety.
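The abstract above contrasts extended Kalman filter and factor-graph estimators, whose bounded state uncertainty underpins the integrity-risk analysis. A generic EKF measurement update in NumPy is sketched below as background; it is not the author's landmark-specific formulation, and all matrices are placeholders.

```python
import numpy as np

def ekf_update(x, P, z, h, H, R):
    """Generic EKF measurement update.

    x : prior state estimate (n,)
    P : prior state covariance (n, n)
    z : measurement vector (m,)
    h : measurement function, h(x) -> (m,)
    H : measurement Jacobian evaluated at x (m, n)
    R : measurement noise covariance (m, m)
    """
    y = z - h(x)                              # innovation
    S = H @ P @ H.T + R                       # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)            # Kalman gain
    x_post = x + K @ y
    P_post = (np.eye(len(x)) - K @ H) @ P     # posterior covariance (bounded)
    return x_post, P_post

# Toy example: 2D position corrected by a direct position measurement of a
# single mapped pole landmark (illustrative values only).
x0 = np.array([1.0, 2.0])
P0 = np.eye(2) * 0.5
z = np.array([1.2, 1.9])
x1, P1 = ekf_update(x0, P0, z, h=lambda x: x, H=np.eye(2), R=np.eye(2) * 0.1)
```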
- Title: Intraoperative Assessment of Surgical Margins in Head And Neck Cancer Resection Using Time-Domain Fluorescence Imaging
- Creator: Cleary, Brandon M.
- Date: 2023
- Description: Rapid and accurate determination of surgical margin depth in fluorescence-guided surgery has been a difficult issue to overcome, leading to over- or under-resection of cancerous tissues and follow-up treatments such as ‘call-back’ surgery and chemotherapy. Current techniques utilizing direct measurement of tumor margins in frozen-section pathology are slow, which can prevent surgeons from acting on information before a patient is sent home. Other fluorescence techniques require the measurement of margins via captured images that are overlaid with fluorescence data. This method is flawed, as measuring depth from captured images loses spatial information. Intensity-based fluorescence techniques utilizing tumor-to-background ratios do not decouple the effects of concentration from the depth information acquired. Thus, it is necessary to perform an objective measurement to determine the depths of surgical margins. This thesis focuses on the theory, device design, simulation development, and overall viability of time-domain fluorescence imaging as an alternative method of determining surgical margin depths. Characteristic regressions were generated using a thresholding method on acquired time-domain fluorescence signals, which were used to convert time-domain data to a depth value. These were applied to an image space to generate a depth map of a modelled tissue sample. All modeling was performed on homogeneous media using Monte Carlo simulations, providing high accuracy at the cost of increased computational time. In practice, the imaging process should be completed within a span of under 20 minutes for a full tissue sample, rather than 20 minutes for a single slice of the sample. This thesis also explores the effects of different thresholding levels on the accuracy of depth determination, as well as the precautions to be taken regarding hardware limitations and signal noise.
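To make the thresholding idea above concrete, the sketch below locates the time at which a time-domain fluorescence signal crosses a set fraction of its peak and maps that crossing time to depth through a pre-fitted characteristic regression; the signal, threshold level, and regression coefficients are all hypothetical, not values from the thesis.

```python
import numpy as np

def threshold_time(t, signal, level=0.5):
    """Return the first time at which the signal exceeds `level` x peak."""
    idx = np.argmax(signal >= level * signal.max())
    return t[idx]

# Hypothetical characteristic regression: depth (mm) as a polynomial in the
# threshold-crossing time (ns), fitted beforehand from Monte Carlo runs.
depth_from_time = np.poly1d([2.1, -0.3])   # depth = 2.1 * t_cross - 0.3 (illustrative)

t = np.linspace(0, 10, 500)                 # ns
signal = np.exp(-(t - 3.0) ** 2 / 0.8)      # stand-in fluorescence pulse
t_cross = threshold_time(t, signal, level=0.5)
print("estimated margin depth (mm):", depth_from_time(t_cross))
```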
- Title: Independence and Graphical Models for Fitting Real Data
- Creator: Cho, Jason Y.
- Date: 2023
- Description: Given some real-life dataset where the attributes take on categorical values, with a corresponding r(1) × r(2) × … × r(m) contingency table with nonzero rows or nonzero columns, we will be testing the goodness-of-fit of various independence models to the dataset using a variation of Metropolis-Hastings that uses Markov bases as a tool to get a Monte Carlo estimate of the p-value. This variation of Metropolis-Hastings can be found in Algorithm 3.1.1. Next we consider the problem: "out of all possible undirected graphical models, each associated to some graph with m vertices, that we test to fit on our dataset, which one best fits the dataset?" Here, the m attributes are labeled as vertices of the graph. We would have to conduct 2^(mC2) goodness-of-fit tests, since there are 2^(mC2) possible undirected graphs on m vertices. Instead, we consider a backwards-selection likelihood-ratio test algorithm. We start with the complete graph G = K(m) and call the corresponding undirected graphical model ℳ(G) the parent model. Then, for each edge e in E(G), we repeatedly apply the likelihood-ratio test to compare the relative fit of the child model ℳ(G−e) against the parent model ℳ(G), where ℳ(G−e) ⊆ ℳ(G). More details on this iterative process can be found in Algorithm 4.1.3. For our dataset, we use the alcohol dataset found at https://www.kaggle.com/datasets/sooyoungher/smoking-drinking-dataset, where the four attributes we use are "Gender" (male, female), "Age", "Total cholesterol (mg/dL)", and "Drinks alcohol or not?". After testing the goodness-of-fit of three independence models corresponding to the independence statements "Gender vs. Drink or not?", "Age vs. Drink or not?", and "Total cholesterol vs. Drink or not?", we found that the data came from a distribution in the two independence models corresponding to "Age vs. Drink or not?" and "Total cholesterol vs. Drink or not?". And after applying the backwards-selection likelihood-ratio method to the alcohol dataset, we found that the data came from a distribution in the undirected graphical model associated to the complete graph minus the edge {"Total cholesterol", "Drink or not?"}.
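A minimal sketch of the Markov-basis random walk used to estimate an exact-test p-value, in the spirit of the Algorithm 3.1.1 referenced above; the 2×2 table, the single basis move, and the chi-square statistic are illustrative stand-ins, not the thesis's actual algorithm or data.

```python
import numpy as np
from math import lgamma

def log_cond_prob(table):
    """Log of the conditional (hypergeometric) probability of a table with
    fixed margins, up to a constant: proportional to 1 / prod(n_ij!)."""
    return -sum(lgamma(n + 1) for n in table.ravel())

def chi2_stat(table):
    expected = np.outer(table.sum(1), table.sum(0)) / table.sum()
    return ((table - expected) ** 2 / expected).sum()

def mh_pvalue(observed, moves, n_steps=20000, rng=np.random.default_rng(0)):
    """Metropolis-Hastings walk over tables with the same margins, stepping
    by Markov-basis moves; returns a Monte Carlo p-value for independence."""
    current = observed.copy()
    obs_stat = chi2_stat(observed)
    exceed = 0
    for _ in range(n_steps):
        move = moves[rng.integers(len(moves))] * rng.choice([-1, 1])
        proposal = current + move
        if (proposal >= 0).all():
            log_ratio = log_cond_prob(proposal) - log_cond_prob(current)
            if np.log(rng.random()) < log_ratio:
                current = proposal
        exceed += chi2_stat(current) >= obs_stat
    return exceed / n_steps

# 2x2 example: the only Markov-basis move (up to sign) adjusts a 2x2 minor.
observed = np.array([[12, 5], [7, 16]])
basis = [np.array([[1, -1], [-1, 1]])]
print("estimated p-value:", mh_pvalue(observed, basis))
```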
- Title: Development of a Model To Investigate Inflammation Using Peripheral Blood Mononucleated Cells
- Creator: Geevarghese Alex, Peter
- Date: 2023
- Description: Modern society faces a major health risk in high-calorie, diet-related postprandial inflammation. Clinical studies have demonstrated that uncontrolled consumption of energy-dense food provokes a post-meal inflammatory response that can contribute to chronic disease. We aimed to identify the causes of postprandial inflammation in response to various dietary treatments and to provide a model demonstrating them, making use of in vivo and in vitro techniques and statistical analysis. The resulting model would help us design specific treatments to minimize inflammation in response to diet. In addition to identifying important dietary components, the model also facilitates the design of individualized interventions to reduce inflammation, thereby improving long-term health outcomes. We aim to understand the clinical observations of diet-induced postprandial inflammation at the molecular level and to contribute to reducing the impact of chronic inflammatory disorders associated with postprandial inflammation.
- Title: Utilizing Concurrent Data Accesses for Data-Driven and AI Applications
- Creator: Lu, Xiaoyang
- Date: 2024
- Description: In the evolving landscape of data-driven and AI applications, the imperative for reducing data access delay has never been more critical, especially as these applications increasingly underpin modern daily life. Traditionally, architectural optimizations in computing systems have concentrated on data locality, utilizing temporal and spatial locality to enhance data access performance by maximizing data and data block reuse. However, as poor locality is a common characteristic of data-driven and AI applications, utilizing data access concurrency emerges as a promising avenue to optimize the performance of evolving data-driven and AI application workloads. This dissertation advocates utilizing concurrent data accesses to enhance performance in data-driven and AI applications, addressing a significant research gap in the integration of data concurrency for performance improvement. It introduces a suite of innovative case studies, including a prefetching framework that dynamically adjusts aggressiveness based on data concurrency, a cache partitioning framework that balances application demands with concurrency, a concurrency-aware cache management framework to reduce costly cache misses, a holistic cache management framework that considers both data locality and concurrency to fine-tune decisions, and an accelerator design for sparse matrix multiplication that optimizes adaptive execution flow and incorporates concurrency-aware cache optimizations. Our comprehensive evaluations demonstrate that the implemented concurrency-aware frameworks significantly enhance the performance of data-driven and AI applications by leveraging data access concurrency. Specifically, our prefetch framework boosts performance by 17.3%, our cache partitioning framework surpasses locality-based approaches by 15.5%, and our cache management framework achieves a 10.3% performance increase over prior works. Furthermore, our holistic cache management framework enhances performance further, achieving a 13.7% speedup. Additionally, our sparse matrix multiplication accelerator outperforms existing accelerators by a factor of 2.1. As optimizing data locality in data-driven and AI applications becomes increasingly challenging, this dissertation demonstrates that utilizing concurrency can still yield significant performance enhancements, offering new insights and actionable examples for the field. This dissertation not only bridges the identified research gap but also establishes a foundation for further exploration of the full potential of concurrency in data-driven and AI applications and architectures, aiming at fulfilling the evolving performance demands of modern and future computing systems.
- Title: Health and Well-Being Benefits of Different Types of Urban Green Spaces (UGS): A Cross-Sectional Study of Communities in Chicago, U.S.
- Creator: Kang, Liwen
- Date: 2023
- Description: There are three main interrelated areas of focus in this doctoral research related to urban green spaces (UGS): general well-being, mental health, and physical health. In this study, these three different health aspects were analyzed separately. The data on these three health outcomes were collected from the Healthy Chicago Survey (HCS), an annual telephone survey that interviewed adults in Chicago, U.S., based on randomly selected addresses. Urban green spaces have been associated with better health and well-being. They provide sites for physical activity, buffer air and noise pollution, and alleviate thermal discomfort. Urban green spaces also promote social interaction and increase social cohesion. However, research is limited on the health benefits of different types of UGS exposure. This research aimed to reveal the associations between the provision of different UGS types and urban residents’ general, mental, and physical health in Chicago, the third-largest city in the U.S. Urban green space data were collected from the National Land Cover Database (NLCD), the Meter-Scale Urban Land Cover (MULC), and the Chicago Park District (CPD). Different types of UGS were obtained, namely 1) the percent tree canopy cover (TCC) from the first database; 2) the percentage of trees and the percentage of grass from the second database; and 3) the number of parks, park area, and the percentage of park area from the third database. Using hierarchical and logistic regression models that controlled for a range of confounding factors (age, gender, race, education level, employment status, and poverty level), this study assessed which type of UGS affects general well-being, mental health, and physical health, respectively. The results indicated that increased park area was significantly associated with better perceived general health; a higher percentage of TCC was significantly associated with a lower level of psychological distress (PD); and an increased percentage of park area and an increased number of parks were associated with lower odds of being obese. Two micro-scaled on-site observations were conducted in the Avalon Park community and the Loop community to analyze other UGS characteristics besides quantity and availability. Other characteristics of UGS, such as quality of facilities, attractiveness, and maintenance, are suggested to be taken into consideration in future studies. The study highlights that different UGS types have various impacts on the general, mental, and physical health of urban residents. By providing scientific evidence, this study may help policymakers, urban planners, landscape architects, and other related professionals make informed decisions on maximizing the health benefits of UGS and achieving social equity. The findings of this study may be applied to other metropolitan cities.
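As background for the logistic models described above, a minimal sketch of estimating the odds of obesity against park availability while controlling for confounders; the data file and column names are hypothetical placeholders, not the Healthy Chicago Survey variables.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical merged survey + land-cover dataset (one row per respondent).
df = pd.read_csv("chicago_ugs_health.csv")

# Obesity (0/1) modeled on park availability plus demographic confounders.
model = smf.logit(
    "obese ~ parks_count + pct_park_area + age + C(gender) + C(race)"
    " + C(education) + C(employment) + C(poverty_level)",
    data=df,
).fit()
print(model.summary())

# Odds ratios are easier to interpret than raw logit coefficients.
print(np.exp(model.params))
```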
- Title: Resolvent analysis of turbulent flows: Extensions, improvements and applications
- Creator: Lopez-Doriga Costales, Barbara
- Date: 2024
- Description: This thesis presents several advances in both physics-based and data-driven modeling of turbulent fluid flows. In particular, the present thesis focuses on resolvent analysis, a physics-based framework that identifies the coherent structures that are most amplified by the Navier-Stokes equations when they are linearized about a known turbulent mean flow, via a singular value decomposition (SVD) of a discretized operator. This method has proven to effectively capture energetically-relevant features observed in various flows. However, it has some shortcomings that the present work intends to alleviate. First, the original formulation of resolvent analysis is restricted to statistically-stationary or time-periodic mean flows. To expand the applicability of this framework, this thesis presents a spatiotemporal variant of resolvent analysis that is able to account for time-varying systems. Moreover, sparsity (which manifests in localization) is also incorporated into the analysis through the addition of an l1-norm penalization term to the optimization associated with the SVD. This allows for the identification of energetically-relevant coherent structures that correspond to spatio-temporally localized amplification mechanisms, for flows with either a time-varying or a stationary mean. The high computational cost associated with the discretization and analysis of a large discretization of the mean-linearized Navier-Stokes operator represents the second drawback of resolvent analysis. As a second contribution, this thesis provides an analytic form of resolvent analysis for planar flows based on wavepacket pseudomode theory, avoiding the numerical computations required in the original framework. The third contribution focuses on the characterization of the energetically-dominant coherent structures that arise in turbulent flow traveling through straight ducts with square and rectangular cross-sections. First, resolvent analysis is applied to predict the coherent structures that arise in this flow, and to study the sensitivity of this methodology to the secondary mean flow components that display a distinct pattern near the duct corners. Next, a data-driven causality analysis is performed to understand the physical mechanisms involved in the evolution of coherent structures near the duct corners. To do this, a nonlinear Granger causality analysis method is developed and applied to proper orthogonal decomposition coefficients of direct numerical simulation data, revealing that the structures associated with the secondary velocity components are behind the formation and translation of the near-wall and near-corner streamwise structures. A general discussion and future prospects are presented at the end of this thesis.
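For readers unfamiliar with the operator at the core of the abstract above, the standard input-output form of resolvent analysis is summarized below; the notation follows common usage in the resolvent-analysis literature and is an assumption rather than the thesis's exact formulation.

```latex
% Linearize the Navier-Stokes equations about the turbulent mean and treat
% the nonlinear terms as a forcing f. For a statistically stationary mean,
% each temporal frequency \omega satisfies
\hat{u}(\omega) \;=\; \underbrace{\left( -\mathrm{i}\omega I - \mathcal{L}_{U} \right)^{-1}}_{\mathcal{H}(\omega),\ \text{the resolvent}} \hat{f}(\omega),
% and the SVD  \mathcal{H}(\omega) = \sum_j \psi_j \sigma_j \phi_j^{*}
% ranks response modes \psi_j (coherent structures) by their gain \sigma_j.
% The sparse variant adds an l1 penalty to this optimization to favor
% spatially or temporally localized modes.
```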
- Title: Resolvent Analysis of Turbulent Flow over Compliant Surfaces: Optimization Methods and Stability Considerations.
- Creator: Lapanderie, Kilian Pierre Lucien
- Date: 2024
- Description: This thesis delves into the manipulation of turbulence properties through innovative compliant surface designs. Turbulence, known for its unpredictable fluid movements, presents substantial challenges across engineering disciplines, particularly in optimizing system efficiency and minimizing energy losses. This research explores the potential of compliant surfaces to control and mitigate the adverse effects of turbulent flow, thereby enhancing the performance and reliability of engineering systems. Employing the resolvent analysis method, this work investigates the interaction between turbulent flows and surfaces capable of dynamic adaptation. The study evaluates the impact of these surfaces on turbulence suppression through the application of both space-dependent and space-independent compliance models, where the compliance model is characterised by an admittance, which represents the relationship between the instantaneous surface pressure and surface velocity. This approach allows for a nuanced understanding of how different surface properties can influence the behavior of turbulent flows. A significant contribution of this thesis is the comprehensive stability analysis conducted to assess the implications of compliant surfaces for the linear stability of the dynamical system. By examining the eigenvalues of the mean-linearized system, the research identifies the conditions under which compliant surfaces may induce or mitigate instabilities within turbulent flows. This analysis is pivotal in developing compliant surface designs that not only reduce turbulence-induced energy losses but also ensure the stability of the flow, a critical consideration for practical engineering applications. The findings of this thesis offer valuable insights into the role of surface compliance in turbulence control, paving the way for further research and the development of advanced engineering solutions. Through a detailed investigation of the interactions between compliant surfaces and turbulent flows, this work contributes to the broader field of fluid dynamics and underscores the potential of innovative surface designs in achieving more efficient and sustainable engineering systems.
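The admittance mentioned above couples wall pressure and wall-normal velocity; in Fourier space this boundary condition is commonly written as below. Sign and normalization conventions vary, so treat this as an illustrative assumption rather than the thesis's definition.

```latex
% Compliant-wall boundary condition in wavenumber-frequency space:
% the wall-normal velocity responds to the instantaneous wall pressure
% through a (possibly space-dependent) admittance Y.
\hat{v}_{w}(k_x, k_z, \omega) \;=\; Y(k_x, k_z, \omega)\, \hat{p}_{w}(k_x, k_z, \omega)
```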
- Title: Utility of a Low-Coverage Genome Assembly for Discovery of Genes Associated with Pyrethroid Resistance in Smicronyx Fulvus LeConte
- Creator: Markiv, Paulina Patrycja
- Date: 2023
- Description: The red sunflower seed weevil (RSSW) is a major insect pest of cultivated and wild common sunflowers in the Great Plains of North America. The extent of the sunflower damage due to RSSW infestation is too great for the natural sunflower defense mechanisms to protect the agriculture industry from losses. Pyrethroids are the only type of insecticide designated for the control of RSSW; however, instances of pyrethroid insecticide ineffectiveness against RSSW have been reported annually to entomologists at South Dakota State University since 2017. The biological bases of insecticide resistance are unknown, but common mechanisms associated with pyrethroid resistance include general detoxification mechanisms driven by cytochrome P450s (CYP450s) as well as mutations in the pyrethroid target, voltage-gated sodium channels (VGSCs). The goal of this study was to determine whether the computational analysis of a low-coverage genome assembly is sufficient to identify and characterize genes associated with insecticide resistance, which could contribute to pest control research efforts. By using a low-coverage genome assembly, RNA-Seq data, and bioinformatic tools, 40 complete and 33 partial gene models coding for CYP450s as well as a partial gene model coding for a VGSC have been identified in the genome of RSSW. Twenty-seven mutation sites, previously associated with pyrethroid resistance in other insects, have been identified in the VGSC gene of RSSW. The low-coverage genome proved to be a sufficient resource for preliminary studies of gene identification, which could bring significant knowledge to subsequent research focusing on insecticide resistance and pest control.
- Title: Measurement and Control of Beam Energy at the Fermilab 400 MeV Transfer Line
- Creator: Mwaniki, Matilda W.
- Date: 2023
- Description: The Linac is the first machine in the accelerator chain at Fermilab, where particles are accelerated from 35 keV to 400 MeV and travel to the Booster, where they are stripped of their extra electrons to become protons. Tuning the Linac is performed using diagnostics to ensure stable intensity and energy while minimizing uncontrolled particle loss. I have been revisiting diagnostics in the Linac in order to understand their signals and to ensure their data are reliable. I revisited the Beam Loss Monitors (BLMs) to establish confidence in the loss data. For confidence in the energy data there were two approaches. The first approach was time-of-flight measurements using Beam Position Monitors (BPMs) and a beam velocity stripline pick-up that provides beam phase data. The second approach used the relation between beam position data from BPMs and dispersion values from a MAD-X simulation to calculate energy. Our goal, after understanding the data from the Linac diagnostics and finding the data reliable, is to control the Linac parameters using Machine Learning techniques to increase the reliability and quality of the beam delivered from the Linac.
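The two energy measurements mentioned above reduce to standard relativistic kinematics and linear beam optics; a hedged worked form follows (the actual Linac analysis details may differ).

```latex
% Time-of-flight method: beam velocity from the flight time \Delta t over a
% known distance L between two BPMs, then kinetic energy from the Lorentz
% factor (m is the rest mass of the accelerated ion):
\beta = \frac{L}{c\,\Delta t}, \qquad
\gamma = \frac{1}{\sqrt{1-\beta^{2}}}, \qquad
E_{k} = (\gamma - 1)\, m c^{2}
% Dispersion method: the transverse position offset at a BPM is
% \Delta x = D\,\delta, with D the dispersion from the MAD-X model and
% \delta = \Delta p / p the relative momentum (hence energy) deviation.
```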
- Title: Examination of Listeria monocytogenes survival in refrigerated chopped hard-boiled eggs and deli salads containing this ingredient
- Creator: Marathe, Aishwarya Nagesh
- Date: 2024
- Description: Peeled hard-boiled eggs (HBEs) are widely favored by both consumers and food services due to their convenience. These HBEs are often chopped and incorporated into various dishes such as deli salads. However, recent recalls of hard-boiled eggs have brought attention to the risk of contamination with Listeria monocytogenes. Prepared HBEs are typically subjected to antibacterial treatment to maintain product safety and quality. Citric acid is a common antibacterial used in the food industry to treat HBEs. Previous research has determined that a 2% citric acid treatment is effective against L. monocytogenes on whole HBEs. This study examined the efficacy of citric acid on the reduction of L. monocytogenes on chopped HBEs and in deli salads containing chopped HBEs. HBEs were treated with 2% citric acid or water (untreated) by submersion for 24 h at 5°C. HBEs were dried for 10 min, inoculated with a 4-strain cocktail of rifampicin-resistant L. monocytogenes at 1 (low-level inoculation) or 4 log CFU/HBE (high-level inoculation), and allowed to dry for 10 min. Low-level inoculated HBEs were chopped and stored at 5, 10, or 15°C for 28 d. High-level inoculated HBEs were chopped and stored at 5, 10, and 25°C for 14 d. Low-level inoculated HBEs were also chopped and incorporated into potato, tuna, chicken, or macaroni salad at a 1:6 ratio (HBE to other ingredients), or into egg salad at a 7:1 ratio. Salads were stored at 5, 10, or 15°C for 28 d. The presence of L. monocytogenes was determined at intervals during storage by enrichment with BLEB and/or enumerated on BHIArif throughout storage. Triplicate samples were assessed for each time point, and three independent trials were conducted. Data were analyzed by Student’s t-test, ANOVA, and Fisher’s exact test, p≤0.05. For low-level inoculated chopped HBEs, the L. monocytogenes population was significantly higher on untreated chopped HBEs (1.86±0.33 log CFU/g) than on treated chopped HBEs (1.47±0.27 log CFU/g) on day 14 at 15°C. On both untreated and treated chopped HBEs, there was no significant difference in the population of L. monocytogenes up to 7 d. However, from 14 d, there was a significant increase in the growth of L. monocytogenes (1.86±0.33 to 2.18±0.35 log CFU/g on untreated chopped HBEs and 1.47±0.27 to 1.94±0.47 log CFU/g on treated, respectively). For high-level inoculated HBEs, a higher L. monocytogenes growth rate was observed on untreated chopped HBEs than on treated chopped HBEs at 10 and 25°C. Treated chopped HBEs at 5°C took the longest to reach a 1 log CFU/g increase in the L. monocytogenes population (50 d), whereas untreated chopped HBEs at 25°C took the shortest (0.22 d). Untreated chopped HBEs showed a significantly higher population of L. monocytogenes than treated chopped HBEs at 14 d at all storage temperatures. In deli salads containing chopped HBEs, potato salad showed the highest growth of L. monocytogenes after 14 d, followed by macaroni, egg, chicken, and tuna salad. The population of L. monocytogenes was the lowest in tuna salad. L. monocytogenes was present throughout the storage period at all storage temperatures. It was observed that 2% citric acid is more efficient in controlling the growth of L. monocytogenes in chopped HBEs than when those HBEs are incorporated into deli salads. The findings contribute to the formulation of preventive measures and standards aimed at guaranteeing the safety of HBEs.
- Title: Voxel Transformer with Density-Aware Deformable Attention for 3D Object Detection
- Creator: Kim, Taeho
- Date: 2023
- Description: The Voxel Transformer (VoTr) is a prominent model in the field of 3D object detection, employing a transformer-based architecture to comprehend long-range voxel relationships through self-attention. However, despite its expanded receptive field, VoTr’s flexibility is constrained by its predefined receptive field. In this paper, we present a Voxel Transformer with Density-Aware Deformable Attention (VoTr-DADA), a novel approach to 3D object detection. VoTr-DADA leverages density-guided deformable attention for a more adaptable receptive field. It efficiently identifies key areas in the input using density features, combining the strengths of both VoTr and Deformable Attention. We introduce the Density-Aware Deformable Attention (DADA) module, which is specifically designed to focus on these crucial areas while adaptively extracting more informative features. Experimental results on the KITTI dataset and the Waymo Open dataset show that our proposed method outperforms the baseline VoTr model in 3D object detection while maintaining a fast inference speed.
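For context on the deformable attention that DADA extends, the widely used formulation from Deformable DETR is sketched below; how the voxel density features enter the offsets and weights in VoTr-DADA is not spelled out in the abstract, so this is background rather than the paper's exact module.

```latex
% Deformable attention for query q at reference point p_q over feature map x:
\mathrm{DeformAttn}(z_q, p_q, x) =
  \sum_{m=1}^{M} W_m \left[ \sum_{k=1}^{K} A_{mqk}\, W'_m\, x\!\left(p_q + \Delta p_{mqk}\right) \right]
% M attention heads, K sampled points per head; the offsets \Delta p_{mqk}
% and weights A_{mqk} are predicted from the query, so attention samples only
% a few learned locations instead of a fixed neighborhood. A density-aware
% variant would additionally condition these predictions on voxel density.
```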
- Title: Financialization in the Structured Products Market
- Creator: Zhu, Lizi
- Date: 2023
- Description: This dissertation aims to study financialization in the structured products market. The structured products market has been undergoing a major transformation in recent years. The market used to mainly serve institutional investors. However, as a few trading platforms powered by fintech companies emerged on the horizon, more and more banks are starting to compete in this market. The average trade size has also been declining significantly, thereby making the market increasingly accessible to retail investors. What are the factors that facilitate the development of this market? What are the economic incentives of issuers and investors? How do issuers compete? What does the future hold for this market? The main findings of this dissertation are that structured products provide utility to retail investors; that as the level of risk aversion increases, an investor increasingly prefers structured products to other traditional asset classes; that issuers develop three sources of competitive advantage to be satisficers; and that the rise of fintech and the improvement of financial education are the keys to opening this market to retail investors.
- Title: Multimodal Learning and Generation Toward a Multisensory and Creative AI System
- Creator: Zhu, Ye
- Date: 2023
- Description: We perceive and communicate with the world in a multisensory manner, where different information sources are sophisticatedly processed and interpreted by separate parts of the human brain to constitute a complex, yet harmonious and unified, intelligent system. To endow machines with true intelligence, multimodal machine learning that incorporates data from various modalities, including vision, audio, and text, has become an increasingly popular research area with emerging technical advances in recent years. In the context of multimodal learning, the creativity to generate and synthesize novel and meaningful data is a critical criterion to assess machine intelligence. As a step towards a multisensory and creative AI system, we study the problem of multimodal generation in this thesis by exploring the field from multiple perspectives. Firstly, we analyze different data modalities in a comprehensive manner by comparing the data natures, the semantics, and their corresponding mainstream technical designs. We then propose to investigate three multimodal generation application scenarios, namely text generation from visual data, audio generation from visual data, and visual generation from textual data, with diverse approaches to give an overview of the field. For the direction of text generation from visual data, we study a novel multimodal task in which the model is expected to summarize a given video with textual descriptions, under a challenging condition where the video can only be partially seen. We propose to supplement the missing visual information via a dialogue interaction and introduce a QA-Cooperative network with a dynamic dialogue history update learning mechanism to tackle the challenge. For the direction of audio generation from visual data, we present a new multimodal task that aims to generate music for a given silent dance video clip. Unlike most existing conditional music generation works, which generate specific types of mono-instrumental sounds using symbolic audio representations (e.g., MIDI) and heavily rely on pre-defined musical synthesizers, we generate dance music in complex styles (e.g., pop, breaking, etc.) by employing a Vector-Quantized (VQ) audio representation via our proposed Dance2Music-GAN (D2M-GAN) framework. For the direction of visual generation from textual data, we tackle a key desideratum in conditional synthesis, which is to achieve high correspondence between the conditioning input and the generated output, using a state-of-the-art generative model, the Diffusion Probabilistic Model. Most existing methods learn such relationships implicitly by incorporating the prior into the variational lower bound during model training; in this work, we take a different route by explicitly enhancing input-output connections through maximizing their mutual information, which is achieved by our proposed Conditional Discrete Contrastive Diffusion (CDCD) framework. For each direction, we conduct extensive experiments on multiple multimodal datasets and demonstrate that all of our proposed frameworks are able to effectively and substantially improve task performance in their corresponding contexts.
- Title: Parking Demand Forecasting Using Asymmetric Discrete Choice Models with Applications
- Creator: Zhang, Ji
- Date: 2023
- Description: Using discrete choice models to forecast travelers' parking location choices has been a branch of parking demand research for many years. The most commonly used discrete choice models have fairly simple mathematical expressions, such as the probit and logit models. The application of simple models helps relieve the computational burden brought by parameter estimation tasks in practice, but the cost is the unwanted properties of classic models, such as the “symmetry property” that we argue is often undesirable in many fields. To some extent, the symmetry property of related models limits the shape of the response curves, making model fitting technically less flexible. This study addresses the following question: “Can discrete choice models with the asymmetry property outperform classic models with the symmetry property in forecasting travelers’ parking location choices?” The contributions of this study include: (1) providing a new perspective of using asymmetric discrete choice models to explain and forecast an individual’s parking location choice; and (2) completing the travel demand forecasting process from the choice of the destination zone centroid to the parking location, enabling parking choice forecasting. This provides a generalized framework to calibrate and validate asymmetric discrete choice models, with field-observed, parking-facility-specific arrival profile data integrated into a large-scale, high-fidelity regional travel demand model. Further, an experimental study is conducted to compare the performance of the proposed asymmetric discrete choice models within the parking demand forecasting framework. The results suggest that asymmetric discrete choice models for individual parking choice modeling outperform symmetric discrete choice models such as the logit models, owing largely to their flexibility in parameter fitting and training on the available dataset.
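To make the "symmetry property" discussed above concrete: the logit choice probability is symmetric about zero, while links such as the complementary log-log are not. This contrast illustrates the asymmetric family the thesis considers, not necessarily its exact specification.

```latex
% Binary logit: F(x) = e^{x} / (1 + e^{x}) satisfies F(-x) = 1 - F(x),
% so the response curve is forced to be symmetric about x = 0.
F_{\mathrm{logit}}(x) = \frac{e^{x}}{1 + e^{x}}, \qquad
F_{\mathrm{logit}}(-x) = 1 - F_{\mathrm{logit}}(x)
% An asymmetric alternative, e.g. the complementary log-log link, breaks this:
F_{\mathrm{cloglog}}(x) = 1 - \exp\!\left(-e^{x}\right), \qquad
F_{\mathrm{cloglog}}(-x) \neq 1 - F_{\mathrm{cloglog}}(x)
```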