Search results
(4,581 - 4,600 of 4,656)
Pages
- Title
- Development of data assimilation for analysis of ion drifts during geomagnetic storms
- Creator
- Hu, Jiahui
- Date
- 2024
- Description
-
The primary objective of this dissertation is to gain insight into geomagnetic storm effects at mid-latitudes induced by solar activity. Geomagnetic storms affect our everyday lives because they give rise to transient signal loss and data transmission errors, negatively impacting users of satellite navigation systems. The Nighttime Localized Ionospheric Enhancement (NILE) is a localized plasma enhancement that, because it is not well understood, drives the design of satellite-based augmentation systems. To better secure the operation of technological infrastructure, it is essential to build a comprehensive understanding of the atmospheric drivers, especially during solar-active periods. Instrument measurements and climate models serve as valuable tools in obtaining information regarding the occurrence of space weather events; nonetheless, both sources exhibit quantitative and qualitative limitations. Data assimilation, an evolving technique, integrates measurements and model information to optimize state estimation. This dissertation presents developments in a data assimilation algorithm known as Estimating Model Parameters from Ionospheric Reverse Engineering (EMPIRE), and its applications in investigating atmospheric behavior under varying solar conditions. EMPIRE is a data assimilation algorithm specifically designed for upper-atmospheric driver estimation of neutral wind and ion drifts at user-defined spatial and temporal scales. The EMPIRE application in this work aims to contribute to a more comprehensive understanding of the effects of the NILE. EMPIRE utilizes the Kalman filter to optimize state calculations primarily based on electron density rates, provided by other data assimilation algorithms. Earlier runs of the algorithm used pre-defined values for the background state covariance across time.
To address model limitations under changing geomagnetic conditions, the algorithm is enhanced by concurrently updating the background state covariance during assimilation. Additionally, representation error is incorporated as a component of the observation error, and error analysis is performed through a synthetic-data study. Previously, EMPIRE fused Fabry-Perot Interferometer (FPI) neutral wind measurements, demonstrating increased agreement with validation neutral wind data. In this work, that approach is extended to incorporate Coherent Scatter Radar (CSR) ion drift measurements from the Super Dual Auroral Radar Network (SuperDARN), providing additional insight into EMPIRE’s estimated field-perpendicular ion motion. For an in-depth exploration of the storm-related NILE, both EMPIRE and another data assimilation method, the Whole Atmosphere Community Climate Model with thermosphere and ionosphere eXtension coupled with the Data Assimilation Research Testbed (WACCM-X + DART), are implemented for a storm event to test the proposed NILE driving mechanism. Furthermore, this dissertation introduces a Kalman smoother technique into EMPIRE to enhance its ability to assess past storm events and to explore the potential for algorithm improvements.
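For readers unfamiliar with the technique, the Kalman filter at the core of such an assimilation scheme can be sketched in a few lines. The following is a minimal, generic illustration of one predict/update cycle in which the background covariance evolves with each cycle rather than being reset to a pre-defined value; the matrices are placeholders, not the EMPIRE implementation.

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of a linear Kalman filter.

    x: state estimate, P: background (state) covariance,
    z: observation, F: state transition, H: observation model,
    Q: process noise, R: observation error (to which a
    representation-error term can be added).
    """
    # Predict: propagate state and covariance forward in time
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update: weigh the observation against the prediction via the Kalman gain
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```

Carrying `P_new` into the next cycle is what "concurrently updating the background state covariance" amounts to, in contrast to restarting each cycle from a fixed prescribed covariance.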
- Title
- #MeToo: What Urged Users to Post?
- Creator
- Hirsh, Rachel Anna
- Date
- 2023
- Description
-
In this dissertation, I explore the motivations that compelled individuals to share their stories during the #MeToo movement, an unprecedented digital phenomenon that thrust discussions of sexual harassment and assault into the public sphere. The central research question guiding this study was, "What urged users to post during the #MeToo movement?," which worked to uncover why and how the movement became so widespread. Research demonstrates that when people are sexually harassed or assaulted, they often do not come forward (Hlavka, 2014). Spencer et al. (2017) share some of the common reasons women do not come forward: they do not consider their harassment or assault a big enough deal, they do not know whom to report it to or how, they are afraid, they were drunk, they are ashamed, they do not want to get their assailant in trouble, or they blame themselves. However, those reasons fell by the wayside, as so many people came forward during the #MeToo movement. This paper aims to determine why that was and how we can continue to encourage survivors to come forward. This paper also asks whether people came forward to share testimony, to be part of a movement, or both. Two distinct hypotheses were formulated to unpack the complex dynamics at play. The first hypothesis posited that users who engaged with central nodes, encompassing key figures within the #MeToo movement, original contributors, celebrities, and influencers, were more inclined to hold a positive outlook on the movement as a progressive step for women. This data was collected through a quantitative survey, and the analysis yielded inconclusive results, with 79.15% of the sample population expressing support for the movement while only 54.17% reported following central nodes.
Qualitative interviews further underscored the multifaceted nature of motivations. The second hypothesis posited that individuals were more inclined to share their personal experiences of harassment or assault online when they observed weak ties within their social networks, such as acquaintances or friends of friends, sharing their own stories. The survey data revealed that 68.87% of participants witnessed weak ties sharing personal experiences or using the #MeToo hashtag on their social media platforms. Qualitative interviews unanimously highlighted the significant influence of observing friends or weak ties posting about their experiences, further underscoring the diversity of motivators behind #MeToo participation. These findings shed light on the multifaceted nature of online activism and the pivotal role of personal networks in shaping the movement's trajectory. In essence, this research demonstrates that while the motivations for user participation in the #MeToo movement are diverse and complex, the presence of weak ties (distant social relationships or relationships with infrequent interaction) within social networks emerges as a critical influence.
- Title
- Optimization of Large-Scale NOMA With Incidence Matrix Design and Physical Layer Security
- Creator
- Hwang, Eli W.
- Date
- 2024
- Description
-
The Non-Orthogonal Multiple Access (NOMA) system is recognized for its capability to achieve higher spectral efficiency and massive connectivity. NOMA is intended to support massive user communications. The incidence matrix governs the relationship between users and resources in code-domain NOMA (CD-NOMA). However, NOMA studies have paid comparatively little attention to the design and optimization of the incidence matrix. Therefore, this thesis investigates the development of a secure, large-scale NOMA system based on incidence matrix design. The main contributions are outlined as follows: Firstly, this research introduces a novel NOMA system. Distinct from existing studies, the system is based on combinatorial design. This approach, coupled with a unique constellation design, eliminates the surjective mapping from the linearly added data of multiple users, reducing the complexity of constellation design and Multiuser Detection (MUD). The characteristics of the incidence matrix design, Simple Orthogonal Multi-Arrays (SOMA), which displays a distinct Latin square pattern, are explored. The SOMA design's unique structure allows for the creation of a highly flexible and fair resource allocation matrix. The system's theoretical performance analysis equations are established, supporting dynamic adaptability and optimization. The design is validated by Monte Carlo simulation. Compared to other NOMA schemes, it offers higher degrees of freedom and lower complexity while maintaining graceful error rates for a larger number of users. Secondly, a novel NOMA system utilizing incidence matrix information in the uplink is investigated. The incidence matrix pattern is exploited for MUD to achieve large-scale user connectivity. The incidence matrix is designed based on two critical mathematical concepts: parallel classes in hypergraph theory and orthogonal arrays (OAs) in combinatorial designs.
Unlike other NOMA schemes, which require modification of the receiver and transmitter to decode superimposed multiuser signals, the unique pattern of the OA structure enables the use of conventional modulators. Consequently, the system load increases while complexity and latency are reduced. The decoding complexity can be reduced by orders of magnitude, from O(N^3) to O(N), compared to the conventional minimum mean-square error (MMSE) decoder. Monte Carlo simulation validates that this novel NOMA system outperforms other NOMA designs in terms of error rate, data rate, and system size. Finally, a reconfigurable convolutional encoder design that integrates security and error correction based on physical layer security (PLS) and randomness is developed. This design addresses concerns over the privacy, security, and reliability of Internet of Things devices in edge computing networks. The lightweight convolutional encoders are designed to ensure security by dynamically updating the transfer function with user data. Reconfigurability is achieved by replacing the fixed adder that represents the generator polynomials with a switch adder, enabling 87 billion distinct updating structures and thereby enhancing the versatility of the design. BER-based PLS paradigms are demonstrated in simulation, where the robustness and randomness of the design are further validated through tests recommended by the National Institute of Standards and Technology for cryptographically secure pseudorandom number generators, such as the monobit, runs, and longest-run-of-ones tests.
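For context on the complexity claim, the conventional linear MMSE multiuser detector that such schemes are compared against can be sketched as follows. This is a generic textbook formulation, not the thesis's receiver; the channel matrix `H` and noise variance are illustrative placeholders, and the N x N solve is the O(N^3) step an OA-structured incidence matrix is said to avoid.

```python
import numpy as np

def mmse_detect(H, y, noise_var):
    """Conventional linear MMSE multiuser detection.

    Solves x_hat = (H^H H + sigma^2 I)^{-1} H^H y; forming and
    solving the N x N system dominates the cost at O(N^3).
    """
    N = H.shape[1]
    A = H.conj().T @ H + noise_var * np.eye(N)
    return np.linalg.solve(A, H.conj().T @ y)
```

In the noiseless limit with an orthogonal channel, the detector simply recovers the transmitted symbols; the regularizing `noise_var` term trades residual interference against noise enhancement.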
- Title
- A Kernel-Free Boundary Integral Method for Two-Dimensional Magnetostatics Analysis
- Creator
- Jin, Zichao
- Date
- 2023
- Description
-
Performing magnetostatic analysis accurately and efficiently is crucial for the multi-objective optimization of electromagnetic device designs. Therefore, an accurate and computationally efficient method is essential. The Kernel-Free Boundary Integral Method (KFBIM) is a numerical method that can accurately and efficiently solve partial differential equations. Unlike traditional boundary integral or boundary element methods, KFBIM does not require an analytical form of Green’s function for evaluating integrals via numerical quadrature. Instead, KFBIM computes integrals by solving an equivalent interface problem on a Cartesian mesh. Compared with traditional finite difference methods for solving the governing PDEs directly, KFBIM produces a well-conditioned linear system. Therefore, the numerical solution of KFBIM is not sensitive to computer round-off errors, and KFBIM requires only a fixed number of iterations when an iterative method (e.g., GMRES) is applied to solve the linear system. In this research, KFBIM is introduced for solving 2D magnetic computations in a toroidal core geometry. This study is highly relevant to designing and optimizing toroidal inductors and transformers used in electrical systems, where lighter weight, higher inductance, higher efficiency, and lower leakage flux are required. The results are compared with a commercial finite element solver (ANSYS) and show excellent agreement. Notably, compared with FEM, KFBIM does not require a body-fitted mesh and can achieve high accuracy with a coarse mesh. In particular, the magnetic potential and tangential field intensity calculations on the boundaries are more stable and exhibit almost no oscillations. However, although KFBIM is accurate and computationally efficient, sharp corners can be a significant problem for KFBIM.
Therefore, an inverse discrete Fourier transform (DFT) based geometry reconstruction is explored to overcome this challenge by smoothing sharp corners. A toroidal core with an airgap (C-core) is modeled to show the effectiveness of the proposed approach in addressing the sharp-corner problem. A numerical example demonstrates that the method works for variable-coefficient PDEs. In addition, magnetostatic analysis for homogeneous and nonhomogeneous materials is presented for the reconstructed geometry, and results obtained from KFBIM are compared with the results of FEM analysis for the original geometry to show the differences and the potential of the proposed method.
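The general idea of DFT-based corner smoothing can be illustrated with a low-pass filter on a closed boundary curve. This is a minimal sketch of the generic technique, assuming the boundary is given as ordered points treated as complex numbers; it is not the dissertation's exact reconstruction procedure.

```python
import numpy as np

def smooth_boundary(x, y, n_modes):
    """Low-pass a closed 2-D boundary via the DFT.

    Treat the points as complex numbers z = x + iy, keep only the
    lowest n_modes Fourier modes at each end of the spectrum, and
    invert; sharp corners are rounded because they live in the
    discarded high-frequency modes.
    """
    z = np.asarray(x) + 1j * np.asarray(y)
    Z = np.fft.fft(z)
    keep = np.zeros_like(Z)
    keep[:n_modes] = Z[:n_modes]      # DC and low positive frequencies
    keep[-n_modes:] = Z[-n_modes:]    # matching negative frequencies
    z_smooth = np.fft.ifft(keep)
    return z_smooth.real, z_smooth.imag
```

A boundary that is already smooth (e.g., a circle, whose spectrum occupies a single mode) passes through unchanged, while a C-core-like polygonal outline comes out with rounded corners.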
- Title
- Modeling and Optimization of Embedded Active Flow Control Systems
- Creator
- Henry, James M.
- Date
- 2024
- Description
-
This thesis presents research focused on the aerodynamic performance of circulation control on two-dimensional and quasi-two-dimensional wings. Aerodynamic loads, namely lift, drag, and moment coefficients, are measured through Reynolds-Averaged Navier-Stokes (RANS) modeling and wind tunnel experiments. A simplified and parameterized RANS model is presented as a rapidly iterable approach to estimating the performance of trailing-edge circulation control on two-dimensional airfoils, with the hypothesis that an optimized airfoil shape can be found which maximizes the lift coefficient increment generated by circulation control through modification of the wing profile. The simplified modeling setup is compared with more conventional approaches to numerical simulation of circulation control. The performance of the simplified modeling scheme is then compared with wind tunnel studies, for both steady-state and dynamic performance, as functions of both the momentum coefficient Cμ and the chord-based Reynolds number Re_c. The dynamic performance of the model is studied to find an analog to the theoretical unsteady models of Wagner and Theodorsen. An adjoint optimization framework is used to find an optimal airfoil profile for circulation control. The optimized profile is then compared in both simulation and a wind tunnel test study against a NACA0015 airfoil. In simulation, improvement between 12% and 15% is seen in lift control authority for all values of Cμ and Re_c tested. In experiment, the optimized profile demonstrated improvements of up to 28% in lift control authority, dCL/dCμ, for lower values of Cμ, and decreased performance for higher values of Cμ.
- Title
- Evaluation of the efficacy of power ultrasound technology coupled with organic acids to reduce Listeria monocytogenes on peaches and apples
- Creator
- Joshi, Mayura Anand
- Date
- 2024
- Description
-
Fresh fruits and vegetables are prone to microbial contamination throughout different phases of human handling, processing, transportation, and distribution. Emerging technologies, such as power ultrasound, have received attention due to their capacity to reduce or eliminate foodborne bacterial pathogens on these commodities. Power ultrasound, when combined with certain antimicrobials, has demonstrated its effectiveness as a valuable tool for washing fresh produce. The objective of this study was to examine the effectiveness of power ultrasound combined with organic acids in reducing Listeria monocytogenes on fruits. In this study, peaches and apples were surface-inoculated with a four-strain cocktail of L. monocytogenes and dried for 1 h. Stomacher bags containing 225 mL of citric, lactic, or malic acid at concentrations of 1%, 2%, or 5% were employed for treating the inoculated peaches and apples. The acid treatment was used alone or in combination with power ultrasound for 2, 5, or 10 min. Water was used for controls. Before treatment, the initial population of L. monocytogenes on apples was lower than the initial population on peaches, with apples showing a 1.94 log CFU/fruit reduction. Water controls demonstrated no significant log reduction on either apples or peaches. The greatest L. monocytogenes reduction on apples occurred when treated with 1% citric acid for 2 min with power ultrasound, where L. monocytogenes was significantly reduced from 6.98±0.88 log CFU/fruit to 5.56±0.91 log CFU/fruit. The greatest L. monocytogenes reduction on peaches occurred when treated with 5% citric acid for 5 min with power ultrasound, where L. monocytogenes was significantly reduced from 7.44±0.45 log CFU/fruit to 6.68±0.40 log CFU/fruit. Overall, the combined effect of acid and power ultrasound was more pronounced in apples than in peaches. The survival of L. monocytogenes on apples and peaches appeared to be highly dependent on the specific treatment and hurdle technology applied. The combination of ultrasound hurdle technology with acid washing has proven effective in reducing L. monocytogenes on both peaches and apples, with a more significant impact observed on apples. While acid washing is a more economical option compared to ultrasound technology, the efficiency of microorganism reduction is considerably enhanced when power ultrasound is combined with organic acids. Looking ahead, the development of cost-effective power ultrasound methods could facilitate widespread adoption of ultrasound hurdle technology in the produce industry.
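As a point of reference, the reductions reported above are simply differences of log10 CFU counts. A small helper makes the arithmetic explicit, using the apple figures quoted above; the conversion to percent killed is a standard consequence of the log10 scale.

```python
def log_reduction(initial_log_cfu, final_log_cfu):
    """Log reduction is the difference of log10 CFU counts."""
    return initial_log_cfu - final_log_cfu

def percent_reduction(log_red):
    """Convert a log10 reduction to the percent of cells inactivated."""
    return (1 - 10 ** (-log_red)) * 100
```

For the best apple treatment, 6.98 minus 5.56 gives a 1.42 log reduction; a 1-log reduction always corresponds to 90% of cells inactivated, and a 2-log reduction to 99%.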
- Title
- Pilgrim Baptist Church, Chicago, Illinois, ca. 1964
- Creator
- Weil, F. Peter
- Date
- 1964
- Description
-
Pilgrim Baptist Church (3301 S. Indiana Ave., Chicago, IL) photographed by Institute of Design student F. Peter Weil. The date is estimated as 1964 from other evidence in the collection.
- Collection
- F. Peter Weil photographs, 1952-1964
- Title
- Correlating Microstructural Properties to Macroscopic Shear Mechanics to Improve the Understanding of Tissue Biomechanics
- Creator
- Cahoon, Stacey Marie
- Date
- 2023
- Description
-
Understanding tissue biomechanics is of interest for modeling organ injury from external loads, developing tissue surrogate materials, and creating new biomarkers for disease. Probing the response of soft tissue in shear can provide information on histopathology, provided a methodology exists that connects the macroscopic mechanical properties with cell-level properties. Two of the available methods to measure the macroscopic shear viscoelastic properties of soft tissue are oscillatory shear rheometry and ultrasound shear wave elastography (SWE). Due to its accuracy, rheometry is the gold standard, but it is destructive, requires excised homogeneous samples, and can only be applied ex vivo. SWE is an emerging non-invasive imaging technique that requires validation, ostensibly by comparison with rheometry. Histology is the gold standard for providing morphological information at the cell level, which can determine tissue pathology. The challenge is to connect the macroscopic mechanical metrics derived from SWE and rheometry to the tissue microstructure. To address this challenge, mathematical models can be used that employ multiple, judiciously chosen measurements of macroscopic shear properties and histology to estimate intrinsic mechanical properties at the cell level. A class of homogeneous and composite lipid phantoms mimicking the mechanical properties of brain white matter was fabricated to test a novel stereotactic system and an optimized SWE imaging protocol. The shear stiffness measurements obtained with SWE on the whole phantom were validated with rheometry performed on a series of samples made of the same material as the phantoms. The same procedure was applied to porcine brain white matter excised from fresh whole brains (n=3). Cylindrical cores were extracted from the corpus callosum area, sliced into discs, and microscopic sections were subsequently removed for histology.
Good agreement was found between the SWE and rheometry measurements of shear stiffness, which generally increases with the level of compressive prestress. Immunofluorescence was used to separately stain the axon neurofilaments and myelin sheaths, and digital image analysis of the confocal microscopy images allowed estimation of the axon volume fraction and axon-to-myelin ratio in the corpus callosum. Using these metrics and a composite mechanical model, a connection between the macroscopic shear measurements and the viscoelastic properties of the axon and glia matrix was made for porcine brain tissue. Similarly, rheometry was used to measure the macroscopic properties of decellularized porcine myocardium extracellular matrix (ECM) at two different fiber locations and for three different fiber orientations. The mechanical properties were found to depend on fiber location, but not on fiber orientation. Since collagen is a primary supportive structure for the ECM, several microscopic slices were probed with immunofluorescence to compute the collagen I and collagen IV volume fractions. Another mechanical model was employed to establish a connection between the macroscopic properties and the mechanical properties of the collagen matrix in decellularized porcine myocardial ECM. This dissertation highlights the use and integration of three different experimental techniques (rheometry, ultrasound SWE, and histology) to correlate key microstructural properties of soft, fibrous tissues (ex-vivo healthy porcine brain white matter and myocardium ECM) with macroscopic shear mechanics. The consideration of the effect of compressive prestress is noteworthy. The reported baseline data for the tissues under shear loading and prestress are pertinent to the physiological function of these tissues, and therefore constitute preliminary data and a necessary first step before a systematic study of the biomechanics of the same tissues in vivo is performed.
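For readers unfamiliar with SWE, shear stiffness is commonly inferred from the measured shear-wave speed via mu = rho * c^2, valid for a locally homogeneous, isotropic, linear-elastic medium. The sketch below assumes a typical soft-tissue density of about 1000 kg/m^3; it illustrates the standard relation, not this dissertation's specific inversion.

```python
def shear_stiffness_kpa(wave_speed_m_s, density_kg_m3=1000.0):
    """Shear modulus from shear-wave speed: mu = rho * c^2.

    Assumes a locally homogeneous, isotropic, linear-elastic
    medium; density defaults to ~1000 kg/m^3, typical for soft
    tissue. Returns stiffness in kPa.
    """
    return density_kg_m3 * wave_speed_m_s ** 2 / 1000.0
```

A shear-wave speed of 1 m/s thus corresponds to roughly 1 kPa, and 2 m/s to roughly 4 kPa, which is the order of magnitude expected for brain white matter.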
- Title
- Estimation of Platinum Oxide Degradation in Proton Exchange Membrane Fuel Cells
- Creator
- Ahmed, Niyaz Afnan
- Date
- 2024
- Description
-
The performance and durability of Proton Exchange Membrane Fuel Cells (PEMFCs) can be significantly hampered by degradation of the platinum catalyst. The formation of platinum oxide is a major cause of degradation of the fuel cell system, negatively affecting its performance and durability. In order to predict and prevent this degradation, this research examines a novel method to estimate degradation due to platinum oxide formation and predict the level of platinum oxide coverage over time. Mechanisms of platinum oxide formation are outlined, and several methods are compared for platinum oxide estimation: linear regression and two Artificial Neural Network (ANN) models, a Recurrent Neural Network (RNN) and a Feed-forward Back Propagation Neural Network (FFBPNN). The estimation model takes into account the influence of cell temperature and relative humidity. Evaluation of relative errors (RE) and root mean square error (RMSE) illustrates the superior performance of the RNN in contrast to GT-Suite and the FFBPNN. Both the RNN and GT-Suite achieve an average error rate below 5%, while the FFBPNN had a higher error rate of approximately 7%. The RMSE of the RNN is mostly lower than that of the FFBPNN and GT-Suite; however, at 50% training data, GT-Suite shows the lowest RMSE. These findings indicate that GT-Suite can be a valuable tool for estimating platinum oxide in fuel cells with a relatively low RE, but the RNN model may be more suitable for real-time estimation of platinum oxide degradation in PEM fuel cells, due to its accurate predictions and shorter computational time. This comprehensive approach provides crucial insights for optimizing fuel cell efficiency and implementing effective maintenance strategies.
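The two error metrics used in the comparison can be written out explicitly. This is a minimal, generic sketch with placeholder inputs, not the evaluation code of the study.

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean square error between predictions and reference values."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def mean_relative_error_pct(y_true, y_pred):
    """Average |error| / |true value|, expressed as a percentage."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float(np.mean(np.abs(y_true - y_pred) / np.abs(y_true)) * 100)
```

An "average error rate below 5%" in this framing means the mean relative error of the predicted oxide coverage stays under 5% of the reference value.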
- Title
- Defense-in-Depth for Cyber-Secure Network Architectures of Industrial Control Systems
- Creator
- Arnold, David James
- Date
- 2024
- Description
-
Digitization and modernization efforts have yielded greater efficiency, safety, and cost savings for Industrial Control Systems (ICS). To achieve these gains, the Internet of Things (IoT) has become an integral component of network infrastructures. However, integrating embedded devices expands the network footprint and weakens cyberattack resilience. Additionally, legacy devices and improper security configurations are weak points for ICS networks. As a result, ICSs are a valuable target for hackers seeking monetary gain or planning to cause destruction and chaos. Furthermore, recent attacks demonstrate a heightened understanding of ICS network configurations within hacking communities. A Defense-in-Depth strategy is the solution to these threats, applying multiple security layers to detect, interrupt, and prevent cyber threats before they cause damage. Our solution detects threats by deploying an Enhanced Data Historian for Detecting Cyberattacks. By introducing Machine Learning (ML), we enhance cyberattack detection by fusing network traffic and sensor data. Two computing models are examined: 1) a distributed computing model and 2) a localized computing model. The distributed computing model is powered by Apache Spark, introducing redundancy for detecting cyberattacks. In contrast, the localized computing model relies on a network traffic visualization methodology for efficiently detecting cyberattacks with a Convolutional Neural Network. These applications are effective in detecting cyberattacks with nearly 100% accuracy. Next, we prevent eavesdropping by applying Homomorphic Encryption (HE) for Secure Computing. HE cryptosystems are a unique family of public key algorithms that permit operations on encrypted data without revealing the underlying information. Through the Microsoft SEAL implementation of the CKKS algorithm, we explored the challenges of introducing Homomorphic Encryption to real-world applications.
Despite these challenges, we implemented two ML models: 1) a Neural Network and 2) Principal Component Analysis. Finally, we hinder attackers by integrating a Cyberattack Lockdown Network with Secure Ultrasonic Communication. When a cyberattack is detected, communication for safety-critical elements is redirected through an ultrasonic communication channel, establishing physical network segmentation from compromised devices. We present proof-of-concept work in transmitting video via ultrasonic communication over an aluminum rectangular bar. Within industrial environments, existing piping infrastructure presents an optimal medium for cost-effectively preventing eavesdropping. The effectiveness of these solutions is discussed within the scope of the nuclear industry.
- Title
- Large Language Model Based Machine Learning Techniques for Fake News Detection
- Creator
- Chen, Pin-Chien
- Date
- 2024
- Description
-
With advanced technology, it’s widely recognized that everyone owns one or more personal devices. Consequently, people are evolving into content creators on social media or streaming platforms, sharing their personal ideas regardless of their education or expertise level. Distinguishing fake news is becoming increasingly crucial. However, recent research only presents comparisons of fake news detection between one or more models across different datasets. In this work, we applied Natural Language Processing (NLP) techniques with Naïve Bayes and DistilBERT machine learning methods, combining and augmenting four datasets. The results show that the balanced accuracy is higher than the average reported in recent studies. This suggests that our approach holds promise for improving fake news detection in the era of widespread content creation.
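The Naïve Bayes half of such a pipeline is simple enough to sketch from scratch. The following is a generic multinomial Naïve Bayes over token counts with invented toy headlines and labels for illustration; it is not the classifier, features, or datasets used in the thesis.

```python
import math
from collections import Counter, defaultdict

class MultinomialNB:
    """Minimal multinomial Naive Bayes over tokenized documents."""

    def fit(self, docs, labels, alpha=1.0):
        self.alpha = alpha                      # Laplace smoothing
        self.vocab = {w for d in docs for w in d}
        self.class_counts = Counter(labels)
        self.word_counts = defaultdict(Counter)
        for d, y in zip(docs, labels):
            self.word_counts[y].update(d)
        return self

    def predict(self, doc):
        best, best_lp = None, -math.inf
        n_docs = sum(self.class_counts.values())
        for y, cnt in self.class_counts.items():
            lp = math.log(cnt / n_docs)         # log class prior
            total = sum(self.word_counts[y].values())
            denom = total + self.alpha * len(self.vocab)
            for w in doc:                       # smoothed log likelihoods
                lp += math.log((self.word_counts[y][w] + self.alpha) / denom)
            if lp > best_lp:
                best, best_lp = y, lp
        return best
```

In practice the documents would be tokenized headlines or articles from the combined datasets, and the class priors would reflect the (augmented) label balance.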
- Title
- Investigation in the Uncertainty of Chassis Dynamometer Testing for the Energy Characterization of Conventional, Electric and Automated Vehicles
- Creator
- Di Russo, Miriam
- Date
- 2023
- Description
-
For conventional and electric vehicles tested in a standard chassis dynamometer environment, precise regulations exist for the evaluation of their energy performance. However, the regulations do not include requirements on the confidence value to associate with the results. As vehicles become more and more efficient to meet stricter regulatory mandates on emissions, fuel, and energy consumption, traditional testing methods may become insufficient to validate these improvements and may need revision. Without information about the accuracy associated with the results of those procedures, however, adjustments and improvements are not possible, since no frame of reference exists. For connected and automated vehicles, there are no standard testing procedures, and researchers are still in the process of determining whether current evaluation methods can be extended to test intelligent technologies and which metrics best represent their performance. For these vehicles, it is even more important to determine the uncertainty associated with these experimental methods and how it propagates to the final results. The work presented in this dissertation focuses on the development of a systematic framework for the evaluation of the uncertainty associated with the energy performance of conventional, electric, and automated vehicles. The framework is based on a known statistical method to determine the uncertainty associated with the different stages and processes involved in experimental testing, and to evaluate how the accuracy of each parameter involved impacts the final results. The results demonstrate that the framework can be successfully applied to existing testing methods, provides a trustworthy value of accuracy to associate with the energy performance results, and can be easily extended to connected-automated vehicle testing to evaluate how novel experimental methods impact the accuracy and confidence of the outputs.
The framework can easily be implemented in an existing laboratory environment to incorporate uncertainty evaluation into the results analyzed at the end of each test, and provides a reference for researchers to evaluate the actual benefits of new algorithms and optimization methods and understand margins for improvement, and for regulators to assess which parameters to enforce to ensure compliance and projected benefits.
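One common way to realize this kind of framework is Monte Carlo propagation: draw each measured input from a distribution matching its stated instrument uncertainty, push the samples through the measurement equation, and read off the spread of the result. The sketch below is a generic illustration under that assumption; the energy equation and its inputs are hypothetical, not the dissertation's specific statistical method.

```python
import numpy as np

def propagate_uncertainty(f, means, sigmas, n=100_000, seed=0):
    """Monte Carlo uncertainty propagation.

    Draw each input from a normal distribution with its stated
    standard uncertainty, evaluate the measurement equation f on
    the samples, and report the mean and standard deviation of
    the result.
    """
    rng = np.random.default_rng(seed)
    samples = [rng.normal(m, s, n) for m, s in zip(means, sigmas)]
    out = f(*samples)
    return float(out.mean()), float(out.std())

def energy_wh(voltage_v, current_a, time_h):
    """Hypothetical measurement equation: E = V * I * t."""
    return voltage_v * current_a * time_h
```

The standard deviation of the output is the confidence value missing from current procedures: it tells the analyst how much of an apparent efficiency gain is real versus within the noise of the test.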
- Title
- The Double-edged Sword of Executive Pay: How the CEO-TMT Pay Gap Influences Firm Performance
- Creator
- Haddadian Nekah, Pouya
- Date
- 2024
- Description
-
This study examines the relationship between the chief executive officer (CEO) and top management team (TMT) pay gap and consequent firm performance. Drawing on tournament theory and equity theory, I argue that the effect of the CEO-TMT pay gap on consequent firm performance is non-monotonic. Using data from 1995 to 2022 from S&P 1500 US firms, I explicate an inverted U-shaped relationship, such that an increase in the pay gap leads to an increase in firm performance up to a certain point, after which it declines. Additionally, multilevel analyses reveal that this curvilinear relationship is moderated by attributes of the TMT and the industry in which the firm competes. My findings show that firms with higher TMT gender diversity suffer lower performance loss due to wider pay gaps. Furthermore, when firm executives are paid more than industry norms, or when the firm has a long-tenured CEO, firm performance becomes less sensitive to larger CEO-TMT pay gaps. Lastly, when the firm competes in a masculine industry, firm performance is more negatively affected by larger CEO-TMT pay gaps. Contrary to my expectations, gender-diversity-friendly firm policies failed to influence the relationship between the CEO-TMT pay gap and firm performance.
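An inverted U-shape of this kind is typically detected by adding a squared pay-gap term to the performance regression and checking that its coefficient is negative, with the turning point at -b1/(2*b2). The pure-Python sketch below shows only that core step on synthetic data; it is not the study's multilevel specification, and all variable names and values are illustrative.

```python
def solve3(A, b):
    """Solve a 3x3 linear system by Gaussian elimination with partial pivoting."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, 3):
            f = M[r][i] / M[i][i]
            for c in range(i, 4):
                M[r][c] -= f * M[i][c]
    x = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        x[i] = (M[i][3] - sum(M[i][c] * x[c] for c in range(i + 1, 3))) / M[i][i]
    return x

def quadratic_fit(xs, ys):
    """Least-squares fit of y = b0 + b1*x + b2*x^2 via the normal equations."""
    n = len(xs)
    s = [sum(x ** p for x in xs) for p in range(5)]
    t = [sum((x ** p) * y for x, y in zip(xs, ys)) for p in range(3)]
    A = [[n, s[1], s[2]], [s[1], s[2], s[3]], [s[2], s[3], s[4]]]
    return solve3(A, t)

# Synthetic pay-gap data whose performance peaks at gap = 5 (illustrative units).
gaps = [i * 0.5 for i in range(21)]
perf = [-(g - 5.0) ** 2 + 25.0 for g in gaps]
b0, b1, b2 = quadratic_fit(gaps, perf)
turning_point = -b1 / (2 * b2)   # b2 < 0 confirms the inverted U; peak at 5 here
```

In the actual analysis the moderators (TMT gender diversity, CEO tenure, industry) would enter as interaction terms around this same quadratic core.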
- Title
- Using Niobium surface encapsulation and Rhenium to enhance the coherence of superconducting devices
- Creator
- Crisa, Francesco
- Date
- 2024
- Description
-
In recent decades, the scientific community has grappled with escalating complexity, necessitating a more advanced tool capable of tackling increasingly intricate simulations beyond the capabilities of classical computers. This tool, known as a quantum computer, features processors composed of individual units termed qubits. While various methods exist for constructing qubits, superconducting circuits have emerged as a leading approach, owing to their parallels with semiconductor technology. In recent years, significant strides have been made in optimizing the geometry and design of qubits. However, the current bottleneck in the performance of superconducting qubits lies in the presence of defects and impurities within the materials used. Niobium, owing to its desirable properties, such as a high critical temperature and low kinetic inductance, stands out as the most prevalent superconducting material. Nonetheless, it is encumbered by a relatively thick oxide layer (approximately 5 nm) exhibiting three distinct oxidation states: NbO, NbO$_2$, and Nb$_2$O$_5$. The primary challenge with niobium lies in the multitude of defects localized within the highly disordered Nb$_2$O$_5$ layer and at the interfaces between the different oxides. In this study, I present an encapsulation strategy aimed at restraining surface oxide growth by depositing a thin layer (5 to 10 nm) of another material in vacuum atop the Nb thin film. This approach exploits the superconducting proximity effect, and it was successfully employed in the development of Josephson junction devices on Nb during the 1980s. In the past two years, tantalum and titanium nitride have emerged as promising alternative materials, with breakthrough qubit publications showcasing coherence times five to ten times longer than those achieved in Nb. The focus will be on the fabrication and RF testing of Nb-based qubits with Ta and Au capping layers.
With Ta capping, we have achieved a best (not average) T1 decay time of nearly 600 us, more than a factor of 10 improvement over bare Nb. This establishes the capping-layer approach as a significant new direction for the development of superconducting qubits. Concurrently with the exploration of materials for encapsulation strategies, identifying materials conducive to enhancing the performance of superconducting qubits is imperative. Ideal candidates should exhibit a thin, low-loss surface oxide and establish a clean interface with the substrate, thereby minimizing defects and potential sources of loss. Rhenium, characterized by an extremely thin surface oxide (less than 1 nm) and nearly perfect crystal structure alignment with commonly used substrates such as sapphire, emerges as a promising material platform poised to elevate the performance of superconducting qubits.
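A T1 figure like the one quoted is conventionally extracted by fitting an exponential decay to the measured excited-state population. The sketch below shows a generic log-linear fit on ideal, noiseless data; it is not the dissertation's analysis pipeline, and the timing grid and T1 value are illustrative.

```python
import math

def fit_t1(times_us, populations):
    """Estimate T1 from P(t) = exp(-t/T1) via a log-linear least-squares fit."""
    ys = [math.log(p) for p in populations]
    n = len(times_us)
    sx, sy = sum(times_us), sum(ys)
    sxx = sum(t * t for t in times_us)
    sxy = sum(t * y for t, y in zip(times_us, ys))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return -1.0 / slope

# Ideal decay with T1 = 600 us (illustrative, not measured data)
times = [0.0, 100.0, 200.0, 300.0, 400.0, 500.0]
pops = [math.exp(-t / 600.0) for t in times]
t1_est = fit_t1(times, pops)
```

With real, noisy readout data a weighted or nonlinear fit would be preferred, since the log transform amplifies noise at late times.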
- Title
- Improving Localization Safety for Landmark-Based LiDAR Localization System
- Creator
- Chen, Yihe
- Date
- 2024
- Description
-
Autonomous ground robots have gained traction in various commercial applications, with established safety protocols covering subsystem reliability, control algorithm stability, path planning, and localization. This thesis specifically delves into the localizer, a critical component responsible for determining the vehicle's state (e.g., position and orientation), assessing compliance with localization safety requirements, and proposing methods for enhancing localization safety. Within the robotics domain, diverse localizers are utilized, such as scan-matching techniques like the normal distribution transform (NDT), the iterative closest point (ICP) algorithm, the probabilistic map method, and semantic map-based localization. Notably, NDT stands out as a widely adopted standalone laser localization method, prevalent in autonomous driving software such as the Autoware and Apollo platforms. In addition to the mentioned localizers, common state estimators include variants of the Kalman filter, particle filter-based estimators, and factor graph-based estimators. The evaluation of localization performance typically involves quantifying the estimated state variance for these state estimators. While various localizer options exist, this study focuses on those utilizing extended Kalman filters and factor graph methods. Unlike methods such as the NDT and ICP algorithms, extended Kalman filter and factor graph-based approaches guarantee bounding of the estimated state uncertainty and have been extensively researched for integrity monitoring. Common variance analysis, employed for sensor readings and state estimators, has limitations, primarily focusing on non-faulted scenarios under nominal conditions. This approach proves impractical for real-world scenarios and falls short for safety-critical applications like autonomous vehicles (AVs). To overcome these limitations, this thesis utilizes a dedicated safety metric: integrity risk.
Integrity risk assesses the reliability of a robot's sensory readings and localization algorithm performance under both faulted and non-faulted conditions. With a proven track record in aviation, integrity risk has recently been applied to robotics applications, particularly for evaluating the safety of lidar localization. Despite the significance of improving localization integrity risk through laser landmark manipulation, this remains underexplored territory. Existing research on robot integrity risk primarily focuses on the vehicles themselves. To comprehensively understand the integrity risk of a lidar-based localization system, an exploration of lidar measurement fault modes is essential, a topic covered in this thesis. The primary contributions of this thesis include: a realistic error estimation method for state estimators in autonomous vehicles navigating with pole-shaped lidar landmark maps, along with a compensatory method; a method for quantifying the risk associated with unmapped associations in urban environments, enhancing the realism of the values provided by the integrity risk estimator; and a novel approach to improve the localization integrity of autonomous vehicles equipped with lidar feature extractors in urban environments through minimal environmental modifications, mitigating the impact of unmapped association faults. Simulation and experimental results are presented and discussed to illustrate the impact of each method, providing further insight into their contributions to localization safety.
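As a toy illustration of the metric itself (not the thesis's estimator), the fault-free contribution to integrity risk for a single Gaussian-distributed state error is simply the probability that the error exceeds an alert limit; the sigma and alert-limit values below are made up.

```python
import math

def integrity_risk(sigma, alert_limit):
    """P(|error| > alert_limit) for a zero-mean Gaussian error with standard
    deviation sigma: the fault-free term of a single-axis integrity budget."""
    z = alert_limit / (sigma * math.sqrt(2.0))
    return math.erfc(z)   # equals 2 * (1 - Phi(alert_limit / sigma))

risk = integrity_risk(0.1, 0.5)   # 0.1 m sigma vs. a 0.5 m alert limit
```

A full integrity budget would add faulted-hypothesis terms (e.g., unmapped associations) weighted by their prior probabilities, which is where the thesis's contributions operate.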
- Title
- Intraoperative Assessment of Surgical Margins in Head And Neck Cancer Resection Using Time-Domain Fluorescence Imaging
- Creator
- Cleary, Brandon M.
- Date
- 2023
- Description
-
Rapid and accurate determination of surgical margin depth in fluorescence-guided surgery has been a difficult issue to overcome, leading to over- or under-resection of cancerous tissues and follow-up treatments such as 'call-back' surgery and chemotherapy. Current techniques utilizing direct measurement of tumor margins in frozen-section pathology are slow, which can prevent surgeons from acting on information before a patient is sent home. Other fluorescence techniques require the measurement of margins via captured images that are overlaid with fluorescence data. This method is flawed, as measuring depth from captured images loses spatial information. Intensity-based fluorescence techniques utilizing tumor-to-background ratios do not decouple the effects of concentration from the depth information acquired. Thus, an objective measurement is necessary to determine the depths of surgical margins. This thesis focuses on the theory, device design, simulation development, and overall viability of time-domain fluorescence imaging as an alternative method of determining surgical margin depths. Characteristic regressions were generated using a thresholding method on acquired time-domain fluorescence signals and were used to convert time-domain data to a depth value. These were applied to an image space to generate a depth map of a modeled tissue sample. All modeling was performed on homogeneous media using Monte Carlo simulations, providing high accuracy at the cost of increased computational time. In practice, the imaging process should complete in under 20 minutes for a full tissue sample, rather than taking 20 minutes for a single slice of the sample. This thesis also explores the effects of different thresholding levels on the accuracy of depth determination, as well as precautions to be taken regarding hardware limitations and signal noise.
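The thresholding step described above can be sketched as follows: a deeper fluorophore produces a later-arriving time-domain signal, and the time at which the signal crosses a fixed fraction of its peak is the feature a calibration regression maps to depth. The pulse shape, timing grid, and threshold fraction below are illustrative assumptions, not the thesis's Monte Carlo model.

```python
import math

def threshold_time(times, signal, frac=0.5):
    """Return the first time the signal crosses frac * max(signal),
    linearly interpolated between samples; None if it never crosses."""
    thresh = frac * max(signal)
    for i in range(1, len(signal)):
        if signal[i - 1] < thresh <= signal[i]:
            t0, t1 = times[i - 1], times[i]
            s0, s1 = signal[i - 1], signal[i]
            return t0 + (thresh - s0) * (t1 - t0) / (s1 - s0)
    return None

# Toy Gaussian pulses: a "deeper" fluorophore arrives 1.0 ns later.
times = [0.1 * k for k in range(100)]                       # ns
pulse = lambda t, delay: math.exp(-((t - delay) ** 2) / 0.5)
shallow = [pulse(t, 2.0) for t in times]
deep = [pulse(t, 3.0) for t in times]
t_shallow = threshold_time(times, shallow)
t_deep = threshold_time(times, deep)
```

The calibration regression then converts crossing times like `t_shallow` and `t_deep` into depth values per pixel, yielding the depth map.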
- Title
- Independence and Graphical Models for Fitting Real Data
- Creator
- Cho, Jason Y.
- Date
- 2023
- Description
-
Given a real-life dataset whose attributes take on categorical values, with a corresponding r(1) × r(2) × … × r(m) contingency table with nonzero rows or nonzero columns, we test the goodness-of-fit of various independence models to the dataset using a variation of Metropolis-Hastings that uses Markov bases as a tool to obtain a Monte Carlo estimate of the p-value. This variation of Metropolis-Hastings can be found in Algorithm 3.1.1. Next we consider the problem: "out of all possible undirected graphical models, each associated to some graph with m vertices, that we test to fit on our dataset, which one best fits the dataset?" Here, the m attributes are labeled as vertices of the graph. We would have to conduct 2^(mC2) goodness-of-fit tests, since there are 2^(mC2) possible undirected graphs on m vertices. Instead, we consider a backwards-selection likelihood-ratio test algorithm. We first start with the complete graph G = K(m) and call the corresponding undirected graphical model ℳ(G) the parent model. Then for each edge e in E(G), we repeatedly apply the likelihood-ratio test to compare the relative fit of the child model ℳ(G-e) vs. the parent model ℳ(G), where ℳ(G-e) ⊆ ℳ(G). More details on this iterative process can be found in Algorithm 4.1.3. For our dataset, we use the alcohol dataset found at https://www.kaggle.com/datasets/sooyoungher/smoking-drinking-dataset, where the four attributes we use are "Gender" (male, female), "Age", "Total cholesterol (mg/dL)", and "Drinks alcohol or not?". After testing the goodness-of-fit of the three independence models corresponding to the independence statements "Gender vs Drink or not?", "Age vs Drink or not?", and "Total cholesterol vs Drink or not?", we found that the data came from a distribution in the two independence models corresponding to "Age vs Drink or not?" and "Total cholesterol vs Drink or not?".
After applying the backwards-selection likelihood-ratio method to the alcohol dataset, we found that the data came from a distribution in the undirected graphical model associated to the complete graph minus the edge {"Total cholesterol", "Drink or not?"}.
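Algorithm 3.1.1 itself is not reproduced here, but its core idea, a Metropolis-Hastings walk over all tables with the observed margins using the +1/-1 "basic moves" of the independence Markov basis and the hypergeometric stationary distribution, can be sketched for a two-way table as follows. This is a simplified illustration (no burn-in or convergence diagnostics), not the dissertation's algorithm.

```python
import random

def chi2_stat(table, expected):
    return sum((table[i][j] - expected[i][j]) ** 2 / expected[i][j]
               for i in range(len(table)) for j in range(len(table[0])))

def mh_pvalue(table, n_steps=50000, seed=1):
    """Monte Carlo p-value for independence in an r x c table via a
    Metropolis-Hastings walk on the fiber of tables with fixed margins."""
    rng = random.Random(seed)
    r, c = len(table), len(table[0])
    rows = [sum(t) for t in table]
    cols = [sum(col) for col in zip(*table)]
    n = sum(rows)
    exp = [[rows[i] * cols[j] / n for j in range(c)] for i in range(r)]
    cur = [row[:] for row in table]
    obs = chi2_stat(table, exp)
    hits = 0
    for _ in range(n_steps):
        i1, i2 = rng.sample(range(r), 2)
        j1, j2 = rng.sample(range(c), 2)
        e = rng.choice((1, -1))      # basic move: +-1 on a 2x2 minor
        a, b = cur[i1][j1], cur[i2][j2]
        cc, d = cur[i1][j2], cur[i2][j1]
        # Metropolis ratio for target pi(x) proportional to 1/prod(x_ij!)
        if e == 1:
            ratio = (cc * d) / ((a + 1) * (b + 1)) if cc > 0 and d > 0 else 0.0
        else:
            ratio = (a * b) / ((cc + 1) * (d + 1)) if a > 0 and b > 0 else 0.0
        if rng.random() < ratio:     # moves preserve all row/column margins
            cur[i1][j1] += e; cur[i2][j2] += e
            cur[i1][j2] -= e; cur[i2][j1] -= e
        hits += chi2_stat(cur, exp) >= obs
    return hits / n_steps
```

A strongly dependent table yields a small p-value, while a perfectly independent table yields 1, since every sampled table has a chi-square statistic at least as extreme as zero.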
- Title
- Development of a Model To Investigate Inflammation Using Peripheral Blood Mononucleated Cells
- Creator
- Geevarghese Alex, Peter
- Date
- 2023
- Description
-
Modern society faces one of its biggest health risks: postprandial inflammation related to high-calorie diets. If the choice of energy-dense food goes uncontrolled, chronic diseases may result; clinical studies have demonstrated this through the body's post-meal inflammatory response. We aimed to identify the causes of postprandial inflammation in response to various dietary treatments and to provide a model demonstrating them. We made use of in vivo and in vitro techniques and statistics to create the model. The created model would help us design specific treatments to minimize inflammation in response to diet. In addition to identifying vital dietary additives, the model also facilitates the design of individualized interventions to reduce inflammation, thereby improving long-term health outcomes. We aim to understand the clinical observations of diet-induced postprandial inflammation at the molecular level, and we hope to contribute to reducing the impact of the chronic inflammatory disorders associated with postprandial inflammation.
- Title
- Utilizing Concurrent Data Accesses for Data-Driven and AI Applications
- Creator
- Lu, Xiaoyang
- Date
- 2024
- Description
-
In the evolving landscape of data-driven and AI applications, the imperative to reduce data access delay has never been more critical, especially as these applications increasingly underpin modern daily life. Traditionally, architectural optimizations in computing systems have concentrated on data locality, utilizing temporal and spatial locality to enhance data access performance by maximizing data and data block reuse. However, as poor locality is a common characteristic of data-driven and AI applications, utilizing data access concurrency emerges as a promising avenue to optimize the performance of these evolving workloads. This dissertation advocates utilizing concurrent data accesses to enhance performance in data-driven and AI applications, addressing a significant research gap in the integration of data concurrency for performance improvement. It introduces a suite of case studies, including a prefetching framework that dynamically adjusts aggressiveness based on data concurrency, a cache partitioning framework that balances application demands with concurrency, a concurrency-aware cache management framework to reduce costly cache misses, a holistic cache management framework that considers both data locality and concurrency to fine-tune decisions, and an accelerator design for sparse matrix multiplication that optimizes adaptive execution flow and incorporates concurrency-aware cache optimizations. Our comprehensive evaluations demonstrate that the implemented concurrency-aware frameworks significantly enhance the performance of data-driven and AI applications by leveraging data access concurrency. Specifically, our prefetching framework boosts performance by 17.3%, our cache partitioning framework surpasses locality-based approaches by 15.5%, and our cache management framework achieves a 10.3% performance increase over prior works.
Furthermore, our holistic cache management framework enhances performance further, achieving a 13.7% speedup, and our sparse matrix multiplication accelerator outperforms existing accelerators by a factor of 2.1. As optimizing data locality in data-driven and AI applications becomes increasingly challenging, this dissertation demonstrates that utilizing concurrency can still yield significant performance enhancements, offering new insights and actionable examples for the field. This dissertation not only bridges the identified research gap but also establishes a foundation for further exploration of the full potential of concurrency in data-driven and AI applications and architectures, aiming to fulfill the evolving performance demands of modern and future computing systems.
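The abstract does not describe the frameworks' internals. Purely as a sketch of the general idea of concurrency-aware prefetch throttling, a controller that adapts its prefetch degree to measured memory-level parallelism (here, occupancy of the miss-handling registers) might look like the toy function below; the thresholds, bounds, and parameter names are all hypothetical.

```python
def adjust_degree(degree, outstanding_misses, mshr_capacity,
                  lo=0.25, hi=0.75, max_degree=8):
    """Toy prefetch controller: raise the prefetch degree when the miss
    queue (MSHRs) is underused, throttle when concurrency saturates it."""
    occupancy = outstanding_misses / mshr_capacity
    if occupancy < lo:                     # spare concurrency: be aggressive
        return min(degree + 1, max_degree)
    if occupancy > hi:                     # near saturation: back off
        return max(degree - 1, 0)
    return degree                          # in the comfortable band: hold
```

A real design would sample occupancy over an epoch and fold in accuracy feedback, but the same increase/hold/decrease structure applies.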
- Title
- Health and Well-Being Benefits of Different Types of Urban Green Spaces (UGS): A Cross-Sectional Study of Communities in Chicago, U.S.
- Creator
- Kang, Liwen
- Date
- 2023
- Description
-
There are three main interrelated areas of focus in this doctoral research related to urban green spaces (UGS): general well-being, mental health, and physical health. In this study, these three health aspects were analyzed separately. The data on these three health outcomes were collected from the Healthy Chicago Survey (HCS), an annual telephone survey that interviews adults in Chicago, U.S., at randomly selected addresses. Urban green spaces have been associated with better health and well-being. They provide sites for physical activity, buffer air and noise pollution, and alleviate thermal discomfort. Urban green spaces also promote social interaction and increase social cohesion. However, research is limited on the health benefits of different types of UGS exposure. This research aimed to reveal the associations between the provision of different UGS types and urban residents' general, mental, and physical health in Chicago, the third-largest city in the U.S. Urban green space data were collected from the National Land Cover Database (NLCD), the Meter-Scale Urban Land Cover (MULC), and the Chicago Park District (CPD). Different types of UGS were obtained, namely 1) the percent tree canopy cover (TCC) from the first database; 2) the percentage of trees and the percentage of grass from the second database; and 3) the number of parks, park area, and percentage of park area from the third database. Using hierarchical and logistic regression models that controlled for a range of confounding factors (age, gender, race, education level, employment status, and poverty level), this study assessed which type of UGS affects general well-being, mental health, and physical health, respectively.
The results indicated that increased park area was significantly associated with better perceived general health; a higher percentage of TCC was significantly associated with a lower level of psychological distress (PD); and an increased percentage of park area and an increased number of parks were associated with lower odds of being obese. Two micro-scale on-site observations were conducted in the Avalon Park community and the Loop community to analyze other UGS characteristics besides quantity and availability. Other characteristics of UGS, such as quality of facilities, attractiveness, and maintenance, are suggested to be taken into consideration in future studies. The study highlights that different UGS types have various impacts on the general, mental, and physical health of urban residents. By providing scientific evidence, this study may help policymakers, urban planners, landscape architects, and other related professionals make informed decisions on maximizing the health benefits of UGS and achieving social equity. The findings of this study may be applied to other metropolitan cities.
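Findings phrased as "lower odds" come from logistic regression; the core reported quantity, an odds ratio with a 95% confidence interval (Woolf method), can be computed from a 2x2 exposure/outcome table. The counts below are made up for illustration and are not HCS data.

```python
import math

def odds_ratio_ci(a, b, c, d):
    """Odds ratio and Woolf 95% CI from a 2x2 table:
    a, b = obese / not obese among high-greenspace residents;
    c, d = obese / not obese among low-greenspace residents."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log odds ratio
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, lo, hi

# Hypothetical counts: 30/70 obese among high-greenspace residents
# vs. 50/50 among low-greenspace residents.
or_, lo, hi = odds_ratio_ci(30, 70, 50, 50)
```

An odds ratio below 1 with an upper confidence bound also below 1 corresponds to the "significantly lower odds of being obese" pattern the abstract reports; the full models additionally adjust for the listed confounders.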