Search results
(9,701 - 9,720 of 9,975)
Pages
- Title
- Ink and Colored Pencil Drawings, 1981, verso
- Creator
- Henry, Mary Dill, 1913-2009
- Date
- 1981-03-01
- Description
- Untitled ink and colored pencil sketches by Mary Henry. Drawings are found on both sides of the sheet. Inscription on verso reads "March 1 '81."
- Collection
- Mary Dill Henry Papers, 1913-2021
- Title
- Ink and Colored Pencil Drawings, 1981, recto
- Creator
- Henry, Mary Dill, 1913-2009
- Date
- 1981-03-01
- Description
- Untitled ink and colored pencil sketches by Mary Henry. Drawings are found on both sides of the sheet. Inscription on verso reads "March 1 '81."
- Collection
- Mary Dill Henry Papers, 1913-2021
- Title
- Intraoperative Assessment of Surgical Margins in Head And Neck Cancer Resection Using Time-Domain Fluorescence Imaging
- Creator
- Cleary, Brandon M.
- Date
- 2023
- Description
- Rapid and accurate determination of surgical margin depth in fluorescence guided surgery has been a difficult issue to overcome, leading to over- or under-resection of cancerous tissues and follow-up treatments such as 'call-back' surgery and chemotherapy. Current techniques utilizing direct measurement of tumor margins in frozen section pathology are slow, which can prevent surgeons from acting on information before a patient is sent home. Other fluorescence techniques require the measurement of margins via captured images that are overlaid with fluorescent data. This method is flawed, as measuring depth from captured images loses spatial information. Intensity-based fluorescence techniques utilizing tumor-to-background ratios do not decouple the effects of concentration from the depth information acquired. Thus, it is necessary to perform an objective measurement to determine the depths of surgical margins. This thesis focuses on the theory, device design, simulation development, and overall viability of time-domain fluorescence imaging as an alternative method of determining surgical margin depths. Characteristic regressions were generated using a thresholding method on acquired time-domain fluorescence signals and were used to convert time-domain data to depth values. These were applied to an image space to generate a depth map of a modelled tissue sample. All modeling was performed on homogeneous media using Monte Carlo simulations, providing high accuracy at the cost of increased computational time. In practice, the imaging process should be completed in under 20 minutes for a full tissue sample, rather than 20 minutes for a single slice of the sample. This thesis also explores the effects of different thresholding levels on the accuracy of depth determination, as well as the precautions to be taken regarding hardware limitations and signal noise.
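A minimal sketch of the thresholding idea the abstract describes, assuming a per-pixel threshold crossing on the time-resolved signal and a hypothetical linear time-to-depth calibration (the thesis's actual characteristic regressions are not reproduced here):

```python
# Hedged sketch: per-pixel threshold crossing on time-domain fluorescence
# signals, mapped to depth via an assumed (hypothetical) linear calibration.
import numpy as np

def depth_map(signals, times, threshold=0.5, slope=2.0e9, intercept=-1.0):
    """signals: (H, W, T) time-resolved counts; times: (T,) in seconds.
    `threshold`, `slope`, and `intercept` are illustrative placeholders."""
    peak = signals.max(axis=-1, keepdims=True)
    # First time index at which each pixel reaches `threshold` of its peak.
    crossed = signals >= threshold * peak
    first_idx = crossed.argmax(axis=-1)
    t_cross = times[first_idx]                 # (H, W) crossing times
    # Assumed linear calibration regression converts crossing time to depth [mm].
    return slope * t_cross + intercept
```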
- Title
- Investigation in the Uncertainty of Chassis Dynamometer Testing for the Energy Characterization of Conventional, Electric and Automated Vehicles
- Creator
- Di Russo, Miriam
- Date
- 2023
- Description
- For conventional and electric vehicles tested in a standard chassis dynamometer environment, precise regulations on the evaluation of their energy performance exist. However, the regulations do not include requirements on the confidence value to associate with the results. As vehicles become more and more efficient to meet stricter regulatory mandates on emissions, fuel, and energy consumption, traditional testing methods may become insufficient to validate these improvements and may need revision. Without information about the accuracy associated with the results of those procedures, however, adjustments and improvements are not possible, since no frame of reference exists. For connected and automated vehicles, there are no standard testing procedures, and researchers are still in the process of determining whether current evaluation methods can be extended to test intelligent technologies and which metrics best represent their performance. For these vehicles it is even more important to determine the uncertainty associated with these experimental methods and how it propagates to the final results. The work presented in this dissertation focuses on the development of a systematic framework for the evaluation of the uncertainty associated with the energy performance of conventional, electric, and automated vehicles. The framework is based on a known statistical method to determine the uncertainty associated with the different stages and processes involved in the experimental testing and to evaluate how the accuracy of each parameter involved impacts the final results. The results demonstrate that the framework can be successfully applied to existing testing methods, provides a trustworthy value of accuracy to associate with the energy performance results, and can be easily extended to connected and automated vehicle testing to evaluate how novel experimental methods impact the accuracy and confidence of the outputs. The framework can easily be implemented in an existing laboratory environment to incorporate uncertainty evaluation into the results analyzed at the end of each test, and it provides a reference for researchers to evaluate the actual benefits of new algorithms and optimization methods and to understand margins for improvement, and for regulators to assess which parameters to enforce to ensure compliance and projected benefits.
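The abstract identifies the method only as "a known statistical method"; a common choice for this kind of framework is first-order (GUM-style) uncertainty propagation, sketched below with a hypothetical measurement model and numbers:

```python
# First-order (GUM-style) uncertainty propagation: the combined standard
# uncertainty is the root-sum-of-squares of sensitivity-weighted component
# uncertainties. The measurement model and values below are hypothetical.
import numpy as np

def combined_uncertainty(sensitivities, uncertainties):
    """u_c = sqrt(sum_i (dF/dx_i * u_i)^2), for independent inputs x_i."""
    s = np.asarray(sensitivities)
    u = np.asarray(uncertainties)
    return float(np.sqrt(np.sum((s * u) ** 2)))

# Example: energy consumption E = P * t, with assumed P = 5 kW over t = 0.5 h.
P, t = 5.0, 0.5
u_P, u_t = 0.05, 0.002                            # standard uncertainties
u_E = combined_uncertainty([t, P], [u_P, u_t])    # dE/dP = t, dE/dt = P
print(f"E = {P * t:.3f} kWh +/- {u_E:.3f} kWh")
```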
- Title
- Using Niobium surface encapsulation and Rhenium to enhance the coherence of superconducting devices
- Creator
- Crisa, Francesco
- Date
- 2024
- Description
- In recent decades, the scientific community has grappled with escalating complexity, necessitating a more advanced tool capable of tackling increasingly intricate simulations beyond the capabilities of classical computers. This tool, known as a quantum computer, features processors composed of individual units termed qubits. While various methods exist for constructing qubits, superconducting circuits have emerged as a leading approach, owing to their parallels with semiconductor technology. In recent years, significant strides have been made in optimizing the geometry and design of qubits. However, the current bottleneck in the performance of superconducting qubits lies in the presence of defects and impurities within the materials used. Niobium, owing to its desirable properties, such as high critical temperature and low kinetic inductance, stands out as the most prevalent superconducting material. Nonetheless, it is encumbered by a relatively thick oxide layer (approximately 5 nm) exhibiting three distinct oxidation states: NbO, NbO2, and Nb2O5. The primary challenge with niobium lies in the multitude of defects localized within the highly disordered Nb2O5 layer and at the interfaces between the different oxides. In this study, I present an encapsulation strategy aimed at restraining surface oxide growth by depositing a thin layer (5 to 10 nm) of another material in vacuum atop the Nb thin film. This approach exploits the superconducting proximity effect, and it was successfully employed in the development of Josephson junction devices on Nb during the 1980s. In the past two years, tantalum and titanium nitride have emerged as promising alternative materials, with breakthrough qubit publications showcasing coherence times five to ten times superior to those achieved in Nb. The focus will be on the fabrication and RF testing of Nb-based qubits with Ta and Au capping layers. With Ta capping, we have achieved a best (not average) T1 decay time of nearly 600 µs, more than a factor of 10 improvement over bare Nb. This establishes the capping-layer approach as a significant new direction for the development of superconducting qubits. Concurrently with the exploration of materials for encapsulation strategies, identifying materials conducive to enhancing the performance of superconducting qubits is imperative. Ideal candidates should exhibit a thin, low-loss surface oxide and establish a clean interface with the substrate, thereby minimizing defects and potential sources of losses. Rhenium, characterized by an extremely thin surface oxide (less than 1 nm) and nearly perfect crystal structure alignment with commonly used substrates such as sapphire, emerges as a promising material platform poised to elevate the performance of superconducting qubits.
- Title
- The Double-edged Sword of Executive Pay: How the CEO-TMT Pay Gap Influences Firm Performance
- Creator
- Haddadian Nekah, Pouya
- Date
- 2024
- Description
- This study examines the relationship between the chief executive officer (CEO) and top management team (TMT) pay gap and consequent firm performance. Drawing on tournament theory and equity theory, I argue that the effect of the CEO-TMT pay gap on consequent firm performance is non-monotonic. Using data from 1995 to 2022 on S&P 1500 US firms, I explicate an inverted U-shaped relationship, such that an increase in the pay gap leads to an increase in firm performance up to a certain point, after which it declines. Additionally, multilevel analyses reveal that this curvilinear relationship is moderated by attributes of the TMT and of the industry in which the firm competes. My findings show that firms with higher TMT gender diversity suffer lower performance loss due to wider pay gaps. Furthermore, when firm executives are paid more than industry norms, or when the firm has a long-tenured CEO, firm performance becomes less sensitive to larger CEO-TMT pay gaps. Lastly, when the firm competes in a masculine industry, firm performance is more negatively affected by larger CEO-TMT pay gaps. Contrary to my expectations, gender-diversity-friendly firm policies failed to influence the relationship between the CEO-TMT pay gap and firm performance.
- Title
- Improving Niobium Superconducting Radio-Frequency Cavities by Studying Tantalum
- Creator
- Helfrich, Halle
- Date
- 2023
- Description
- Niobium superconducting radio-frequency (SRF) cavities are widely used accelerating structures. Improvements in both quality factor, Q0, and maximum accelerating gradient, Eacc, have been made to SRF cavities by introducing new processing techniques. These breakthroughs include processes such as nitrogen doping (N-doping) and infusion, electrochemical polishing (EP), and high-pressure rinsing (HPR) [1]. There is still abundant opportunity to improve the cavities or, rather, the material they're primarily composed of: niobium. A focus here is the role the native oxide of Nb plays in SRF cavity performance. The values of interest in a given cavity are its quality factor Q0, maximum accelerating gradient Eacc, and surface resistance Rs. This work characterizes Nb and Ta foils prepared under identical conditions using X-ray photoelectron spectroscopy (XPS) to compare surface oxides and better understand RF loss mechanisms in Nb SRF cavities and qubits. It is well established that Ta qubits exhibit much longer coherence times than Nb qubits, which is probably due to the larger RF losses in Nb oxide. By studying tantalum, an element similar to niobium, the mechanisms of the losses that originate in the oxide and suboxide layers present on the surface of Nb cavities might finally be unlocked. We find noticeable differences in the oxides of Nb and Ta formed by air exposure of clean foils. In particular, Ta does not display the TaO2 suboxide in XPS, while Nb commonly shows NbO2. This suggests that suboxides are an additional contributor to RF losses. We also suggest that thin Ta film coatings of Nb SRF cavities may be a way of increasing Q0. It is in the interest of the accelerator community to fully understand the surface impurities present in Nb SRF cavities so that strategies for mitigating their effects can be proposed.
- Title
- Improving Localization Safety for Landmark-Based LiDAR Localization System
- Creator
- Chen, Yihe
- Date
- 2024
- Description
- Autonomous ground robots have gained traction in various commercial applications, with established safety protocols covering subsystem reliability, control algorithm stability, path planning, and localization. This thesis specifically delves into the localizer, a critical component responsible for determining the vehicle's state (e.g., position and orientation), assesses compliance with localization safety requirements, and proposes methods for enhancing localization safety. Within the robotics domain, diverse localizers are utilized, such as scan-matching techniques like normal distribution transformations (NDT), the iterative closest point (ICP) algorithm, probabilistic map methods, and semantic map-based localization. Notably, NDT stands out as a widely adopted standalone laser localization method, prevalent in autonomous driving software such as the Autoware and Apollo platforms. In addition to the mentioned localizers, common state estimators include variants of the Kalman filter, particle filter-based estimators, and factor graph-based estimators. The evaluation of localization performance typically involves quantifying the estimated state variance for these state estimators. While various localizer options exist, this study focuses on those utilizing extended Kalman filters and factor graph methods. Unlike methods like the NDT and ICP algorithms, extended Kalman filter and factor graph based approaches guarantee bounding of the estimated state uncertainty and have been extensively researched for integrity monitoring. Common variance analysis, employed for sensor readings and state estimators, has limitations, primarily focusing on non-faulted scenarios under nominal conditions. This approach proves impractical for real-world scenarios and falls short for safety-critical applications like autonomous vehicles (AVs). To overcome these limitations, this thesis utilizes a dedicated safety metric: integrity risk. Integrity risk assesses the reliability of a robot's sensory readings and localization algorithm performance under both faulted and non-faulted conditions. With a proven track record in aviation, integrity risk has recently been applied to robotics applications, particularly for evaluating the safety of lidar localization. Despite the significance of improving localization integrity risk through laser landmark manipulation, this remains an underexplored territory. Existing research on robot integrity risk primarily focuses on the vehicles themselves. To comprehensively understand the integrity risk of a lidar-based localization system, as addressed in this thesis, an exploration of lidar measurement fault modes is essential, a topic covered in this thesis. The primary contributions of this thesis include: a realistic error estimation method for state estimators in autonomous vehicles navigating with pole-shaped lidar landmark maps, along with a compensatory method; a method for quantifying the risk associated with unmapped associations in urban environments, enhancing the realism of values provided by the integrity risk estimator; and a novel approach to improve the localization integrity of autonomous vehicles equipped with lidar feature extractors in urban environments through minimal environmental modifications, mitigating the impact of unmapped association faults. Simulation and experimental results are presented and discussed to illustrate the impact of each method, providing further insights into their contributions to localization safety.
- Title
- Independence and Graphical Models for Fitting Real Data
- Creator
- Cho, Jason Y.
- Date
- 2023
- Description
- Given some real-life dataset where the attributes take on categorical values, with a corresponding r(1) × r(2) × … × r(m) contingency table with nonzero rows or nonzero columns, we test the goodness-of-fit of various independence models to the dataset using a variation of Metropolis-Hastings that uses Markov bases as a tool to get a Monte Carlo estimate of the p-value. This variation of Metropolis-Hastings can be found in Algorithm 3.1.1. Next we consider the problem: "out of all possible undirected graphical models, each associated to some graph with m vertices, that we test to fit on our dataset, which one best fits the dataset?" Here, the m attributes are labeled as vertices of the graph. We would have to conduct 2^(m(m-1)/2) goodness-of-fit tests, since there are 2^(m(m-1)/2) possible undirected graphs on m vertices. Instead, we consider a backwards-selection likelihood-ratio test algorithm. We first start with the complete graph G = K(m) and call the corresponding undirected graphical model ℳ(G) the parent model. Then for each edge e in E(G), we repeatedly apply the likelihood-ratio test to test the relative fit of the child model ℳ(G−e) vs. the parent model ℳ(G), where ℳ(G−e) ⊆ ℳ(G). More details on this iterative process can be found in Algorithm 4.1.3. For our dataset, we use the alcohol dataset found at https://www.kaggle.com/datasets/sooyoungher/smoking-drinking-dataset, where the four attributes we use are "Gender" (male, female), "Age", "Total cholesterol (mg/dL)", and "Drinks alcohol or not?". After testing the goodness-of-fit of three independence models corresponding to the independence statements "Gender vs Drink or not?", "Age vs Drink or not?", and "Total cholesterol vs Drink or not?", we found that the data are consistent with the two independence models corresponding to "Age vs Drink or not?" and "Total cholesterol vs Drink or not?". And after applying the backwards-selection likelihood-ratio method on the alcohol dataset, we found that the data are consistent with the undirected graphical model associated to the complete graph minus the edge {"Total cholesterol", "Drink or not?"}.
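A hedged sketch of the Markov-basis Metropolis-Hastings test the abstract describes, specialized to the independence model on a two-way table (Diaconis-Sturmfels-style moves); this illustrates the idea, not the thesis's Algorithm 3.1.1 itself. The returned fraction of sampled tables whose chi-square statistic meets or exceeds the observed one is the Monte Carlo p-value estimate.

```python
# Markov-basis MH sampler over contingency tables with fixed margins.
import numpy as np
from scipy.special import gammaln

def chi2_stat(table, expected):
    return ((table - expected) ** 2 / expected).sum()

def mh_pvalue(table, n_steps=50_000, seed=0):
    rng = np.random.default_rng(seed)
    I, J = table.shape
    expected = np.outer(table.sum(1), table.sum(0)) / table.sum()
    observed = chi2_stat(table, expected)
    current = table.astype(float).copy()
    exceed = 0
    for _ in range(n_steps):
        # Markov basis for independence: +/-1 on a random 2x2 minor,
        # which preserves all row and column margins.
        i1, i2 = rng.choice(I, size=2, replace=False)
        j1, j2 = rng.choice(J, size=2, replace=False)
        move = np.zeros_like(current)
        move[i1, j1] = move[i2, j2] = 1.0
        move[i1, j2] = move[i2, j1] = -1.0
        proposal = current + rng.choice([-1.0, 1.0]) * move
        if (proposal >= 0).all():
            # Target: hypergeometric law on tables with fixed margins,
            # pi(T) proportional to 1 / prod(T_ij!), via log-gamma.
            log_ratio = gammaln(current + 1).sum() - gammaln(proposal + 1).sum()
            if np.log(rng.random()) < log_ratio:
                current = proposal
        exceed += chi2_stat(current, expected) >= observed
    return exceed / n_steps
```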
- Title
- Development of a Model To Investigate Inflammation Using Peripheral Blood Mononucleated Cells
- Creator
- Geevarghese Alex, Peter
- Date
- 2023
- Description
- Modern society faces one of its biggest health risks in high-calorie, diet-related postprandial inflammation. Uncontrolled consumption of energy-dense food may cause chronic disease, and clinical studies have demonstrated this through the body's post-meal inflammatory response. We aimed to identify the causes of postprandial inflammation in response to various dietary treatments and to provide a model that demonstrates them, making use of in vivo and in vitro techniques together with statistical analysis. The resulting model would help design specific treatments that minimize inflammation in response to diet. In addition to identifying key dietary components, the model also facilitates the design of individualized interventions to reduce inflammation, thereby improving long-term health outcomes. We aim to understand the clinical observations of diet-induced postprandial inflammation at the molecular level and to contribute to reducing the impact of the chronic inflammatory disorders associated with postprandial inflammation.
- Title
- Large Language Model Based Machine Learning Techniques for Fake News Detection
- Creator
- Chen, Pin-Chien
- Date
- 2024
- Description
- With advanced technology, it is widely recognized that everyone owns one or more personal devices. Consequently, people are evolving into content creators on social media or streaming platforms, sharing their personal ideas regardless of their education or expertise level, and distinguishing fake news is becoming increasingly crucial. However, recent research only presents comparisons of fake news detection between one or more models across different datasets. In this work, we applied Natural Language Processing (NLP) techniques with Naïve Bayes and DistilBERT machine learning methods, combining and augmenting four datasets. The results show that the balanced accuracy is higher than the average reported in recent studies. This suggests that our approach holds promise for improving fake news detection in the era of widespread content creation.
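A minimal sketch of the Naïve Bayes arm of the approach described above, assuming a merged dataset with text and label columns; the file name, column names, and hyperparameters are illustrative, and the DistilBERT arm is omitted:

```python
# TF-IDF + multinomial Naive Bayes baseline for fake news classification,
# evaluated with balanced accuracy as in the abstract.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import balanced_accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

df = pd.read_csv("combined_fake_news.csv")  # hypothetical merged dataset
X_train, X_test, y_train, y_test = train_test_split(
    df["text"], df["label"], test_size=0.2, stratify=df["label"], random_state=0)

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2), min_df=2),
                      MultinomialNB())
model.fit(X_train, y_train)
print("balanced accuracy:",
      balanced_accuracy_score(y_test, model.predict(X_test)))
```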
- Title
- Retrospective Quantitative T1 Imaging to Examine Characteristics of Multiple Sclerosis Lesions
- Creator
- Young, Griffin James
- Date
- 2024
- Description
- Quantitative MRI plays an essential role in assessing tissue abnormality and disease progression in multiple sclerosis (MS). Specifically, T1 relaxometry is gaining popularity as elevated T1 values have been shown to correlate with increased inflammation, demyelination, and gliosis. The predominant issue is that relaxometry requires parametric mapping through advanced imaging techniques not commonly included in standard clinical protocols. This leaves an information gap in large clinical datasets from which quantitative mapping could have been performed. We introduce T1-REQUIRE, a retrospective T1 mapping method that approximates T1 values from a single T1-weighted MR image. This method has already been shown to be accurate within 10% of a clinically available reference standard in healthy controls but will be further validated in MS cohorts. We also aim to determine T1-REQUIRE's statistical significance as a unique biomarker for the assessment of MS lesions as they relate to clinical disability and disease burden. A 14-subject comparison between T1-REQUIRE maps derived from 3D T1-weighted turbo field echoes (3D T1w TFE) and an inversion-recovery fast field echo (IR-FFE) revealed a whole-brain voxel-wise Pearson's correlation of r = 0.89 (p < 0.001) and a mean bias of 3.99%. In MS white matter lesions, r = 0.81, R2 = 0.65 (p < 0.001, N = 159), bias = 10.07%, and in normal-appearing white matter (NAWM), r = 0.82, R2 = 0.67 (p < 0.001), bias = 9.48%. Mean lesional T1-REQUIRE and MTR correlated significantly (r = -0.68, p < 0.001, N = 587), similar to previously published literature. Median lesional MTR correlated significantly with EDSS (rho = -0.34, p = 0.037), and lesional T1-REQUIRE exhibited significant correlations with global brain tissue atrophy as measured by brain parenchymal fraction (BPF) (r = -0.41, p = 0.010, N = 38). Multivariate linear regressions showed that T1-REQUIRE in NAWM provided meaningful statistical relationships with EDSS (β = 0.03, p = 0.027, N = 38), as did mean MTR values in the thalamus (β = -0.27, p = 0.037, N = 38). A new spoiled gradient echo variation of T1-REQUIRE was assessed as a proof of concept in a small 5-subject MS cohort compared with IR-FFE T1 maps, with a whole-brain voxel-wise correlation of r = 0.88, R2 = 0.77 (p < 0.001) and bias = 0.19%. Lesional T1 comparisons reached a correlation of r = 0.75, R2 = 0.56 (p < 0.001, N = 42) and bias = 10.81%. The significance of these findings is that there is potential to provide supplementary quantitative information in clinical datasets where quantitative protocols were not implemented. Large MS data repositories previously containing only structural T1-weighted images may now be used in big-data relaxometric studies, with the potential to lead to new findings in newly uncovered datasets. Furthermore, T1-REQUIRE has the potential for immediate use in clinics where standard T1 mapping sequences cannot be readily implemented.
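For readers wanting to reproduce the style of agreement metrics quoted above (whole-brain voxel-wise Pearson's r and mean percent bias between two co-registered T1 maps), a small sketch follows; the file names and the crude positivity mask are assumptions:

```python
# Voxel-wise agreement between a T1-REQUIRE-style map and a reference T1 map.
import numpy as np
import nibabel as nib

t1_est = nib.load("t1_require.nii.gz").get_fdata()   # hypothetical file names
t1_ref = nib.load("t1_ir_ffe.nii.gz").get_fdata()
mask = (t1_ref > 0) & (t1_est > 0)   # crude mask; a real study would segment brain

x, y = t1_est[mask], t1_ref[mask]
r = np.corrcoef(x, y)[0, 1]                  # voxel-wise Pearson correlation
bias_pct = 100 * np.mean((x - y) / y)        # mean percent bias vs. reference
print(f"voxel-wise r = {r:.2f}, mean bias = {bias_pct:.2f}%")
```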
- Title
- Large-Signal Transient Stability and Control of Inverter-Based Resources
- Creator
- Wang, Duo
- Date
- 2024
- Description
- Renewable generation, including solar photovoltaic (PV) systems, type 3 and 4 wind turbine generation systems (WTG), and battery energy storage systems (BESS), as well as high-voltage direct current (HVDC) and flexible alternating current transmission system (FACTS) devices, is being connected to the bulk power system (BPS) at increasing penetration levels via power electronic (PE) converters as the interface. These are referred to as inverter-based resources (IBRs) on the transmission and sub-transmission levels or distributed energy resources (DERs) on the distribution level. An IBR is almost entirely defined by its control algorithms and is found to be more prone to large disturbances, owing to its smaller capacity and its lack of the conventional synchronous machine's (SM) intrinsic synchronization characteristics and mechanical inertia. These issues motivate this dissertation to study the large-signal transient stability and control of IBRs for reliable grid integration and rapid grid transformation. For large-signal stability analysis, Lyapunov-based methods are the fundamental theory used to characterize stability issues with analytical solutions, although other non-Lyapunov methods can also be very helpful. A main obstacle hindering the widespread adoption of the Lyapunov stability analysis method is the difficulty of finding a proper Lyapunov function candidate for a higher-dimensional nonlinear system. The port-Hamiltonian (PH) nonlinear control theory is explored in this dissertation as a promising theoretical framework for addressing this challenging issue. A PH-based tracking and robust control method is proposed to facilitate the practical application of the PH framework in IBR controls. In addition, considering that the typical grid-forming (GFM) IBR control with a first-order low-pass filter (LPF) block usually involves a control saturation function for protection under abnormal operating conditions, with an anti-windup issue in practical implementation, a PH-based bounded LPF (PH-BLPF) control is proposed to incorporate this in the large-signal PH interconnection modeling framework while preserving robust tracking Lyapunov stability with improved transient dynamic performance and stability margin. Moreover, specific real-world transient synchronization stability issues, such as large grid voltage fault disturbances, are studied. In addition, to meet recent emerging IBR grid code requirements, such as current magnitude limitation, grid support functions, and the fault recovery capability of GFM voltage source converters (GFM-VSCs), a virtual impedance-based current-limiting GFM control with enhanced transient stability and grid support is proposed.
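For context, PH-based designs of this kind typically start from the standard input-state-output port-Hamiltonian form (a textbook form, not an equation quoted from the dissertation):

$$\dot{x} = \big[J(x) - R(x)\big]\,\nabla_x H(x) + g(x)\,u, \qquad y = g(x)^{\top}\nabla_x H(x)$$

Here J(x) = -J(x)^T is the power-conserving interconnection matrix, R(x) ⪰ 0 the dissipation matrix, and the Hamiltonian H serves as a natural Lyapunov function candidate, since along trajectories $\dot{H} = -\nabla_x H^{\top} R(x)\, \nabla_x H + y^{\top}u \le y^{\top}u$, which is the passivity property that Lyapunov-based large-signal analysis can build on.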
- Title
- Two Essays on Mergers and Acquisitions
- Creator
- Xu, Yang
- Date
- 2024
- Description
- This dissertation is composed of two self-contained chapters that both relate to mergers and acquisitions (M&A). In the first essay, we examine the effect of Delaware (DE) reincorporation on firms' post-IPO M&A behavior. We find that firms' DE reincorporation decisions enhance the likelihood of engaging in M&A as targets. However, as a tradeoff, DE-reincorporated firms receive lower takeover valuations than firms that remain incorporated in their home states, and the acquisition of reincorporated firms is less likely to be successful. Our second essay explores the role of the options market in price discovery for M&A. We find that the predictive power of changes in the implied volatility of the target firm's stock for the takeover outcome is statistically and economically significant. Risk arbitrage portfolios incorporating filters derived from options on the stocks of target firms generate annualized risk-adjusted abnormal returns between 2.6% and 5%, depending on the portfolio weighting method, the threshold of the filters for the implied volatility change, and the asset pricing models applied for abnormal returns. The results are robust to different empirical setups and are not explained by traditional factors.
- Title
- Heterogeneous Workloads Study towards Large-scale Interconnect Network Simulation
- Creator
- Wang, Xin
- Date
- 2023
- Description
- High-bandwidth, low-latency interconnect networks play a key role in the design of modern high-performance computing (HPC) systems. The ever-increasing need for higher bandwidth and higher message rates has driven the design of low-diameter interconnect topologies like variants of the dragonfly. As these hierarchical networks become increasingly dominant, interference caused by resource sharing can lead to significant network congestion and performance variability. Meanwhile, with the rapid growth of machine learning applications, the workloads of future HPC systems are anticipated to be a mix of scientific simulation, big data analytics, and machine learning applications. However, little work has been conducted to understand the performance implications of co-running heterogeneous workloads on large-scale dragonfly systems. There is a greater need to study how different interconnect technologies affect workload performance and how conventional scientific applications interact with emerging big data applications at the underlying interconnect level. In this work, we first present a comparative analysis exploring the communication interference for traditional HPC applications by analyzing the trade-off between localizing communication and balancing network traffic. We conduct trace-based simulations for applications with different communication patterns, using multiple job placement policies and routing mechanisms. Then we develop a scalable workload manager that provides an automatic framework to facilitate hybrid workload simulation. We investigate various hybrid workloads and navigate various application-system configurations for a deeper understanding of the performance implications of a diverse mix of workloads on current and future supercomputers. Finally, we propose a scalable framework, Union+, that enables simulation of communication and I/O simultaneously. By combining different levels of abstraction, Union+ is able to efficiently co-model the communication and I/O traffic on HPC systems equipped with flash-based storage. We conduct experiments with different system configurations, showing how Union+ can help system designers assess the usefulness of future technologies in next-generation HPC machines.
- Title
- Predictive energy efficient control framework for connected and automated vehicles in heterogeneous traffic environments
- Creator
- Vellamattathil Baby, Tinu
- Date
- 2023
- Description
- Within the automotive industry, there is a significant emphasis on enhancing fuel efficiency and mobility and on reducing emissions. In this context, connected and automated vehicles (CAVs) represent a significant advancement, as they can optimize their acceleration patterns to improve their fuel efficiency. However, when CAVs coexist with human-driven vehicles (HDVs) on the road, suboptimal conditions arise, which adversely affect the performance of CAVs. This research analyzes the automation capabilities of production vehicles to identify scenarios where their performance is suboptimal and proposes a merge-aware modification of the adaptive cruise control (ACC) method for highway merging situations. The proposed algorithm addresses the issue of sudden gap and velocity changes in relation to the preceding vehicle, thereby reducing substantial braking during merging events and resulting in improved energy efficiency. This research also presents a data-driven model for predicting the velocity and position of the preceding vehicle, as well as a robust model predictive control (MPC) strategy that optimizes fuel consumption while accounting for prediction inaccuracies. Another focus of this research is a novel suggestion-based control framework for interactive mixed traffic environments that leverages the emerging connectivity between vehicles and with infrastructure. It is based on MPC and optimizes the fuel efficiency of CAVs in heterogeneous or mixed traffic environments (i.e., including both CAVs and HDVs). In this suggestion-based control framework, the CAVs provide non-binding velocity and lane-change suggestions for the HDVs to follow to improve the fuel efficiency of both the CAVs and the HDVs. To achieve this, the host CAV must devise its own fuel-efficient control solution and determine the recommendations to convey to its preceding HDV. It is assumed that the CAVs can communicate with the HDVs via Vehicle-to-Vehicle (V2V) communication, while Signal Phase and Timing (SPaT) information is accessed via Vehicle-to-Infrastructure (V2I) communication. These velocity suggestions remain constant for a predefined period, allowing the driver to adjust their speed accordingly. The suggestions are non-binding, i.e., a driver can choose not to follow the suggested velocity. For this control framework to function, we present a velocity prediction model based on experimental data that captures the response of an HDV to different suggested velocities, and a robust approach to ensure collision avoidance. The velocity prediction's accuracy is validated with the experimental data (on a table-top drive simulator), and the results are presented. In cases of low CAV penetration, a CAV needs to provide suggestions to multiple surrounding HDVs, and incorporating the suggestions to all the HDVs as decision variables in the optimal control problem can be computationally expensive. Hence, a suggestion-based hierarchical energy-efficient control framework is also proposed, in which a CAV takes into account the interactive nature of the environment by jointly planning its own trajectory and evaluating the suggestions to the surrounding HDVs. Joint planning requires solving the problem in the joint state and action space, and this research develops a Monte Carlo Tree Search (MCTS)-based trajectory planning approach for the CAV. Since the joint action and state space grows exponentially with the number of agents and can be computationally expensive, an adaptive action space is proposed by pruning the action space of each agent so that actions resulting in unsafe trajectories are eliminated. The trajectory planning approach is followed by a low-level model predictive control (MPC)-based motion controller, which aims at tracking the reference trajectory in an optimal fashion. Simulation studies demonstrate the proposed control strategy's efficacy compared to existing baseline methods.
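An illustrative sketch (not the dissertation's controller) of a longitudinal MPC of the kind described: it trades speed tracking against acceleration effort, a common proxy for fuel use in eco-driving, under a safe-gap constraint to an assumed preceding-vehicle prediction; all parameter values are hypothetical:

```python
# Longitudinal eco-driving MPC as a convex QP, solved with cvxpy.
import cvxpy as cp
import numpy as np

dt, N = 0.5, 20                       # step [s] and horizon length
v_des, d_safe = 15.0, 10.0            # desired speed [m/s], safe gap [m]
s_lead = 30.0 + 12.0 * dt * np.arange(N + 1)  # assumed lead-vehicle positions

s = cp.Variable(N + 1)                # ego position [m]
v = cp.Variable(N + 1)                # ego velocity [m/s]
a = cp.Variable(N)                    # ego acceleration [m/s^2]

constraints = [s[0] == 0, v[0] == 12.0]
for k in range(N):
    constraints += [s[k + 1] == s[k] + v[k] * dt,        # kinematics
                    v[k + 1] == v[k] + a[k] * dt,
                    s[k + 1] <= s_lead[k + 1] - d_safe,  # collision avoidance
                    v[k + 1] >= 0, cp.abs(a[k]) <= 3.0]

# Penalizing acceleration is a simple stand-in for a fuel-consumption model.
cost = cp.sum_squares(v - v_des) + 5.0 * cp.sum_squares(a)
cp.Problem(cp.Minimize(cost), constraints).solve()
print("first planned acceleration:", a.value[0])
```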
- Title
- Integrating Deep Learning And Innovative Feature Selection For Improved Short-Term Price Prediction In Futures Markets
- Creator
- Tian, Tian
- Date
- 2024
- This study presents a novel approach for predicting short-term price movements in futures markets using advanced deep-learning models, namely LSTM, CNN_LSTM, and GRU_LSTM. By incorporating cophenetic correlation in feature preparation, the study addresses the challenges posed by sudden fluctuations and price spikes while maintaining diversification and utilizing a limited number of variables derived from daily public data. However, the effectiveness of adding features relies on appropriate feature selection, even when employing powerful deep-learning models. To overcome this limitation, an innovative feature-selection method is proposed that combines cophenetic correlation-based hierarchical linkage clustering with XGBoost's feature-importance ranking. This method efficiently identifies and integrates the most relevant features, significantly improving price prediction accuracy. The empirical findings contribute valuable insights into price prediction accuracy and the potential integration of algorithmic and intuitive approaches in futures markets. Moreover, the developed feature preparation method enhances the performance of all the deep learning models studied, including LSTM, CNN_LSTM, and GRU_LSTM. This study contributes to the advancement of price prediction techniques by demonstrating the potential of integrating deep learning models with innovative feature-selection methods. Traders and investors can leverage this approach to enhance their decision-making processes and optimize trading strategies in dynamic and complex futures markets.
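A hedged sketch of the feature-selection idea described above: cluster correlated features with hierarchical linkage, then keep the most XGBoost-important feature from each cluster. The cluster count, regression target, and names are illustrative, not the study's exact procedure:

```python
# Correlation-clustering + XGBoost-importance feature selection.
import numpy as np
import pandas as pd
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform
from xgboost import XGBRegressor

def select_features(X: pd.DataFrame, y, n_clusters=10):
    # Correlation distance between features; average-linkage hierarchy.
    dist = 1 - X.corr().abs().to_numpy()
    Z = linkage(squareform(dist, checks=False), method="average")
    labels = fcluster(Z, t=n_clusters, criterion="maxclust")

    # Rank features by XGBoost importance, then keep the best per cluster.
    imp = XGBRegressor(n_estimators=200).fit(X, y).feature_importances_
    keep = (pd.DataFrame({"feat": X.columns, "cluster": labels, "imp": imp})
            .sort_values("imp", ascending=False)
            .drop_duplicates("cluster")["feat"])
    return list(keep)
```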
- Title
- Design and Synthesis of New Sulfur Cathodes Containing Polysulfide Adsorbing Materials
- Creator
- Suzanowicz, Artur M
- Date
- 2023
- Lithium-sulfur battery (LSB) technology has tremendous prospects to substitute lithium-ion battery (LIB) technology due to its high theoretical specific capacity and energy density. However, the escape of polysulfide intermediates (produced during the redox reaction process) from the cathode structure is the primary reason for rapid capacity fading. Suppressing the polysulfide shuttle (PSS) is a viable solution for this technology to move closer to commercialization and supersede the established LIB technology. In this dissertation, I have analyzed the challenges faced by LSBs and selected methods and materials to address these problems. I have concluded that in order to further pioneer LSBs, it is necessary to address these essential features of the sulfur cathode: superior electrical conductivity to ensure faster redox reaction kinetics and high discharge capacity, high pore volume of the cathode host to maximize sulfur loading/utilization, and polar polysulfide-resistive materials to anchor and suppress the migration of lithium polysulfides. Furthermore, a versatile, low-cost, and practical scalable synthesis method is essential for translating bench-level development to large-scale production. This dissertation covers the design and synthesis of new scalable cathode structures for lithium-sulfur batteries that are inexpensive and highly functional. The rationally chosen cathode components accommodate sulfur, suppress the migration of polysulfide intermediates via chemical interactions, enhance redox kinetics, and provide electrical conductivity to sulfur, rendering excellent electrochemical performance in terms of high initial specific capacity and good long-term cycling performance. TiO2, Ni12P5, and g-C3N4 as polysulfide-adsorbing materials (PAMs) have been fully studied in this thesis, along with three distinct types of host structures for lithium-sulfur batteries: polymer, carbon cloth, and reduced graphene oxide. I have created adaptable bulk synthesis techniques that are inexpensive, easily scalable, and suitable for bench-level research as well as large-scale manufacturing. The exceptional performance and scalability of these materials make my cathodes attractive options for the commercialization of lithium-sulfur batteries.
- Title
- Effect of organic acid treatment in reducing Salmonella on six types of sprout seeds
- Creator
- Yang, Dachuan
- Date
- 2023
- Fresh sprouts present a special food safety concern as their growing conditions also favor the growth of pathogens such as Salmonella. Contamination in sprouts often originates from the seeds used for sprouting. The Produce Safety Rule requires that seeds used to grow sprouts be treated to reduce pathogens. The treatments may be applied by sprout growers or by seed suppliers. Although 20,000 ppm calcium hypochlorite is the most used seed treatment method, the high chlorine level can be hazardous to workers and the environment. Alternative seed treatment methods that are safe and environmentally friendly are needed. In addition, a post-treatment drying step is needed when seed suppliers use chemical seed treatment methods. This study evaluated the efficacy of an organic acid solution for reducing Salmonella on six types of seeds (alfalfa, clover, radish, mung bean, onion, and broccoli). The impact of treatment on seed germination and sprout yield was also examined. Ten grams of seeds inoculated with a five-serotype cocktail of Salmonella were pre-rinsed twice with 40 ml of water and treated with 75.7 ml of the organic acid solution for 1 hour. The treated seeds were either not rinsed or rinsed twice with 40 ml of water before being dried in a biological safety cabinet for 24 hours. The Salmonella level, germination percentage, and sprout yield were compared for seeds treated with water; seeds treated with the organic acid solution; seeds treated with the organic acid solution, dried, and rinsed; and seeds treated with the organic acid solution, dried, and not rinsed. Salmonella reductions achieved with the organic acid solution treatment were less than 0.5 log CFU/g without drying, 0.6-2.0 log CFU/g with drying and rinsing, and 1.6-2.9 log CFU/g with drying and no rinsing. Drying significantly enhanced the treatment efficacy (p < 0.05) on alfalfa, radish, mung bean, and onion seeds. If seeds were not rinsed after treatment, the log reductions achieved on mung bean and onion seeds were significantly higher (p < 0.05). If seeds were treated and rinsed, the germination rates of the six types of seeds were not affected (p > 0.05), regardless of whether the seeds were dried or not. All treatments significantly decreased the sprout yield of clover seeds by 13% (p < 0.05). If seeds were not rinsed after treatment, the germination rates of clover and broccoli seeds were reduced by 7 and 9%, respectively, and the sprout yield of alfalfa seeds was reduced by 35%. Overall, the organic acid solution was less effective than 20,000 ppm calcium hypochlorite in reducing Salmonella on sprout seeds, although a drying step after treatment could improve the treatment efficacy.
- Title
- Utilizing Image Processing in Evaluation of Fibroblast Stimulation for Collagen Remodeling
- Creator
- Yoon, Shin Hae
- Date
- 2023
- This research delves into image processing as a pivotal component in the evaluation of fibroblast stimulation for collagen remodeling. The study focuses on the synergy between electrospun silk fibroin-carbon nanotube (SF-CNT) fibers and electrical stimulation, working in harmony to enhance tissue regeneration. Building upon our previous work, we successfully engineered SF-CNT fibers through the electrospinning process, yielding highly aligned structures reminiscent of natural extracellular matrix proteins. These fibers were given water stability through post-treatment with ethanol vapor, while subtle additions of carbon nanotubes (CNTs) significantly improved fiber alignment, strength, and conductivity without compromising biocompatibility. This platform served as a cell culture matrix for fibroblasts harvested from pelvic organ prolapse (POP) patients, facilitating electrical stimulation that triggered a substantial increase in collagen production. In this study, we used various image-processing software tools, including ImageJ and Python, to analyze immunostained images of fibroblasts obtained from POP patients. Under carefully tailored electrical stimulation conditions, the stimulated cells exhibited up to an 11.97-fold increase in alpha-smooth muscle actin (α-SMA) expression, signifying the successful activation of myofibroblasts. Additionally, in an animal model employing LOX-knockout mice to mimic the collagen disorders associated with POP, the application of the electrical stimulation conditions optimized for patient 003 led to a remarkable surge in collagen production and structural enhancement, underlining the potential of electrical stimulation in expediting tissue remodeling. Intriguingly, fibroblasts from patients 005 and 006 exhibited a distinct response, shedding light on the influence of POP severity on cellular behavior. This study reinforces the imperative of personalized therapeutic approaches, emphasizing the need to customize treatment strategies to individual patient characteristics through innovative biological image analysis techniques.
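In the spirit of the Python-based image analysis mentioned above, a small sketch of quantifying a marker-positive area fraction by Otsu thresholding; the file name and the single-channel assumption are illustrative, and fold changes would come from comparing stimulated vs. control fractions:

```python
# Area-fraction quantification of an immunostained marker via Otsu threshold.
import numpy as np
from skimage import filters, io

img = io.imread("alpha_sma_stain.tif")            # hypothetical stained image
channel = img[..., 1] if img.ndim == 3 else img   # e.g., green fluorescence

thresh = filters.threshold_otsu(channel)          # global Otsu threshold
positive = channel > thresh                       # marker-positive pixels
area_fraction = positive.mean()                   # fraction of positive area
print(f"marker-positive area fraction: {area_fraction:.3f}")
```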