Search results
(9,381 - 9,400 of 9,660)
Pages
- Title
- Colored pencil drawing, undated
- Creator
- Henry, Mary Dill, 1913-2009
- Description
-
Untitled colored pencil drawing by Mary Henry, date unknown.
- Collection
- Mary Dill Henry Papers, 1913-2021
- Title
- Photograph of the Aaron Galleries booth at the Art 20 art fair, including Mary Henry's The Chelsea Way, New York, New York, 2006
- Date
- 2006
- Description
-
Photograph of the Aaron Galleries booth at the Art 20 exhibition at the Park Avenue Armory in 2006, including Mary Henry's painting The Chelsea Way visible at right. Inscription on verso: "Art 20 - Park Ave. Armory 2006 Mary Henry 'The Chelsea Way' on the aisle Aaron Galleries Booth."
- Collection
- Mary Dill Henry Papers, 1913-2021
- Title
- TechNews, March 29, 2011
- Creator
- Illinois Institute of Technology
- Date
- 2011-03-29
- Collection
- Technology News print collection, 1940-2019
- Title
- Ink Drawings, 1981
- Creator
- Henry, Mary Dill, 1913-2009
- Date
- 1981-02-17
- Description
-
Untitled ink drawings by Mary Henry. Inscription on verso reads "8' x 6' Feb 17 81" and also contains what appear to be mathematical calculations.
- Collection
- Mary Dill Henry Papers, 1913-2021
- Title
- Ink and Colored Pencil Drawings, 1981
- Creator
- Henry, Mary Dill, 1913-2009
- Date
- 1981-03-01
- Description
-
Untitled ink and colored pencil sketches by Mary Henry. Drawings are found on both sides of the sheet. Inscription on verso reads "March 1 '81."
- Collection
- Mary Dill Henry Papers, 1913-2021
- Title
- Ink and Colored Pencil Drawings, 1981, verso
- Creator
- Henry, Mary Dill, 1913-2009
- Date
- 1981-03-01
- Description
-
Untitled ink and colored pencil sketches by Mary Henry. Drawings are found on both sides of the sheet. Inscription on verso reads "March 1 '81."
- Collection
- Mary Dill Henry Papers, 1913-2021
- Title
- Ink and Colored Pencil Drawings, 1981, recto
- Creator
- Henry, Mary Dill, 1913-2009
- Date
- 1981-03-01
- Description
-
Untitled ink and colored pencil sketches by Mary Henry. Drawings are found on both sides of the sheet. Inscription on verso reads "March 1 '81."
- Collection
- Mary Dill Henry Papers, 1913-2021
- Title
- Intraoperative Assessment of Surgical Margins in Head and Neck Cancer Resection Using Time-Domain Fluorescence Imaging
- Creator
- Cleary, Brandon M.
- Date
- 2023
- Description
-
Rapid and accurate determination of surgical margin depth in fluorescence-guided surgery has been a difficult issue to overcome, leading to over- or under-resection of cancerous tissues and follow-up treatments such as "call-back" surgery and chemotherapy. Current techniques that directly measure tumor margins in frozen-section pathology are slow, which can prevent surgeons from acting on the information before a patient is sent home. Other fluorescence techniques measure margins from captured images overlaid with fluorescence data; this approach is flawed because measuring depth from captured images loses spatial information. Intensity-based fluorescence techniques using tumor-to-background ratios do not decouple the effects of fluorophore concentration from the acquired depth information. An objective measurement is therefore necessary to determine the depth of surgical margins. This thesis focuses on the theory, device design, simulation development, and overall viability of time-domain fluorescence imaging as an alternative method of determining surgical margin depths. Characteristic regressions were generated by applying a thresholding method to acquired time-domain fluorescence signals and were used to convert time-domain data to depth values; these were applied across an image space to generate a depth map of a modeled tissue sample. All modeling was performed on homogeneous media using Monte Carlo simulations, providing high accuracy at the cost of increased computational time. In practice, the imaging process should complete in under 20 minutes for a full tissue sample, rather than 20 minutes for a single slice of the sample. The thesis also explores the effects of different thresholding levels on the accuracy of depth determination, as well as precautions to be taken regarding hardware limitations and signal noise.
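As an illustration of the threshold-based depth conversion this abstract describes, a minimal Python sketch follows: per-pixel threshold-crossing times are mapped to depth through a characteristic regression. The regression coefficients, array shapes, and names here are placeholders, not values from the thesis.

```python
import numpy as np

def threshold_crossing_time(signal, t, frac=0.5):
    """Time at which the signal first reaches `frac` of its peak."""
    peak = signal.max()
    idx = int(np.argmax(signal >= frac * peak))
    return t[idx]

# Hypothetical characteristic regression mapping crossing time (ns) to
# depth (mm); the thesis derives such coefficients from Monte Carlo
# simulation of homogeneous tissue. These numbers are placeholders.
SLOPE, INTERCEPT = 2.1, -0.4

def depth_map(stack, t, frac=0.5):
    """Convert a (rows, cols, time) time-domain fluorescence stack into a
    per-pixel depth map via the characteristic regression."""
    rows, cols, _ = stack.shape
    depths = np.empty((rows, cols))
    for r in range(rows):
        for c in range(cols):
            t_cross = threshold_crossing_time(stack[r, c], t, frac)
            depths[r, c] = SLOPE * t_cross + INTERCEPT
    return depths
```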
- Title
- Investigation in the Uncertainty of Chassis Dynamometer Testing for the Energy Characterization of Conventional, Electric and Automated Vehicles
- Creator
- Di Russo, Miriam
- Date
- 2023
- Description
-
Precise regulations exist for evaluating the energy performance of conventional and electric vehicles tested in a standard chassis dynamometer environment. However, the regulations do not include requirements on the confidence value to associate with the results. As vehicles become more and more efficient to meet stricter regulatory mandates on emissions, fuel, and energy consumption, traditional testing methods may become insufficient to validate these improvements and may need revision. Without information about the accuracy associated with the results of those procedures, however, adjustments and improvements are not possible, since no frame of reference exists. For connected and automated vehicles, there are no standard testing procedures, and researchers are still determining whether current evaluation methods can be extended to test intelligent technologies and which metrics best represent their performance. For these vehicles it is even more important to determine the uncertainty associated with these experimental methods and how it propagates to the final results. The work presented in this dissertation focuses on the development of a systematic framework for evaluating the uncertainty associated with the energy performance of conventional, electric, and automated vehicles. The framework is based on a known statistical method to determine the uncertainty associated with the different stages and processes involved in experimental testing and to evaluate how the accuracy of each parameter involved impacts the final results. The results demonstrate that the framework can be successfully applied to existing testing methods, provides a trustworthy value of accuracy to associate with the energy performance results, and can be easily extended to connected-automated vehicle testing to evaluate how novel experimental methods impact the accuracy and confidence of the outputs. The framework can easily be implemented in an existing laboratory environment to incorporate uncertainty evaluation into the results analyzed at the end of each test. It provides a reference for researchers to evaluate the actual benefits of new algorithms and optimization methods and to understand margins for improvement, and for regulators to assess which parameters to enforce to ensure compliance and projected benefits.
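The abstract does not name the "known statistical method"; a common choice for this kind of framework is first-order (GUM-style) uncertainty propagation. A minimal sketch, assuming independent inputs and a hypothetical energy-consumption calculation (all numbers are placeholders):

```python
import numpy as np

def propagate_uncertainty(f, x, u, eps=1e-6):
    """First-order (GUM-style) propagation: u_y^2 = sum_i (df/dx_i * u_i)^2.

    f : callable mapping a parameter vector to a scalar result
    x : nominal parameter values
    u : standard uncertainties of the parameters (assumed independent)
    """
    x = np.asarray(x, dtype=float)
    grads = np.empty_like(x)
    for i in range(x.size):
        dx = np.zeros_like(x)
        dx[i] = eps * max(abs(x[i]), 1.0)
        # central finite difference for the sensitivity coefficient
        grads[i] = (f(x + dx) - f(x - dx)) / (2 * dx[i])
    return np.sqrt(np.sum((grads * np.asarray(u)) ** 2))

# Illustrative only: energy consumption (Wh/km) from measured charge (Ah),
# voltage (V), and distance (km); values and uncertainties are invented.
energy = lambda p: p[0] * p[1] / p[2]
u_e = propagate_uncertainty(energy, x=[30.0, 360.0, 80.0], u=[0.3, 1.8, 0.1])
print(f"standard uncertainty: {u_e:.1f} Wh/km")
```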
- Title
- Using Niobium surface encapsulation and Rhenium to enhance the coherence of superconducting devices
- Creator
- Crisa, Francesco
- Date
- 2024
- Description
-
In recent decades, the scientific community has grappled with escalating complexity, necessitating a more advanced tool capable of tackling increasingly intricate simulations beyond the capabilities of classical computers. This tool, known as a quantum computer, features processors composed of individual units termed qubits. While various methods exist for constructing qubits, superconducting circuits have emerged as a leading approach, owing to their parallels with semiconductor technology.

In recent years, significant strides have been made in optimizing the geometry and design of qubits. However, the current bottleneck in the performance of superconducting qubits lies in defects and impurities within the materials used. Niobium, owing to its desirable properties, such as high critical temperature and low kinetic inductance, stands out as the most prevalent superconducting material. Nonetheless, it is encumbered by a relatively thick oxide layer (approximately 5 nm) exhibiting three distinct oxidation states: NbO, NbO$_2$, and Nb$_2$O$_5$. The primary challenge with niobium lies in the multitude of defects localized within the highly disordered Nb$_2$O$_5$ layer and at the interfaces between the different oxides. In this study, I present an encapsulation strategy aimed at restraining surface oxide growth by depositing a thin layer (5 to 10 nm) of another material in vacuum atop the Nb thin film. This approach exploits the superconducting proximity effect, and it was successfully employed in the development of Josephson junction devices on Nb during the 1980s.

In the past two years, tantalum and titanium nitride have emerged as promising alternative materials, with breakthrough qubit publications showcasing coherence times five to ten times superior to those achieved in Nb. The focus will be on the fabrication and RF testing of Nb-based qubits with Ta and Au capping layers. With Ta capping, we have achieved a best (not average) T1 decay time of nearly 600 µs, more than a factor of 10 improvement over bare Nb. This establishes the capping-layer approach as a significant new direction for the development of superconducting qubits.

Concurrently with the exploration of materials for encapsulation strategies, identifying materials conducive to enhancing the performance of superconducting qubits is imperative. Ideal candidates should exhibit a thin, low-loss surface oxide and establish a clean interface with the substrate, thereby minimizing defects and potential sources of loss. Rhenium, characterized by an extremely thin surface oxide (less than 1 nm) and nearly perfect crystal-structure alignment with commonly used substrates such as sapphire, emerges as a promising material platform poised to elevate the performance of superconducting qubits.
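T1 decay times such as the ~600 µs figure quoted above are typically extracted by fitting an exponential to measured excited-state population versus delay time. A minimal sketch on synthetic placeholder data (not measurements from this work):

```python
import numpy as np
from scipy.optimize import curve_fit

def decay(t, a, t1, c):
    """Excited-state population model: a * exp(-t / T1) + c."""
    return a * np.exp(-t / t1) + c

# Placeholder data: delay times (us) and simulated populations for a
# hypothetical Ta-capped Nb transmon; real data comes from RF measurements.
t = np.linspace(0, 2000, 50)
rng = np.random.default_rng(0)
pop = decay(t, 0.95, 600.0, 0.02) + rng.normal(0, 0.01, t.size)

(a, t1, c), _ = curve_fit(decay, t, pop, p0=(1.0, 300.0, 0.0))
print(f"fitted T1 = {t1:.0f} us")
```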
- Title
- The Double-edged Sword of Executive Pay: How the CEO-TMT Pay Gap Influences Firm Performance
- Creator
- Haddadian Nekah, Pouya
- Date
- 2024
- Description
-
This study examines the relationship between the chief executive officer (CEO) and top management team (TMT) pay gap and consequent firm performance. Drawing on tournament theory and equity theory, I argue that the effect of the CEO-TMT pay gap on consequent firm performance is non-monotonic. Using data from 1995 to 2022 on S&P 1500 US firms, I explicate an inverted U-shaped relationship, such that an increase in the pay gap leads to an increase in firm performance up to a certain point, after which performance declines. Additionally, multilevel analyses reveal that this curvilinear relationship is moderated by attributes of the TMT and of the industry in which the firm competes. My findings show that firms with higher TMT gender diversity suffer lower performance loss from wider pay gaps. Furthermore, when firm executives are paid more than industry norms, or when the firm has a long-tenured CEO, firm performance becomes less sensitive to larger CEO-TMT pay gaps. Lastly, when the firm competes in a masculine industry, firm performance is more negatively affected by larger CEO-TMT pay gaps. Contrary to my expectations, gender-diversity-friendly firm policies failed to influence the relationship between the CEO-TMT pay gap and firm performance.
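An inverted U-shaped (curvilinear) relationship of this kind is conventionally tested by adding a squared pay-gap term to the performance regression. A minimal sketch on synthetic data, with hypothetical variable names; the study's actual controls, fixed effects, and multilevel structure are omitted:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical firm-year panel; columns are assumptions, not the study's data.
rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({"pay_gap": rng.uniform(0, 10, n)})
df["performance"] = 2.0 * df.pay_gap - 0.18 * df.pay_gap**2 + rng.normal(0, 1, n)

# Inverted U-shape: positive linear term, negative quadratic term.
model = smf.ols("performance ~ pay_gap + I(pay_gap**2)", data=df).fit()
print(model.params)  # expect beta1 > 0 and beta2 < 0
turning_point = -model.params["pay_gap"] / (2 * model.params["I(pay_gap ** 2)"])
print(f"performance peaks near pay_gap = {turning_point:.2f}")
```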
- Title
- Improving Niobium Superconducting Radio-Frequency Cavities by Studying Tantalum
- Creator
- Helfrich, Halle
- Date
- 2023
- Description
-
Niobium superconducting radio-frequency (SRF) cavities are widely used accelerating structures. Improvements in both quality factor, Q0, and maximum accelerating gradient, Eacc, have been made to SRF cavities by introducing new processing techniques. These breakthroughs include processes such as nitrogen doping (N-doping) and infusion, electrochemical polishing (EP), and high-pressure rinsing (HPR). [1] There is still abundant opportunity to improve the cavities or, rather, the material they are primarily composed of: niobium. A focus here is the role the native oxide of Nb plays in SRF cavity performance. The values of interest in a given cavity are its quality factor Q0, maximum accelerating gradient Eacc, and surface resistance Rs. This work characterizes Nb and Ta foils prepared under identical conditions using X-ray photoelectron spectroscopy (XPS) to compare surface oxides and better understand RF loss mechanisms in Nb SRF cavities and qubits. It is well established that Ta qubits exhibit much longer coherence times than Nb qubits, which is probably due to the larger RF losses in Nb oxide. By studying tantalum, an element similar to niobium, the mechanisms of the losses that originate in the oxide and suboxide layers present on the surface of Nb cavities might finally be unlocked. We find noticeable differences in the oxides of Nb and Ta formed by air exposure of clean foils. In particular, Ta does not display the TaO2 suboxide in XPS, while Nb commonly shows NbO2. This suggests that suboxides are an additional contributor to RF losses. We also suggest that thin Ta film coatings of Nb SRF cavities may be a way of increasing Q0. It is in the interest of the accelerator community to fully understand the surface impurities present in Nb SRF cavities so that strategies for mitigating their effects can be proposed.
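For context on the quantities named here: a cavity's quality factor is tied to its surface resistance by Q0 = G/Rs, where G is a purely geometric factor of the cavity shape. A quick arithmetic sketch, assuming a typical elliptical-cavity value of G ≈ 270 Ω (an assumption for illustration, not a number from this thesis):

```python
# Quality factor of an SRF cavity from its surface resistance: Q0 = G / Rs.
G = 270.0                              # geometry factor, ohm (assumed)
for rs_nohm in (10, 20, 50):           # surface resistance, nano-ohm
    q0 = G / (rs_nohm * 1e-9)
    print(f"Rs = {rs_nohm:3d} nOhm -> Q0 = {q0:.1e}")
```

Lowering Rs (for example, by mitigating lossy oxides) translates directly into a higher Q0, which is why the surface chemistry studied here matters.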
- Title
- Improving Localization Safety for Landmark-Based LiDAR Localization System
- Creator
- Chen, Yihe
- Date
- 2024
- Description
-
Autonomous ground robots have gained traction in various commercial applications, with established safety protocols covering subsystem reliability, control algorithm stability, path planning, and localization. This thesis specifically delves into the localizer, a critical component responsible for determining the vehicle's state (e.g., position and orientation); it assesses compliance with localization safety requirements and proposes methods for enhancing localization safety.

Within the robotics domain, diverse localizers are utilized, such as scan-matching techniques like the normal distribution transform (NDT), the iterative closest point (ICP) algorithm, probabilistic-map methods, and semantic-map-based localization. Notably, NDT stands out as a widely adopted standalone laser localization method, prevalent in autonomous driving software such as the Autoware and Apollo platforms. In addition to the mentioned localizers, common state estimators include variants of the Kalman filter, particle-filter-based estimators, and factor-graph-based estimators. Evaluating localization performance typically involves quantifying the estimated state variance of these state estimators.

While various localizer options exist, this study focuses on those utilizing extended Kalman filters and factor graph methods. Unlike methods such as the NDT and ICP algorithms, extended Kalman filter and factor-graph-based approaches guarantee bounding of the estimated state uncertainty and have been extensively researched for integrity monitoring.

Common variance analysis, employed for sensor readings and state estimators, has limitations: it focuses primarily on non-faulted scenarios under nominal conditions. This proves impractical for real-world scenarios and falls short for safety-critical applications like autonomous vehicles (AVs). To overcome these limitations, this thesis utilizes a dedicated safety metric: integrity risk. Integrity risk assesses the reliability of a robot's sensory readings and localization algorithm performance under both faulted and non-faulted conditions. With a proven track record in aviation, integrity risk has recently been applied to robotics applications, particularly for evaluating the safety of lidar localization.

Despite the significance of improving localization integrity risk through laser landmark manipulation, this remains underexplored territory. Existing research on robot integrity risk primarily focuses on the vehicles themselves. To comprehensively understand the integrity risk of a lidar-based localization system, as addressed in this thesis, an exploration of lidar measurement fault modes is essential, a topic covered here.

The primary contributions of this thesis include: a realistic error estimation method for state estimators in autonomous vehicles navigating with pole-shaped lidar landmark maps, along with a compensatory method; a method for quantifying the risk associated with unmapped associations in urban environments, enhancing the realism of the values provided by the integrity risk estimator; and a novel approach to improve the localization integrity of autonomous vehicles equipped with lidar feature extractors in urban environments through minimal environmental modifications, mitigating the impact of unmapped association faults. Simulation and experimental results are presented and discussed to illustrate the impact of each method, providing further insight into their contributions to localization safety.
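Since the study centers on extended-Kalman-filter-based localizers, a generic EKF predict/update cycle is sketched below. The motion and measurement models (f, h) and their Jacobians are left abstract; the thesis's landmark-specific models are not reproduced here.

```python
import numpy as np

def ekf_step(x, P, u, z, f, F, h, H, Q, R):
    """One extended Kalman filter predict/update cycle.

    x, P : state estimate and covariance
    u, z : control input and measurement
    f, F : motion model and its Jacobian (callables of x, u)
    h, H : measurement model and its Jacobian (callables of x)
    Q, R : process and measurement noise covariances
    """
    # Predict: propagate the state and covariance through the motion model.
    x_pred = f(x, u)
    F_k = F(x, u)
    P_pred = F_k @ P @ F_k.T + Q
    # Update: correct with the measurement (e.g., a lidar pole landmark).
    H_k = H(x_pred)
    y = z - h(x_pred)                     # innovation
    S = H_k @ P_pred @ H_k.T + R          # innovation covariance
    K = P_pred @ H_k.T @ np.linalg.inv(S) # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H_k) @ P_pred
    return x_new, P_new
```

The bounded covariance P is what integrity monitoring builds on: fault detectors and integrity risk bounds are derived from the innovation statistics above.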
- Title
- Independence and Graphical Models for Fitting Real Data
- Creator
- Cho, Jason Y.
- Date
- 2023
- Description
-
Given a real-life dataset whose attributes take on categorical values, with a corresponding r(1) × r(2) × … × r(m) contingency table with nonzero rows or nonzero columns, we test the goodness-of-fit of various independence models to the dataset using a variation of Metropolis-Hastings that uses Markov bases as a tool to get a Monte Carlo estimate of the p-value. This variation of Metropolis-Hastings can be found in Algorithm 3.1.1. Next we consider the problem: "out of all possible undirected graphical models, each associated to some graph with m vertices, which one best fits the dataset?" Here the m attributes are labeled as vertices of the graph. We would have to conduct 2^(mC2) goodness-of-fit tests, since there are 2^(mC2) possible undirected graphs on m vertices. Instead, we consider a backwards-selection likelihood-ratio test algorithm. We start with the complete graph G = K(m) and call the corresponding undirected graphical model ℳ(G) the parent model. Then for each edge e in E(G), we repeatedly apply the likelihood-ratio test to compare the relative fit of the child model ℳ(G-e) against the parent model ℳ(G), where ℳ(G-e) ⊆ ℳ(G). More details on this iterative process can be found in Algorithm 4.1.3. For our dataset, we use the alcohol dataset found at https://www.kaggle.com/datasets/sooyoungher/smoking-drinking-dataset, where the four attributes we use are "Gender" (male, female), "Age", "Total cholesterol (mg/dL)", and "Drinks alcohol or not?". After testing the goodness-of-fit of three independence models corresponding to the independence statements "Gender vs Drink or not?", "Age vs Drink or not?", and "Total cholesterol vs Drink or not?", we found the data consistent with the two independence models corresponding to "Age vs Drink or not?" and "Total cholesterol vs Drink or not?". And after applying the backwards-selection likelihood-ratio method to the alcohol dataset, we found that the data came from a distribution in the undirected graphical model associated to the complete graph minus the edge {"Total cholesterol", "Drink or not?"}.
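For the two-way (single independence statement) case, the Metropolis-Hastings walk over tables with fixed margins uses the classic Markov basis of ±1 moves on 2×2 minors. A minimal sketch of a Monte Carlo p-value in this spirit (Algorithm 3.1.1 itself is not reproduced here):

```python
import numpy as np
from math import lgamma

def chi2_stat(T):
    """Pearson chi-square statistic against the independence model."""
    exp = T.sum(1, keepdims=True) * T.sum(0, keepdims=True) / T.sum()
    return ((T - exp) ** 2 / exp).sum()

def log_prob(T):
    # Conditional distribution given the margins: P(T) proportional to
    # 1 / prod(T_ij!), so log P = -sum log(T_ij!).
    return -sum(lgamma(v + 1) for v in T.ravel())

def mh_pvalue(T, steps=20000, seed=0):
    """Diaconis-Sturmfels walk on 2-way tables with fixed margins;
    the Markov basis consists of +1/-1 moves on 2x2 minors."""
    rng = np.random.default_rng(seed)
    T = T.copy().astype(int)
    obs = chi2_stat(T)
    hits = 0
    for _ in range(steps):
        r1, r2 = rng.choice(T.shape[0], 2, replace=False)
        c1, c2 = rng.choice(T.shape[1], 2, replace=False)
        move = np.zeros_like(T)
        move[r1, c1] = move[r2, c2] = 1
        move[r1, c2] = move[r2, c1] = -1
        Tnew = T + move
        # Metropolis acceptance; reject moves that create negative cells.
        if (Tnew >= 0).all() and np.log(rng.random()) < log_prob(Tnew) - log_prob(T):
            T = Tnew
        hits += chi2_stat(T) >= obs
    return hits / steps
```

With T as, say, the observed "Gender" × "Drinks alcohol or not?" table, mh_pvalue(T) estimates the exact conditional p-value that the thesis's Algorithm 3.1.1 targets.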
- Title
- Development of a Model To Investigate Inflammation Using Peripheral Blood Mononucleated Cells
- Creator
- Geevarghese Alex, Peter
- Date
- 2023
- Description
-
High-calorie, diet-related postprandial inflammation is one of the biggest health risks facing modern society. Clinical studies have demonstrated that uncontrolled consumption of energy-dense food can drive a post-meal inflammatory response that contributes to chronic disease. We aimed to identify the causes of postprandial inflammation in response to various dietary treatments and to provide a model demonstrating them, combining in vivo and in vitro techniques with statistical analysis. The resulting model would help design targeted treatments that minimize diet-related inflammation. In addition to identifying key dietary components, the model also facilitates the design of individualized interventions to reduce inflammation, thereby improving long-term health outcomes. We aim to explain clinical observations of diet-induced postprandial inflammation at the molecular level and, in doing so, to contribute to reducing the burden of the chronic inflammatory disorders associated with postprandial inflammation.
- Title
- Large Language Model Based Machine Learning Techniques for Fake News Detection
- Creator
- Chen, Pin-Chien
- Date
- 2024
- Description
-
With advanced technology, it is widely recognized that nearly everyone owns one or more personal devices. Consequently, people are evolving into content creators on social media and streaming platforms, sharing their personal ideas regardless of their education or expertise. Distinguishing fake news is therefore becoming increasingly crucial. However, recent research only presents comparisons of fake news detection between one or more models across different datasets. In this work, we applied Natural Language Processing (NLP) techniques with Naïve Bayes and DistilBERT machine learning methods, combining and augmenting four datasets. The results show a balanced accuracy higher than the average reported in recent studies. This suggests that our approach holds promise for improving fake news detection in the era of widespread content creation.
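A minimal sketch of the classical half of such a pipeline: TF-IDF features with multinomial Naïve Bayes, scored by balanced accuracy. The texts and labels below are placeholders, and the dataset combination/augmentation and DistilBERT fine-tuning paths are omitted.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import balanced_accuracy_score

# Placeholder corpus: `texts` holds article bodies, `labels` marks
# fake (1) vs real (0). Real experiments load the combined datasets here.
texts = ["example real article ...", "example fake article ..."] * 50
labels = [0, 1] * 50

X_tr, X_te, y_tr, y_te = train_test_split(
    texts, labels, test_size=0.2, stratify=labels, random_state=0)

clf = make_pipeline(TfidfVectorizer(stop_words="english"), MultinomialNB())
clf.fit(X_tr, y_tr)
print("balanced accuracy:", balanced_accuracy_score(y_te, clf.predict(X_te)))
```

Balanced accuracy averages per-class recall, which is why it is the sensible metric when the combined fake/real datasets are imbalanced.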
- Title
- Retrospective Quantitative T1 Imaging to Examine Characteristics of Multiple Sclerosis Lesions
- Creator
- Young, Griffin James
- Date
- 2024
- Description
-
Quantitative MRI plays an essential role in assessing tissue abnormality and disease progression in multiple sclerosis (MS). Specifically, T1 relaxometry is gaining popularity, as elevated T1 values have been shown to correlate with increased inflammation, demyelination, and gliosis. The predominant issue is that relaxometry requires parametric mapping through advanced imaging techniques not commonly included in standard clinical protocols. This leaves an information gap in large clinical datasets from which quantitative mapping could have been performed. We introduce T1-REQUIRE, a retrospective T1 mapping method that approximates T1 values from a single T1-weighted MR image. This method has already been shown to be accurate within 10% of a clinically available reference standard in healthy controls, but is further validated here in MS cohorts. We also aim to determine T1-REQUIRE's statistical significance as a unique biomarker for assessing MS lesions as they relate to clinical disability and disease burden. A 14-subject comparison between T1-REQUIRE maps derived from 3D T1-weighted turbo field echoes (3D T1w TFE) and an inversion-recovery fast field echo (IR-FFE) revealed a whole-brain voxel-wise Pearson's correlation of r = 0.89 (p < 0.001) and a mean bias of 3.99%. In MS white matter lesions, r = 0.81, R2 = 0.65 (p < 0.001, N = 159), bias = 10.07%; in normal-appearing white matter (NAWM), r = 0.82, R2 = 0.67 (p < 0.001), bias = 9.48%. Mean lesional T1-REQUIRE and MTR correlated significantly (r = -0.68, p < 0.001, N = 587), similar to previously published literature. Median lesional MTR correlated significantly with EDSS (rho = -0.34, p = 0.037), and lesional T1-REQUIRE exhibited significant correlations with global brain tissue atrophy as measured by brain parenchymal fraction (BPF) (r = -0.41, p = 0.010, N = 38). Multivariate linear regressions of T1-REQUIRE in NAWM provided meaningful statistical relationships with EDSS (β = 0.03, p = 0.027, N = 38), as did mean MTR values in the thalamus (β = -0.27, p = 0.037, N = 38). A new spoiled-gradient-echo variation of T1-REQUIRE was assessed as a proof of concept in a small 5-subject MS cohort compared with IR-FFE T1 maps, with a whole-brain voxel-wise correlation of r = 0.88, R2 = 0.77 (p < 0.001) and a bias of 0.19%; lesional T1 comparisons reached a correlation of r = 0.75, R2 = 0.56 (p < 0.001, N = 42) and a bias of 10.81%. These findings mean there is potential to provide supplementary quantitative information in clinical datasets where quantitative protocols were not implemented. Large MS data repositories previously containing only structural T1-weighted images may now be used in big-data relaxometric studies, with the potential to lead to new findings in newly uncovered datasets. Furthermore, T1-REQUIRE has the potential for immediate use in clinics where standard T1 mapping sequences cannot readily be implemented.
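The agreement metrics reported above (voxel-wise Pearson r and percent bias) can be computed along these lines; the arrays below are synthetic placeholders standing in for co-registered T1-REQUIRE and reference T1 maps:

```python
import numpy as np
from scipy import stats

def compare_t1_maps(t1_est, t1_ref, mask):
    """Voxel-wise agreement between an estimated and a reference T1 map."""
    a, b = t1_est[mask], t1_ref[mask]
    r, p = stats.pearsonr(a, b)
    bias = 100.0 * np.mean(a - b) / np.mean(b)   # mean percent bias
    return r, p, bias

# Placeholder volumes; in practice these are co-registered 3D maps plus a
# brain (or lesion) mask from segmentation.
rng = np.random.default_rng(2)
ref = rng.uniform(700, 1600, (64, 64, 32))        # ms, plausible T1 range
est = ref * 1.04 + rng.normal(0, 40, ref.shape)   # ~4% bias + noise
mask = np.ones(ref.shape, dtype=bool)
print(compare_t1_maps(est, ref, mask))
```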
- Title
- Large-Signal Transient Stability and Control of Inverter-Based Resources
- Creator
- Wang, Duo
- Date
- 2024
- Description
-
Renewable generation, including solar photovoltaic (PV) systems, type 3 and 4 wind turbine generation systems (WTG), and battery energy storage systems (BESS), as well as high-voltage direct current (HVDC) and flexible alternating current transmission system (FACTS) devices, is being connected to the bulk power system (BPS) at increasing penetration levels via power electronic (PE) converters as the interface; these are referred to as inverter-based resources (IBRs) on the transmission and sub-transmission levels or distributed energy resources (DERs) on the distribution level. An IBR is almost entirely defined by its control algorithms and is more prone to experiencing large disturbances because it lacks the conventional synchronous machine's (SM) intrinsic synchronizing characteristics and mechanical inertia, and comes in smaller capacity sizes. These reasons motivate this dissertation to study the large-signal transient stability and control of IBRs for reliable grid integration and rapid grid transformation.

For large-signal stability analysis, Lyapunov-based methods are the fundamental theory used to characterize stability issues with analytical solutions, although other non-Lyapunov methods can also be very helpful. A main difficulty hindering widespread adoption of Lyapunov stability analysis is finding a proper Lyapunov function candidate for a higher-dimensional nonlinear system. The port-Hamiltonian (PH) nonlinear control theory is explored in this dissertation as a promising theoretical framework for addressing this challenging issue. A PH-based tracking and robust control method is proposed to facilitate the practical application of the PH framework in IBR controls. In addition, because typical grid-forming (GFM) IBR control with a first-order low-pass filter (LPF) block usually involves a control saturation function for protection under abnormal operating conditions, with anti-windup issues in practical implementation, a PH-based bounded LPF (PH-BLPF) control is proposed to incorporate this in the large-signal PH interconnection modeling framework while preserving robust tracking Lyapunov stability with improved transient dynamic performance and stability margin.

Moreover, specific real-world transient synchronization stability issues, such as large grid-voltage fault disturbances, are studied. In addition, to meet recent emerging IBR grid-code requirements, such as current magnitude limitation, grid-support functions, and the fault recovery capability of GFM voltage-source converters (GFM-VSCs), a virtual-impedance-based current-limiting GFM control with enhanced transient stability and grid support is proposed.
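For reference, the textbook input-state-output port-Hamiltonian form that underlies such designs, with the Hamiltonian acting as a natural Lyapunov/storage function. This is the general form only, not the dissertation's specific IBR model:

```latex
% Input-state-output port-Hamiltonian system, with
%   J(x) = -J(x)^{\top}  (interconnection),  R(x) = R(x)^{\top} \succeq 0  (dissipation):
\dot{x} = \bigl(J(x) - R(x)\bigr)\,\nabla H(x) + g(x)\,u,
\qquad y = g(x)^{\top}\,\nabla H(x)
% Along trajectories,
%   \dot{H} = -\nabla H^{\top} R\,\nabla H + y^{\top}u \;\le\; y^{\top}u,
% so H serves as a storage (Lyapunov) function certifying passivity and,
% with suitable damping injection, large-signal stability.
```

The appeal of the PH framework is visible here: the Lyapunov candidate does not have to be searched for, because the Hamiltonian H comes built into the model structure.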
- Title
- Two Essays on Mergers and Acquisitions
- Creator
- Xu, Yang
- Date
- 2024
- Description
-
This dissertation is composed of two self-contained chapters, both relating to mergers and acquisitions (M&A). In the first essay, we examine the effect of Delaware (DE) reincorporation on firms' post-IPO M&A behavior. We find that DE reincorporation enhances the likelihood of a firm engaging in M&A as a target. However, as a tradeoff, DE-reincorporated firms receive lower takeover valuations than firms that stay in their home states, and acquisitions of reincorporated firms are less likely to succeed. Our second essay explores the role of the options market in price discovery for M&A. We find that the predictive power of changes in the implied volatility of the target firm's stock for the takeover outcome is statistically and economically significant. Risk arbitrage portfolios incorporating filters derived from options on the stocks of target firms generate annualized risk-adjusted abnormal returns between 2.6% and 5%, depending on the portfolio weighting method, the threshold of the filters for the implied volatility change, and the asset pricing models applied for abnormal returns. The results are robust to different empirical setups and are not explained by traditional factors.
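One way to read the second essay's filter idea: a falling target implied volatility after announcement suggests the market sees the deal as likely to close, so the arbitrage position is kept. A toy sketch with hypothetical deal data; the column names, threshold, and weighting are assumptions, not the essay's actual variables:

```python
import pandas as pd

# Hypothetical deal-level data. `d_iv` is the change in the target's
# implied volatility around the deal announcement; `arb_spread` is the
# remaining offer-price spread to be earned if the deal closes.
deals = pd.DataFrame({
    "target": ["A", "B", "C", "D"],
    "d_iv": [-0.08, 0.03, -0.15, 0.01],
    "arb_spread": [0.04, 0.06, 0.03, 0.05],
})

# Keep positions where implied volatility fell below a chosen threshold.
threshold = 0.0
portfolio = deals[deals["d_iv"] < threshold]
print(portfolio[["target", "arb_spread"]])
```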
- Title
- Heterogeneous Workloads Study towards Large-scale Interconnect Network Simulation
- Creator
- Wang, Xin
- Date
- 2023
- Description
-
High-bandwidth, low-latency interconnect networks play a key role in the design of modern high-performance computing (HPC) systems. The ever-increasing need for higher bandwidth and higher message rates has driven the design of low-diameter interconnect topologies like variants of dragonfly. As these hierarchical networks become increasingly dominant, interference caused by resource sharing can lead to significant network congestion and performance variability. Meanwhile, with the rapid growth of machine learning applications, the workloads of future HPC systems are anticipated to be a mix of scientific simulation, big data analytics, and machine learning applications. However, little work has been done to understand the performance implications of co-running heterogeneous workloads on large-scale dragonfly systems. There is a greater need to study how different interconnect technologies affect workload performance, and how conventional scientific applications interact with emerging big data applications at the underlying interconnect level. In this work, we first present a comparative analysis exploring communication interference for traditional HPC applications by analyzing the trade-off between localizing communication and balancing network traffic. We conduct trace-based simulations for applications with different communication patterns, using multiple job placement policies and routing mechanisms. We then develop a scalable workload manager that provides an automatic framework to facilitate hybrid workload simulation. We investigate various hybrid workloads and navigate various application-system configurations for a deeper understanding of the performance implications of a diverse mix of workloads on current and future supercomputers. Finally, we propose a scalable framework, Union+, that enables simulation of communication and I/O simultaneously. By combining different levels of abstraction, Union+ is able to efficiently co-model communication and I/O traffic on HPC systems equipped with flash-based storage. We conduct experiments with different system configurations, showing how Union+ can help system designers assess the usefulness of future technologies in next-generation HPC machines.
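The localize-versus-balance trade-off mentioned above is realized through the job placement policy. A toy sketch contrasting contiguous and random placement on a node pool; the node counts and job names are invented, and actual trace-driven network simulation (e.g., with frameworks such as CODES) is far beyond this snippet:

```python
import random

def place_contiguous(free_nodes, size):
    """Pick the first `size` free nodes (localizes communication)."""
    return sorted(free_nodes)[:size]

def place_random(free_nodes, size, seed=0):
    """Scatter the job across the machine (balances network traffic)."""
    return random.Random(seed).sample(sorted(free_nodes), size)

# Toy 64-node machine with two queued jobs, one HPC and one ML-style.
free = set(range(64))
for job, size, policy in [("hpc_app", 16, place_contiguous),
                          ("ml_app", 16, place_random)]:
    nodes = policy(free, size)
    free -= set(nodes)
    print(job, "->", sorted(nodes)[:6], "...")
```

Contiguous placement keeps an application's traffic on nearby routers but concentrates congestion; random placement spreads traffic at the cost of locality, which is exactly the interference trade-off the trace-based simulations quantify.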