Search results
(7,301 - 7,320 of 7,414)
- Title
- In situ EXAFS studies of novel Palladium-based anode catalysts for direct ethanol and formic acid fuel cells
- Creator
- Su, Ning
- Date
- 2024
- Description
- In this work we achieved nanoscale, uniform deposition of a Pd-based anode catalyst on a transition-metal Au support (atomic ratio Pd:Au = 1:10) for direct liquid ethanol fuel cells (DLEFCs) and direct liquid formic acid fuel cells (DLFAFCs). To understand the role of Pd and its support in the ethanol oxidation reaction (EOR) and formic acid oxidation reaction (FOR) pathways, we synthesized catalysts with uniform dispersion and controlled nanoparticle dimensions and performed in situ Pd K-edge X-ray absorption spectroscopy measurements as a function of potential, using a custom-designed flow cell with the catalyst deposited on a glassy carbon window. These in situ EXAFS measurements probe the nanoscale reaction mechanism of the Pd1@Au10 anode catalyst during the EOR and FOR. Comparing electrochemical performance showed that Pd@Au&C performed better and more stably in ethanol than in HCOOH: the EOR current density reached 1216.25 mA·mg-1 Pd with Pd1@Au10&C in 1 M KOH + 1 M EtOH (CH3CH2OH), 3.56 times the EOR current of commercial Pd@C.
- Title
- Agency and Pathway Thinking as Mediators of The Relationship Between Caregiver Burden And Life Satisfaction Among Family Caregivers Of People With Parkinson’s Disease: An Application Of Snyder’s Hope Theory
- Creator
- Springer, Jessica Gabrielle
- Date
- 2024
- Description
- In the United States, there are 47.9 million caregivers providing care to family members with disabilities. Those providing care to someone who has Parkinson's Disease (PD), a complex degenerative movement disorder, may have a unique caregiving experience, given that disease-related factors (e.g., motor and non-motor symptoms) can contribute to worsening caregiver burden and life satisfaction (LS). PD has an increasing incidence of 90,000 new cases per year, likely resulting in an increased need for caregivers. Caregiving research frequently focuses on the mediators between caregiver burden and LS, including social support, coping skills, and appraisals. Research that has specifically focused on caregivers of people with PD (Pw/PD) is significantly limited. Hope is a "positive motivational characteristic comprised of agency and pathways thinking that can help facilitate drive towards one's goal while also serving as a buffer against negative events" (Snyder et al., 1991). The goal of this study is to understand Snyder's hope theory as it relates to caregiver burden and LS for caregivers of Pw/PD. Specifically, we hypothesized that (a) caregiver burden would be negatively correlated with agency thinking, pathways thinking, and LS among caregivers of Pw/PD, and pathways and agency thinking would be positively associated with LS; and (b) agency thinking and pathways thinking would mediate the relationship between caregiver burden and LS among caregivers of Pw/PD. The study sample consisted of 249 caregivers of Pw/PD who completed an anonymous online questionnaire. Correlations between agency and pathways thinking, LS, caregiver burden, and sociodemographic factors were evaluated. A parallel mediation analysis was run to evaluate the mediating roles of pathways and agency thinking in the relationship between caregiver burden and LS. Results indicated that LS was significantly and negatively correlated with caregiver burden. LS was significantly and positively correlated with both pathways and agency thinking. Pathways thinking had no indirect effect on the relationship of caregiver burden on LS. Agency thinking had a negative indirect effect on the relationship, suggesting that agency thinking partially mediated the relationship between caregiver burden and LS. Clinical implications and future directions are discussed.
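The parallel mediation analysis described in the abstract above can be sketched as regressions per mediator, with each indirect effect taken as the product of the a path (burden → mediator) and the b path (mediator → LS, controlling for burden and the other mediator). This is a minimal illustration on synthetic data; the variable names and effect sizes are invented, not the study's.

```python
# Sketch of a parallel mediation analysis like the one described: burden -> LS,
# mediated by agency and pathways thinking. Data are synthetic; names are
# illustrative, not the study's actual instruments.
import numpy as np

rng = np.random.default_rng(0)
n = 249
burden = rng.normal(size=n)
agency = -0.5 * burden + rng.normal(size=n)      # a1 path
pathways = -0.3 * burden + rng.normal(size=n)    # a2 path
ls = -0.2 * burden + 0.6 * agency + 0.1 * pathways + rng.normal(size=n)

def ols(X, y):
    """Least-squares coefficients with an intercept column prepended."""
    X = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(X, y, rcond=None)[0]

a1 = ols(burden, agency)[1]
a2 = ols(burden, pathways)[1]
# b paths: LS regressed on burden plus both mediators simultaneously (parallel model)
coefs = ols(np.column_stack([burden, agency, pathways]), ls)
b1, b2 = coefs[2], coefs[3]

print(f"indirect via agency:   {a1 * b1:+.3f}")
print(f"indirect via pathways: {a2 * b2:+.3f}")
```

In practice the indirect effects would be tested with bootstrap confidence intervals rather than read off point estimates.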
- Title
- Three-Dimensional Co-Culture Systems for Vascularization of Cardiac Tissue
- Creator
- Rodriguez Arias, Jessica A.
- Date
- 2023
- Description
- Myocardial infarction (MI) is the partial or complete blockage of blood flow to the myocardial tissue, resulting in damage and therefore loss of heart function. In the U.S., someone suffers an MI every 40 seconds, and the only available treatments are medications that address the symptoms of heart function loss but do not treat the underlying cause. Some attempts to treat the underlying cause have arisen in recent decades, including cell-based therapies and tissue engineering therapies such as spheroid-based cardiac patches, which have shown promise. However, the mechanical properties needed to create suturable engineered tissues, for ease of implantation, remain to be improved. Cell-laden hydrogel scaffolds can provide improved mechanical properties compared to biomaterial-free cell-based therapies but need to allow for vascularization of the engineered tissue. Thus, the goal of this thesis is to provide preliminary studies for the use of a cell-adhesive, proteolytically degradable PEG hydrogel scaffold that would eventually serve as an in vitro model to evaluate engineered tissue vascularization for cardiac tissue engineering. To construct this model, important cell spheroid parameters affecting vascular invasion in 3D culture were investigated, including the total number of cells per spheroid and the supporting cell type for endothelial cells. To scale up scaffolds to clinically relevant dimensions, a multilayered hydrogel construct fabricated by visible-light free-radical polymerization, encapsulating vascular spheroids in multiple layers, was also investigated. Results indicate that a total of 5,000 cells per spheroid aggregate was feasible given cell sourcing constraints. In addition, co-cultures of endothelial and mesenchymal stem cells maximized vascular invasion from the spheroids compared to fibroblast/endothelial co-cultures and endothelial monocultures of spheroids in the hydrogel. Finally, the extent of vascularization of spheroids in each layer of the multilayered hydrogel constructs varied, owing to observed differences in the mechanical properties and swelling ratio of each layer caused by incomplete polymerization. This study demonstrates the importance of support cells and hydrogel mechanical properties in promoting vascularization of spheroids and serves as a basis for building cell-laden hydrogel scaffolds for vascularization of cardiac tissues.
- Title
- Multimodal Learning and Generation Toward a Multisensory and Creative AI System
- Creator
- Zhu, Ye
- Date
- 2023
- Description
- We perceive and communicate with the world in a multisensory manner, where different information sources are processed and interpreted by separate parts of the human brain to constitute a complex, yet harmonious and unified, intelligent system. To endow machines with true intelligence, multimodal machine learning, which incorporates data from various modalities including vision, audio, and text, has become an increasingly popular research area with emerging technical advances in recent years. In the context of multimodal learning, the creativity to generate and synthesize novel and meaningful data is a critical criterion for assessing machine intelligence. As a step towards a multisensory and creative AI system, this thesis studies the problem of multimodal generation from multiple perspectives. First, we analyze different data modalities comprehensively, comparing the nature of the data, their semantics, and the corresponding mainstream technical designs. We then investigate three multimodal generation scenarios, namely text generation from visual data, audio generation from visual data, and visual generation from textual data, with diverse approaches that give an overview of the field. For text generation from visual data, we study a novel multimodal task in which the model is expected to summarize a given video with textual descriptions under a challenging condition where the video can only be partially seen. We propose to supplement the missing visual information via a dialogue interaction and introduce the QA-Cooperative network with a dynamic dialogue-history update learning mechanism to tackle the challenge. For audio generation from visual data, we present a new multimodal task that aims to generate music for a given silent dance video clip. Unlike most existing conditional music generation works, which generate specific types of mono-instrumental sounds using symbolic audio representations (e.g., MIDI) and rely heavily on pre-defined musical synthesizers, we generate dance music in complex styles (e.g., pop, breaking) by employing a Vector-Quantized (VQ) audio representation via our proposed Dance2Music-GAN (D2M-GAN) framework. For visual generation from textual data, we tackle a key desideratum in conditional synthesis: achieving high correspondence between the conditioning input and the generated output using the state-of-the-art generative model, the Diffusion Probabilistic Model. While most existing methods learn such relationships implicitly, by incorporating the prior into the variational lower bound during model training, we take a different route and explicitly enhance input-output connections by maximizing their mutual information, achieved by our proposed Conditional Discrete Contrastive Diffusion (CDCD) framework. For each direction, we conduct extensive experiments on multiple multimodal datasets and demonstrate that all of our proposed frameworks effectively and substantially improve task performance in their corresponding contexts.
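The Vector-Quantized (VQ) audio representation mentioned in the abstract above rests on a simple core operation: each continuous feature vector is replaced by its nearest codebook entry, yielding a discrete code per frame. A minimal sketch with random stand-in codebook and features (not the D2M-GAN implementation):

```python
# Minimal sketch of the vector-quantization step underlying VQ audio
# representations: each continuous feature vector is mapped to the index of
# its nearest codebook entry. Codebook and features are random stand-ins.
import numpy as np

rng = np.random.default_rng(1)
codebook = rng.normal(size=(8, 4))    # 8 code vectors of dimension 4
features = rng.normal(size=(5, 4))    # 5 frames to quantize

# Pairwise squared distances between frames and code vectors: shape (5, 8)
d2 = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
codes = d2.argmin(axis=1)             # discrete code index per frame
quantized = codebook[codes]           # reconstructed (quantized) frames

print("codes:", codes)
print("quantization error:", float(((features - quantized) ** 2).mean()))
```

A generative model then works with the discrete `codes` sequence instead of raw audio, which is what makes complex, non-symbolic music styles tractable.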
- Title
- UTILIZING BACTERIAL INTERACTIONS TO CONTROL PATHOGENIC BIOFILM FORMATION
- Creator
- Fang, Kuili
- Date
- 2020
- Description
- Many chronic infections involve bacterial biofilms, which are difficult to eliminate using conventional antibiotic treatments. Biofilm formation is a result of dynamic intra- or inter-species interactions. However, the nature of the molecular interactions between bacteria in multi-species biofilms is not well understood compared to those in mono-species biofilms. The first project (Chapter 3) investigated the ability of probiotic Escherichia coli Nissle 1917 (EcN) to outcompete the biofilm formation of pathogens including enterohemorrhagic E. coli (EHEC), Pseudomonas aeruginosa, Staphylococcus aureus, and S. epidermidis. When dual-species biofilms were formed, EcN inhibited the EHEC biofilm population by 14-fold compared to EHEC mono-species biofilms; the corresponding figures were 1,100-fold for S. aureus and 8,300-fold for S. epidermidis. However, EcN did not inhibit P. aeruginosa biofilms. In contrast, commensal E. coli did not exhibit any inhibitory effect toward other bacterial biofilms. We identified that EcN secretes DegP, a bifunctional (protease and chaperone) periplasmic protein, outside the cells and thereby controls other biofilms. Although all three E. coli strains tested in this study expressed degP, only the EcN strain secreted DegP outside the cells. Deletion of degP abolished the ability of EcN to inhibit EHEC biofilms, and purified DegP directly repressed EHEC biofilm formation. Hence, probiotic E. coli outcompetes pathogenic biofilms via extracellular DegP activity during dual-species biofilm formation. Enterohemorrhagic E. coli O157:H7 (EHEC) is a pathogen causing outbreaks of hemorrhagic colitis. Conventional antibiotic treatment is not recommended for EHEC infection, as antibiotics trigger Shiga toxin production by EHEC and aggravate hemolytic-uremic syndrome. EHEC biofilm formation is closely associated with its virulence expression. Previously, we identified that probiotic E. coli Nissle 1917 (EcN) secretes DegP, resulting in the inhibition of EHEC biofilm formation in dual culture. DegP is a serine protease exhibiting both proteolytic and chaperone functions and binds to outer membrane proteins (OMPs) of target cells. However, the extracellular function of DegP is not clear. We hypothesized that binding of DegP to OMPs of EHEC might inhibit EHEC biofilm formation by affecting the adhesion ability or changing biofilm-related gene regulation of EHEC. We constructed EHEC mutants lacking ompA, ompC, or ompF, individually and in combination, and assessed their biofilm formation in the presence of DegP-secreting EcN in co-culture or with purified DegP added. Double deletion of ompA and ompC decreased EHEC single-species biofilm formation and rendered the biofilm more susceptible to DegP (about 25-fold inhibition, versus about 10-fold for wild-type EHEC biofilm), indicating that OmpA and OmpC contribute more to EHEC biofilm formation than OmpF and might deplete DegP's inhibitory functions. On the other hand, DegP S210A, a DegP mutant lacking protease function, still inhibited wild-type EHEC biofilm, indicating that DegP's biofilm inhibition does not derive from its protease activity. Additionally, EHEC transcription profiles in the presence of DegP showed that DegP up-regulated expression of cellulose production-related genes (csgD and bcsA) and motility-related genes (flhD, qseB), all of which were involved in EHEC biofilm inhibition, and down-regulated the Shiga toxin 2 virulence gene (stx2). Besides, DegP promoted EHEC cellulose production and motility, consistent with the transcription profile, and Shiga toxin 2 production will be further tested. This study reveals a new function of DegP secreted by EcN in controlling biofilms and leads us to an alternative strategy for controlling biofilm-related infections. Biofilm formation by the foodborne pathogen Listeria monocytogenes renders these cells highly resistant to current sanitation methods, and probiotics may be a promising approach to the efficient inhibition of Listeria biofilms. In the Chapter 5 study, three Leuconostoc mesenteroides strains of lactic acid bacteria isolated from kimchi were shown to be effective probiotics for inhibiting Listeria biofilm formation. Biofilms of two L. monocytogenes serotypes, 1/2a (ATCC15313) and 4b (ATCC19115), in dual-species culture with each probiotic strain were decreased by more than 40-fold compared with single-species Listeria biofilms; for instance, a reduction from 5.4 × 10^6 CFU/cm^2 L. monocytogenes ATCC19115 in single-species biofilms to 1.1 × 10^5 CFU/cm^2 in dual-species biofilms. Notably, one of the Leuconostoc strains, L. mesenteroides W51, led to the highest Listeria biofilm inhibition without affecting the growth of L. monocytogenes. The cell-free supernatant from the L. mesenteroides W51 culture, which contains large protein molecules (> 30 kDa), also inhibited Listeria biofilms. These data indicate that Leuconostoc probiotics can be used to repress L. monocytogenes biofilm contamination on surfaces at food processing facilities.
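The fold-inhibition figures quoted above are ratios of biofilm cell counts in single- versus dual-species culture. A quick check with the Listeria numbers given in the abstract:

```python
# The "fold inhibition" figures quoted are ratios of biofilm cell counts
# (CFU/cm^2) in single- vs dual-species culture, using the Listeria numbers
# given in the abstract.
mono = 5.4e6   # CFU/cm^2, L. monocytogenes ATCC19115 single-species biofilm
dual = 1.1e5   # CFU/cm^2, in dual-species culture with a probiotic strain

fold_inhibition = mono / dual
print(f"{fold_inhibition:.0f}-fold reduction")  # ~49-fold, i.e. "more than 40-fold"
```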
- Title
- A MICROFLUIDIC INTESTINAL-MICROBIOTA PLATFORM TO STUDY DRUG METABOLISM
- Creator
- Wang, Chengyao
- Date
- 2020
- Description
- The intestine is the main site where orally administered drugs are metabolized, absorbed, and distributed. The trillions of bacteria that inhabit the intestine influence health and regulate important biochemical factors, such as the activity of enzymes pertinent to drug metabolism. However, this has not been systematically studied, partly due to the challenges of recapitulating the unique and complex intestinal microenvironment, which includes (1) the presence of both mammalian and microbial cells and (2) a unique partitioned oxygenation profile, from the anaerobic lumen to the richly vascularized, oxygenated subepithelial mucosa. This thesis reports the development of a microfluidic device that integrates a membrane synthesized from collagen, a key element of the mucosal basal lamina, with a precisely controlled, partitioned oxygen environment. The device enabled excellent cell viability and long-term function. More importantly, it enabled the coculture of intestinal epithelial cells with aerobic and anaerobic bacteria in the partitioned oxygen environment. These experiments, on one hand, allowed the measurement of cellular oxygen consumption rate under perfusion, which can be used to study microbial regulation of oxidative metabolism in epithelial cells. On the other hand, the device allowed a systematic examination of the role of different gut bacterial strains in the regulation of factors that are important in drug metabolism, namely transporters and phase I enzymes. Our studies highlighted the importance of direct communication between the intestinal cells and the gut bacteria, with the major finding that species-specific differences exist in the regulation of drug metabolism. This work will be useful for (1) the discovery of novel regulators of drug-metabolizing enzymes, (2) developing new pharmacokinetic models, and (3) advancing precision medicine models for patients.
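The perfusion-based oxygen consumption rate (OCR) measurement mentioned above can be sketched from a steady-state mass balance: OCR follows from the flow rate and the drop in dissolved O2 between device inlet and outlet. The numbers below are illustrative assumptions, not the thesis's measurements.

```python
# Sketch of a perfusion OCR estimate: at steady state, the O2 consumed by the
# cells equals flow rate times the inlet-outlet concentration drop. All
# numbers are illustrative assumptions.
flow_rate = 10.0        # uL/min perfusion through the channel
c_in = 0.21             # mol/m^3, inlet dissolved O2 (approx. air-saturated)
c_out = 0.15            # mol/m^3, outlet after cellular consumption
n_cells = 5.0e4         # epithelial cells in the channel

flow_m3_s = flow_rate * 1e-9 / 60.0          # uL/min -> m^3/s
ocr = flow_m3_s * (c_in - c_out) / n_cells   # mol O2 per cell per second
print(f"OCR: {ocr * 1e18:.2f} amol O2/cell/s")
```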
- Title
- Technology News, January 10, 1944
- Creator
- Illinois Institute of Technology
- Date
- 1944-01-10
- Collection
- Technology News Microfilm collection, 1928-1981
- Title
- Population Dynamics of Listeria monocytogenes in Nut, Seed and Legume Butters
- Creator
- Zhang, Xinyuan
- Date
- 2020
- Description
- Nut, seed, and legume butters are low-water-activity foods and do not support the growth of foodborne pathogens. Research has determined that some pathogens, such as Listeria monocytogenes, can survive for long periods of time in butters, such as almond butter. However, information on the persistence of L. monocytogenes in butters is lacking. The purpose of this study was to determine the population dynamics of L. monocytogenes in butters stored at 5 and 25°C. Nut (almond, hazelnut, pecan), seed (pumpkin, sesame, sunflower), and legume (peanut and soy) butters, as well as butters containing chocolate (hazelnut and peanut), were inoculated with a 4-strain cocktail of rifampicin-resistant L. monocytogenes at 4 (high inoculation) or 1 log CFU/g (low inoculation). High-inoculation butters were mixed by hand for 15 min, and 100-g portions were weighed into deli-style containers with lids and stored at 5 or 25°C for 12 mo (370 d). Low-inoculation butters were stored in 25-g portions in stomacher bags at 25°C for 6 mo (180 d). During storage, 25 g from the 100-g high-inoculation portion or 25 g from the low-inoculation samples, in triplicate, were homogenized with 225 mL BPB (or BLEB for FDA BAM enrichments when necessary), and serial dilutions of the homogenate were plated onto BHIA with rifampicin for enumeration of L. monocytogenes. Data were statistically analyzed using Student's t-test (α = 0.05). The average initial population of L. monocytogenes in the high-inoculation butters was 3.58 ± 0.25 log CFU/g; L. monocytogenes was detected through enrichments for all low-inoculation butters. After 12 mo storage at 5°C, the population of L. monocytogenes decreased by 1.34, 1.27, 1.72, 2.04, and 0.93 log CFU/g in almond, hazelnut, peanut-with-chocolate, hazelnut-with-chocolate, and pecan butter, respectively, when inoculated at the higher level. Significantly less population reduction was observed in pumpkin, sesame, soy, peanut, and sunflower butters (1.08, 0.61, 0.84, 0.05, and 0.40 log CFU/g, respectively). After 12 mo storage at 25°C, the L. monocytogenes population in all butters, with the exception of sunflower butter, decreased to below the limit of enumeration (1.67 log CFU/g), but the pathogen was still detectable via enrichment. For low-inoculation butters, L. monocytogenes was present, as determined by enrichment, in all butters in at least one of two trials after 6 mo. The results of this study provide information on the survival of L. monocytogenes in different butter types stored at different temperatures.
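The statistical comparison described above (log reductions compared with Student's t-test at α = 0.05) can be sketched as follows; the plate counts are synthetic stand-ins, not study data.

```python
# Sketch of the log-reduction comparison described: triplicate plate counts
# are log10-transformed and two butter types compared with Student's t-test
# (alpha = 0.05). Counts below are synthetic stand-ins, not study data.
import math
from scipy import stats

def log_reduction(initial_cfu, final_cfu):
    """Reduction in log10 CFU/g between two time points."""
    return math.log10(initial_cfu) - math.log10(final_cfu)

# Triplicate final counts (CFU/g) after storage, initial ~ 3.58 log CFU/g
initial = 10 ** 3.58
almond = [log_reduction(initial, c) for c in (170, 150, 190)]
sesame = [log_reduction(initial, c) for c in (900, 1000, 1100)]

t, p = stats.ttest_ind(almond, sesame)
print(f"almond mean reduction: {sum(almond) / 3:.2f} log CFU/g")
print(f"p = {p:.4f} -> {'significant' if p < 0.05 else 'not significant'} at alpha=0.05")
```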
- Title
- TechNews, March 29, 2011
- Creator
- Illinois Institute of Technology
- Date
- 2011-03-29
- Collection
- Technology News print collection, 1940-2019
- Title
- Intraoperative Assessment of Surgical Margins in Head And Neck Cancer Resection Using Time-Domain Fluorescence Imaging
- Creator
- Cleary, Brandon M.
- Date
- 2023
- Description
- Rapid and accurate determination of surgical margin depth in fluorescence-guided surgery has been a difficult issue to overcome, leading to over- or under-resection of cancerous tissues and follow-up treatments such as 'call-back' surgery and chemotherapy. Current techniques utilizing direct measurement of tumor margins in frozen-section pathology are slow, which can prevent surgeons from acting on the information before a patient is sent home. Other fluorescence techniques require the measurement of margins via captured images overlaid with fluorescence data; this method is flawed, as measuring depth from captured images loses spatial information. Intensity-based fluorescence techniques utilizing tumor-to-background ratios do not decouple the effects of concentration from the depth information acquired. Thus, it is necessary to perform an objective measurement to determine the depths of surgical margins. This thesis focuses on the theory, device design, simulation development, and overall viability of time-domain fluorescence imaging as an alternative method of determining surgical margin depths. Characteristic regressions were generated using a thresholding method on acquired time-domain fluorescence signals, which were used to convert time-domain data to a depth value. These were applied to an image space to generate a depth map of a modeled tissue sample. All modeling was performed on homogeneous media using Monte Carlo simulations, providing high accuracy at the cost of increased computational time. In practice, the imaging process should be completed within 20 minutes for a full tissue sample, rather than 20 minutes for a single slice of the sample. This thesis also explores the effects of different thresholding levels on the accuracy of depth determination, as well as precautions to be taken regarding hardware limitations and signal noise.
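The thresholding idea described above can be illustrated with a toy model: the time at which a time-domain fluorescence signal first crosses a fixed fraction of its peak shifts with fluorophore depth, and a calibration regression maps crossing time back to depth. The signal model and constants below are invented for illustration, not taken from the thesis's Monte Carlo simulations.

```python
# Illustrative sketch of threshold-based depth recovery: deeper fluorophores
# produce later threshold-crossing times, and a calibration regression
# (crossing time vs known depth) is inverted to estimate an unknown depth.
# The Gaussian pulse model and all constants are toy assumptions.
import numpy as np

t = np.linspace(0, 5e-9, 500)  # 5 ns measurement window

def signal(depth_mm):
    """Toy time-domain signal: deeper fluorophores arrive later."""
    t0 = 0.5e-9 + 0.2e-9 * depth_mm
    return np.exp(-((t - t0) ** 2) / (2 * (0.3e-9) ** 2))

def crossing_time(sig, threshold=0.5):
    """First time the rising signal exceeds threshold * peak."""
    idx = np.argmax(sig >= threshold * sig.max())
    return t[idx]

# "Calibration": fit crossing time vs known depth, then invert for an unknown
depths = np.array([0.0, 1.0, 2.0, 3.0])
times = np.array([crossing_time(signal(d)) for d in depths])
slope, intercept = np.polyfit(depths, times, 1)

unknown = crossing_time(signal(1.5))
estimated_depth = (unknown - intercept) / slope
print(f"estimated depth: {estimated_depth:.2f} mm")  # ~1.5 mm
```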
- Title
- Investigation in the Uncertainty of Chassis Dynamometer Testing for the Energy Characterization of Conventional, Electric and Automated Vehicles
- Creator
- Di Russo, Miriam
- Date
- 2023
- Description
- For conventional and electric vehicles tested in a standard chassis dynamometer environment, precise regulations on the evaluation of their energy performance exist. However, the regulations do not include requirements on the confidence value to associate with the results. As vehicles become more and more efficient to meet stricter regulatory mandates on emissions, fuel economy, and energy consumption, traditional testing methods may become insufficient to validate these improvements and may need revision. Without information about the accuracy associated with the results of those procedures, however, adjustments and improvements are not possible, since no frame of reference exists. For connected and automated vehicles, there are no standard testing procedures, and researchers are still in the process of determining whether current evaluation methods can be extended to test intelligent technologies and which metrics best represent their performance. For these vehicles it is even more important to determine the uncertainty associated with these experimental methods and how it propagates to the final results. The work presented in this dissertation focuses on the development of a systematic framework for the evaluation of the uncertainty associated with the energy performance of conventional, electric, and automated vehicles. The framework is based on a known statistical method to determine the uncertainty associated with the different stages and processes involved in experimental testing and to evaluate how the accuracy of each parameter involved impacts the final results. The results demonstrate that the framework can be successfully applied to existing testing methods, provides a trustworthy value of accuracy to associate with energy performance results, and can easily be extended to connected and automated vehicle testing to evaluate how novel experimental methods impact the accuracy and confidence of the outputs. The framework can also easily be implemented in an existing laboratory environment to incorporate uncertainty evaluation into the results analyzed at the end of each test, providing a reference for researchers to evaluate the actual benefits of new algorithms and optimization methods and to understand margins for improvement, and for regulators to assess which parameters to enforce to ensure compliance and projected benefits.
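A framework of this kind typically builds on standard first-order uncertainty propagation, in which the relative uncertainties of independent measured inputs combine in quadrature. A minimal sketch with illustrative instrument uncertainties (the values are assumptions, not from the dissertation):

```python
# Minimal sketch of first-order (root-sum-of-squares) uncertainty propagation
# for a product/quotient quantity such as energy consumption per distance:
# relative uncertainties of independent inputs combine in quadrature.
# Instrument uncertainty values below are illustrative assumptions.
import math

def rss(*rel_uncertainties):
    """Combined relative uncertainty for a product/quotient of inputs."""
    return math.sqrt(sum(u ** 2 for u in rel_uncertainties))

u_power = 0.010     # 1.0% power analyzer uncertainty
u_time = 0.001      # 0.1% timing uncertainty
u_distance = 0.005  # 0.5% dynamometer distance uncertainty

u_result = rss(u_power, u_time, u_distance)
print(f"combined relative uncertainty: {u_result * 100:.2f}%")  # -> 1.12%
```

The same pattern extends stage by stage through a test procedure, which is what lets a per-parameter accuracy budget be traced to the final energy result.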
- Title
- Using Niobium surface encapsulation and Rhenium to enhance the coherence of superconducting devices
- Creator
- Crisa, Francesco
- Date
- 2024
- Description
- In recent decades, the scientific community has grappled with escalating complexity, necessitating a more advanced tool capable of tackling increasingly intricate simulations beyond the capabilities of classical computers. This tool, known as a quantum computer, features processors composed of individual units termed qubits. While various methods exist for constructing qubits, superconducting circuits have emerged as a leading approach, owing to their parallels with semiconductor technology. In recent years, significant strides have been made in optimizing the geometry and design of qubits. However, the current bottleneck in the performance of superconducting qubits lies in the presence of defects and impurities within the materials used. Niobium, owing to its desirable properties, such as a high critical temperature and low kinetic inductance, stands out as the most prevalent superconducting material. Nonetheless, it is encumbered by a relatively thick oxide layer (approximately 5 nm) exhibiting three distinct oxidation states: NbO, NbO2, and Nb2O5. The primary challenge with niobium lies in the multitude of defects localized within the highly disordered Nb2O5 layer and at the interfaces between the different oxides. In this study, I present an encapsulation strategy aimed at restraining surface oxide growth by depositing a thin layer (5 to 10 nm) of another material in vacuum atop the Nb thin film. This approach exploits the superconducting proximity effect, and it was successfully employed in the development of Josephson junction devices on Nb during the 1980s. In the past two years, tantalum and titanium nitride have emerged as promising alternative materials, with breakthrough qubit publications showcasing coherence times five to ten times superior to those achieved with Nb. The focus here is on the fabrication and RF testing of Nb-based qubits with Ta and Au capping layers. With Ta capping, we have achieved a best (not average) T1 decay time of nearly 600 μs, more than a factor-of-10 improvement over bare Nb. This establishes the capping-layer approach as a significant new direction for the development of superconducting qubits. Concurrently with the exploration of materials for encapsulation strategies, identifying materials conducive to enhancing the performance of superconducting qubits is imperative. Ideal candidates should exhibit a thin, low-loss surface oxide and establish a clean interface with the substrate, thereby minimizing defects and potential sources of loss. Rhenium, characterized by an extremely thin surface oxide (less than 1 nm) and nearly perfect crystal-structure alignment with commonly used substrates such as sapphire, emerges as a promising material platform poised to elevate the performance of superconducting qubits.
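A T1 decay time like the ~600 μs quoted above is extracted by fitting the exponential relaxation of the qubit's excited-state population. A minimal sketch on synthetic noiseless data, using a log-linear fit in place of the usual nonlinear least squares:

```python
# Sketch of T1 extraction: the excited-state population decays as
# exp(-t / T1), so T1 is recovered from the slope of log(population) vs t.
# Data here are synthetic and noiseless, purely for illustration.
import numpy as np

t1_true = 600e-6                      # 600 us, the best decay time quoted
t = np.linspace(0, 2e-3, 50)          # measurement delays up to 2 ms
population = np.exp(-t / t1_true)     # ideal T1 relaxation curve

slope, _ = np.polyfit(t, np.log(population), 1)
t1_fit = -1.0 / slope
print(f"fitted T1: {t1_fit * 1e6:.0f} us")  # -> 600 us
```

With noisy measurements one would instead fit `A * exp(-t/T1) + B` by nonlinear least squares, since the log transform distorts the noise.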
- Title
- The Double-edged Sword of Executive Pay: How the CEO-TMT Pay Gap Influences Firm Performance
- Creator
- Haddadian Nekah, Pouya
- Date
- 2024
- Description
- This study examines the relationship between the chief executive officer (CEO)-top management team (TMT) pay gap and consequent firm performance. Drawing on tournament theory and equity theory, I argue that the effect of the CEO-TMT pay gap on consequent firm performance is non-monotonic. Using data from 1995 to 2022 on S&P 1500 US firms, I explicate an inverted U-shaped relationship, such that an increase in the pay gap leads to an increase in firm performance up to a certain point, after which performance declines. Additionally, multilevel analyses reveal that this curvilinear relationship is moderated by attributes of the TMT and of the industry in which the firm competes. My findings show that firms with higher TMT gender diversity suffer lower performance loss from wider pay gaps. Furthermore, when firm executives are paid more than industry norms, or when the firm has a long-tenured CEO, firm performance becomes less sensitive to larger CEO-TMT pay gaps. Lastly, when the firm competes in a masculine industry, firm performance is more negatively affected by larger CEO-TMT pay gaps. Contrary to my expectations, gender-diversity-friendly firm policies did not influence the relationship between the CEO-TMT pay gap and firm performance.
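An inverted U-shaped relationship like the one described is conventionally tested by adding a squared pay-gap term to the performance regression: the shape shows up as a negative quadratic coefficient, with the turning point at -b1/(2·b2). A sketch on synthetic data (not the S&P 1500 panel):

```python
# Sketch of the inverted U-shape test: regress performance on the pay gap and
# its square; an inverted U means a negative quadratic coefficient. The
# turning point -b1 / (2 * b2) is the gap at which performance peaks.
# Data are synthetic stand-ins, not the study's panel.
import numpy as np

rng = np.random.default_rng(42)
gap = rng.uniform(0, 10, 1000)
performance = 2.0 * gap - 0.15 * gap ** 2 + rng.normal(0, 0.5, 1000)

X = np.column_stack([np.ones_like(gap), gap, gap ** 2])
b0, b1, b2 = np.linalg.lstsq(X, performance, rcond=None)[0]

turning_point = -b1 / (2 * b2)
print(f"quadratic term: {b2:+.3f} (inverted U if negative)")
print(f"pay gap at peak performance: {turning_point:.2f}")
```

The moderation findings would correspond to interacting both gap terms with the moderator (e.g., TMT gender diversity) in the same specification.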
- Title
- Improving Niobium Superconducting Radio-Frequency Cavities by Studying Tantalum
- Creator
- Helfrich, Halle
- Date
- 2023
- Description
- Niobium superconducting radio-frequency (SRF) cavities are widely used accelerating structures. Improvements in both quality factor, Q0, and maximum accelerating gradient, Eacc, have been made to SRF cavities by introducing new processing techniques. These breakthroughs include processes such as nitrogen doping (N-doping) and infusion, electrochemical polishing (EP), and high-pressure rinsing (HPR) [1]. There is still abundant opportunity to improve the cavities, or rather the material they are primarily composed of: niobium. A focus here is the role the native oxide of Nb plays in SRF cavity performance. The values of interest for a given cavity are its quality factor Q0, maximum accelerating gradient Eacc, and surface resistance Rs. This work characterizes Nb and Ta foils prepared under identical conditions using X-ray photoelectron spectroscopy (XPS) to compare surface oxides and better understand RF loss mechanisms in Nb SRF cavities and qubits. It is well established that Ta qubits exhibit much longer coherence times than Nb qubits, which is probably due to the larger RF losses in Nb oxide. By studying tantalum, an element similar to niobium, the mechanisms of the losses that originate in the oxide and suboxide layers present on the surface of Nb cavities might finally be unlocked. We find noticeable differences in the oxides of Nb and Ta formed by air exposure of clean foils. In particular, Ta does not display the TaO2 suboxide in XPS, while Nb commonly shows NbO2. This suggests that suboxides are an additional contributor to RF losses. We also suggest that thin Ta film coatings of Nb SRF cavities may be a way of increasing Q0. It is in the interest of the accelerator community to fully understand the surface impurities present in Nb SRF cavities so that strategies for mitigating their effects can be proposed.
- Title
- Improving Localization Safety for Landmark-Based LiDAR Localization System
- Creator
- Chen, Yihe
- Date
- 2024
- Description
-
Autonomous ground robots have gained traction in various commercial applications, with established safety protocols covering subsystem reliability, control algorithm stability, path planning, and localization. This thesis specifically delves into the localizer, a critical component responsible for determining the vehicle's state (e.g., position and orientation); it assesses compliance with localization safety requirements and proposes methods for enhancing localization safety. Within the robotics domain, diverse localizers are utilized, such as scan-matching techniques like the normal distribution transform (NDT), the iterative closest point (ICP) algorithm, probabilistic map methods, and semantic map-based localization. Notably, NDT stands out as a widely adopted standalone laser localization method, prevalent in autonomous driving software such as the Autoware and Apollo platforms. In addition to the mentioned localizers, common state estimators include variants of the Kalman filter, particle filter-based estimators, and factor graph-based estimators. The evaluation of localization performance typically involves quantifying the estimated state variance of these state estimators. While various localizer options exist, this study focuses on those utilizing extended Kalman filters and factor graph methods. Unlike methods such as the NDT and ICP algorithms, extended Kalman filter and factor graph-based approaches guarantee bounding of the estimated state uncertainty and have been extensively researched for integrity monitoring. Common variance analysis, employed for sensor readings and state estimators, has limitations, primarily focusing on non-faulted scenarios under nominal conditions. This approach proves impractical for real-world scenarios and falls short for safety-critical applications like autonomous vehicles (AVs). To overcome these limitations, this thesis utilizes a dedicated safety metric: integrity risk.
Integrity risk assesses the reliability of a robot's sensory readings and localization algorithm performance under both faulted and non-faulted conditions. With a proven track record in aviation, integrity risk has recently been applied to robotics applications, particularly for evaluating the safety of lidar localization. Despite the significance of improving localization integrity risk through laser landmark manipulation, this remains an underexplored territory: existing research on robot integrity risk primarily focuses on the vehicles themselves. To comprehensively understand the integrity risk of a lidar-based localization system, an exploration of lidar measurement fault modes is essential, a topic also covered in this thesis. The primary contributions of this thesis are: a realistic error estimation method for state estimators in autonomous vehicles navigating with pole-shaped lidar landmark maps, along with a compensatory method; a method for quantifying the risk associated with unmapped associations in urban environments, enhancing the realism of the values provided by the integrity risk estimator; and a novel approach to improve the localization integrity of autonomous vehicles equipped with lidar feature extractors in urban environments through minimal environmental modifications, mitigating the impact of unmapped association faults. Simulation and experimental results are presented and discussed to illustrate the impact of each method, providing further insight into their contributions to localization safety.
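As a minimal illustration of the integrity-risk metric described above: under a simplified zero-mean Gaussian model of the localization error, the integrity risk reduces to the probability that the error magnitude exceeds an alert limit. The error model, the alert-limit framing, and the parameter values below are illustrative assumptions, not taken from the thesis.

```python
import math

def integrity_risk(sigma: float, alert_limit: float) -> float:
    """Probability that a zero-mean Gaussian position error with
    standard deviation `sigma` exceeds `alert_limit` (same units).
    P(|e| > AL) = 2 * (1 - Phi(AL / sigma)), Phi = standard normal CDF."""
    z = alert_limit / sigma
    phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return 2.0 * (1.0 - phi)

# Tighter state estimation (smaller sigma) lowers the integrity risk
# for the same alert limit -- the motivation for the thesis's
# landmark-manipulation and error-compensation methods.
risk_nominal = integrity_risk(sigma=0.30, alert_limit=1.0)
risk_improved = integrity_risk(sigma=0.15, alert_limit=1.0)
```

A fault-aware integrity risk additionally sums this probability over faulted hypotheses weighted by their priors; the sketch covers only the non-faulted term.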
- Title
- Independence and Graphical Models for Fitting Real Data
- Creator
- Cho, Jason Y.
- Date
- 2023
- Description
-
Given a real-life dataset whose attributes take on categorical values, with a corresponding r(1) × r(2) × … × r(m) contingency table having nonzero rows or nonzero columns, we test the goodness-of-fit of various independence models to the dataset using a variation of Metropolis-Hastings that uses Markov bases as a tool to obtain a Monte Carlo estimate of the p-value. This variation of Metropolis-Hastings can be found in Algorithm 3.1.1. Next we consider the problem: "out of all possible undirected graphical models, each associated to some graph with m vertices, which one best fits the dataset?" Here the m attributes are labeled as the vertices of the graph. Since there are 2^(mC2) possible undirected graphs on m vertices, we would have to conduct 2^(mC2) goodness-of-fit tests. Instead, we consider a backwards selection likelihood-ratio test algorithm. We start with the complete graph G = K(m) and call the corresponding undirected graphical model ℳ(G) the parent model. Then, for each edge e in E(G), we repeatedly apply the likelihood-ratio test to compare the relative fit of the child model ℳ(G-e) against the parent model ℳ(G), where ℳ(G-e) ⊆ ℳ(G). More details on this iterative process can be found in Algorithm 4.1.3. For our dataset we use the alcohol dataset found at https://www.kaggle.com/datasets/sooyoungher/smoking-drinking-dataset, from which we use four attributes: "Gender" (male, female), "Age", "Total cholesterol (mg/dL)", and "Drinks alcohol or not?". After testing the goodness-of-fit of the three independence models corresponding to the independence statements "Gender vs. Drink or not?", "Age vs. Drink or not?", and "Total cholesterol vs. Drink or not?", we found that the data are consistent with the two independence models corresponding to "Age vs. Drink or not?" and "Total cholesterol vs. Drink or not?".
After applying the backwards selection likelihood-ratio method to the alcohol dataset, we found that the data are consistent with the undirected graphical model associated to the complete graph minus the edge {"Total cholesterol", "Drink or not?"}.
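The Markov-basis Metropolis-Hastings test referenced above can be sketched for a two-way table, where the Markov basis for the independence model consists of ±1 moves on 2×2 minors (the Diaconis-Sturmfels walk). This is a simplified illustration under the hypergeometric null, not a reproduction of the thesis's Algorithm 3.1.1; function names and defaults are mine.

```python
import random

def chi_sq(table, rows, cols, total):
    """Pearson chi-square statistic against the independence model."""
    stat = 0.0
    for i, r in enumerate(rows):
        for j, c in enumerate(cols):
            exp = r * c / total
            stat += (table[i][j] - exp) ** 2 / exp
    return stat

def mh_step(table, rng):
    """One Metropolis-Hastings step: propose a +/-1 move on a random
    2x2 minor (preserves all margins) and accept with the Hastings
    ratio for the hypergeometric null, prob(table) ~ 1 / prod(cell!)."""
    m, n = len(table), len(table[0])
    i, k = rng.sample(range(m), 2)
    j, l = rng.sample(range(n), 2)
    if rng.random() < 0.5:
        j, l = l, j  # randomize the sign of the move
    # proposed: (i,j) and (k,l) gain 1; (i,l) and (k,j) lose 1
    if table[i][l] == 0 or table[k][j] == 0:
        return  # move would create a negative cell; stay put
    ratio = (table[i][l] * table[k][j]) / ((table[i][j] + 1) * (table[k][l] + 1))
    if rng.random() < min(1.0, ratio):
        table[i][j] += 1; table[k][l] += 1
        table[i][l] -= 1; table[k][j] -= 1

def mc_pvalue(table, iters=20000, seed=0):
    """Monte Carlo p-value: fraction of sampled tables (same margins)
    whose chi-square statistic is at least the observed one."""
    rng = random.Random(seed)
    rows = [sum(r) for r in table]
    cols = [sum(c) for c in zip(*table)]
    total = sum(rows)
    observed = chi_sq(table, rows, cols, total)
    work = [list(r) for r in table]
    extreme = 0
    for _ in range(iters):
        mh_step(work, rng)
        if chi_sq(work, rows, cols, total) >= observed:
            extreme += 1
    return extreme / iters
```

A strongly dependent table such as `[[30, 5], [5, 30]]` yields a small p-value, while a perfectly balanced table yields p = 1, since every table in the fiber matches its chi-square statistic of zero.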
- Title
- Development of a Model To Investigate Inflammation Using Peripheral Blood Mononucleated Cells
- Creator
- Geevarghese Alex, Peter
- Date
- 2023
- Description
-
High-calorie, diet-related postprandial inflammation is one of the major health risks facing modern society. Clinical studies have demonstrated that habitual, uncontrolled consumption of energy-dense food elicits a post-meal inflammatory response that can contribute to chronic disease. We aimed to identify the causes of postprandial inflammation in response to various dietary treatments and to build a model demonstrating them, using in vivo and in vitro techniques together with statistical analysis. The resulting model would help design specific dietary treatments that minimize inflammation. In addition to identifying key dietary components, the model facilitates the design of individualized interventions to reduce inflammation, thereby improving long-term health outcomes. We aim to explain clinical observations of diet-induced postprandial inflammation at the molecular level and to contribute to reducing the burden of the chronic inflammatory disorders associated with postprandial inflammation.
- Title
- Large Language Model Based Machine Learning Techniques for Fake News Detection
- Creator
- Chen, Pin-Chien
- Date
- 2024
- Description
-
With advanced technology, it is widely recognized that nearly everyone owns one or more personal devices. Consequently, people are evolving into content creators on social media and streaming platforms, sharing their personal ideas regardless of their education or expertise level, and distinguishing fake news is becoming increasingly crucial. However, recent research only presents comparisons between one or more models for detecting fake news across different datasets. In this work, we applied Natural Language Processing (NLP) techniques with Naïve Bayes and DistilBERT machine learning methods, combining and augmenting four datasets. The results show a balanced accuracy higher than the average reported in recent studies, suggesting that our approach holds promise for improving fake news detection in an era of widespread content creation.
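Of the two methods mentioned, the Naïve Bayes component can be illustrated with a minimal multinomial classifier and the balanced-accuracy metric the abstract reports. The toy headlines and labels below are hypothetical stand-ins for the combined datasets, and this sketch is not the thesis's actual pipeline (which also uses DistilBERT).

```python
import math
from collections import Counter, defaultdict

def tokenize(text):
    return text.lower().split()

class NaiveBayes:
    """Multinomial Naive Bayes with Laplace (add-one) smoothing."""
    def fit(self, texts, labels):
        self.class_counts = Counter(labels)
        self.word_counts = defaultdict(Counter)
        self.vocab = set()
        for text, label in zip(texts, labels):
            for tok in tokenize(text):
                self.word_counts[label][tok] += 1
                self.vocab.add(tok)
        return self

    def predict(self, text):
        total_docs = sum(self.class_counts.values())
        best_label, best_score = None, -math.inf
        for label, n_docs in self.class_counts.items():
            score = math.log(n_docs / total_docs)  # log prior
            denom = sum(self.word_counts[label].values()) + len(self.vocab)
            for tok in tokenize(text):  # log likelihood, smoothed
                score += math.log((self.word_counts[label][tok] + 1) / denom)
            if score > best_score:
                best_label, best_score = label, score
        return best_label

def balanced_accuracy(y_true, y_pred, labels=("real", "fake")):
    """Mean of per-class recall; robust to class imbalance."""
    recalls = []
    for label in labels:
        pairs = [(t, p) for t, p in zip(y_true, y_pred) if t == label]
        recalls.append(sum(t == p for t, p in pairs) / len(pairs))
    return sum(recalls) / len(recalls)

# Hypothetical toy headlines for illustration only.
train_texts = [
    "scientists publish peer reviewed study results",
    "official report confirms economic data",
    "shocking miracle cure doctors hate revealed",
    "secret plot exposed click to learn truth",
]
train_labels = ["real", "real", "fake", "fake"]
clf = NaiveBayes().fit(train_texts, train_labels)
```

Balanced accuracy rather than plain accuracy matters here because fake-news corpora are often class-imbalanced, so a majority-class classifier can score deceptively well on raw accuracy.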
- Title
- DEVELOPMENT AND APPLICATION OF A NATIONALLY REPRESENTATIVE MODEL SET TO PREDICT THE IMPACTS OF CLIMATE CHANGE ON ENERGY CONSUMPTION AND INDOOR AIR QUALITY (IAQ) IN U.S. RESIDENCES
- Creator
- Fazli, Torkan
- Date
- 2020
- Description
-
Americans spend most of their time inside residences, where they are exposed to a number of pollutants of both indoor and outdoor origin. Residential buildings also account for over 20% of total primary energy consumption in the U.S. and a similar proportion of greenhouse gas emissions. Moreover, climate change is expected to affect building energy use and indoor air quality (IAQ) through both building design (i.e., via our societal responses to climate change) and building operation (i.e., via changing meteorological and ambient air quality conditions). The overarching objectives of this work are to develop a set of combined building energy and indoor air mass balance models that are generally representative of both the current (~2010s) and future (~2050s) U.S. residential building stock, and to apply them under both current and future climate scenarios to estimate the impacts of climate change and climate change policies on building energy use, IAQ, and the prevalence of chronic health hazards in U.S. homes. The developed model set includes over 4000 individual building models with detailed characteristics of both building operation and indoor pollutant physics/chemistry, and is linked to a disability-adjusted life years (DALYs) approach for estimating chronic health outcomes associated with indoor pollutant exposure. The future building stock model incorporates predicted changes in future meteorological conditions, ambient air quality, the U.S. housing stock, and population demographics. Using the model set, we estimate that total site and source energy consumption for space conditioning in U.S. residences will decrease by ~37% and ~20%, respectively, by mid-century (~2050s) compared to 2012, driven by decreases in heating energy use across the building stock that are larger than the coincident increases in cooling energy use in warmer climates.
Indoor concentrations of most pollutants of ambient origin are expected to decrease, driven by predicted reductions in ambient concentrations due to tighter emissions controls, with the notable exception of ozone, which is expected to increase in future climate scenarios. This work provides the first known estimates of the potential magnitude of the impacts of expected climate change on building energy use, IAQ, and the prevalence of chronic health hazards in U.S. homes.
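The indoor air mass balance underlying the model set can be illustrated for a single well-mixed zone at steady state. The equation form is standard for such models; the particular symbols and parameter values below (chosen to resemble a reactive pollutant of outdoor origin, such as ozone, with no indoor source) are illustrative assumptions, not the thesis's inputs.

```python
def steady_state_indoor(c_out, ach, penetration, deposition, emission, volume):
    """Steady-state single-zone mass balance:

        C_in = (ach * P * C_out + E / V) / (ach + k)

    c_out:       outdoor concentration (ug/m3)
    ach:         air exchange rate (1/h)
    penetration: envelope penetration factor P (0-1)
    deposition:  first-order surface loss rate k (1/h)
    emission:    indoor source strength E (ug/h)
    volume:      zone volume V (m3)
    """
    return (ach * penetration * c_out + emission / volume) / (ach + deposition)

# Illustrative ozone-like case: no indoor source, fast surface deposition.
c_in = steady_state_indoor(c_out=40.0, ach=0.5, penetration=0.8,
                           deposition=2.8, emission=0.0, volume=300.0)
```

For a pollutant of purely outdoor origin, tightening the envelope (lowering `ach`) reduces the indoor concentration, while for pollutants with indoor sources the same change raises it, which is one way climate-driven changes in building construction feed into IAQ outcomes.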
- Title
- Data-Driven Modeling for Advancing Near-Optimal Control of Water-Cooled Chillers
- Creator
- Salimian Rizi, Behzad
- Date
- 2023
- Description
-
Hydronic heating and cooling systems are among the most common types of heating and cooling systems installed in older existing buildings, especially commercial buildings. The results of this study, based on the Commercial Building Energy Consumption Survey (CBECS), indicate that chillers provide cooling in more than half of the commercial office building floorspace in the U.S. Therefore, to address the need to improve the energy efficiency of chiller system operation, research studies have developed different models to investigate chiller sequencing approaches. Engineering-based models and empirical models are the most popular approaches for developing prediction models. Engineering-based models utilize physical principles to calculate the thermal dynamics and energy behavior of a system and require detailed system information, while empirical models deploy machine learning algorithms to develop relationships between input and output data. Compared with the engineering-based approach, empirical models are more practical for predicting a system's energy use because of the accessibility of the required data and their advantages in model implementation and prediction accuracy. Moreover, selecting accurate chiller prediction models for chiller sequencing requires considering the importance of each input variable and its contribution to the overall performance of the chiller system, as well as ease of application and computational time. Among empirical modeling methods, ensemble learning techniques overcome the instability of a single learning algorithm, improve prediction accuracy, and identify input variable importance. Ensemble models combine multiple individual models, often called base or weak models, to produce a more accurate and robust predictive model. Random Forest (RF) and Extreme Gradient Boosting (XGBoost) models are ensemble models that offer built-in mechanisms for assessing feature importance; these mechanisms measure how much each feature contributes to the overall predictive performance of the ensemble.
The first objective of this work explores the frequency of hydronic cooling systems in the U.S. building stock for applying potential energy efficiency measures (EEMs) to chiller plants. Results show that central chillers are responsible for providing cooling in more than 50% of commercial buildings with areas greater than 9,000 m2 (~100,000 ft2). In addition, hydronic cooling systems have the highest Energy Use Intensity (EUI) among the systems considered, with an EUI of 410.0 kWh/m2 (130.0 kBtu/ft2). The results of this objective therefore support developing accurate prediction models to assess chiller performance parameters for chiller sequencing control strategies in older existing buildings. The second objective of the dissertation is to evaluate the performance of the chiller sequencing strategy for an existing water-cooled chiller plant in a high-rise commercial building and to develop highly accurate RF chiller models that determine the input variables of greatest importance to chiller power consumption predictions. The results show that the average mean absolute percentage error (MAPE) and root mean squared error (RMSE) across all three RF chiller models are 5.3% and 30 kW, respectively, on the validation dataset, which confirms good agreement between measured and predicted values. On the other hand, understanding prediction uncertainty is important for confidently reporting smaller savings estimates for different chiller sequencing control strategies. This study quantifies prediction uncertainty as a percentile to select an appropriate confidence level for the chiller models, which could lead to better prediction of the peak electricity load and to more efficient participation in demand response programs.
The results show that by increasing the confidence level from 80% to 90%, the upper and lower bounds of the demand charge differ from the actual value by factors of 3.3 and 1.7, respectively. This demonstrates the significance of selecting appropriate confidence levels when implementing chiller sequencing strategies and demand response programs in commercial buildings. The third objective of this study investigates the accuracy of these prediction models with respect to preprocessing, data selection, noise, and the effect of chiller control system performance on the recorded data. Specifically, it investigates the impacts of different data resolutions, noise levels, and data smoothing methods on chiller power consumption and chiller COP predictions based on time-series XGBoost models. The results of applying the smoothing methods indicate that the performance of the chiller COP and chiller power consumption models improved by 2.8% and 4.8%, respectively. Overall, this study should guide the development of data-driven chiller power consumption and chiller COP prediction models in practice.
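The MAPE and RMSE figures quoted for the RF chiller models, and the percentile-based uncertainty bound, can be sketched in a few lines. This is a minimal illustration; the nearest-rank percentile rule is my assumption, as the abstract does not specify which percentile estimator the study uses.

```python
import math

def mape(actual, predicted):
    """Mean absolute percentage error, in percent."""
    return 100.0 / len(actual) * sum(
        abs(a - p) / abs(a) for a, p in zip(actual, predicted))

def rmse(actual, predicted):
    """Root mean squared error, in the units of the data (here kW)."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted))
                     / len(actual))

def error_percentile(errors, confidence):
    """Prediction-error bound at the given confidence level, taken as
    a nearest-rank empirical percentile of the absolute residuals."""
    ranked = sorted(abs(e) for e in errors)
    idx = max(0, math.ceil(confidence * len(ranked)) - 1)
    return ranked[idx]
```

Raising the confidence level always widens the bound taken from the residual distribution, which mirrors the study's finding that the 90% demand-charge bounds sit farther from the actual value than the 80% bounds.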