Search results
(5,941 - 5,960 of 6,090)
Pages
- Title
- The Detection of Emerging Pathogenic Arcobacter Species In Poultry and Poultry By-Products
- Creator
- Nguyen, Paul
- Date
- 2022
- Description
-
Arcobacter species are emerging foodborne pathogens associated with human gastrointestinal illness. Typical reported symptoms of Arcobacter infection include diarrhea, abdominal cramps, nausea, vomiting, and, in severe cases, bacteremia. Consumption of contaminated food and water is the most common transmission route leading to human infection. When consumed, pathogenic Arcobacter spp. pass through the stomach and establish themselves in the host intestinal tract, where they cause gastroenteritis. Currently, there is no standard isolation method to detect pathogenic Arcobacter spp. in food and environmental sample matrices. The research detailed in this thesis describes the development of the Nguyen-Restaino-Juárez (NRJ) Arcobacter detection system, comprised of a selective enrichment broth and a chromogenic agar plate used to isolate three pathogenic species: Arcobacter butzleri, Arcobacter cryaerophilus, and Arcobacter skirrowii. Results revealed that NRJ yielded 97.8% inclusivity and 100.0% exclusivity when evaluated against select bacterial strains found in foods. Our research group internally validated the novel chromogenic detection system by comparing its efficacy against the modified Houf reference method (HB). Method-performance evaluations determined that the NRJ method was significantly more sensitive and specific than modified HB when isolating the three Arcobacter species from ground chicken samples. Furthermore, 16S amplicon sequencing identified that greater than 97% of bacterial isolates recovered using the NRJ detection system were Arcobacter species. This thesis presents the development and validation of a new gold-standard method for isolating these emerging pathogens in food, clinical, and environmental samples.
- Title
- Non-Hermitian Phononics
- Creator
- Mokhtari, Amir Ashkan
- Date
- 2021
- Description
-
Non-Hermitian and open systems are those that interact with their environment through flows of energy, particles, and information. These systems show rich physical behaviors, such as unidirectional wave reflection, enhanced transmission, and enhanced sensitivity to external perturbations compared to Hermitian systems. To study non-Hermitian and open systems, we first present key concepts and the required mathematical tools, such as the theory of linear operators, linear algebra, biorthogonality, and exceptional points. We then consider the operator properties of various phononic eigenvalue problems. The aim is to answer some fundamental questions about the eigenvalues and eigenvectors of phononic operators: whether the eigenvalues are real or complex, whether the eigenvectors form a complete basis, what the right orthogonality relationships are, and how to create a complete basis when none may exist at the outset. In doing so, we present a unified understanding of the properties of phononic eigenvalues and eigenvectors that would emerge from any numerical method employed to compute such quantities. Next, we apply these theories to the problem of scattering of in-plane waves at an interface between a homogeneous medium and a layered composite. This problem is an example of a non-self-adjoint operator with biorthogonal eigenvectors and a complex spectrum. Since the problem is non-self-adjoint, degeneracies in the spectrum generally represent a coalescing of both the eigenvalues and the eigenvectors (exceptional points). These degeneracies appear in both the complex and real domains of the wavevector. After calculating the eigenvalues and eigenvectors, we calculate the scattered fields through a novel application of the Betti-Rayleigh reciprocity theorem. Several numerical examples showing rich scattering phenomena are presented.
We also prove that energy flux conservation is a restatement of the biorthogonality relationship of non-self-adjoint operators. Finally, we discuss open elastodynamics as a subset of non-Hermitian systems. A basic concept in open systems is the effective Hamiltonian: a Hamiltonian that acts on a reduced set of degrees of freedom in a system and describes only part of the eigenvalue spectrum of the total Hamiltonian. We present the Feshbach projection operator formalism -- traditionally used for calculating effective Hamiltonians of subsystems in quantum systems -- in the context of mechanical wave propagation problems. The formalism allows for the direct formal representation of effective Hamiltonians of finite systems interacting with their environment. This results in a smaller set of equations that isolates the dynamics of the system from the rest of the larger problem, which is usually of infinite size. We then present the procedure to calculate the Green's function of the effective Hamiltonian. Finally, we solve the scattering problem in 1D discrete systems using the Green's function method.
- Title
- TWO ESSAYS IN SUSTAINABILITY AND ASSET RETURN PREDICTABILITY
- Creator
- Nguyen, Lanh Vu Thuc
- Date
- 2021
- Description
-
Our paper consists of two chapters in financial modeling for sustainability and asset return predictability. Recent developments in data scraping and analytical methods have made it possible to construct the data and models required to examine the topics in each chapter. Chapter 1 proposes a simple yet strategic model involving a personal financial system to achieve a sustainable and prosperous future. The proposed model emphasizes optimizing the carbon footprint of one person at a time through the decentralization of electricity use. While describing the steps to develop a decentralized system that treats electricity as a credit product, the model also underlines the importance of geographic economic dimensions and energy market prices due to their anticipated impact on the effectiveness of strategies for optimizing individuals' energy use habits. Geographical conditions as well as market electricity prices can be used to signal individual energy use scores over time, and could therefore be instrumental in customizing energy use habits as users observe variations in their energy use scores resulting from hourly electricity price changes at their locations. In other words, not only changes in an individual's behavior, but also changes in geographical conditions and in the community of users, will affect the improvement of an individual's energy use behaviors over time under our model. We believe that the proposed model can be efficiently adopted to take on challenges threatening future sustainability. While describing the basic characteristics of the model, we also open the possibility for future studies of its capabilities to reduce carbon footprints from other societal choices, for example, water use, waste management, or the design of sustainable transportation systems. In Chapter 2, we examine asset return predictability, an important topic in finance with a rich literature.
Much of the current literature considers dividend yield the main predictor of expected returns, and the main discussion centers on confirming or rejecting the predictive power of dividend yield, with mixed evidence. However, dividend payments have been consistently declining, and public firms have increasingly used stock repurchases as an alternative way to return value to shareholders. We aim to contribute to the literature by investigating panel data on total equity payout, which takes into account not only dividend payout but also other forms of payment such as stock repurchases, as the main predictor of expected returns. In the asset return predictability literature, existing studies gather stock repurchase data from financial statements. In this paper, we manually construct our database of returns and payouts of public companies from various sources to create a precise firm-level total equity payout dataset without relying on approximations from annual financial statements. This study adds to the understanding of total equity payout and stock returns by analyzing a finer granularity than annual data and the cross-section of stock returns.
- Title
- Asztalos_iit_0091N_11584
- Title
- Ausloos_iit_0091N_11542
- Title
- DEEP LEARNING IMAGE-DENOISING FOR IMPROVING DIAGNOSTIC ACCURACY IN CARDIAC SPECT
- Creator
- Liu, Junchi
- Date
- 2022
- Description
-
Myocardial perfusion imaging (MPI) using single-photon emission computed tomography (SPECT) is a noninvasive imaging modality widely utilized for the diagnosis of coronary artery disease (CAD) in nuclear medicine. Because of concern about potential radiation risks, the imaging dose administered to patients is limited in SPECT-MPI. Due to the low count statistics in acquired data, SPECT images can suffer from high levels of noise. In this study, we investigate the potential benefit of applying deep learning (DL) techniques for denoising in SPECT-MPI studies. Owing to the lack of ground truth in clinical studies, we adopt a noise-to-noise (N2N) training approach for denoising in full-dose studies. Afterwards, we investigate the benefit of applying N2N DL to reduced-dose studies to improve the detection accuracy of perfusion defects. To address the great variability in noise level among different subjects, we propose a scheme that accounts for inter-subject variability when training a DL denoising network, improving its generalizability. In addition, we propose a dose-blind training approach for denoising at multiple reduced-dose levels. Moreover, we investigate several training schemes to address the issue that defect and non-defect image regions are highly unbalanced in a data set, where the overwhelming majority of non-defect regions tends to have a more pronounced contribution to the conventional loss function. We investigate whether these training schemes can effectively improve the preservation of perfusion defects and yield better defect detection accuracy. In our experiments, we demonstrated the proposed approaches with a set of 895 clinical acquisitions. The results show promising performance in denoising and in improving the detectability of perfusion defects with the proposed approaches.
- Title
- Evolution and adaptations to host plants in the beetle genus Diabrotica
- Creator
- Lata, Dimpal
- Date
- 2022
- Description
-
Corn rootworms (Diabrotica spp.) are among the most destructive pests impacting agriculture in the U.S. and are an emerging model for insect-plant interactions. We have a limited understanding of the genome-scale differences between specialist and generalist corn rootworm species and their interactions with their host plants. Genome sizes of several species in the genus Diabrotica and an outgroup were estimated using flow cytometry. Results indicated that there has been a recent expansion in genome size in the common ancestor of the virgifera group leading to Diabrotica barberi, Diabrotica virgifera virgifera, and Diabrotica virgifera zeae. Comparative genomic studies between the fucata and virgifera groups of Diabrotica revealed that repeat elements, mostly miniature inverted-repeat transposable elements (MITEs) and gypsy-like long terminal repeat (LTR) retroelements, contributed to genome size expansion. The initial transcriptional profile of western corn rootworm neonates fed on different potential host plants demonstrated a strong association between western corn rootworm and maize, which was very distinct from other possible hosts and non-host plants. The results also showed the presence of several larval-development-related transcripts unique to host plants and several muscle-development- and stress-response-related transcripts unique to non-host plants. The effect of the maize defensive metabolite DIMBOA on corn rootworms was studied using a novel plant-free system. The survival of both southern and western corn rootworms was not affected at a low concentration of DIMBOA; however, concentrations above the physiological dose found in plants affected their survival. DIMBOA had no plant-independent effect on the weight gain of these corn rootworms.
- Title
- Understanding and Combating Filter Bubbles in News Recommender Systems
- Creator
- Liu, Ping
- Date
- 2022
- Description
-
Algorithmic personalization of news and social media content aims to improve user experience. However, there is evidence that this filtering can have the unintended side effect of creating homogeneous "filter bubbles," in which users are over-exposed to ideas that conform with their pre-existing perceptions and beliefs. In this thesis, I investigate this phenomenon in political news recommendation algorithms, which have important implications for civil discourse. I first collect and curate a collection of over 900K news articles from over 40 sources. The dataset was annotated along the topic and partisan-leaning dimensions, first in an initial pilot study and later via Amazon Mechanical Turk. This dataset is studied and used throughout this thesis. In the first part of the thesis, I conduct simulation studies to investigate how different algorithmic strategies affect filter bubble formation. Drawing on Pew studies of political typologies, we identify heterogeneous effects based on users' pre-existing preferences. For example, I find that i) users with more extreme preferences are shown less diverse content but have higher click-through rates than users with less extreme preferences, ii) content-based and collaborative-filtering recommenders result in markedly different filter bubbles, and iii) when users have divergent views on different topics, recommenders tend to have a homogenization effect. Secondly, I conduct a content analysis of the news to understand language usage among and across various topics and political stances. I examine words and phrases used by the liberal media and by the conservative media on each topic. I first study what differentiates the liberal media from the conservative media on each topic. I then study common phrases that are used by liberals and conservatives on different topics. For example, I examine which phrases are shared by liberal articles on guns and conservative articles on abortion.
Finally, I compare and visualize these words using different clustering algorithms and supervised classification methods. In the last chapter, I conduct an extensive user study to find possible solutions to combat filter bubbles in political news recommender systems. I designed a self-contained website that runs a content-based news recommender system and indexed 40,000 U.S. political articles. I recruited over 800 U.S. participants from Amazon Mechanical Turk (approved by the IRB). The qualified participants were split into control and treatment groups. The users in the treatment group were provided transparency and interaction mechanisms, granting them more control over the recommendations. Our results show that providing interaction and transparency a) increases click-through rates, b) has the potential to reduce filter bubbles, and c) raises more awareness about filter bubbles.
- Title
- SYNTHESIS AND APPLICATION OF ORGANOMETALLIC PRECURSORS FOR TUNGSTEN AND MOLYBDENUM SULFIDE
- Creator
- Liu, Bo
- Date
- 2021
- Description
-
Transition metal chalcogenides (TMCs) have unique properties. They are promising materials for next-generation electrical devices due to their suitable band gap, outstanding electron mobility, and controllable atomic thickness. In the last few decades, atomic layer deposition (ALD) has been one of the most active research frontiers for the fabrication of TMC films. Significant progress has been made on the variety of materials grown by ALD and the improvement of ALD equipment. However, the fast-evolving microelectronics industry sets higher requirements for ALD applications. In potential electronic fabrication processes, low-temperature preparation and non-corrosive procedures are critical for advanced device architectures. Thus, novel precursor development and investigation of reaction mechanisms are necessary. In addition, as part of comprehensive research on film deposition, the prevailing crystallographic defects in as-prepared films are another issue to understand and eliminate for better film quality. Therefore, this dissertation describes precursor ligand design and its effect on morphology, the development of W/Mo precursors for tungsten/molybdenum disulfide, and the defect passivation of tungsten diselenide films. In Chapter 2, a series of heteroleptic tungsten precursors of tetrathiotungstate (WS4^2-) were prepared through a facile ligand-transfer method. Ligand variation has a significant effect on the crystallinity of the resulting tetrathiotungstate products. Crystalline tetrathiotungstates with preferred orientation were prepared from the reaction of the synthesized precursors with H2S at room temperature. Results indicated that the morphologies and crystallinities of the tetrathiotungstates can be well controlled by their ligand behaviors, which gives us a better understanding of the growth mechanism.
Chapters 3 and 4 focus on the development of W and Mo precursors for W/Mo disulfide and their performance in wet-chemistry reactions and ALD. WS2 can be synthesized at ambient temperature in solution by a non-redox reaction, and WS2 film growth can be achieved at the remarkably low temperature of 125°C by ALD. Based on the performance of the tungsten precursor, a new molybdenum dimer precursor with improved reactivity was synthesized, and MoSx can be prepared at ambient temperature in seconds. X-ray absorption spectroscopy (XAS) was also utilized to investigate the interaction between the organometallic precursor and the SiO2 surface. Chapter 5 focuses on the defect passivation of WSe2 films to improve their electrical performance. Precursors were synthesized, and a wet-chemistry method was designed for oxidation removal and vacancy healing. Raman spectroscopy was used as a rapid characterization method to reveal the treatment results. A promising healing reagent was screened out, and the repaired films were fabricated into field-effect transistors (FETs) for electrical measurements. The final results showed that the electrical performance of the WSe2 films was improved after the convenient chemical treatment.
- Title
- DO GENERAL EDUCATION HIGH SCHOOL STUDENTS IN A BASIC PHYSICAL SCIENCE COURSE IMPROVE UPON ATTITUDES TOWARD SCIENCE LEARNING AND CONTENT MASTERY FOLLOWING VIRTUAL/REMOTE FLIPPED INSTRUCTION OR VIRTUAL/REMOTE NON-FLIPPED INQUIRY-BASED INSTRUCTION?
- Creator
- Martino, Robert S.
- Date
- 2022
- Description
-
As we progress further into the 21st century, high school science is being challenged on how best to deliver instruction to students. Teacher-centered instruction has long been de-emphasized in favor of inquiry-based instruction, although teacher-centered instruction still exists to a noticeable extent. Inquiry-based instruction, while more student-centered in its common practice, still involves the teacher as a guide during classroom direct instruction. Research has been ongoing to identify new and dynamic forms of science concept delivery that serve the needs of diversified science instruction (Keys & Bryan, 2001; Saldanha, 2007). Virtual instruction has become more commonplace, and it was fully implemented during this study. It has become incumbent upon science education researchers to explore and identify the most effective means of virtual instruction: means that are student-centered, engaging, interesting, and that improve both student science content understanding and attitudes toward science. Flipped instruction is a more recently incorporated form of student-centered instruction that has students experiencing classroom routines at home and homework routines in class, which is why this instruction is referred to as "flipped." Hunley (2016) examined teacher and student perception of flipped instruction in a science classroom, while Howell (2013) explored it in a ninth-grade physical science honors classroom. At the onset of this study, relatively few studies were available about this newer form of instruction within high school science, no studies were available that involved high school general education physical science courses, and no studies were available that compared virtual flipped and non-flipped general education physical science instruction.
This study researched the effect of virtually implemented flipped instruction on high school students' understanding of and attitude toward science. Instruction was completely virtual/remote (online), and at home, for all students in this study. In investigating the effect of this type of instruction, this study examined student academic performance and attitudes (and intentions and beliefs) toward science in two units of a high school Integrated Chemistry and Physics (Physical Science) course. Sixty-six students from Southlake High School, a midwestern U.S. high school, took part in the study; sixty-four of those students took the unit assessments. Half of the students (the test group) were instructed via virtual/remote flipped instruction and the other half (the control group) via virtual/remote non-flipped, inquiry-based instruction during the first unit. During the second unit, the groups switched: the test group received virtual/remote non-flipped, inquiry-based instruction, while the control group received virtual/remote flipped instruction. The students in both groups were surveyed three times using the Behaviors, Related Attitudes, and Intentions Toward Science (BRAINS) instrument (Summers, 2016) for their attitudes (and beliefs and intentions) toward science: once prior to the first unit, once after the first unit, and once following the second unit. Student test results and survey responses were then analyzed to identify which instructional style was more effective for student learning and whether student attitudes (and intentions and beliefs) favored one instructional style over the other. Student science attitudes (and beliefs and intentions) and academic performance were evaluated throughout the study.
There was an increase in control group student science attitudes (and beliefs and intentions) from the pre-study survey to the post-unit 1 survey following their receipt of non-flipped virtual/remote instruction in the first unit. The test group showed a smaller increase, from lower pre-study attitudes (compared with the control group), following its receipt of flipped virtual/remote instruction in the first unit. Following the second unit, both the control group and the test group again showed increases in attitudes (and beliefs and intentions) compared with the pre-study survey results, with the control group again showing greater increases than the test group. Student academic performance favored the control group, which outperformed the test group in both the first unit and the second unit, even when the test group received the virtually delivered flipped instruction in the first unit. The findings of the study showed that virtually implemented flipped instruction resulted in no advantage for the test group in terms of greater improvement in attitudes (or beliefs or intentions) toward science and no advantage in terms of learning science content in general education Integrated Chemistry and Physics (Physical Science). These results indicate that this form of teaching may not be effective in improving general education physical science student learning and student attitudes (and beliefs and intentions) toward science. Therefore, the use of virtually implemented flipped instruction in this general education science course will need to be further studied to determine its effect on student learning and student attitudes (and beliefs and intentions) toward science.
- Title
- Sensemaking for Power Asymmetries in Anti-Oppressive Design Practice
- Creator
- Meharry, Jessica J
- Date
- 2022
- Description
-
Within professional design practice in capitalist market contexts, the goals of user-centered and human-centered design methodologies are to make algorithmically based technologies understandable for users, to satisfy customer needs and desires, and thereby to increase corporate profitability. However, there is growing concern that the computational methods, data management, and business models that drive these technologies are leading to global asymmetries of knowledge, information, and power. The asymmetries of power generated by these designed interactions can be considered the kind of wicked problem that design seeks to address. Yet the dominant goals and methods of professional design practice limit designers' ability to design ethically within market contexts. These methodologies fail to adequately consider systemic context and power relations, the potential for bias in algorithmic computation, and specific forms of systemic oppression. These gaps then lead to inadequate design solutions. This study explores these gaps in design methodologies, in ways that could be transferable to a range of professional (and non-professional) practices, by looking at potential new levers within familiar design methods and their effectiveness in facilitating problem reframing toward equitable solutions. This dissertation advances knowledge in design by exploring how professional designers can better use sensemaking processes to make power asymmetries, algorithmic materiality, and systemic oppression salient. It proposes an anti-oppressive design framework rooted in a critically informed design praxis. These orientations rethink and recreate design knowledge by helping professional designers shift the market-focused paradigm for which they are designing.
- Title
- Comparing the effects of an adjunct brief action planning intervention to standard treatment in a heterogeneous sample of chronic pain patients
- Creator
- Mikrut, Cassandra Leona
- Date
- 2022
- Description
-
Objectives: Behavioral treatments for chronic pain have been associated with positive outcomes, but they are often time-consuming. The aim of the present study was to investigate the effectiveness of a brief behavioral treatment for chronic pain by comparing Brief Action Planning used in conjunction with treatment as usual (BAP + TAU) to TAU alone, on changes in pain severity, pain interference, pain self-efficacy, quality of life, and anxiety and depression in a heterogeneous sample of chronic pain patients. Methods: A total of 172 participants were recruited from an urban pain clinic. Eighty-five participants were quasi-randomly assigned to the BAP + TAU group and 87 to the TAU control group. After completing T1 measures, two iterations of the BAP protocol were delivered to the intervention group by a trained clinician over the phone, with two weeks between iterations. The TAU group received check-in calls collecting brief mood and pain scores, to control for clinician contact. All participants completed T2 measures following the last phone call. Validated measures were used at T1 and T2 to examine participant outcomes. Results: Two-way repeated-measures analysis of variance (ANOVA) tests were used to test the primary hypotheses that there would be a Group x Time interaction on pain severity, pain interference, pain self-efficacy, quality of life (QOL), and anxiety and depression, such that participants assigned to the BAP + TAU group would endorse improved scores from T1 to T2, while TAU participants would not. Results showed a significant Group x Time interaction on pain severity and on anxiety and depression. However, there was not a significant Group x Time interaction on pain interference, pain self-efficacy, or QOL.
Discussion: These findings provide preliminary support for the effectiveness of BAP, as an adjunctive treatment to TAU when provided by a trained clinician, in reducing pain severity and anxiety and depression in a heterogeneous chronic pain population. These results advance the current BAP literature by providing preliminary support for using BAP with individuals with a wide variety of chronic pain diagnoses.
- Title
- EMBEDDING RELATIONSHIPS: THE INDIRECT EFFECTS OF WORK RELATIONSHIPS ON TURNOVER INTENT
- Creator
- McDonald, Jordan C.
- Date
- 2022
- Description
-
With the onset of the “Great Resignation” following the onset of the COVID-19 pandemic, employees are quitting jobs at unprecedented levels....
Show moreWith the onset of the “Great Resignation” following the onset of the COVID-19 pandemic, employees are quitting jobs at unprecedented levels. Although the traditional model of turnover (Mobley, 1977; Mobley, Griffeth, Hand, & Meglino, 1979) links job attitudes and turnover intentions as key determinants in understanding the turnover process, there is a growing recognition of the importance of studying contextual variables, namely social relations, in expanding our understanding of employee turnover and retention. Job embeddedness (Mitchell et al., 2001) and social capital theories (Granovetter, 1973; Burt, 1992; Lin, 1982) implicate employees’ social networks as additional factors worth investigating in understanding employee turnover. The aim of the current study was to study an expanded model of turnover by examining whether different types of social relationships at work differentially related to work experiences and attitudes that, in turn, related to turnover intentions. The current research leveraged an ego-centric method to collect information on employees’ social networks at work along with work experience and attitudinal constructs. The results of the study found that expressive relationship networks (i.e., friendship networks) had a positive, significant effect on employees’ job embeddedness, with an indication of a marginal indirect effect with organizational commitment. Surprisingly, employees’ instrumental networks were not significantly related to any work experience or attitudinal factors. There was no support for the hypothesized indirect effects linking social networks, work experiences and attitudes, and turnover intentions. Practical implications and directions for future research are discussed.
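The indirect (mediated) effects hypothesized above can be sketched as a simple two-path mediation estimated by ordinary least squares. The variable names and effect sizes below are hypothetical illustrations, not the study’s data or its actual analysis:

```python
import numpy as np

def indirect_effect(x, m, y):
    """Estimate a simple X -> M -> Y indirect effect as a*b, where a is
    the slope of M on X, and b is the slope of Y on M controlling for X
    (both paths fit by ordinary least squares)."""
    # Path a: regress mediator on predictor (with intercept).
    Xa = np.column_stack([np.ones_like(x), x])
    a = np.linalg.lstsq(Xa, m, rcond=None)[0][1]
    # Path b: regress outcome on mediator, controlling for the predictor.
    Xb = np.column_stack([np.ones_like(x), x, m])
    b = np.linalg.lstsq(Xb, y, rcond=None)[0][2]
    return a * b

rng = np.random.default_rng(0)
x = rng.normal(size=500)             # e.g., friendship-network size (hypothetical)
m = 0.5 * x + rng.normal(size=500)   # e.g., job embeddedness (hypothetical)
y = 0.8 * m + rng.normal(size=500)   # e.g., reversed turnover intent (hypothetical)
print(round(indirect_effect(x, m, y), 2))  # ~ 0.5 * 0.8 = 0.4
```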
- Title
- Intelligent Job Scheduling on High Performance Computing Systems
- Creator
- Fan, Yuping
- Date
- 2021
- Description
-
The job scheduler is a crucial component of high-performance computing (HPC) systems. It sorts and allocates jobs according to site policies and resource availability, and it plays an important role in the efficient use of system resources and in user satisfaction. Existing HPC job schedulers typically leverage simple heuristics to schedule jobs. However, the rapid growth in system infrastructure and the introduction of diverse workloads pose serious challenges to traditional heuristic approaches. First, current approaches concentrate on CPU footprint and ignore the performance of other resources. Second, the scheduling policies are manually designed and consider only isolated job information, such as job size and runtime estimate. Such a manual design process prevents schedulers from making informed decisions by extracting the abundant environment information (i.e., system and queue information). Moreover, they can hardly adapt to workload changes, leading to degraded scheduling performance. These challenges call for a new job scheduling framework that can extract useful information from diverse workloads and the increasingly complicated system environment, and make well-informed scheduling decisions in real time. In this work, we propose an intelligent HPC job scheduling framework to address these emerging challenges. Our research takes advantage of advanced machine learning and optimization methods to extract useful workload- and system-specific information and to train the framework to make efficient scheduling decisions under various system configurations and diverse workloads. The framework contains four major efforts. First, we focus on providing more accurate job runtime estimations. Estimated job runtime is one of the most important factors affecting scheduling decisions. However, user-provided runtime estimates are highly inaccurate, and existing solutions are prone to underestimation, which causes jobs to be killed. We leverage and enhance a machine learning method called the Tobit model to improve the accuracy of job runtime estimates while reducing the underestimation rate. More importantly, using the improved job runtime estimates of the resulting method, TRIP, boosts scheduling performance by up to 45%. Second, we conduct research on multi-resource scheduling. HPC systems have undergone significant changes in recent years. New hardware devices, such as GPUs and burst buffers, have been integrated into production HPC systems, which significantly expands the schedulable resources. Unfortunately, current production schedulers allocate jobs solely based on CPU footprint, which severely hurts system performance. In our work, we propose a framework that takes all schedulable resources into consideration by transforming the problem into a multi-objective optimization (MOO) problem and rapidly solving it via a genetic algorithm. Next, we leverage reinforcement learning (RL) to automatically learn efficient workload- and system-specific scheduling policies. Existing HPC schedulers either use generalized, simple heuristics or optimization methods that ignore workload and system characteristics. To overcome this issue, we design a new scheduling agent, DRAS, to automatically learn efficient scheduling policies. DRAS leverages advances in deep reinforcement learning and incorporates the key features of HPC scheduling in the form of a hierarchical neural network structure. We develop a three-phase training process to help DRAS effectively learn the scheduling environment (i.e., the system and its workloads) and rapidly converge to an optimal policy. Finally, we explore the problem of scheduling mixed workloads, i.e., rigid, malleable, and on-demand workloads, on a single HPC system. Traditionally, rigid jobs are the main tenants of HPC systems. In recent years, malleable applications, i.e., jobs that can change size before and during execution, have been emerging on HPC systems. In addition, dedicated clusters were traditionally the main platforms for running on-demand jobs, i.e., jobs that need to be completed in the shortest time possible. As the sizes of on-demand jobs grow, HPC systems become more cost-efficient platforms for them. However, existing studies do not consider the problem of scheduling all three types of workloads together. In our work, we propose six mechanisms, which combine checkpointing, shrink, and expansion techniques, to schedule the mixed workloads on one HPC system.
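The Tobit idea mentioned above treats a job killed at its requested walltime as a right-censored observation of its true runtime. A minimal sketch of censored-regression maximum likelihood on synthetic data follows; the feature, walltime limit, and coefficients are invented, and this is not the TRIP implementation:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def tobit_nll(params, X, y, upper):
    """Negative log-likelihood of a right-censored (Tobit) linear model:
    y* = X @ beta + Gaussian noise; we observe y = min(y*, upper)."""
    beta, log_sigma = params[:-1], params[-1]
    sigma = np.exp(log_sigma)            # parameterize sigma > 0
    mu = X @ beta
    censored = y >= upper                # jobs killed at their limit
    ll = norm.logpdf(y[~censored], mu[~censored], sigma).sum()
    ll += norm.logsf(upper[censored], mu[censored], sigma).sum()
    return -ll

rng = np.random.default_rng(1)
n = 400
X = np.column_stack([np.ones(n), rng.normal(size=n)])   # intercept + one feature
true_runtime = X @ np.array([3.0, 1.5]) + rng.normal(scale=0.5, size=n)
limit = np.full(n, 4.0)                  # hypothetical requested walltime
y = np.minimum(true_runtime, limit)      # observed (possibly censored) runtimes

res = minimize(tobit_nll, x0=np.zeros(3), args=(X, y, limit))
print(res.x[:2])  # estimated coefficients, close to [3.0, 1.5]
```

Ignoring the censoring (plain least squares on `y`) would bias the estimates downward, which is exactly the underestimation problem the abstract describes.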
- Title
- Developing Novel Optimization Algorithms Applied To Building Energy Performance and Indoor Air Quality
- Creator
- Faramarzi, Afshin
- Date
- 2021
- Description
-
Residential and commercial buildings account for 23% of global energy use. In the United States, space heating, cooling, and lighting account for 38%, 9%, and 7% of building energy consumption, respectively, for a combined 54% of total building energy use. Energy efficiency improvements in buildings require consideration of the optimal design, operation, and control of building components (e.g., mechanical and envelope systems). We can address this task by taking advantage of computational optimization methods throughout the design, operation, and control processes. Non-gradient optimization methods, known as metaheuristics, are some of the most popular and widely used optimization methods in Building Performance Optimization (BPO) problems. Conventional metaheuristics usually have simple mathematical models but a low rate of convergence. On the other hand, high-performance metaheuristic optimizers are efficient and usually converge quickly, but their mathematical models are hard to understand and implement, so researchers are often disinclined to employ them. To this end, we aimed to develop optimization algorithms that borrow simplicity from conventional methods and efficiency from high-performance optimizers, solving problems quickly and efficiently while remaining easy for users to adopt. The overarching objective of this work is therefore to first develop novel optimization algorithms that have simple mathematical models yet remain efficient on optimization benchmark problems, and then to apply these methods to building energy performance and indoor air quality (IAQ) problems. In the first objective of this work, the development phase, two continuous optimization methods and one binary optimizer are developed and described separately in three different tasks. The first method, called the Equilibrium Optimizer (EO), is a simple method inspired by the mass balance equation for a control volume. The second method, called the Marine Predators Algorithm (MPA), is more complicated than EO and is inspired by the widespread foraging strategies of marine predators in ocean ecosystems. The third method is the binary version of the previously developed Equilibrium Optimizer, called the Binary Equilibrium Optimizer (BEO). The second objective of the dissertation, the application phase, focuses on applying the developed methods, along with other methods widely used in research and industry, to relatively new BPO and IAQ problems. The results showed that the developed methods were able either to reach more energy-efficient solutions than the other methods or to converge considerably faster in problems where the different methods obtained similar optimal solutions.
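The mass-balance-inspired update at the heart of the Equilibrium Optimizer can be sketched as follows. This is a simplified, single-equilibrium illustration with a greedy acceptance step added for clarity; it is not the author’s implementation (the full EO maintains a pool of equilibrium candidates):

```python
import numpy as np

def sphere(x):
    """Benchmark objective: sum of squares, minimum 0 at the origin."""
    return float(np.sum(x * x))

def eo_sketch(f, dim=5, pop=30, iters=200, seed=2):
    """Simplified Equilibrium-Optimizer-style search. Each candidate
    "concentration" C moves toward an equilibrium candidate Ceq via
    C <- Ceq + (C - Ceq)*F + (G / (lam * V)) * (1 - F)."""
    rng = np.random.default_rng(seed)
    C = rng.uniform(-5, 5, size=(pop, dim))
    best = min(C, key=f).copy()          # stand-in for the equilibrium pool
    a1, a2, GP, V = 2.0, 1.0, 0.5, 1.0   # typical EO control parameters
    for it in range(iters):
        t = (1 - it / iters) ** (a2 * it / iters)   # exploration decays over time
        for i in range(pop):
            lam = rng.uniform(size=dim) + 1e-12     # turnover rate, kept nonzero
            r = rng.uniform(size=dim)
            F = a1 * np.sign(r - 0.5) * (np.exp(-lam * t) - 1)
            GCP = 0.5 * rng.uniform() if rng.uniform() >= GP else 0.0
            G = GCP * (best - lam * C[i]) * F       # generation-rate term
            cand = best + (C[i] - best) * F + (G / (lam * V)) * (1 - F)
            if f(cand) < f(C[i]):                   # greedy acceptance (added)
                C[i] = cand
        best = min(C, key=f).copy()
    return best

result = eo_sketch(sphere)
print(sphere(result))  # close to 0
```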
- Title
- The Feasibility of Honeycomb Structure to Enhance Daylighting and Energy Performance for High-Rise Buildings
- Creator
- Geng, Camelia Mina
- Date
- 2022
- Description
-
The world population is increasing at a fast rate, and one projection is that there will be more than 12 billion people by the year 2050. It is also expected that at least 70% of the population will reside and work in urban areas (mostly cities), often in some form of high-rise building. At the same time, the climate is rapidly changing, amplifying the effects of man-made global warming. Conceivably, energy conservation, daylighting performance, thermal comfort, and environmentally friendly high-rise buildings are necessary to facilitate sustainable working and living environments. The roles of architects and planners are paramount in this critical era of the history of mankind, as they are responsible for the planning and design of sustainable high-rise buildings. Recently, there has been significant research connecting to a branch of Biophilic design known as Biomorphic architecture. This has developed into a compelling design approach, termed the Biomorphic idea, which focuses on enhancing the physical and psychological connection with nature to acquire more natural light and a connection to the outdoors, targeting energy savings. More and more high-rise buildings are being designed following Biomorphic approaches. These buildings are considered sustainable primarily because they are energy efficient and, in many cases, tend to minimize the use of fossil fuels while promoting renewable and clean energy sources. As such, a honeycomb structure approach can be applied successfully to high-rise building design. The intent of this research is to simulate a Biomorphic honeycomb structure, a hexagonal rotating ring structure of 32 stories, in 18 different hexagonal high-rise building configurations in order to evaluate daylighting and energy performance. This is achieved using the Grasshopper-Climate Studio simulation tool and fuzzy mathematics for multi-criteria decision making. This document provides a comparison of daylighting metrics, including sDA, ASE, sDG, and illuminance results, from the 3 series of 18 models configuring different honeycomb structures for high-rise buildings. The results show that the hexagonal honeycomb structure for high-rise buildings is feasible and targets green building standards such as LEED v4.1. The success of the method depends on developing multiple criteria of Poisson's ratio and Gaussian curvature within the hexagonal structure to create different honeycomb facades and ring rotations for office high-rise buildings, which is also a qualitative aspect of the Biomorphic design parameters.
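The multi-criteria decision step described above can be illustrated with a simple normalized weighted scoring of candidate configurations on the daylight metrics named in the abstract (sDA, ASE, sDG). The configurations, values, and weights below are invented for illustration, and this crisp weighted sum is a stand-in for, not a reproduction of, the fuzzy procedure used in the thesis:

```python
import numpy as np

# Rows: hypothetical building configurations; columns: sDA, ASE, sDG.
# sDA is a benefit criterion (higher is better); ASE and sDG are cost
# criteria (lower is better). All values and weights are illustrative.
scores = np.array([
    [62.0, 9.5, 6.5],   # configuration A
    [75.0, 12.0, 4.0],  # configuration B
    [58.0, 7.0, 3.0],   # configuration C
])
weights = np.array([0.5, 0.3, 0.2])
benefit = np.array([True, False, False])

# Min-max normalize each criterion so that 1.0 always means "best".
lo, hi = scores.min(axis=0), scores.max(axis=0)
norm = (scores - lo) / (hi - lo)
norm[:, ~benefit] = 1.0 - norm[:, ~benefit]   # flip cost criteria

ranking = norm @ weights
print(ranking.argmax())  # prints 1: configuration B is preferred
```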
- Title
- Two Essays on Cryptocurrency Markets
- Creator
- Fan, Lei
- Date
- 2022
- Description
-
Understanding the dependence relationships among cryptocurrencies and equity markets is of interest to both academics and practitioners. This dissertation comprises two essays that add to this understanding. In the first essay, I investigate the interdependencies among the levels of informational efficiency of four cryptocurrencies. I examine the correlations between the market efficiencies of the cryptocurrencies using the rolling window method and find that these correlations are time-varying and influenced by market conditions and external events. I extend the study by employing Granger causality tests to analyze the causal relationships among these levels of market efficiency, and I find that the Granger causalities among the levels of cryptocurrency market efficiency are time-varying and affected by the level of market efficiency. In the second essay, I investigate the pairwise dependencies and causalities between the returns of the cryptocurrencies and six equity market indices. I examine the pairwise dependencies between the returns of cryptocurrencies and those of the equity indices using the DCC-GARCH framework and find that the dynamic conditional correlations between the cryptocurrencies and equity indices are time-varying and generally weak. Furthermore, I study the causal relationship between cryptocurrencies and equity indices by employing the rolling Granger causality test. I find that the Granger causalities between cryptocurrencies and equity indices are time-varying, and that more unidirectional Granger causalities are found from cryptocurrencies to equity indices than in the reverse direction. In addition, I examine the impact of cryptocurrency returns on the correlations between the equity market indices and, likewise, the impact of equity market returns on the correlations between the cryptocurrencies. I find that cryptocurrency price fluctuations have minimal impact on the correlations between equity indices. Moreover, the dynamic conditional correlations between cryptocurrencies are unaffected by equity price innovations except during some extreme events. These findings could have implications for understanding the relationships among cryptocurrencies and equity markets and for investors wishing to incorporate these relationships in their portfolio choices.
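The rolling-window dependence analysis used in both essays can be sketched as a rolling Pearson correlation between two return series. The data below are synthetic, and the shared-factor loading is an assumption made purely for illustration:

```python
import numpy as np

def rolling_corr(x, y, window):
    """Rolling-window Pearson correlation between two return series,
    used to track time-varying dependence; the first window-1 entries
    are NaN because the window is not yet full."""
    out = np.full(len(x), np.nan)
    for t in range(window - 1, len(x)):
        xs = x[t - window + 1 : t + 1]
        ys = y[t - window + 1 : t + 1]
        out[t] = np.corrcoef(xs, ys)[0, 1]
    return out

rng = np.random.default_rng(3)
common = rng.normal(size=300)                    # shared market factor (hypothetical)
crypto = 0.6 * common + rng.normal(size=300)     # hypothetical crypto returns
equity = 0.6 * common + rng.normal(size=300)     # hypothetical index returns
corr = rolling_corr(crypto, equity, window=60)
print(np.nanmean(corr))  # average dependence across windows
```

The same windowing pattern underlies rolling Granger causality tests: the test is simply re-estimated on each window rather than on the full sample.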
- Title
- PLAYER MOTIVATION AND TRAINING EFFECTIVENESS: INSIGHTS FROM A STRUCTURAL MODEL OF GAME-BASED LEARNING
- Creator
- Gandara, Daniel A.
- Date
- 2022
- Description
-
Digital game-based learning (DGBL) delivers training through video games. Practitioners are using DGBL in attempts to increase motivation, promote learning, and enhance transfer in training. Theories and models of DGBL aim to explain how motivation is created to yield these benefits, and studies have compared DGBL to traditional methods, yet the tenets of these theories remain largely unexamined. The present study tested the process-outcome link of Garris et al.’s (2002) input-process-outcome model, examined the effect of positive and negative user judgments on behavior and learning, and expanded the model to include trainee reactions and adaptive transfer. Participants (N = 254) learned about identifying misinformation online by playing Fake It to Make It, a social-impact game that teaches core critical thinking skills. Autoregressive cross-lagged (ARCL) panel analysis was used to analyze and compare models testing the hypothesized relationships among judgments and behavior scores across six game levels in predicting six learning outcomes, including adaptive transfer tasks evaluating online sources. Findings indicated that each judgment was predicted by its own lagged judgment and lagged behavior. Additionally, positive user judgments predicted reactions, post-training self-efficacy, and motivation to transfer, while frustration inhibited declarative knowledge. Results also demonstrated that behavior and declarative knowledge predicted performance on the adaptive transfer tasks. Research recommendations and practical implications are discussed relative to using games to deliver training, with emphasis on motivational properties and targeted outcomes.
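The ARCL logic described above regresses each variable at time t on its own lag and the other variable’s lag. A minimal OLS sketch on synthetic panel data follows; the path coefficients and generating process are invented for illustration and are not the study’s estimates (the actual analysis fits a full structural model, not pooled OLS):

```python
import numpy as np

def cross_lagged(j, b):
    """Estimate first-order autoregressive and cross-lagged paths between
    two panel series (judgments j and behavior b, shape persons x time):
    regress j at time t on its own lag and b's lag, pooling t = 1..T-1."""
    n, T = j.shape
    y = j[:, 1:].ravel()
    X = np.column_stack([
        np.ones((T - 1) * n),
        j[:, :-1].ravel(),   # autoregressive path
        b[:, :-1].ravel(),   # cross-lagged path
    ])
    coef = np.linalg.lstsq(X, y, rcond=None)[0]
    return coef[1], coef[2]  # (autoregressive, cross-lagged)

rng = np.random.default_rng(4)
n, T = 254, 6                          # sample size and level count from the abstract
b = rng.normal(size=(n, T))            # hypothetical behavior scores
j = np.zeros((n, T))                   # hypothetical judgment scores
j[:, 0] = rng.normal(size=n)
for t in range(1, T):
    j[:, t] = 0.5 * j[:, t - 1] + 0.3 * b[:, t - 1] + rng.normal(scale=0.5, size=n)

auto, cross = cross_lagged(j, b)
print(round(auto, 2), round(cross, 2))  # near 0.5 and 0.3
```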
- Title
- COMPUTATIONAL FLUID DYNAMICS SIMULATION OF CARBON CAPTURE UNIT USING AN AMINE-BASED SOLID SORBENT
- Creator
- Esmaeili Rad, Farnaz
- Date
- 2021
- Description
-
Carbon capture and sequestration (CCS) is one of the key technologies for reducing emissions of carbon dioxide, including those in the flue gas exiting fossil fuel-fired power plants. The goal of this project is the development of a computational fluid dynamics (CFD) model to predict the extent of CO2 capture in a circulating fluidized bed carbon capture unit using novel amine-based solid sorbents. In this study, the hydrodynamics of the carbonation section of the carbon capture unit was investigated first. Then, the performance of the amine-based solid sorbents in capturing carbon dioxide from flue gas and the extent of CO2 adsorption in the carbonation section were studied. In the second stage of the study, the regeneration of the sorbents and the desorption of carbon dioxide from the carbonated solid sorbents in the regeneration section of the carbon capture unit were investigated. In the third stage, the hydrodynamics of the entire loop of the integrated carbonation and regeneration sections was simulated. Two-dimensional non-reactive CFD simulations of the entire loop, including the carbonator, regenerator, and two loop-seal fluidized beds, were performed to study the details of solid circulation in the system under stable operating conditions. In the fourth stage, the effect of the carbonated solids’ residence time in the regeneration section was investigated by extending the regenerator fluidized bed height, adding to the volume of the system. Heated surfaces, resembling heating coils in the regenerator cylinder, were also added to the system to investigate the effect of temperature. The heated surfaces of the immersed coils provided sufficient energy for the endothermic regeneration reaction to keep the bed at the desired temperature. Finally, the verified models of the carbonation section, the regeneration section, and the non-reactive simulation of the CFB loop were used to simulate the entire circulating fluidized bed carbon capture unit, with an integrated carbonator and regenerator system using amine-based solid sorbents. The extent of CO2 capture in the carbonation section and the desorption of carbon dioxide in the regeneration section were predicted. Our study showed the potential for continuous carbon capture by amine-based solid sorbents in a circulating fluidized bed CO2 capture unit.
- Title
- Algorithms for Discrete Data in Statistics and Operations Research
- Creator
- Schwartz, William K.
- Date
- 2021
- Description
-
This thesis develops mathematical background for the design of algorithms for discrete-data problems, two in statistics and one in operations research. Chapter 1 gives some background on what chapters 2 to 4 have in common and defines basic terminology that the other chapters use. Chapter 2 offers a general approach to modeling longitudinal network data, including exponential random graph models (ERGMs), that vary according to certain discrete-time Markov chains (the abstract of chapter 2 borrows heavily from that of Schwartz et al., 2021). It connects conditional and Markovian exponential families, permutation-uniform Markov chains, various (temporal) ERGMs, and statistical considerations such as dyadic independence and exchangeability. Markovian exponential families are explored in depth to prove that they, and only they, have exponential family finite sample distributions with the same parameter as that of the transition probabilities. Many new statistical and algebraic properties of permutation-uniform Markov chains are derived. We introduce exponential random ?-multigraph models, motivated by our result on replacing ? observations of a permutation-uniform Markov chain of graphs with a single observation of a corresponding multigraph. Our approach simplifies the analysis of some network and autoregressive models from the literature. Removing the models’ temporal dependence, but not their interpretability, permitted us to offer closed-form expressions for maximum likelihood estimators that were previously unavailable in closed form. Chapter 3 designs novel, exact, conditional tests of statistical goodness-of-fit for mixed membership stochastic block models (MMSBMs) of networks, both directed and undirected. The tests employ a χ²-like statistic from which we define p-values for the general null hypothesis that the observed network’s distribution is in the MMSBM, as well as for the simple null hypothesis that the distribution is in the MMSBM with specified parameters. For both tests the alternative hypothesis is that the distribution is unconstrained, and both assume we have observed the block assignments. As exact tests that avoid asymptotic arguments, they are suitable for both small and large networks. Further, we provide and analyze a Monte Carlo algorithm to compute the p-value for the simple null hypothesis. In addition to our rigorous results, simulations demonstrate the validity of the test and the convergence of the algorithm. As a conditional test, it requires that the algorithm sample the fiber of a sufficient statistic. In contrast to the Markov chain Monte Carlo samplers common in the literature, our algorithm is an exact simulation, so it is faster, more accurate, and easier to implement. Computing the p-value for the general null hypothesis remains an open problem because it depends on an intractable optimization problem. We discuss the two schools of thought evident in the literature on how to deal with such problems, and we recommend a future research program to bridge the gap between those two schools. Chapter 4 investigates an auctioneer’s revenue maximization problem in combinatorial auctions, in which bidders express demand for discrete packages of multiple units of multiple, indivisible goods. The auctioneer’s NP-complete winner determination problem (WDP) is to fit these packages together within the available supply to maximize the sum of the bids. To shorten the path practitioners traverse from legalese auction rules to computer code, we offer a new WDP formalism reflecting how government auctioneers sell billions of dollars of radio-spectrum licenses in combinatorial auctions today. It models common tie-breaking rules by maximizing a sum of bid vectors lexicographically. After introducing a novel pre-solving technique based on package bids’ marginal values, we develop an algorithm for the WDP. In developing the algorithm’s branch-and-bound part, adapted to lexicographic maximization, we discover a partial explanation of why the classical WDP has been solved successfully using the linear programming relaxation: it equals the Lagrangian dual. We adapt the relaxation to lexicographic maximization. The algorithm’s dynamic-programming part retrieves already-computed partial solutions from a novel data structure suited specifically to our WDP formalism. Finally, we show that the data structure can “warm start” a popular algorithm for solving for opportunity-cost prices.
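The WDP with lexicographic tie-breaking described in chapter 4 can be illustrated by brute-force enumeration over a toy auction. The goods, supply, and bid vectors below are invented; real instances are far too large for enumeration and require the branch-and-bound and dynamic-programming machinery the abstract describes:

```python
from itertools import combinations

# Toy combinatorial auction: each package bid requests units of goods A
# and B and carries a bid vector (price, tie_break); winning vectors are
# summed componentwise and compared lexicographically. Illustrative only.
supply = {"A": 3, "B": 2}
bids = [
    ({"A": 2, "B": 1}, (10, 1)),   # (package, bid vector)
    ({"A": 1, "B": 1}, (6, 0)),
    ({"A": 2, "B": 0}, (6, 2)),
    ({"A": 1, "B": 1}, (4, 3)),
]

def feasible(subset):
    """Packages in the subset must fit within the available supply."""
    for good, cap in supply.items():
        if sum(bids[i][0].get(good, 0) for i in subset) > cap:
            return False
    return True

def value(subset):
    """Componentwise sum of bid vectors; Python tuples compare
    lexicographically, which implements the tie-breaking rule."""
    return tuple(sum(bids[i][1][k] for i in subset) for k in range(2))

best = max(
    (s for r in range(len(bids) + 1)
     for s in combinations(range(len(bids)), r)
     if feasible(s)),
    key=value,
)
print(best, value(best))  # prints (0, 1) (16, 1)
```

Here bids 0 and 1 exactly exhaust the supply and dominate every other feasible combination on the first (price) component, so the tie-break component never decides.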