Search results
(2,461 - 2,480 of 2,716)
Pages
- Title
- EMBEDDING RELATIONSHIPS: THE INDIRECT EFFECTS OF WORK RELATIONSHIPS ON TURNOVER INTENT
- Creator
- McDonald, Jordan C.
- Date
- 2022
- Description
- With the “Great Resignation” that followed the onset of the COVID-19 pandemic, employees are quitting jobs at unprecedented levels. Although the traditional model of turnover (Mobley, 1977; Mobley, Griffeth, Hand, & Meglino, 1979) links job attitudes and turnover intentions as key determinants in understanding the turnover process, there is growing recognition of the importance of studying contextual variables, namely social relations, in expanding our understanding of employee turnover and retention. Job embeddedness (Mitchell et al., 2001) and social capital theories (Granovetter, 1973; Burt, 1992; Lin, 1982) implicate employees’ social networks as additional factors worth investigating in understanding employee turnover. The aim of the current study was to examine an expanded model of turnover by testing whether different types of social relationships at work were differentially related to work experiences and attitudes that, in turn, related to turnover intentions. The current research leveraged an ego-centric method to collect information on employees’ social networks at work along with work experience and attitudinal constructs. The results indicated that expressive relationship networks (i.e., friendship networks) had a positive, significant effect on employees’ job embeddedness, with an indication of a marginal indirect effect through organizational commitment. Surprisingly, employees’ instrumental networks were not significantly related to any work experience or attitudinal factors. There was no support for the hypothesized indirect effects linking social networks, work experiences and attitudes, and turnover intentions. Practical implications and directions for future research are discussed.
- Title
- AN IMPROVED VALIDATED METHOD FOR THE DETERMINATION OF SHORT-CHAIN FATTY ACIDS IN HUMAN FECAL SAMPLES BY GC-FID
- Creator
- Freeman, Morganne M
- Date
- 2022
- Description
- Short-chain fatty acids (SCFAs) are metabolites produced by the gut microbiota through the fermentation of non-digestible carbohydrates. Recent studies suggest that gut microbiota composition, diet, and metabolic status play an important role in the production of SCFAs. Current methods for the analysis of SCFAs are complex and inconsistent between research studies. The primary objective of this study was to develop a simplified method for standardized SCFA analysis in human fecal samples by gas chromatography with flame ionization detection (GC-FID). A secondary objective was to apply the method to fecal samples from a previous randomized, crossover clinical trial comparing participants with pre-diabetes mellitus and insulin resistance (IR-group, n=20) to a metabolically healthy reference group (R-group, n=9) after daily consumption of a red raspberry smoothie (RRB, 1 cup fresh-weight equivalent) with or without fructo-oligosaccharide (RRB + FOS, 1 cup RRB + 8 g FOS) over a 4-week intervention period. Extraction parameters, including solvent selection and water content of the sample, were investigated before finalizing the method. Freeze-dried fecal samples (0.5 g) were suspended in 5 mL of Milli-Q water, vortexed, and centrifuged at 3,214 x g for 10 minutes. The supernatant was transferred to a clean tube, acidified with 5.0 M HCl, and centrifuged again at 12,857 x g for 5 minutes. The resulting supernatant was transferred to a GC vial for analysis by GC-FID. Calibration curves for standards at concentrations of 5-2000 ppm were linear, with coefficients of determination (R²) ranging from 0.99994 to 0.99998. The limit of detection (LOD) ranged from 0.02-0.23 µg/mL, and the limit of quantification (LOQ) from 0.08-0.78 µg/mL. The validated method was then applied to fecal samples collected from a previously conducted study. Nine SCFAs were identified and quantified (acetic, propionic, iso-butyric, butyric, iso-valeric, valeric, 4-methyl valeric, hexanoic, and heptanoic acids). Statistical analysis (Student’s t-test, ANCOVA) was performed in PC-SAS 9.4 (SAS Institute). Acetic acid was significantly lower in the IR-group compared to the R-group before starting the intervention (baseline, Week 0, IR v R-group, p=0.014). Intervention analysis comparing RRB to RRB + FOS at 4 weeks (WK4) showed a significant difference in 4-methyl valeric acid (p=0.040) in the R-group. Trends of decreased SCFA content after 4 weeks of RRB and RRB + FOS compared to baseline were observed in both groups, though changes were not significantly different between dietary interventions at 4 weeks (p>0.05). Metabolic status and dietary intervention are discussed in relation to their impact on SCFA content in fecal samples and mechanisms of biological use as a metabolite. Limitations of the study include the sample size and the use of feces alone, rather than additional biological samples, for SCFA analysis; both may be considered in future research.
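For readers unfamiliar with the validation figures quoted above, a minimal sketch of how LOD and LOQ are commonly derived from a calibration curve. All calibration data below are hypothetical, and the 3.3σ/S and 10σ/S factors follow the common ICH convention, which may differ from the exact procedure validated in the thesis:

```python
import numpy as np

# Hypothetical calibration data: standard concentrations (ppm) vs. peak areas.
conc = np.array([5.0, 50.0, 250.0, 500.0, 1000.0, 2000.0])
area = np.array([12.1, 121.5, 607.0, 1216.0, 2431.0, 4859.0])

# Least-squares calibration line: area = slope * conc + intercept.
slope, intercept = np.polyfit(conc, area, 1)

# Residual standard deviation of the regression (sigma, with n-2 dof).
residuals = area - (slope * conc + intercept)
sigma = np.sqrt(np.sum(residuals**2) / (len(conc) - 2))

# Common ICH-style convention: LOD = 3.3*sigma/slope, LOQ = 10*sigma/slope.
lod = 3.3 * sigma / slope
loq = 10.0 * sigma / slope

r2 = np.corrcoef(conc, area)[0, 1] ** 2
print(f"R^2 = {r2:.5f}, LOD = {lod:.2f} ppm, LOQ = {loq:.2f} ppm")
```

The same slope and residual scatter drive both limits, which is why a highly linear calibration (R² near 1) tends to come with low LOD and LOQ.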
- Title
- Intelligent Job Scheduling on High Performance Computing Systems
- Creator
- Fan, Yuping
- Date
- 2021
- Description
- The job scheduler is a crucial component of high-performance computing (HPC) systems. It sorts and allocates jobs according to site policies and resource availability, and it plays an important role in the efficient use of system resources and in user satisfaction. Existing HPC job schedulers typically leverage simple heuristics to schedule jobs. However, the rapid growth in system infrastructure and the introduction of diverse workloads pose serious challenges to these traditional heuristic approaches. First, current approaches concentrate on CPU footprint and ignore the performance of other resources. Second, the scheduling policies are manually designed and consider only isolated job information, such as job size and runtime estimate. Such a manual design process prevents the schedulers from making well-informed decisions by extracting the abundant environment information (i.e., system and queue information). Moreover, they can hardly adapt to workload changes, leading to degraded scheduling performance. These challenges call for a new job scheduling framework that can extract useful information from diverse workloads and the increasingly complicated system environment, and finally make well-informed scheduling decisions in real time. In this work, we propose an intelligent HPC job scheduling framework to address these emerging challenges. Our research takes advantage of advanced machine learning and optimization methods to extract useful workload- and system-specific information and to further educate the framework to make efficient scheduling decisions under various system configurations and diverse workloads. The framework contains four major efforts. First, we focus on providing more accurate job runtime estimations. Estimated job runtime is one of the most important factors affecting scheduling decisions. However, user-provided runtime estimates are highly inaccurate, and existing solutions are prone to underestimation, which causes jobs to be killed. We leverage and enhance a machine learning method called the Tobit model to improve the accuracy of job runtime estimates while at the same time reducing the underestimation rate. More importantly, using the improved job runtime estimates from the resulting predictor, TRIP, boosts scheduling performance by up to 45%. Second, we conduct research on multi-resource scheduling. HPC systems have undergone significant changes in recent years. New hardware devices, such as GPUs and burst buffers, have been integrated into production HPC systems, which significantly expands the schedulable resources. Unfortunately, current production schedulers allocate jobs solely based on CPU footprint, which severely hurts system performance. In our work, we propose a framework that takes all schedulable resources into consideration by transforming the problem into a multi-objective optimization (MOO) problem and rapidly solving it via a genetic algorithm. Next, we leverage reinforcement learning (RL) to automatically learn efficient workload- and system-specific scheduling policies. Existing HPC schedulers either use generalized, simple heuristics or optimization methods that ignore workload and system characteristics. To overcome this issue, we design a new scheduling agent, DRAS, to automatically learn efficient scheduling policies. DRAS leverages advances in deep reinforcement learning and incorporates the key features of HPC scheduling in the form of a hierarchical neural network structure. We develop a three-phase training process to help DRAS effectively learn the scheduling environment (i.e., the system and its workloads) and rapidly converge to an optimal policy. Finally, we explore the problem of scheduling mixed workloads, i.e., rigid, malleable, and on-demand workloads, on a single HPC system. Traditionally, rigid jobs are the main tenants of HPC systems. In recent years, malleable applications, i.e., jobs that can change size before and during execution, have been emerging on HPC systems. In addition, dedicated clusters were traditionally the main platforms for running on-demand jobs, i.e., jobs that need to be completed in the shortest time possible. As the sizes of on-demand jobs grow, HPC systems become more cost-efficient platforms for them. However, existing studies do not consider the problem of scheduling all three types of workloads together. In our work, we propose six mechanisms, which combine checkpointing, shrinking, and expansion techniques, to schedule mixed workloads on one HPC system.
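The "simple heuristics" this abstract contrasts with can be illustrated by a minimal sketch of first-come-first-served scheduling with EASY backfilling, a common production heuristic. This is not the author's framework; the job data are invented, and the backfill rule is simplified to "start a later job only if it finishes before the queue head's earliest possible start":

```python
# Minimal sketch of a classic HPC scheduling heuristic: FCFS with EASY
# backfilling. Jobs are (name, nodes_requested, runtime_estimate); every
# request is assumed to fit on the machine. All job data are invented.

def fcfs_easy_backfill(jobs, total_nodes):
    """Return {job_name: start_time} under FCFS with EASY backfilling."""
    queue = list(jobs)
    running = []                       # (end_time, nodes) of running jobs
    starts = {}
    now = 0.0
    while queue:
        free = total_nodes - sum(n for _, n in running)
        head_name, head_nodes, head_rt = queue[0]
        if head_nodes <= free:         # FCFS: start the queue head
            starts[head_name] = now
            running.append((now + head_rt, head_nodes))
            queue.pop(0)
            continue
        # Shadow time: earliest moment enough nodes free up for the head.
        avail, shadow = free, None
        for end, n in sorted(running):
            avail += n
            if avail >= head_nodes:
                shadow = end
                break
        # EASY rule (simplified): backfill any later job that fits now
        # and finishes before the head's shadow time.
        for job in queue[1:]:
            name, nodes, rt = job
            if nodes <= free and now + rt <= shadow:
                starts[name] = now
                running.append((now + rt, nodes))
                free -= nodes
                queue.remove(job)
        # Advance time to the next job completion.
        next_end = min(end for end, _ in running)
        running = [(e, n) for e, n in running if e > next_end]
        now = next_end
    return starts

# Job C backfills around B because it finishes before B could start anyway.
print(fcfs_easy_backfill([("A", 3, 10.0), ("B", 2, 5.0), ("C", 1, 4.0)], 4))
# -> {'A': 0.0, 'C': 0.0, 'B': 10.0}
```

Note how the heuristic uses only job size and the runtime estimate, which is exactly why inaccurate user estimates degrade its decisions.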
- Title
- Developing Novel Optimization Algorithms Applied To Building Energy Performance and Indoor Air Quality
- Creator
- Faramarzi, Afshin
- Date
- 2021
- Description
- Residential and commercial buildings account for 23% of global energy use. In the United States, space heating, cooling, and lighting account for 38%, 9%, and 7% of building energy consumption, respectively, together making up 54% of a building's total energy use. Energy efficiency improvements in buildings require consideration of the optimal design, operation, and control of building components (e.g., mechanical and envelope systems). We can address this task by taking advantage of computational optimization methods throughout the design, operation, and control processes. Non-gradient optimization methods known as metaheuristics are some of the most popular and widely used optimization methods in Building Performance Optimization (BPO) problems. Conventional metaheuristics usually have simple mathematical models but a low rate of convergence. On the other hand, high-performance metaheuristic optimizers are efficient and usually have a fast rate of convergence, but their mathematical models are hard to understand and implement, so researchers are often not inclined to employ them. To this end, we aimed at developing optimization algorithms that borrow simplicity from conventional methods and efficiency from high-performance optimizers, solving problems quickly and efficiently while remaining easy for practitioners to adopt. The overarching objective of this work is therefore to first develop novel optimization algorithms that have simple mathematical models yet remain efficient on optimization benchmark problems, and then to apply these methods to building energy performance and indoor air quality (IAQ) problems. The first objective of this work, the development phase, covers two continuous optimization methods and one binary optimizer, described separately in three different tasks. The first method, called the Equilibrium Optimizer (EO), is a simple method inspired by the mass balance equation for a control volume. The second method, called the Marine Predators Algorithm (MPA), is more complicated than EO and is inspired by widespread foraging strategies of marine predators in the ocean ecosystem. Finally, the third method is the binary version of the already developed Equilibrium Optimizer, called the Binary Equilibrium Optimizer (BEO). The second objective of the dissertation is the application phase, which focuses on applying the developed methods, along with other methods widely used in research and industry, to relatively new BPO and IAQ problems. The results showed that the developed methods were able either to reach more energy-efficient solutions than the other methods or to show a considerably faster rate of convergence in problems where the different methods obtained similar optimal solutions.
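For context, the published Equilibrium Optimizer update moves each candidate "concentration" toward a randomly chosen member of an equilibrium pool (the four best candidates plus their average). The sketch below follows that structure on a toy sphere benchmark; the constants, the greedy replacement step, and the benchmark are illustrative simplifications, not the thesis implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):                      # illustrative benchmark: f(x) = sum(x_i^2)
    return float(np.sum(x * x))

def equilibrium_optimizer(f, dim=5, pop=30, iters=300, lb=-10.0, ub=10.0):
    """Simplified Equilibrium Optimizer sketch (constants illustrative)."""
    a1, a2, gp = 2.0, 1.0, 0.5
    C = rng.uniform(lb, ub, (pop, dim))            # candidate "concentrations"
    fit = np.array([f(c) for c in C])
    for it in range(iters):
        order = np.argsort(fit)
        pool = [C[i].copy() for i in order[:4]]    # equilibrium pool: 4 best
        pool.append(np.mean(pool, axis=0))         # ...plus their average
        t = (1.0 - it / iters) ** (a2 * it / iters)
        for i in range(pop):
            ceq = pool[rng.integers(len(pool))]
            lam = rng.uniform(size=dim)            # "turnover rate" lambda
            r = rng.uniform(size=dim)
            F = a1 * np.sign(r - 0.5) * (np.exp(-lam * t) - 1.0)
            gcp = 0.5 * rng.uniform() if rng.uniform() >= gp else 0.0
            G = gcp * (ceq - lam * C[i]) * F       # generation-rate term
            cand = np.clip(ceq + (C[i] - ceq) * F + G / lam, lb, ub)
            fc = f(cand)
            if fc < fit[i]:                        # greedy "memory" step
                C[i], fit[i] = cand.copy(), fc
    best = int(np.argmin(fit))
    return C[best], float(fit[best])

x_best, f_best = equilibrium_optimizer(sphere)
print(f"best sphere value after optimization: {f_best:.6f}")
```

The mass-balance analogy is visible in the update: candidates relax toward an equilibrium state (the pool) at a rate controlled by lambda, with the generation term providing extra exploration.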
- Title
- The Feasibility of Honeycomb Structure to Enhance Daylighting and Energy Performance for High-Rise Buildings
- Creator
- Geng, Camelia Mina
- Date
- 2022
- Description
- The world population is increasing at a fast rate, and the projection is that there will be more than 12 billion people by the year 2050. It is also expected that at least 70% of the population will reside and work in urban areas (mostly cities), in some sort of high-rise building. At the same time, the climate is rapidly changing, amplifying the effects of man-made global warming. Conceivably, energy conservation, daylighting performance, thermal comfort, and environmentally friendly high-rise buildings are necessary to facilitate sustainable working and living environments. The roles of architects and planners are paramount at this critical era in the history of mankind, for they are responsible for the planning and design of sustainable high-rise buildings. Recently, there has been significant research connecting a branch of Biophilic design known as Biomorphic architecture. This approach focuses on enhancing the physical and psychological connection with nature, acquiring more natural light and a connection to the outside while targeting energy savings. More and more high-rise buildings are being designed following Biomorphic approaches. Such buildings are considered sustainable primarily because they are energy efficient and, in many cases, tend to minimize the use of fossil fuels while promoting the use of renewable and clean energy sources. As such, a honeycomb structure approach applies well to high-rise building design. The intent of this research is to simulate a Biomorphic honeycomb structure, a hexagonal rotation-ring structure of 32 stories, in 18 different hexagonal high-rise building configurations, to evaluate daylighting and energy performance. This is achieved using the Grasshopper-Climate Studio simulation tool and fuzzy mathematics for multi-criteria decision making. This document provides a comparison of daylighting metrics, including sDA, ASE, sDG, and illuminance results, across the 3 series of 18 models configuring different honeycomb structures for high-rise buildings. The results show that the hexagonal honeycomb structure for high-rise buildings is feasible and can target green building standards such as LEED v4.1. The success of the method depends on developing multiple criteria of Poisson ratio and Gaussian curvature within the hexagonal structure to create different honeycomb facades and ring rotations for office high-rise buildings, reflecting the qualitative nature of the Biomorphic design parameters.
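As a rough illustration of one metric in the comparison above, spatial Daylight Autonomy (sDA 300/50%) is conventionally the share of analysis points receiving at least 300 lux for at least 50% of occupied hours. A sketch with invented sensor data, not Climate Studio output:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical annual daylight simulation output: illuminance (lux) for
# 100 sensor points over 3650 occupied hours, with brightness varying
# across the floor plate (points near the facade are brighter).
scales = np.linspace(50.0, 400.0, 100)[:, None]
illuminance = rng.gamma(shape=2.0, scale=1.0, size=(100, 3650)) * scales

def sda(illum, lux=300.0, time_fraction=0.5):
    """spatial Daylight Autonomy: the share of sensor points that meet
    the lux threshold for at least `time_fraction` of occupied hours."""
    per_point_da = (illum >= lux).mean(axis=1)
    return float((per_point_da >= time_fraction).mean())

print(f"sDA(300 lx, 50%) = {sda(illuminance):.2f}")
```

ASE works analogously but counts points exceeding a direct-sunlight threshold (typically 1000 lux for more than 250 hours), penalizing over-glazed facades rather than rewarding them.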
- Title
- Two Essays on Cryptocurrency Markets
- Creator
- Fan, Lei
- Date
- 2022
- Description
- Understanding the dependence relationships among cryptocurrencies and equity markets is of interest to both academics and practitioners. This dissertation comprises two essays that add to this understanding. In the first essay, I investigate the interdependencies among the levels of informational efficiency of four cryptocurrencies. I examine the correlations between the market efficiencies of the cryptocurrencies using the rolling window method. I find that the correlations between those levels of market efficiency are time-varying and influenced by market conditions and external events. I extend the study by employing Granger causality tests to analyze the causal relationships among these levels of market efficiency. I find that the Granger causalities among the levels of cryptocurrency market efficiency are time-varying and impacted by the level of market efficiency. In the second essay, I investigate the pairwise dependencies and causalities between the returns of the cryptocurrencies and six equity market indices. I examine the pairwise dependencies between the returns of cryptocurrencies and those of the equity indices using the DCC-GARCH framework. I find that the dynamic conditional correlations between the cryptocurrencies and equity indices are time-varying and generally weak. Furthermore, I study the causal relationship between cryptocurrencies and equity indices by employing the rolling Granger causality test. I find that the Granger causalities between cryptocurrencies and equity indices are time-varying, and more unidirectional Granger causalities run from cryptocurrencies to equity indices. In addition, I examine the impact of cryptocurrency returns on the correlations between the equity market indices and, likewise, the impact of equity market returns on the correlations between the cryptocurrencies. I find that cryptocurrency price fluctuations have minimal impact on the correlations between equity indices. Moreover, the dynamic conditional correlations between cryptocurrencies are unaffected by equity price innovations except during some extreme events. These findings could have implications for understanding the relationships among cryptocurrencies and equity markets and for investors wishing to incorporate these relationships in their portfolio choices.
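The rolling Granger causality test used in both essays reduces, within each window, to comparing restricted and unrestricted autoregressions with an F-test. A single-window sketch on synthetic return series (not the dissertation's data, lag order, or estimator settings):

```python
import numpy as np

rng = np.random.default_rng(7)

def granger_f(y, x, lags=2):
    """F-statistic for 'x Granger-causes y': compare an AR model of y
    (restricted) with one that adds lagged x terms (unrestricted)."""
    n = len(y)
    Y = y[lags:]
    X_r = np.column_stack([np.ones(n - lags)] +
                          [y[lags - k:n - k] for k in range(1, lags + 1)])
    X_u = np.column_stack([X_r] +
                          [x[lags - k:n - k] for k in range(1, lags + 1)])
    def rss(X):
        beta = np.linalg.lstsq(X, Y, rcond=None)[0]
        return float(np.sum((Y - X @ beta) ** 2))
    rss_r, rss_u = rss(X_r), rss(X_u)
    df_den = (n - lags) - X_u.shape[1]
    return ((rss_r - rss_u) / lags) / (rss_u / df_den)

# Synthetic daily returns in which x leads y by one period.
T = 500
x = rng.normal(size=T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.2 * y[t - 1] + 0.5 * x[t - 1] + 0.1 * rng.normal()

f_xy = granger_f(y, x)   # large: lagged x helps predict y
f_yx = granger_f(x, y)   # small: lagged y adds nothing for x
print(f"F(x->y) = {f_xy:.1f}, F(y->x) = {f_yx:.2f}")
```

Re-running this test over a sliding window is what produces the time-varying, often unidirectional causality patterns the essay reports.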
- Title
- PLAYER MOTIVATION AND TRAINING EFFECTIVENESS: INSIGHTS FROM A STRUCTURAL MODEL OF GAME-BASED LEARNING
- Creator
- Gandara, Daniel A.
- Date
- 2022
- Description
- Digital game-based learning (DGBL) delivers training through video games. Practitioners are using DGBL in attempts to increase motivation, promote learning, and increase transfer in training. Theory and models of DGBL aim to explain how motivation is created to yield these benefits, and studies have compared DGBL to traditional methods, yet the tenets of these theories remain largely unexamined. The present study tested the process-outcome link of Garris et al.’s (2002) input-process-outcome model, examined the effect of positive and negative user judgments on behavior and learning, and expanded the model to include trainee reactions and adaptive transfer. Participants (N = 254) learned about identifying misinformation online by playing Fake It to Make It, a social-impact game that teaches core critical thinking skills. Autoregressive cross-lagged (ARCL) panel analysis was used to analyze and compare models to test the hypothesized relationships among judgments and behavior scores across six game levels in predicting six learning outcomes, including adaptive transfer tasks evaluating online sources. Findings indicated that each judgment was predicted by its own lagged judgment and lagged behavior. Additionally, positive user judgments predicted reactions, post-training self-efficacy, and motivation to transfer, while frustration inhibited declarative knowledge. Results also demonstrated that behavior and declarative knowledge predicted performance on the adaptive transfer tasks. Research recommendations and practice implications are discussed relative to using games to deliver training with emphasis on motivational properties and targeted outcomes.
- Title
- Child Temperament, Attachment, and Loneliness: The Mediating Effects of Social Competence
- Creator
- Evans, Lindsey M
- Date
- 2021
- Description
- Chronic loneliness is a risk factor associated with adverse psychological, physical, and academic outcomes. Converging evidence suggests that young children experience and can reliably report on their own loneliness. Due to the significant negative sequelae associated with childhood loneliness, it is critically important to examine risk factors for child loneliness. The aims of this study were two-fold: (a) to examine whether temperament (i.e., negative affect, effortful control, and inhibitory control) and attachment security assessed at 4 years of age predict loneliness at age 6; and (b) to determine whether social competence at age 5 mediates the relation between temperament and attachment security at age 4 and loneliness at age 6. Participants included a diverse sample of 796 4-year-old children, about half of whom were male. At age 4, temperament was assessed with the Rothbart Child Behavior Questionnaire and three inhibitory control tasks, and attachment security was assessed with the Attachment Q-Sort. At age 5, the Social Skills Rating Scale was used to assess social competence, and, at age 6, loneliness was assessed with the Loneliness and Social Dissatisfaction Questionnaire. Results of hierarchical regression analyses indicated that lower levels of effortful control and inhibitory control at age 4 significantly predicted higher levels of loneliness at age 6. Also, lower levels of negative affect and higher levels of effortful control and attachment security at age 4 significantly predicted higher levels of social competence at age 5. However, social competence at age 5 did not predict loneliness at age 6, and there was no evidence that social competence at age 5 mediated the relation between age 4 temperament or attachment security and age 6 loneliness. These findings reveal that early self-regulation is associated with later child-reported loneliness and that intervention for children who struggle with cognitive regulation may be effective in decreasing risk for later loneliness.
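The mediation logic tested above (an a x b indirect effect, typically with a bootstrap interval) can be sketched on synthetic data; the variable names and effect sizes are invented and bear no relation to the study's estimates:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic mediation data, X -> M -> Y (e.g., attachment -> social
# competence -> loneliness). Effect sizes are invented for illustration.
n = 500
x = rng.normal(size=n)                        # predictor (age 4)
m = 0.4 * x + rng.normal(size=n)              # mediator (age 5)
y = -0.5 * m + 0.1 * x + rng.normal(size=n)   # outcome (age 6)

def indirect_effect(x, m, y):
    """a*b estimate: a from M ~ X, b (partial) from Y ~ M + X."""
    a = np.polyfit(x, m, 1)[0]
    X2 = np.column_stack([np.ones_like(x), m, x])
    b = np.linalg.lstsq(X2, y, rcond=None)[0][1]
    return a * b

est = indirect_effect(x, m, y)

# Percentile bootstrap confidence interval for the indirect effect.
boot = []
for _ in range(1000):
    idx = rng.integers(0, n, n)
    boot.append(indirect_effect(x[idx], m[idx], y[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {est:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

In the study, the analogous interval would have included zero, since the mediator (social competence) did not predict the outcome (loneliness).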
- Title
- COMPUTATIONAL FLUID DYNAMICS SIMULATION OF CARBON CAPTURE UNIT USING AN AMINE-BASED SOLID SORBENT
- Creator
- Esmaeili Rad, Farnaz
- Date
- 2021
- Description
- Carbon capture and sequestration (CCS) is one of the key technologies for reducing emissions of carbon dioxide, including those from the exiting flue gas of fossil fuel-fired power plants. The goal of this project is the development of a computational fluid dynamics (CFD) model to predict the extent of CO2 capture in a circulating fluidized bed carbon capture unit using novel amine-based solid sorbents. In this study, the hydrodynamics of the carbonation section of the carbon capture unit was investigated first. Then, the performance of the amine-based solid sorbents in capturing carbon dioxide from flue gas and the extent of CO2 adsorption in the carbonation section were studied. In the second stage of the study, the regeneration of the sorbents and the desorption of carbon dioxide from carbonated solid sorbents in the regeneration section of the carbon capture unit were investigated. In the third stage, the hydrodynamics of the entire loop of the integrated carbonation and regeneration sections was simulated. Two-dimensional non-reactive CFD simulations of the entire loop, including the carbonator, regenerator, and two loop-seal fluidized beds, were performed to study the details of solid circulation in the system under stable operational conditions. In the fourth stage, the effect of the carbonated solids’ residence time in the regeneration section was investigated by extending the regenerator fluidized bed height and adding to the volume of the system. Heated surfaces, which resembled heating coils in the regenerator cylinder, were also added to the system to investigate the effect of temperature. The heated surface of the immersed coils in the bed provided sufficient energy for the endothermic regeneration reaction to keep the bed at the desired temperature. Finally, the verified models of the carbonation section, the regeneration section, and the non-reactive simulation of the CFB loop were used to simulate the entire circulating fluidized bed carbon capture unit, with an integrated carbonator and regenerator system using amine-based solid sorbents. The extent of CO2 capture in the carbonation section and the desorption of carbon dioxide in the regeneration section were predicted. Our study showed the potential of continuous carbon capture by amine-based solid sorbents in a circulating fluidized bed CO2 capture unit.
- Title
- Improvement and Validation of Multiyear Auroral Analysis to Categorize Scintillation Event Layer
- Creator
- English, Breanna R.
- Date
- 2022
- Description
- Ionospheric irregularities scintillate electromagnetic waves, such as Global Positioning System (GPS) signals, as they pass through the ionosphere, especially in auroral zones. A previous method was developed to determine in which layer of the ionosphere these scintillation events occurred by analyzing optical all-sky images (ASI). The results of determining the ionospheric scattering layer using the ratio of 630 nm (red) intensity to 428 nm (blue) intensity were compared to a radar-based method of determining the scintillation layer, and the two methods were found to disagree. In this work, the ASI method is critically analyzed to identify possible errors or sensitivities in the original method that might resolve the discrepancy. This is done by improving and validating the nighttime auroral cloud detection method through comparison with National Oceanic and Atmospheric Administration (NOAA) satellite cloud data. A sensitivity analysis is then performed on the ASI method to determine to which parameters of the method the results are sensitive. The keogram cloud detection method is improved by automating the selection of the keogram time points that are used to calculate a flat-field gain correction, and by calculating the flat-field gain for each year rather than calculating it once and using it for all years of the study. Keogram cloud detection using the coefficient of variation is verified by comparing the keogram results to true sky conditions based on NOAA cloud mask data, and by using detection theory to determine the optimal coefficient-of-variation threshold. We find that the ideal keogram threshold was 0.37, producing a disagreement rate of 22.4%. The ASI image analysis criteria tested are: the ASI azimuth and elevation mapping files, the magnetic zenith limit, the number of ASI pixels analyzed, the duration of the scintillation event analyzed, and the red-to-blue ratio threshold. It is found that only changing the red-to-blue ratio threshold has a significant effect on the ASI method, with the red-to-blue ratio that minimizes the number of misattributed layers found to be 1.43.
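The sensitivity analysis over the red-to-blue ratio threshold amounts to sweeping candidate thresholds and counting misattributed layers. A toy sketch with invented intensity ratios and known labels (the real analysis compares against radar-determined layers rather than ground truth):

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical all-sky intensities: F-layer events tend to have a high
# 630 nm (red) / 428 nm (blue) ratio; E-layer events a low one.
n = 200
true_layer = rng.integers(0, 2, n)              # 0 = E-layer, 1 = F-layer
ratio = np.where(true_layer == 1,
                 rng.normal(2.0, 0.4, n),       # F-layer ratios
                 rng.normal(1.0, 0.3, n))       # E-layer ratios

def misattributed(threshold):
    """Number of events assigned to the wrong layer at this threshold."""
    predicted = (ratio > threshold).astype(int)  # high ratio -> F-layer
    return int(np.sum(predicted != true_layer))

# Sweep candidate thresholds and keep the one minimizing misattribution,
# mirroring the sensitivity analysis over the red-to-blue ratio threshold.
thresholds = np.arange(0.5, 2.5, 0.01)
errors = np.array([misattributed(t) for t in thresholds])
best = thresholds[np.argmin(errors)]
print(f"best threshold ~ {best:.2f}, misattributed = {errors.min()}")
```

The same sweep-and-count structure underlies the detection-theory choice of the keogram coefficient-of-variation threshold, with cloud mask disagreement as the error count.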
- Title
- Algorithms for Discrete Data in Statistics and Operations Research
- Creator
- Schwartz, William K.
- Date
- 2021
- Description
-
This thesis develops mathematical background for the design of algorithms for discrete-data problems, two in statistics and one in operations...
Show moreThis thesis develops mathematical background for the design of algorithms for discrete-data problems, two in statistics and one in operations research. Chapter 1 gives some background on what chapters 2 to 4 have in common. It also defines some basic terminology that the other chapters use.Chapter 2 offers a general approach to modeling longitudinal network data, including exponential random graph models (ERGMs), that vary according to certain discrete-time Markov chains (The abstract of chapter 2 borrows heavily from the abstract of Schwartz et al., 2021). It connects conditional and Markovian exponential families, permutation- uniform Markov chains, various (temporal) ERGMs, and statistical considerations such as dyadic independence and exchangeability. Markovian exponential families are explored in depth to prove that they and only they have exponential family finite sample distributions with the same parameter as that of the transition probabilities. Many new statistical and algebraic properties of permutation-uniform Markov chains are derived. We introduce exponential random ?-multigraph models, motivated by our result on replacing ? observations of a permutation-uniform Markov chain of graphs with a single observation of a corresponding multigraph. Our approach simplifies analysis of some network and autoregressive models from the literature. Removing models’ temporal dependence but not interpretability permitted us to offer closed-form expressions for maximum likelihood estimators that previously did not have closed-form expression available. Chapter 3 designs novel, exact, conditional tests of statistical goodness-of-fit for mixed membership stochastic block models (MMSBMs) of networks, both directed and undirected. 
The tests employ a ?²-like statistic from which we define p-values for the general null hypothesis that the observed network’s distribution is in the MMSBM as well as for the simple null hypothesis that the distribution is in the MMSBM with specified parameters. For both tests the alternative hypothesis is that the distribution is unconstrained, and they both assume we have observed the block assignments. As exact tests that avoid asymptotic arguments, they are suitable for both small and large networks. Further we provide and analyze a Monte Carlo algorithm to compute the p-value for the simple null hypothesis. In addition to our rigorous results, simulations demonstrate the validity of the test and the convergence of the algorithm. As a conditional test, it requires the algorithm sample the fiber of a sufficient statistic. In contrast to the Markov chain Monte Carlo samplers common in the literature, our algorithm is an exact simulation, so it is faster, more accurate, and easier to implement. Computing the p-value for the general null hypothesis remains an open problem because it depends on an intractable optimization problem. We discuss the two schools of thought evident in the literature on how to deal with such problems, and we recommend a future research program to bridge the gap those two schools. Chapter 4 investigates an auctioneer’s revenue maximization problem in combinatorial auctions. In combinatorial auctions bidders express demand for discrete packages of multiple units of multiple, indivisible goods. The auctioneer’s NP-complete winner determination problem (WDP) is to fit these packages together within the available supply to maximize the bids’ sum. To shorten the path practitioners traverse from from legalese auction rules to computer code, we offer a new wdp formalism to reflect how government auctioneers sell billions of dollars of radio-spectrum licenses in combinatorial auctions today. 
It models common tie-breaking rules by maximizing a sum of bid vectors lexicographically. After a novel pre-solving technique based on package bids' marginal values, we develop an algorithm for the WDP. In developing the algorithm's branch-and-bound part, adapted to lexicographic maximization, we discover a partial explanation of why the classical WDP has been successful in using the linear programming relaxation: it equals the Lagrangian dual. We adapt the relaxation to lexicographic maximization. The algorithm's dynamic-programming part retrieves already-computed partial solutions from a novel data structure suited specifically to our WDP formalism. Finally, we show that the data structure can “warm start” a popular algorithm for solving for opportunity-cost prices.
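The idea of lexicographic maximization with tie-breaking can be illustrated with a brute-force sketch. The toy bids, supply figures, and function names below are hypothetical, and enumeration stands in for the thesis's actual branch-and-bound and dynamic-programming algorithm:

```python
from itertools import combinations

# Toy instance (hypothetical data): two goods with limited supply, and
# package bids demanding multiple units of each good.
SUPPLY = {"A": 3, "B": 2}
BIDS = [  # (bidder, units demanded per good, price)
    ("b1", {"A": 2, "B": 1}, 10),
    ("b2", {"A": 1, "B": 1}, 6),
    ("b3", {"A": 3, "B": 2}, 16),
]

def feasible(subset):
    """The packages in `subset` must fit together within the available supply."""
    return all(
        sum(bid[1].get(good, 0) for bid in subset) <= cap
        for good, cap in SUPPLY.items()
    )

def lex_key(subset):
    """Rank allocations by total revenue first, then break ties by the
    accepted bid vector sorted in descending order, compared lexicographically."""
    prices = sorted((bid[2] for bid in subset), reverse=True)
    return (sum(prices), prices)

def solve_wdp():
    """Enumerate all feasible package combinations; pick the lexicographic maximum."""
    candidates = (
        subset
        for r in range(len(BIDS) + 1)
        for subset in combinations(BIDS, r)
        if feasible(subset)
    )
    return max(candidates, key=lex_key)

winners = solve_wdp()
# Both {b1, b2} and {b3} reach revenue 16; the tie-break prefers the
# allocation whose sorted bid vector [16] lexicographically beats [10, 6].
print([b[0] for b in winners])  # ['b3']
```

Here the tie between two revenue-16 allocations is resolved by comparing the sorted bid vectors themselves, which is one way a tie-breaking rule can be folded into a single lexicographic objective.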
- Title
- TOPICDP – ENSURING DIFFERENTIAL PRIVACY FOR TOPIC MINING
- Creator
- Sharma, Jayashree
- Date
- 2021
- Description
-
Topic mining enables applications to recognize patterns and draw insights from text data, which can be used for applications such as sentiment analysis and the building of recommender systems and classifiers. The text data can be a set of documents, emails, or product feedback and reviews. Each document is analysed using probabilistic models and statistical analysis to discover patterns that reflect underlying topics. TopicDP is a differentially private topic mining technique, which injects well-calibrated Gaussian noise into the matrix output of the topic mining model generated by the LDA algorithm. This method ensures differential privacy and good utility of the topic mining model. We derive smooth sensitivity for the Gaussian mechanism via sensitivity sampling, which addresses the major challenge of high sensitivity in topic mining under differential privacy. Furthermore, we theoretically prove the differential privacy guarantee and utility error bounds of TopicDP. Finally, we conduct extensive experiments on two real-world text datasets (Enron email and Amazon Product Reviews), and the experimental results demonstrate that TopicDP generates better privacy-preserving performance for topic mining than other state-of-the-art differential privacy mechanisms.
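The Gaussian-noise injection step can be sketched generically. The sigma calibration below uses the classical analytic Gaussian mechanism bound, not TopicDP's smooth-sensitivity-via-sampling derivation, and the matrix values and parameter choices are illustrative only:

```python
import math
import random

def gaussian_mechanism(matrix, sensitivity, epsilon, delta, rng):
    """Add Gaussian noise to each entry of a topic-word matrix so that the
    release satisfies (epsilon, delta)-differential privacy, using the
    classical calibration sigma = sqrt(2 ln(1.25/delta)) * sensitivity / epsilon.
    In TopicDP the sensitivity would come from smooth-sensitivity sampling,
    which this sketch does not reproduce."""
    sigma = math.sqrt(2 * math.log(1.25 / delta)) * sensitivity / epsilon
    return [[value + rng.gauss(0.0, sigma) for value in row] for row in matrix]

# Hypothetical 2-topic x 3-word probability matrix, e.g. from an LDA fit.
topic_word = [
    [0.5, 0.3, 0.2],
    [0.1, 0.2, 0.7],
]
noisy = gaussian_mechanism(topic_word, sensitivity=0.01, epsilon=1.0,
                           delta=1e-5, rng=random.Random(0))
```

The key design point this illustrates is that the noise scale grows with the sensitivity of the released matrix, which is why bounding that sensitivity tightly (as TopicDP does via sampling) matters for utility.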
- Title
- PARENTAL RELATIONSHIP FACTORS, ACADEMIC EXPECTATIONS, AND MENTAL HEALTH OUTCOMES IN INDIVIDUALS WITH ADHD
- Creator
- Small, Eva E.
- Date
- 2022
- Description
-
Individuals with Attention Deficit Hyperactivity Disorder (ADHD) are at higher risk for developing comorbid psychological conditions, including depression and anxiety, by the time they reach adulthood. While there has been some research on potentially beneficial aspects of parent-child relationships that can help to improve the mental health of pediatric populations with ADHD, less work has been done to assess the long-term influence of the parent-child relationship in adults with ADHD. The purpose of this study was to add to previous research by utilizing the National Longitudinal Study of Adolescent to Adult Health (Add Health) to investigate how parenting relationship and family factors (i.e., parental warmth, behavioral autonomy, family cohesion, and parental academic expectations) predict symptoms of stress and depression in adults with ADHD. Using data from Waves I, III, and IV of the Add Health study, analyses examined whether positive parenting relationship factors were related to levels of depression symptoms and stress in a sample of participants with self-reported ADHD (N = 316). Results indicated that higher levels of family cohesion experienced in adolescence were associated with lower depression symptoms reported in adulthood, suggesting that family cohesion is beneficial for individuals with ADHD. Future research should continue to examine the role that parent-child relationship factors can have on long-term mental health outcomes in individuals with ADHD.
- Title
- ORGANOFUNCTIONALIZED OXOMETALATES: SYNTHESIS, STRUCTURE, AND PROPERTIES OF A NEW CLASS OF MIXED-METAL TETRAMETALATE CLUSTERS
- Creator
- Shuaib, Damola Taye
- Date
- 2022
- Description
-
Oxometalates (OMs) are metal-oxide clusters whose addenda metal atoms, mainly V, Mo, and W, are bridged by oxide anions. Prototypical examples like polyoxometalates (POMs) are completely inorganic. While clusters with nuclearities ranging from 6 to 18 are common for purely inorganic examples, those with nuclearity below 6 are rare. Therefore, functionalization by covalent interaction with an organic moiety via self-assembly has been utilized as a viable route for making compact clusters with nuclearity of 4 and below. These compounds constitute the organo-functionalized examples of the purely inorganic POM structure ([XMaOb]n-). Reports of organo-functionalized tetrametalates (TMs), ([MxOyLz])n- (where M = metal, x = 4, and L represents an organic ligand), are sparse. Mixed-metal species are especially interesting as potential redox-active materials because they contain energetically distinct potential redox centers. OMs have the ability to accept electrons in a chemically reversible manner through the terminal oxo ligand (M=Ot), leading to dπ–pπ electron transfer. Considering the rich structural and electronic properties of these complexes, four neutral mixed-metal (M-V) tetrametalate clusters, [(CoIICl)2(VIVO)2{((HOCH2CH2)(H)N(CH2CH2O))(HN(CH2CH2O)2}2] (1), [(ZnIICl)2(VIVO)2{((HOCH2CH2)(H)N(CH2CH2O))(HN(CH2CH2O)2}2] (2), [CoII2(VIVOF)2{((HOCH2CH2)(H)N(CH2CH2O))(HN(CH2CH2O)2)}2] (3), and [ZnII2(VIVOF)2{((HOCH2CH2)(H)N(CH2CH2O))(HN(CH2CH2O)2)}2] (4), were synthesized, containing unprecedented oxometallocyclic {M2V2X2N4O8} (M = Co, Zn; X = F, Cl) frameworks decorated with diethanolamine ligands in bidentate and tridentate binding modes. The type of halo ligand has a direct influence on the geometry of the metal M, and UV-Vis reflectance spectra revealed changes in electronic structure consistent with the expected charge-transfer processes.
Computational and magnetic studies confirmed the ground-state multiplicity of 1 as an open-shell singlet, with a predicted isotropic exchange coupling of -6.6 cm-1; the picture is less clear for 2. The vanadium centers are best described as V(IV), and the cobalt centers as high-spin Co(II). Less orbital destabilization was observed for the Cl- ligand on Co than for the O2- ligand on the V centers, owing to weaker interaction. In 2, there are four weakly coupled spin centers, whose isotropic exchange couplings are defined as J1, J2', and J2''. These couplings are approximated as J1 = 1.5/+11.7 cm-1, J2' = -22.1/-14.8 cm-1, and J2'' = +4.2/+4.8 cm-1. J2'' is predicted to be weakly ferromagnetic in nature, whereas the fit suggested a weak antiferromagnetic interaction for each of the V(IV)-Co(II) couplings. The low-temperature magnetic susceptibility suggests Type III spin frustration in the system. However, competing magnetic interactions are known to operate in tetranuclear systems and are observed to be even more prominent in mixed-metal tetranuclear systems, considering the consequences of edge-sharing for magnetic behavior. A new route to metal complex synthesis via in situ ligand transformation from diethanolamine to bicine, by disproportionation and oxidation reactions, yielded three isostructural mononuclear clusters: Bis[N,N-bis(2-hydroxyethyl)glycinato]-cobalt(II) 5, Bis[N,N-bis(2-hydroxyethyl)glycinato]-nickel(II) 6, and Bis[N,N-bis(2-hydroxyethyl)glycinato]-copper(II) 7. The observed transformation is predicted to proceed through nucleophilic substitution (SN2), as expected for substituted amines.
These metal complexes were characterized by various analytical techniques, such as FT-IR and UV-Vis spectroscopies, single-crystal and powder X-ray diffraction analyses, energy-dispersive X-ray spectroscopy, magnetic property measurements, thermogravimetric analysis, and bond valence sum calculations. Based on their features and detailed structure-property-application analyses, the clusters show great potential for catalysis, materials for digital tools, chemical sensing, molecular magnets, and precursors as molecular building blocks for extended open frameworks.
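The isotropic exchange couplings J1, J2', and J2'' discussed above are, by convention, parameters of a Heisenberg-Dirac-van Vleck spin Hamiltonian; one common form (the -2J sign convention is an assumption here, as conventions vary between papers) is:

```latex
\hat{H}_{\mathrm{ex}} = -2 \sum_{i<j} J_{ij}\, \hat{\mathbf{S}}_i \cdot \hat{\mathbf{S}}_j
```

Under this convention, a negative J_{ij} favors antiparallel alignment of the spins at centers i and j (antiferromagnetic coupling), consistent with the open-shell singlet ground state predicted for 1.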
- Title
- THE IMPACT OF SHARED RECRUITMENT INFORMATION ON APPLICANT OUTCOMES AND THE INFLUENCE OF MODERATING VARIABLES
- Creator
- Savage, Catherine M.
- Date
- 2022
- Description
-
Organizations are currently experiencing one of the most challenging environments for recruiting talent. What started in the 1990s as the “War for Talent,” in which organizations faced fierce competition when hiring and retaining employees, has persisted and grown more competitive post-pandemic. As a result, organizations must re-evaluate their recruitment strategies and find ways to connect with job candidates that increase the probability that they will pursue open positions. Thus, we examined how sharing different information regarding pay, diversity statements, and mentoring benefits with 250 potential job applicants based in the US may influence their attraction to an organization, perceived person-organization fit, and their intention to pursue the job that was posted. We also examined how ethnicity, gender, and age can influence job candidates' perception of the information provided. Results from this research partially supported our hypothesized outcomes. Presenting more information to participants (rather than less) generally had a positive impact on organizational attraction and intentions to pursue the position posted in the job advertisement. However, the amount of information shared with participants did not influence perceptions of person-organization fit. Additionally, while ethnicity did not moderate the relationship between the amount of information shared and the outcome variables, gender and age were found to influence participants' reactions to the information provided and their subsequent levels of organizational attraction and intention to pursue. Implications and avenues for future research are discussed.
- Title
- Control and Operation of Microgrids and Networked Microgrids
- Creator
- Sheikholeslami, Mehrdad
- Date
- 2022
- Description
-
This dissertation presents the practical operation and control of microgrids and networked microgrids, particularly the networked IIT Campus Microgrid (ICM) and Bronzeville Community Microgrid (BCM). Microgrids (MGs) provide a potential solution for accommodating renewable and distributed energy resources (DERs). MGs and the networked form of MGs, i.e., networked microgrids or NMGs, have received significant attention in the past two decades. However, several details that need to be considered for the practical operation of MGs and NMGs are often neglected in the literature. First, there is a need for a step-by-step sequence of operations (SOO) that clearly defines the procedures for changing the operation modes of MGs and NMGs for their reliable and resilient operation. Second, there is a need to develop new control strategies for the centralized and distributed control of MGs and NMGs that are resilient to extreme events and more sustainable than those available in the literature. Third, there is a need to develop models of MGs and NMGs in a real-time simulator to safely evaluate the performance of their control and operation. Finally, to close the engineering loop, there is a need to connect the digital and physical layers in what is known as a digital twin. This dissertation proposes solutions for these four requirements and presents results to evaluate the performance of the proposed solutions. First, an SOO is proposed to enable the reliable and safe transition between different microgrid operation modes. The proposed SOO is adaptable to any MG and NMG with minor modifications. Second, for centralized control, a DER control model is proposed that allows for regulated power exchange between networked MGs to ensure information privacy and respect the electrical boundary of each MG.
For distributed control, two control schemes are proposed that are resilient to extreme cases, allow the integration of renewable energy sources (RES), and require minimal operator intervention. Third, several techniques are proposed that can be adopted for developing real-time models of MGs and NMGs. Finally, as a proof of concept, a digital twin of a microgrid with connections between the physical and digital layers is implemented and tested. The IIT Campus Microgrid (ICM) and Bronzeville Community Microgrid (BCM), as well as their networked form (networked ICM-BCM), are selected as the practical testbeds and are modeled in the Real-Time Digital Simulator (RTDS). The RTDS model is interfaced with microgrid master controllers (MMCs) for real-time data exchange, and the performance of the MMCs and the distributed control strategies is tested to illustrate the importance of the adopted methods in the real-time control of MGs and NMGs. Finally, a proof of concept for the digital twin of the ICM is presented.
- Title
- ELECTROSPUN SILKWORM SILK FIBROIN - INDOCYANINE GREEN BIOCOMPOSITE FIBERS: FABRICATION, CHARACTERIZATION AND APPLICATION TOWARDS HEMORRHAGE CONTROL
- Creator
- Siddiqua, Ayesha
- Date
- 2022
- Description
-
Silk fibroin (SF), a structural protein found in Bombyx mori cocoons, has gained attention in several biomedical applications, such as tissue engineering scaffolds and wound dressings, owing to properties including biocompatibility, water vapor permeability, and biodegradability. Indocyanine Green (ICG) is an FDA-approved tricarbocyanine dye used in medical diagnostics due to its unique photothermal and fluorescent properties. Electrospinning is a highly efficient, easy, and inexpensive technique used to generate nanometer- to micrometer-thick fibers. In this study, SF and ICG were co-spun to generate flexible microfibers with high surface-area-to-volume ratios. Pure silk, SF-ICG (0.1%), and SF-ICG (0.4%) were chosen for the purpose of this study. Since as-spun fibers are unstable in aqueous solutions, post-treatment methods were explored to enhance the durability of the fibers and to minimize ICG leaching. It was found that ethanol vapor treatment (EVT) not only induced β-sheet formation in SF but also improved the SF-ICG interaction, thereby reducing ICG leaching from the composite fibers. Ethanol-vapor-treated SF-ICG fibers showed less ICG leaching than liquid-ethanol-treated (LET) SF-ICG fibers, indicating the efficacy of the EVT. The increase in SF solution viscosity with ICG concentration suggested a strong silk-ICG interaction, which was further confirmed by DSC. The 1 h water uptake and three-day mass loss experiments indicated that the fibers are a stable and highly absorbent material. Heat evolution was evaluated by measuring the temperature change in a fixed volume of water after irradiation with a 500 mW, 808 nm diode laser. The heat evolved by the flat fiber scaffolds was higher than that of the 3D fiber balls, indicating improved light penetration in the former. Pure silk produced negligible heat and was used as a control.
With 14.9 W/cm2 irradiation, a post-treated SF-ICG (0.4%) 3D fibrous ball of 2-3 mg dry weight solidified a drop of bovine blood in 40 s. In contrast, a single-layer fiber matrix required 3 min to achieve the same clotting effect. Fibers folded into flat scaffolds were able to solidify a blood drop in 25 s. Pure silk fibers in all cases showed negligible change after irradiation. The results suggest that a larger fiber contact area is desirable for faster blood clotting, and that EVT prompted better ICG retention in SF fibers. Based on the above results, SF-ICG (0.4%) fibers were utilized in a device developed to mimic blood flowing at a rate of 0.5 mL/h through a damaged blood vessel. It was found that irradiation of SF-ICG placed locally at the “damage” region effectively stopped “bleeding,” whereas irradiated pure silk was unable to control the blood flow, demonstrating the success of our SF-ICG fibers for hemorrhage control.
- Title
- WHAT IMPACT DO NUMBER TALKS HAVE ON ELEMENTARY CLASSROOM MATHEMATICAL DISCOURSE AND STUDENT AND TEACHER ATTITUDES TOWARD MATHEMATICS?
- Creator
- Sleezer, Meghan V
- Date
- 2021
- Description
-
Number Talks, created in the early 1990s by Ruth Parker and Kathy Richardson, have gained popularity in the mathematics education community over the past decade with the publication of the book series Number Talks (Parrish, 2010, 2014), and especially since the publication of Making Number Talks Matter (Humphreys & Parker, 2015). All in all, the authors contend that Number Talks can bring joy into the classroom (Humphreys & Parker, 2015, p. 6), improving student attitudes about mathematics and ultimately allowing for a more productive disposition. The characteristic that separates Number Talks from other pedagogical tools is their disconnectedness from the rest of the lesson: Number Talks need not build up to or build upon the day's objective. Thus, the authors argue that the activity of Number Talks itself, albeit disconnected from the day's objective, improves all of the aforementioned skills, regardless of what occurs during the remainder of each class session. Eight teachers from five different Chicago-area private grade schools implemented Number Talks in their 3rd-5th grade classrooms for four to six weeks in early 2020. Student attitudes toward mathematics and toward mathematical discourse were assessed by way of survey and classroom observation before and after implementation. Classroom interactions and levels of mathematical discourse during normal class time (outside of the Number Talk session) were assessed before and during implementation. No significant changes (positive or negative) relating to any measure were found. Teachers noticed that students who enjoyed math before the implementation also enjoyed Number Talks, while students who struggled with math were mostly disenchanted with Number Talks. Future research directions include exploring whether tailoring Number Talks to relate to the upcoming lesson improves the positive effects advertised by the authors.
Teacher professional development related to ambitious teaching practices (NCTM, 2017) and growth mindset (Boaler, 2016b) may complement the use of Number Talks to result in improved attitudes and discourse.
- Title
- Do Numeric Performance Ratings Have Any Merit?
- Creator
- Sanders, Emily Kathleen
- Date
- 2022
- Description
-
Numeric performance ratings have been a component of performance evaluation for decades (Prowse & Prowse, 2009; Pulakos, Mueller-Hanson, & Arad, 2019). Yet in recent years their necessity has been questioned (Adler, Campion, Colquitt, Grubb, Murphy, Ollander-Krane, & Pulakos, 2016), with some organizations going so far as to remove numeric ratings entirely (Capelli & Tavis, 2016; Rock, Davis, & Jones, 2014; Burkus, 2016). Unfortunately, this practice has been largely unexamined empirically. The present study tested whether the claim that numeric ratings do not matter holds up in all cases. This was done by exploring whether the presence or absence of numeric ratings impacts employee perceptions of fairness associated with the appraisal. As numeric ratings are argued to be a mechanism for communicating a fair, standard, and consistent practice, the study aimed to understand whether the mere presence of numeric ratings may offset some of the negative reactions employees have toward performance appraisal when they have poor-quality relationships with their supervisors. Findings indicated that while employee-manager relationship quality (assessed via Leader-Member Exchange) has a direct relationship with perceptions of fairness associated with the appraisal, the presence of numeric ratings did not moderate this relationship. Practical implications and future research recommendations are discussed.
- Title
- Toward an Extraordinary Ecotourism Destination on The Shoreline of Aseer Region, Kingdom Of Saudi Arabia
- Creator
- Saleh, Abdulmalik Mohammad S.
- Date
- 2022
- Description
-
Since the dawn of the Anthropocene epoch, human activities have adversely influenced our globe and become a controversial phenomenon. As a counterforce, however, sustainable green movements worldwide continually attempt alternative resolutions to preserve nature. As the tourism industry grows, ecotourism is a specific eco-friendly approach that asserts minimizing human impacts, conserving captivating nature, improving the livelihood of local communities, and involving interpretation and education. Half a decade ago, Saudi Arabia's 2030 vision (the post-oil plan) was launched to diversify the country's GDP and develop public service sectors such as tourism. This thesis investigates the relationship between architecture and the possibilities of ecotourism principles, alongside the governmental program, under multiple tourism indicators along the untouched Aseer shoreline, which has valuable attractions and amenities but faces several issues, including informal planning, limited infrastructure, and a low-income community. We built a proposed project based on a collection of written materials on the area's environmental and culturally diverse aspects, together with case studies; architecture-to-ecotourism is thriving, but there is still potential for methodological development. The thesis findings demonstrate that architecture can contribute immensely to sustainable development through the ecotourism concept and can have a tangible impact on the project. Simultaneously, architecture through ecotourism succeeds by improving the economic aspect of host societies, reducing environmental consequences, and strengthening heritage identity. Further studies are needed on the correlation, which remains highly debated, between architecture and ecotourism norms for sustaining nature.