Search results
(701 - 720 of 996)
Pages
- Title
- MEN, WOMEN, AND LEADERS: THE EFFECT OF GENDER-LEADER CATEGORY CONGRUENCE ON SUPERVISOR EVALUATIONS
- Creator
- Lauritsen, Matthew William
- Date
- 2020
- Description
Researchers employing Schein’s (1973, 1975) paradigm ubiquitously conclude that the greater conceptual distance between leaders and women, compared to leaders and men, is problematic for women in leadership roles. Six hundred eighty participants were recruited from MTurk to rate men, women, and leaders on agency and communion. Using polynomial regression analysis, the category congruence hypothesis was tested using two theories as interpretive frameworks: implicit leadership theory (ILT) and role congruity theory (RCT). A strict congruence effect was not found for any of the models. The results generally supported ILT: supervisor evaluations were highest when perceived supervisor characteristics exceeded the respondents’ leader category expectations. The results did not support RCT’s hypothesis about the negative effects of incongruence between the women and leader categories. Supervisor evaluations were highest when respondents held traditional gender stereotypes, not when the stereotypes were congruent with the leader prototype. However, a general incongruence effect was found between male communion stereotypes and leader communion stereotypes, leading to lower evaluations for male supervisors. That is, for male supervisors, the highest ratings were associated with high communion ratings of both the men and leader categories. The results of this study are further discussed in relation to gender-leader category congruence and leadership.
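The polynomial regression test the abstract mentions can be sketched in code. This is a toy illustration with synthetic data, not the dissertation's analysis: the variable names, traits, and all numbers are assumptions; only the second-order response-surface form and the standard congruence-line indices follow the general method named in the abstract.

```python
import numpy as np

# Synthetic stand-ins (assumed, for illustration only):
# X = perceived supervisor trait rating, Y = leader-category expectation,
# rating = supervisor evaluation. True coefficients are arbitrary.
rng = np.random.default_rng(0)
n = 680
X = rng.normal(size=n)
Y = rng.normal(size=n)
rating = 3.5 + 0.4 * X - 0.2 * Y + rng.normal(scale=0.5, size=n)

# Second-order polynomial (response-surface) regression:
# rating ~ b0 + b1*X + b2*Y + b3*X^2 + b4*X*Y + b5*Y^2
design = np.column_stack([np.ones(n), X, Y, X**2, X * Y, Y**2])
beta, *_ = np.linalg.lstsq(design, rating, rcond=None)
b0, b1, b2, b3, b4, b5 = beta

# Standard response-surface indices used in congruence research:
congruence_slope = b1 + b2            # slope along the line X = Y
incongruence_curvature = b3 - b4 + b5  # curvature along the line X = -Y
```

A strict congruence effect would show up as significant curvature along the incongruence line; with this synthetic (purely linear) data the curvature terms should be near zero.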
- Title
- HOW DO SECONDARY STUDENTS MAKE DECISIONS ON SOCIOSCIENTIFIC ISSUES: WHAT DO THEY CONSIDER IMPORTANT?
- Creator
- LePretre, Dawnne M
- Date
- 2019
- Description
Linking science and action is the epitome of scientific literacy (Hurd, 1972; Kuhn, 1972; Watson, 1969). Before becoming acting citizens, students need to balance subject matter knowledge, personal values, and societal norms in decision-making (DM) on socioscientific issues (SSI) (Aikenhead, 1985; Grace & Ratcliffe, 2002; Kolstø, 2001; Zeidler, 1984). Existing literature suggests a variety of models and strategies to guide how students should think about SSI topics, rather than beginning with what students are actually thinking about SSI. This study aimed to identify the DM factors students considered across a variety of SSI and to determine whether DM factors were common across topics or specific to an SSI. Students in grades 10-12 participated from seven schools and ten regular science classrooms, primarily located in a large Midwest city (n=498). The sample was 50% female and 50% male, with roughly 33% of students from each grade level. Across 60 enacted lessons on six different SSI topics, multiple sources of data were collected, including student artifacts, audiotapes of class discussions/interviews, field notes, and teacher surveys. Students engaged in a minimum of three different SSI topical lessons, implemented over a period of one to nine weeks for an average instructional time of 115 minutes per topic. Decision-making differed across students in various groupings, indicating that secondary students used both general and specific factors when making decisions on SSI. Further, trends emerged indicating that various student groups valued DM factors differently. On several topics, students of different genders, grade levels, ethnicities, and school types considered different DM factors with different levels of support. For example, on the topic of plastics and pollution, 10th grade, female, and Hispanic students tended to identify concern for animals and sea life as their most prominent DM factor. Another trend was that students in larger classes tended to cite more DM factors on a topic than students in smaller classes engaged with the same topic. Overall, 15 common or shared DM factors emerged that students considered when making decisions across multiple SSI contexts. In addition, each specific SSI context had between one and 15 specific or exclusive DM factors cited directly by students in this study.
- Title
- DATA SHARING WITH PRIVACY AND SECURITY
- Creator
- Qian, Jianwei
- Date
- 2019
- Description
Data is a non-exclusive resource and has synergistic effects. Open data sharing will enhance the utilization of big data’s value and tremendously boost economic growth and transparency. Data sharing platforms have emerged worldwide, but with very limited services. Security is one of the main reasons why most data are not commonly shared. This dissertation aims to tackle several security issues in building a trustworthy data sharing ecosystem. First, I reveal the privacy risks in data sharing by designing de-anonymization and privacy inference attacks. Second, I analyze the relationship between the attacker's knowledge and the privacy risk of data sharing, and attempt to quantify and estimate that risk. Then, I propose anonymization algorithms to protect the privacy of participants in data sharing. Finally, I survey the status quo, privacy and security concerns, and opportunities in data trading. This dissertation involves various data types, with a focus on graph data and speech data; it also involves various forms of data sharing, including collection, publishing, query, and trading.
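As a concrete, much-simplified illustration of the kind of guarantee anonymization algorithms typically target before data are shared, a k-anonymity check can be sketched as follows. The records, quasi-identifier columns, and generalized values below are invented for illustration and are not taken from the dissertation.

```python
from collections import Counter

# Toy released table: "age" and "zip" have already been generalized
# (hypothetical values), while "diagnosis" is the sensitive attribute.
records = [
    {"age": "30-39", "zip": "606**", "diagnosis": "flu"},
    {"age": "30-39", "zip": "606**", "diagnosis": "cold"},
    {"age": "40-49", "zip": "607**", "diagnosis": "flu"},
    {"age": "40-49", "zip": "607**", "diagnosis": "asthma"},
]
QUASI_IDENTIFIERS = ("age", "zip")

def is_k_anonymous(rows, k):
    # every combination of quasi-identifier values must appear at least k times,
    # so any record is indistinguishable from at least k-1 others
    groups = Counter(tuple(r[q] for q in QUASI_IDENTIFIERS) for r in rows)
    return all(count >= k for count in groups.values())

print(is_k_anonymous(records, 2))  # -> True: each group has 2 records
```

De-anonymization attacks of the kind the abstract describes exploit side knowledge that effectively shrinks these groups below k.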
- Title
- Sustainable Solutions in Complex Spaces of Innovation
- Creator
- Nogueira, André Martins
- Date
- 2019
- Description
Even though the interconnectivity between human activities and the integrity of ecological systems has long been recognized, the development of design practices that account for such interconnectivity is relatively new. As such, contemporary institutions and their arrangements were not designed according to their potential to promote sustainable and equitable flows of different types of resources; they lack the capability and structure to operate at the speed and scale at which humans are dynamically interacting with one another and with the natural environment. As the world has passed the 7.5 billion mark, this condition is generating unintended socio-ecological-technical consequences, amplified by the fast-changing technology industry. New lenses and models for understanding the connectivity of the social, ecological, and technical systems underlying contemporary institutional arrangements are required to advance expertise in redirecting the flow of different types of resources for the sustainability of these systems. However, how humans perceive systems is largely framed by who is included in the discussion and the experiences and interests that they bring to bear. Even though there will always be a discrepancy between what is perceived and the actual system at play, there are greater opportunities to expand such perception by drawing more deeply on systems thinking and the notion of resources. This dissertation advances design knowledge in pursuit of bridging the gap between theoretical discourses and the pragmatism necessary to intervene in socio-ecological-technical dynamics by exploring how designers might embed principles of sustainability into choice-making processes for innovation, and it proposes a new approach through which designers can advance their practices in enabling more sustainable flows of resources.
- Title
- Nanopore Detection of Heavy Metal Ions
- Creator
- MohammadiRoozbahani, Golbarg
- Date
- 2019
- Description
Nanopore sensing is an emerging analytical technique for measuring single molecules. Under an applied potential bias, analyte molecules are transported through the nanopore and cause ionic current modulations. Accordingly, the fingerprint of the analyte is reflected in the signature of the current blockage events. Due to its advantages, such as label-free and multi-analyte detection, nanopore sensing technology has been utilized as an attractive, versatile tool to study a variety of topics, including biosensing of different species such as DNA, RNA, proteins, peptides, anions, and metal ions. Metal ions play a crucial role in human health and environmental safety. Although metal ions are essential for numerous biological processes, the presence of the wrong metal, or even an essential metal in the wrong concentration or location, can lead to undesirable results and serious health concerns, including antibiotic resistance, metabolic disorders, mental retardation, and even cancer. Therefore, it remains of prime importance to develop highly sensitive and selective sensors for metal ions. In this dissertation, various nanopore sensing strategies to detect metal ions are first discussed. These include: a) construction of metal ion binding sites on the nanopore inner surface; b) utilization of a biomolecule as a ligand probe; and c) employment of enzymatic reactions. Then, three projects are summarized. Two of these projects involve detection of non-essential metal ions, uranyl and thorium, while the third targets an essential element, the zinc ion. Specifically, uranyl and thorium ions are detected by taking advantage of peptide molecules as ligand probes. In this case, the event signatures of peptide molecules in the nanopore are significantly different in the absence and presence of metal ions, which might be attributed to the conformational change of the biomolecules induced by the metal ion-biomolecule interaction. The zinc ion, on the other hand, is detected via an enzymatic reaction: without Zn2+, ADAM17 (a zinc-dependent protease) is inactive and cannot cleave peptide substrate molecules; with Zn2+ in the solution, the enzyme is activated, and its cleavage of the peptide substrate produces new types of blockage events with smaller residence time and amplitude values than those of the peptide substrate.
- Title
- STAKEHOLDER FEEDBACK ON A NOVEL EMOTION REGULATION INTERVENTION FOR PRESCHOOL-AGE CHILDREN WITH DISRUPTIVE BEHAVIOR PROBLEMS: A THEMATIC ANALYSIS
- Creator
- Lossia, Amanda
- Date
- 2019
- Description
Disruptive behavior disorders are among the most prevalent psychological disorders in preschoolers. There are evidence-based treatments for these disorders, but clinically significant behavior problems persist in approximately one-fourth to one-third of children after treatment. These treatments consist of behavioral parenting interventions and are not designed to directly address children’s affective dysregulation, which is a core component of behavior problems. To address this limitation, a manualized intervention was developed to treat disruptive behavior in preschool-age children by specifically targeting their emotion regulation abilities as the mechanism of change by coaching the caregiver to scaffold the child’s emotion regulation strategy use. The purpose of the present study was to further the development of this intervention by obtaining feedback from key stakeholders (i.e., caregivers and therapists) on the intervention’s focus, content, and procedures. Obtaining this feedback is an essential component of developing a novel psychosocial intervention. A qualitative thematic analysis of in-depth focus group discussions was conducted.
Data were organized into the following broad themes: Intervention approach (support for targeting emotion regulation but ensuring the approach is an appropriate fit and considering the important role of behavioral strategies; additional focus on facilitating a positive caregiver-child relationship; developing some independent regulation skills in the child), Intervention structure and session content (making the intervention structure more flexible or modular; retaining the main intervention components with modifications to enhance acceptability, relevance, and developmental appropriateness), The caregiver’s role (the caregiver’s role is of primary importance and should be active throughout all sessions; ensuring adequate caregiver preparation and skill development; additional primary focus on facilitating the caregiver’s own emotion regulation; attention to the caregiver’s own therapeutic needs), Individualized approach (individualizing the content and timing of all sessions to account for individual needs), Generalizability (ensuring generalization of skills to home and other settings through effective at-home practice and including other primary caregivers and family members in sessions), and Learning and skill development (considering individual differences in how children and caregivers learn and modifying activities accordingly). These themes and stakeholders’ specific feedback will guide revisions to the intervention manual prior to pilot testing and further examination of efficacy and effectiveness.
- Title
- The Role of Ethnic Similarity, Perceived Communication Style Deviation, and Cultural Intelligence in Leader-Member Exchange and Trust
- Creator
- Polyashuk, Yelena
- Date
- 2019
- Description
This study examined factors that contribute to a better working relationship between a leader and a subordinate or make that working relationship challenging. Specifically, we investigated the effect of ethnic configuration within the leader-subordinate dyad and of perceived dissimilarity on Leader-Member Exchange (LMX) and trust. Communication style deviation was tested as a mediator between actual as well as perceived dissimilarity and relational outcomes. Cultural Intelligence (CQ) was included as a moderator, the presence of which could ameliorate the negative impact of dissimilarity on LMX and trust. To test these predictions, a survey was administered to 614 participants. Participants were working students at an urban, Midwestern, public university. Results showed that in the presence of low CQ among respondents, there was a negative impact of ethnic dissimilarity on LMX. However, no impact of ethnic similarity/dissimilarity on trust was found. The specific composition of the leader-subordinate dyad had no significant impact on LMX or trust. Finally, communication style deviation partially mediated the relationship between perceived dissimilarity and the two outcome variables of LMX and trust. These findings revealed that in order to build a high-quality relationship within an ethnically diverse leader-subordinate dyad, both CQ and alignment in communication style are of consequence.
- Title
- Fast Automatic Bayesian Cubature Using Matching Kernels and Designs
- Creator
- Rathinavel, Jagadeeswaran
- Date
- 2019
- Description
Automatic cubatures approximate multidimensional integrals to user-specified error tolerances. In many real-world integration problems, the analytical solution is either unavailable or difficult to compute. To overcome this, one can use numerical algorithms that approximately estimate the value of the integral. For high-dimensional integrals, quasi-Monte Carlo (QMC) methods are very popular. QMC methods are equal-weight quadrature rules where the quadrature points are chosen deterministically, unlike Monte Carlo (MC) methods where the points are chosen randomly. The families of integration lattice nodes and digital nets are the most popular quadrature points used. These methods consider the integrand to be a deterministic function. An alternative approach, called Bayesian cubature, postulates the integrand to be an instance of a Gaussian stochastic process. For high-dimensional problems, it is difficult to adaptively change the sampling pattern, but one can automatically determine the sample size, $n$, given a fixed and reasonable sampling pattern. We take this approach from a Bayesian perspective. We assume a Gaussian process parameterized by a constant mean and a covariance function defined by a scale parameter and a function specifying how the integrand values at two different points in the domain are related. These parameters are estimated from integrand values or are given non-informative priors. This leads to a credible interval for the integral. The sample size, $n$, is chosen to make the credible interval for the Bayesian posterior error no greater than the desired error tolerance. However, the process just outlined typically requires vector-matrix operations with a computational cost of $O(n^3)$. Our innovation is to pair low discrepancy nodes with matching kernels, which lowers the computational cost to $O(n \log n)$.
We begin the thesis by introducing the Bayesian approach to calculating the posterior cubature error and defining our automatic Bayesian cubature. Although much of this material is known, it is used to develop the necessary foundations. Some of the major contributions of this thesis include the following: 1) The fast Bayesian transform is introduced; this generalizes the techniques that speed up Bayesian cubature when the kernel matches low discrepancy nodes. 2) The fast Bayesian transform approach is demonstrated using two methods: a) rank-1 lattice sequences and shift-invariant kernels, and b) Sobol' sequences and Walsh kernels. These two methods are implemented as fast automatic Bayesian cubature algorithms in the Guaranteed Automatic Integration Library (GAIL). 3) We develop additional numerical implementation techniques: a) rewriting the covariance kernel to avoid cancellation error, b) gradient descent for hyperparameter search, and c) non-integer kernel order selection. The thesis concludes by applying our fast automatic Bayesian cubature algorithms to three sample integration problems. We show that our algorithms are faster than basic Bayesian cubature and that they provide answers within the error tolerance in most cases. The Bayesian cubatures that we develop are guaranteed for integrands belonging to a cone of functions that reside in the middle of the sample space. The concept of a cone of functions is also explained briefly.
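The cost reduction described above can be illustrated in one dimension: rank-1 lattice nodes paired with a shift-invariant kernel make the Gram matrix circulant, so its eigenvalues come from a single FFT of the first column in $O(n \log n)$ rather than a dense $O(n^3)$ factorization. The generator, kernel, and sizes below are illustrative assumptions, not GAIL's choices.

```python
import numpy as np

n, z = 64, 27                      # lattice size and generator (assumed)
x = (np.arange(n) * z % n) / n     # rank-1 lattice nodes in [0, 1)

def kernel(t):
    # shift-invariant kernel built from the degree-2 Bernoulli polynomial
    # (an illustrative choice of periodic, positive-definite kernel)
    t = t % 1.0
    return 1.0 + (t * t - t + 1.0 / 6.0)

# Gram matrix K[i, j] = k(x_i - x_j); for lattice nodes this depends only on
# (i - j) mod n, i.e., K is circulant
K = kernel(x[:, None] - x[None, :])

# eigenvalues of a circulant matrix are the FFT of its first column: O(n log n)
lam_fft = np.fft.fft(K[:, 0]).real
lam_dense = np.linalg.eigvalsh(K)  # dense route for comparison: O(n^3)

assert np.allclose(np.sort(lam_fft), np.sort(lam_dense))
```

The matching-kernel idea is exactly this structure: the matrix-vector and determinant operations the Bayesian posterior needs reduce to FFT-type transforms of one column.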
- Title
- A SYSTEMATIC APPROACH TO UNDERSTANDING ALIGNMENT BETWEEN THE EXISTING AND SELF-ADOPTED ENVIRONMENTAL EDUCATION STANDARDS: UNITED STATES SIXTH TO TWELFTH GRADE ENVIRONMENTAL SCIENCE STANDARDS
- Creator
- Connell, Margaretann Grace
- Date
- 2019
- Description
The purpose of this thesis was to conduct a systematic approach to determining the alignment between the existing and self-adopted 6th-12th grade environmental education (EE) science standards for 10 U.S. states (6th-8th: AZ, ID, MA, WY; 9th-12th: NE, NYS, OH, PA, SC, TX). The criteria for state selection were 1) states with SASS (non-NGSS adoption) and 2) demographics, via random selection from the 10 U.S. EPA Regions. The Existing Environmental Education Standards (EEES) (GCDEE, Hungerford et al., 1980; NAAEE Guidelines, Simmons, 2010a; Tbilisi, UNESCO, 1978) were aligned with the 10 states. The investigation was conducted by a DCA (Mayring, 2002). Data were analyzed using MAXQDA 2018.1 (VERBI, 2017), judged by a Content Match (La Marca et al., 2000), and measured by the adapted criteria for Categorical Concurrence and Range of Knowledge Correspondence (Webb, 1999). The instruments used to score the output were 1) CEEI – Tbilisi/GCDEE (K-12) and 2) EEI – NAAEE Guidelines (6-8; 9-12). Results for the Content Match of the EEES revealed that 50% of the states were Partly Aligned and the other 50% were Not Aligned with the NAAEE Guidelines code coverage. Additionally, the Content Match with Tbilisi/GCDEE revealed that 20% of the states (OH, PA) were Fully Aligned and the other 80% were Partly Aligned. The states' science standards' ability to reach appropriate levels of alignment was due to the scientific specificity of those states with implicit EE standards. Moreover, it was difficult to find common ground from which to expect complete alignment, given the socioecological approaches and interdisciplinary nature (Kyburz-Graber, 2013; Simmons, 2010a) of the EEES. Therefore, it is now left to policymakers at the state level to work with stakeholders and come to a consensus in support of EE standards that are relevant, fair, and balanced with multidisciplinary, socioecological approaches to promote an environmentally literate citizenry.
- Title
- KINETIC MODEL FRAMEWORKS OF ANIMAL CELL CULTURES FOR CONTROL AND OPTIMIZATION
- Creator
- Yilmaz, Denizhan
- Date
- 2019
- Description
This dissertation proposes four different kinetic model frameworks that have been developed for optimization and control of monoclonal antibody-producing mammalian cell cultures, to improve biopharmaceutical production by decreasing the cost of trial-and-error experimentation. The developed models mainly describe the transient metabolic behavior of mammalian cell cultures under different culture conditions and predict cell growth and death, cell metabolism, and monoclonal antibody synthesis and production. These models are formulated as ordinary differential equations based on the assumption of a well-mixed reactor. All developed models were calibrated, and their predictive capabilities were tested against experimental reports published in the literature. Good agreement was obtained between model predictions and experimental data. The presented results illustrate that the developed models successfully describe and predict the transient behavior of mammalian cell cultures and can be a useful tool for biopharmaceutical production.
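The modeling pattern the abstract describes — ordinary differential equations under a well-mixed-reactor assumption — can be sketched with a minimal example. This is not one of the dissertation's four frameworks: the Monod-type growth law, parameter values, and initial conditions below are all illustrative assumptions, and a fixed-step Euler scheme stands in for a proper ODE solver.

```python
# Minimal well-mixed batch-culture sketch (illustrative, not the dissertation's
# models): viable cell density grows on one limiting substrate via Monod
# kinetics, and the substrate is consumed in proportion to growth.
MU_MAX, K_S, YIELD = 0.04, 1.0, 0.5   # 1/h, mM, cells-per-mM (assumed values)

def derivatives(viable, substrate):
    growth = MU_MAX * substrate / (K_S + substrate) * viable
    return growth, -growth / YIELD     # cells grow; substrate is consumed

dt, hours = 0.01, 120.0
viable, substrate = 0.2, 30.0          # assumed initial state
for _ in range(int(hours / dt)):
    d_v, d_s = derivatives(viable, substrate)
    viable += dt * d_v
    substrate += dt * d_s
```

The structure scales directly: the dissertation-style frameworks add states (dead cells, metabolites, antibody titer) as further coupled ODEs over the same well-mixed state vector.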
- Title
- TRANSIENT STABILITY SIMULATION OF COMBINED THREE-PHASE UNBALANCED TRANSMISSION AND DISTRIBUTION NETWORKS
- Creator
- Alsharief, Yagoob
- Date
- 2019
- Description
Historically, transmission (T) system and distribution (D) system analysis has been done separately. The main reasons are 1) different modeling frameworks, i.e., positive-sequence versus three-phase unbalanced, 2) system size, and 3) lack of dynamic two-way interaction between T&D. The typical power system usually consists of tens of thousands of transmission buses and thousands of distribution feeders with hundreds of customers per feeder. In the past, distribution networks have been largely passive with relatively little dynamic interaction with the transmission network. However, due to the new trends that the electric grid has been witnessing in the last decade with the installation of distributed energy resources (DERs) on the distribution level, such as behind-the-meter generation and energy storage units, electric vehicles, etc., dynamic simulation tools for combined T&D will become necessary in the near future. These tools will aid system operators and planning engineers in understanding the impact of these new trends on large-scale power systems. Taking advantage of the advancements in the field of high performance computing and parallel computing could enable accurate, wide-area T&D dynamics simulation. These comprehensive simulation capabilities would dramatically improve our ability to predict the complex interactions among DERs, customer loads and traditional utility control devices, thereby allowing higher penetrations of renewable energy, electric vehicles and energy storage.
- Title
- DUST MITIGATION OF MICRO-STRUCTURED (GECKO-LIKE) ADHESIVES
- Creator
- Alizadehyazdi, Vahid
- Date
- 2019
- Description
Controllable adhesives (i.e., those capable of being turned on and off) are used in a wide range of applications, including robotic grippers and climbing robots. Electromagnets, suction, and microspines have been used to meet this demand but are typically limited to a specific substrate roughness or material. Microstructured (gecko-like) adhesives, on the other hand, offer the potential to be the most universal among controllable adhesives, since they can work on a wide variety of surfaces. The development of microstructured (gecko-like) adhesives has focused almost solely on their adhesive strength. However, for practical applications, especially in real-world environments, the adhesive's long-term performance is arguably equally important. One impediment to long-term viability is the adhesive's susceptibility to contamination, which decreases adhesion significantly. For microstructured adhesives to be practical in real-world environments, the detrimental effect of dust and other contaminants must be dealt with. The first general approach involves removing adhered dust particles. The second approach is to create adhesives that minimize dust adsorption, such that extensive cleaning is unnecessary or particles can be removed easily. Regarding the first approach, this research describes the use of electrostatic forces and ultrasonic vibration to repel dust particles. The result is a set of non-destructive, non-contact cleaning methods that can be used in conjunction with other cleaning techniques, many of which rely on physical contact between the fibrillar adhesive and substrate. Electrostatic cleaning results show that a two-phase square wave with the lowest practically feasible frequency cleans best. Combining electrostatic and ultrasonic cleaning yields far higher efficiency than electrostatic repulsion or ultrasonic vibration alone. Moreover, I showed that the piezoelectric element in the ultrasonic cleaning method can also be used as a releasing mechanism to turn the adhesive off and as a force/contact sensor. Regarding the second approach, I experimentally explored the effect of the modulus of elasticity, work of separation, and work of adhesion (adhesion energy) on the shear stress and particle detachment capabilities of microstructured adhesives. Particle removal is evaluated using both non-contact cleaning methods (centripetal force and electrostatic particle repulsion) and a dry contact cleaning method (load-drag-unload test). Results show that for a material with a high work of separation, high elastic modulus, and low work of adhesion, it is possible to create a microstructured adhesive with both high shear stress strength and low adhesion to dust particles. Results also show that, for dry contact cleaning, shear stress recovery mostly stems from particle rolling rather than particle sliding. Moreover, shear test results show that augmenting the microstructured adhesive with electrostatic adhesion can offset the poor conformability of high-elastic-modulus materials to a substrate by providing a preload to the microstructured elements. Finally, I applied the aforementioned dust mitigation methods to two different gecko-like adhesive grippers. The first design was used to pick up flat objects, while the second was designed to grip curved objects of different shapes and sizes. Since the second gripper is flexible and the piezoelectric element is stiff (it can only be applied to rigid backings), only electrostatic dust mitigation is applicable to it.
- Title
- MULTIVARIABLE SIMULATION PLATFORM FOR TYPE 1 DIABETES AND AUTOMATIC MEAL HANDLING IN ARTIFICIAL PANCREAS SYSTEMS
- Creator
- Samadi, Sediqeh
- Date
- 2019
- Description
Artificial pancreas (AP) systems are designed to automate glucose control in type 1 diabetes mellitus (T1DM). Multivariable artificial pancreas systems have evolved to incorporate various additional physiological measurements beyond conventional continuous glucose monitoring in order to better integrate information on the metabolic state of the patient affecting glycemic dynamics. Changes in physiological measurements such as heart rate, energy expenditure, skin temperature, and skin conductance, measured by wearable devices, are indicative of changes in the metabolic state. The controller receives the physiological measurements in a feedforward manner, which accelerates corrective control decisions in response to disturbances. Although various AP systems have been proposed in the literature to accommodate these additional sources of information, the testing and evaluation of these advanced multivariable AP systems are hindered by the requirement of conducting time-consuming and expensive clinical trials. The development of a simulation platform for rapid prototyping and iterative development of AP systems is one of the main contributions of this study. The simulation platform for T1DM includes a compartmental model generating glucose concentration in response to physical activity in addition to meals and infused insulin. The proposed exercise-glucose-insulin model extends a previously developed glucose-insulin model to capture transient variations in glycemic dynamics caused by physical activity and to improve glucose prediction accuracy. Physiological variables affected by physical activity, such as heart rate, skin temperature, and blood volume pulse, are generated in addition to the glucose concentration in the simulator. The simulation platform includes several virtual patients, providing a reliable platform for in silico evaluation of different algorithms proposed for automation of glucose control in T1DM. The multivariable simulator will accelerate the development of next-generation artificial pancreas systems. The development of a disturbance detection algorithm is the other contribution of this study. Meals are major disturbances to glucose homeostasis, and automated detection of meal consumption and estimation of the carbohydrate content of the consumed meal are critical for fully automated artificial pancreas control systems. In this study, a detection algorithm integrating fuzzy logic classification and qualitative analysis is proposed. A fuzzy logic system estimates the carbohydrate content of the meal.
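The flavor of fuzzy-logic meal detection can be sketched with a toy rule over the glucose rate of change. This is illustrative only, not the dissertation's algorithm: the membership breakpoints, window logic, and threshold are all assumptions.

```python
# Toy fuzzy-logic meal detector (illustrative assumptions throughout).

def membership_rising(rate):
    # degree to which a glucose rate of change (mg/dL per min) reads as
    # "rising fast": ramps from 0 at 0.5 to 1 at 2.0 (assumed breakpoints)
    if rate <= 0.5:
        return 0.0
    if rate >= 2.0:
        return 1.0
    return (rate - 0.5) / 1.5

def detect_meal(rates, threshold=0.6):
    # flag a meal only when every sample in the window looks like a rapid
    # rise; min() acts as the fuzzy AND over consecutive samples
    degrees = [membership_rising(r) for r in rates]
    return min(degrees) >= threshold

print(detect_meal([1.8, 2.1, 1.6]))  # -> True: sustained rise
print(detect_meal([0.2, 1.9, 0.1]))  # -> False: isolated spike
```

A real system, as the abstract indicates, would combine such fuzzy classification with qualitative trend analysis and then estimate the meal's carbohydrate content.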
- Title
- STRATEGIES TO MAXIMIZE DOSE REDUCTION IN SPECT MYOCARDIAL PERFUSION IMAGING
- Creator
- Juan Ramon, Albert
- Date
- 2019
- Description
-
Radiation exposure in medical imaging has become a topic of major concern, gaining intense attention within the clinical and research...
Show moreRadiation exposure in medical imaging has become a topic of major concern, gaining intense attention within the clinical and research communities. In 2009, the National Council on Radiation Protection and Measurements (NCRP) announced radiation exposure of patients via medical imaging increased more than sixfold between the 1980s and 2006, with cardiac nuclear medicine, specifically myocardial perfusion imaging (MPI) with single-photon emission computed tomography (SPECT) being the second biggest culprit. The goal of this work is to evaluate several strategies to enable radiation dose to be minimized while maintaining current levels of diagnostic accuracy in the clinic. We achieve dose reduction through optimization of advanced image reconstruction strategies, to obtain higher-quality images at a given dose (noise) level, through a machine learning approach to predict the optimal dose for each patient, and through advanced deep learning (DL) algorithms to improve the quality of reconstructed images. Our ultimate objective is to provide the nuclear cardiology field with a new set of algorithms and guidelines for selecting administered activity levels and image reconstruction procedures in the clinic. The project is based on a clinical study in which imaging and various other data are being collected for a set of patients. The project has the following components. First, we investigate a global dose-reduction approach (i.e., reducing dose by a uniform proportion across all patients) via optimization of image reconstruction strategies. Specifically, we maximize perfusion-defect detection (diagnostic accuracy) over a range of simulated dose levels using clinical data into which we have introduced simulated defects. We measure diagnostic performance using clinically validated model observers from the Quantitative Perfusion SPECT (QPS) software package. 
We investigate diagnostic accuracy over a range of dose levels, from those currently used in the clinic down to one-eighth of that level. We consider the following image-reconstruction methods: filtered backprojection (FBP) with no correction for physics effects, and ordered-subsets expectation-maximization (OS-EM) with several combinations of attenuation correction (AC), scatter correction (SC), and resolution correction (RC). Second, we propose a patient-specific ("personalized") dose-reduction approach based on machine learning that aims to predict the minimum radiation dose needed to obtain consistent perfusion-defect detection accuracy for each individual patient. This prediction is based on patient attributes, especially body measurements, and various clinical variables. We compare the diagnostic accuracy produced by the predicted personalized doses to that produced by standard clinical dose levels to validate the predictive models. Third, we verify that the dose-minimization results obtained in the context of perfusion-defect detection also maintain diagnostic accuracy in evaluating cardiac function, as characterized by myocardial motion. Finally, we propose a deep learning (DL) method to denoise SPECT-MPI reconstructed images. The method is a 3D convolutional neural network trained to predict standard-dose images from low-dose images. We quantify the extent to which dose reduction can be achieved using the proposed DL structure when dose is reduced uniformly across patients or by means of our patient-specific approach.
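The OS-EM reconstruction named in the abstract can be illustrated with a minimal sketch. This is a toy implementation under stated assumptions (a small dense system matrix `A`, noiseless counts `y`, interleaved row subsets); the function name and subset scheme are illustrative, not taken from the thesis:

```python
import numpy as np

def os_em(A, y, n_iters=10, n_subsets=4):
    """Ordered-subsets expectation-maximization for y ~ A @ x, x >= 0.

    A: (m, n) system matrix, y: (m,) measured counts.
    Each sub-iteration applies the multiplicative EM update using
    only one subset of the projection rows, which accelerates ML-EM.
    """
    m, n = A.shape
    x = np.ones(n)  # flat nonnegative initial image
    subsets = [np.arange(s, m, n_subsets) for s in range(n_subsets)]
    for _ in range(n_iters):
        for rows in subsets:
            As, ys = A[rows], y[rows]
            sens = As.sum(axis=0)                      # subset sensitivity image
            ratio = ys / np.maximum(As @ x, 1e-12)     # measured / predicted counts
            x = x * (As.T @ ratio) / np.maximum(sens, 1e-12)
    return x
```

On consistent (noiseless) data with a full-rank `A`, the iterates approach the exact solution; in practice the iteration count and subset number trade noise against resolution.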
- Title
- SI NANOSTRUCTURED COMPOSITE AS HIGH PERFORMANCE ANODE MATERIAL FOR NEXT GENERATION LITHIUM-ION BATTERIES
- Creator
- He, Qianran
- Date
- 2019
- Description
-
Silicon has attracted huge attention in the last decade as an anode material for Li-ion batteries because it has a theoretical capacity ∼10 times that of graphite. However, the practical application of Si is hindered by three major challenges: large volume expansion during cycling (∼300%), low electrical conductivity, and instability of the solid-electrolyte interphase (SEI) layer caused by repeated volume changes of the Si material. Our study focused on the novel design and synthesis of Si anodes that address all of these key problems simultaneously. The Si micro-reactors we designed and synthesized contain well-designed internal structures, including (i) nanoscale Si building blocks, (ii) engineered void space, and (iii) a conductive carbon shell. Because of these internal structures and the nitrogen-doped carbon shell, these sub-micrometer-sized Si particles are termed Si micro-reactors and denoted Si@void@C(N). According to our electrochemical results, the as-synthesized Si micro-reactors can survive up to 1000 charge/discharge cycles at high current densities (up to 8 A/g) while still providing a higher specific capacity than state-of-the-art carbonaceous anodes. Our investigation shows that the unique design of Si@void@C(N) yields a relatively low specific surface area (SSA), which significantly reduces undesired surface side reactions and increases the initial coulombic efficiency (ICE) to 91%, while the engineered nano-channel-shaped voids inside the structure accommodate Si volume expansion and keep the structure and SEI layer stable. Furthermore, the porous N-doped carbon shell, along with the nano-channeled voids, allows rapid lithiation of the Si micro-reactor without Li plating during ultrafast charging. As a result, Si@void@C(N) exhibits ultrafast charging capability with high ICE, superior specific capacity, and long cycle life.
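The "∼10 times that of graphite" figure above can be checked with Faraday's law. This is a back-of-the-envelope sketch assuming the Li15Si4 (3.75 Li per Si) and LiC6 (1 Li per 6 C) stoichiometries; these assumptions are mine, not stated in the abstract:

```python
# Theoretical gravimetric capacity: C = n * F / (3.6 * M)  [mAh/g]
# where n = electrons (Li) per formula unit of host, M = host molar mass [g/mol]
F = 96485.0  # Faraday constant, C/mol

def capacity_mAh_per_g(n_li, molar_mass):
    return n_li * F / (3.6 * molar_mass)

si = capacity_mAh_per_g(3.75, 28.085)        # Li15Si4: 3.75 Li per Si atom
graphite = capacity_mAh_per_g(1.0, 6 * 12.011)  # LiC6: 1 Li per C6 unit
print(round(si), round(graphite), round(si / graphite, 1))  # prints: 3579 372 9.6
```

The ratio of about 9.6 is consistent with the abstract's "∼10 times" claim (a factor of ∼11 is sometimes quoted for the Li22Si5 phase instead).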
- Title
- INDUSTRIALIZED BUILDING CONSTRUCTION MODELS FOR TORNADO AFTERMATH RECOVERY
- Creator
- Alves de Carvalho, Augusto
- Date
- 2019
- Description
-
Some researchers have reported that disasters are increasing in both scale and frequency. Humanity occupies more land today than forty years ago, so existing communities face higher chances of being affected by disasters. Consequently, the number of natural disasters and the associated losses have increased over time. Recent research indicates that construction of new houses takes the majority of recovery time; for example, in the aftermath of the Joplin tornado, the construction of new houses took the longest part of the recovery (D. J. Smith & Sutter, 2013). The disaster industry sees housing and shelter as a product, procured on a necessity basis. The product (tents, inter-shelters, trailers, permanent dwellings, or any property to rent) has to be ready whenever required. Therefore, after calculating the construction capacity in tornado regions, a methodology is proposed to compare four robust industrialized building construction alternatives that keep components, modules, and pieces in stock. Comparing them will indicate which format is more appropriate for a profitable company, or even a public entity, to respond to and recover from a disaster faster.
- Title
- A SCALABLE SIMULATION AND MODELING FRAMEWORK FOR EVALUATION OF SOFTWARE-DEFINED NETWORKING DESIGN AND SECURITY APPLICATIONS
- Creator
- Yan, Jiaqi
- Date
- 2019
- Description
-
The world today is densely connected by many large-scale computer networks, supporting military applications, social communications, power grid facilities, cloud services, and other critical infrastructures. However, a gap has grown between the complexity of these systems and the increasing need for security and resilience. We believe this gap is now reaching a tipping point, resulting in a dramatic change in the way that networks and applications are architected, developed, monitored, and protected. This trend calls for a scalable and high-fidelity network testing and evaluation platform to facilitate the transformation of in-house research ideas into real-world working solutions. With this objective, we investigate means of building a scalable and high-fidelity network testbed using container-based emulation and parallel simulation; our study focuses on the emerging software-defined networking (SDN) technology. Existing evaluation platforms facilitate the adoption of the SDN architecture and applications in production systems. However, the performance of those platforms is highly dependent on the underlying physical hardware resources. Insufficient resources lead to undesired results, such as low experimental fidelity or slow execution speed, especially with large-scale network settings. To improve testbed fidelity, we first develop a lightweight virtual time system for Linux containers and integrate it into a widely used SDN emulator. A key issue with an ordinary container-based emulator is that it uses the system clock across all containers even when a container is not scheduled to run, which compromises both performance and temporal fidelity, especially under high workloads. We investigate virtual time approaches that precisely scale the time of interactions between containers and physical devices. Our evaluation results indicate a definite improvement in fidelity and scalability.
To improve testbed scalability, we investigate how the centralized paradigm of SDN can be utilized to reduce the simulation workload. We explore a model abstraction technique that effectively transforms the SDN network devices into one virtualized switch model. While significantly reducing model execution time and enabling real-time simulation capability, our abstracted model also preserves the end-to-end forwarding behavior of the original network. With enhanced fidelity and scalability, it becomes realistic to use our network testbed to perform security evaluations of various SDN applications. We observe that communication networks generate and process a huge amount of data. The logically centralized SDN control plane, on the one hand, has to process both critical control traffic and potentially large volumes of data traffic; on the other hand, it enables many efficient security solutions, such as intrusion detection, mitigation, and prevention. Recently, deep neural networks have achieved state-of-the-art results across a range of hard problems. We study how to utilize big data and deep learning to secure communication networks and host entities. For classifying malicious network traffic, we have performed a feasibility study of offline deep-learning-based intrusion detection by constructing the detection engine with multiple advanced deep learning models. For malware classification on individual hosts, another necessity for securing computer systems, existing machine-learning-based methods rely on handcrafted features extracted from raw binary files or disassembled code. The diversity of such features has made it hard to build generic malware classification systems that work effectively across different operational environments.
To strike a balance between generality and performance, we explore new graph convolutional neural network techniques to effectively yet efficiently classify malware programs represented as their control flow graphs.
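The virtual time idea described in this abstract, hiding emulation slowdown from containers by scaling elapsed real time with a time-dilation factor (TDF), can be sketched in a few lines. This is an illustrative model only; the class name and interface are mine, not the thesis's system:

```python
import time

class VirtualClock:
    """Minimal sketch of per-container virtual time via a time-dilation factor.

    With tdf = 4, four seconds of wall-clock time appear to the process
    as one second of virtual time, so an overloaded emulation host still
    presents temporally consistent timing to the experiment.
    """
    def __init__(self, tdf):
        self.tdf = tdf
        self._epoch = time.monotonic()  # real-time reference point

    def now(self):
        # Virtual elapsed time = real elapsed time / tdf
        return (time.monotonic() - self._epoch) / self.tdf

clock = VirtualClock(tdf=4.0)
time.sleep(0.2)            # 0.2 s of real time elapses...
print(clock.now())         # ...which reads as roughly 0.05 s of virtual time
```

A real implementation intercepts the container's clock-related system calls rather than wrapping a Python object, but the scaling arithmetic is the same.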
- Title
- Comparison of an Ideal Point and Dominance IRT Model on the Detection of Differential Item Functioning with DFIT
- Creator
- Spizzuco Jr, Daniel
- Date
- 2019
- Description
-
Item response theory (IRT) models can assume a variety of forms including, notably, dominance and ideal point-based probability distributions. But researchers have only recently begun to explore issues related to the above distinction. The current study therefore examines whether model-data fit and rates of differential item functioning (DIF) detection remain comparable when data are analyzed via the ideal point-based generalized graded unfolding model (GGUM) vs. the dominance-based graded response model (GRM). To address these issues, item response data were simulated to contain dominance, ideal point, and mixed response processes, and DIF and impact scenarios. Results indicated that model-data fit and DIF detection accuracy were not as closely aligned as anticipated. Overall, the GGUM fit data better than the GRM to the extent that any ideal point processes were present, while the GRM was slightly better at fitting dominance-only data. With no impact, however, the GGUM fit all embedded response data types better than the GRM. Results were mixed among impact scenarios. This pattern was found in both no-DIF and DIF scenarios. Several points were made with respect to the DIF portion of the study. First, Type 1 error rates were in most cases quite conservative for both models. Second, study-wide, more power emerged with dominance as compared to ideal point data for both models. Moreover, in no-impact conditions, slightly more power accrued via the GGUM for dominance and ideal point data. With impact, however, the GRM produced somewhat more power across data types. Third, in terms of DIF patterns/sources, power was high for both models when DIF was embedded on the full set of location/threshold parameters, and lower with fewer differentially functioning (DF) location/threshold parameters. Notably, the GGUM was slightly more powerful in the fewest DF location/threshold scenarios, and the GRM was more powerful in the most DF location/threshold scenarios. Fourth, neither model performed well in the complex within-item cancelling DIF scenarios. These patterns generally occurred in both uniform and non-uniform scenarios. The paper concludes with a presentation of recommendations, study limitations, and issues for future research.
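The dominance-based GRM compared in this study has a simple closed form that can be sketched directly: category probabilities are differences of cumulative logistic curves over ordered thresholds. This is a generic illustration of Samejima's model, with parameter values chosen arbitrarily (the GGUM's unfolding form is omitted):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def grm_probs(theta, a, b):
    """Graded response model (dominance IRT) category probabilities.

    theta: latent trait level; a: discrimination; b: ordered thresholds
    b[0] < ... < b[K-1]. Returns [P(X = 0), ..., P(X = K)], computed as
    differences of cumulative probabilities P(X >= k).
    """
    # Cumulative terms, with boundary conventions P(X >= 0) = 1, P(X >= K+1) = 0
    cum = [1.0] + [sigmoid(a * (theta - bk)) for bk in b] + [0.0]
    return [cum[k] - cum[k + 1] for k in range(len(b) + 1)]

p = grm_probs(theta=0.5, a=1.2, b=[-1.0, 0.0, 1.0])
```

Because the cumulative curves are monotone in theta (the "dominance" property), higher trait levels always make higher categories more likely, which is exactly the assumption the ideal point GGUM relaxes.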
- Title
- HETEROGENEOUS CATALYST FOR ALKANE DEHYDROGENATION AND IMPLEMENTING TO SOLID OXIDE FUEL CELL
- Creator
- Xu, Yunjie
- Date
- 2019
- Description
-
In the past decade, shale gas has become the most important source of natural gas in the United States. The large amounts of light alkanes in shale gas, such as methane, ethane, and propane, are available as an industrial source of chemicals through catalyzed, on-purpose light-alkane dehydrogenation to olefins. There is therefore a clear benefit to developing catalysts that directly convert shale gas to olefins. However, alkane dehydrogenation and non-oxidative methane coupling are thermodynamically unfavorable at low temperatures, and the energy requirements make these reactions less attractive for shale gas utilization. In principle, consuming the hydrogen product with a fuel cell can drive the thermodynamically unfavorable reaction by reducing the hydrogen partial pressure in the anode and by the heat generated by the fuel cell, while also generating electricity in the process. Moreover, integrating the catalyst with the fuel cell can facilitate charge transfer in the anode, which is the rate-determining step in the fuel cell. This thesis focuses on catalyst development for alkane dehydrogenation and on exploring ways to integrate these catalysts with fuel cells. Chapters 2, 3, and 4 focus on designing, characterizing, and studying catalysts for non-oxidative coupling of methane (NOCM) and propane dehydrogenation (PDH). PtM alloys (where M is a transition metal) were found to efficiently decrease the desorption energy of olefin products and avoid deeper C-H bond activation compared to metallic Pt. Based on a previous study of single cobalt atoms on silica, a novel synthesis of PtCo3 was developed to further increase PDH activity. The Pt bimetallic catalyst made by this novel synthesis route was shown to form one of several types of alloy. Extremely high PDH conversion and high selectivity toward the target olefin were achieved with PtCo3/SiO2. Ga was also investigated as a promoter to replace Co. As expected, a PtGa3 alloy was formed by a similar synthesis, and it showed extraordinary stability and activity for propane dehydrogenation. A Mo-Pt dual-metal catalyst was found to catalyze methane coupling even though Pt-Mo bimetallic alloys do not form. We hypothesize that Pt catalyzes C-H bond cleavage of CH4 to form methyl radicals, and that a MoOC species, formed by MoO3 reacting with CH4, effectively facilitates methyl-radical coupling to form larger alkanes and alkenes. The Pt-Mo dual-metal catalyst had higher catalytic activity for methane coupling than either a physical mixture of Pt and Mo or a genuine PtMo alloy. Chapter 5 details our efforts to transplant the PtM catalysts from the silica support to the target fuel cell material, (La,Sr)(Cr,Fe)O3, as a support. Different catalyst structures were observed, and in this case the second transition metal acts as a barrier preventing Pt aggregation. When propane was used as the fuel, electrochemical analysis showed that electrochemical redox reactions occurred. However, the cell resistance was comparatively high and limited overall system performance. Chapter 6 details a study of the impact of the electrode oxide phase on overall cell performance. Here we conducted a fundamental study of the degradation of the cathode material, (La,Sr)(Co,Fe)O3. We found that both the raw material and fabricated cells can degrade even at room temperature; thus, the storage of raw powder and fabricated cells is critical for performance studies. This also indicates that the high cell resistance in our earlier electrochemical measurements could stem from insulating compounds formed during storage. Some directions for future research on catalyst integration and electrochemical testing are outlined.
- Title
- LOW-DOSE CARDIAC SPECT USING POST-FILTERING, DEEP LEARNING, AND MOTION CORRECTION
- Creator
- Song, Chao
- Date
- 2019
- Description
-
Single-photon emission computed tomography (SPECT) is an important technique in use today for the detection and evaluation of coronary artery disease. Image quality in cardiac SPECT can be adversely affected by cardiac motion and respiratory motion, both of which can lead to motion blur and a non-uniform appearance of the heart wall. In this thesis, we investigate image-denoising algorithms and motion-correction methods for improving image quality in cardiac SPECT at both standard and reduced dose. First, we investigate a spatiotemporal post-processing approach based on a non-local means (NLM) filter for suppressing noise in cardiac-gated SPECT images. Since low-dose studies have gained increased attention in cardiac SPECT in recent years owing to the potential radiation risk, to further improve image quality at reduced dose we investigate a novel denoising method for low-dose cardiac-gated SPECT using a three-dimensional residual convolutional neural network (CNN). Furthermore, we investigate motion correction for respiratory-binned acquisitions to reduce the negative effect of respiratory motion, and we assess the benefit of this approach at both standard and reduced dose using simulated acquisitions. Inspired by the success of respiratory correction, we investigate the potential benefit of cardiac motion correction for improving the detectability of perfusion defects. Finally, to combine the benefits of these two types of motion correction, dual-gated data acquisitions are implemented, wherein the acquired list-mode data are binned into a number of intervals within the cardiac and respiratory cycles according to the electrocardiography (ECG) signal and the amplitude of the respiratory motion.
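The non-local means filter this abstract builds on replaces each sample with a weighted average of samples whose surrounding patches look similar, with weights decaying exponentially in patch distance. A minimal 1-D sketch (assuming a Gaussian weight kernel and a limited search window; parameter names and values are illustrative, not the thesis's spatiotemporal variant):

```python
import numpy as np

def nlm_1d(signal, patch=3, search=10, h=0.5):
    """Toy 1-D non-local means denoising.

    Each output sample is a weighted average of nearby samples; the
    weight of sample j at position i is exp(-d2(i, j) / h**2), where
    d2 is the mean squared difference between their length-`patch`
    neighborhoods. `search` limits how far away candidates may be.
    """
    n = len(signal)
    pad = patch // 2
    x = np.pad(signal, pad, mode="reflect")
    patches = np.stack([x[i:i + patch] for i in range(n)])  # (n, patch)
    out = np.empty(n)
    for i in range(n):
        lo, hi = max(0, i - search), min(n, i + search + 1)
        d2 = ((patches[lo:hi] - patches[i]) ** 2).mean(axis=1)
        w = np.exp(-d2 / h ** 2)
        out[i] = (w * signal[lo:hi]).sum() / w.sum()
    return out
```

The strength parameter `h` plays the role of the noise level: larger `h` averages more aggressively, trading noise suppression against loss of genuine structure, which is why the thesis extends the weighting to the spatiotemporal (gated) domain.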