Search results
(781 - 800 of 1,037)
- Title
- Gas Turbine Vane Heat Transfer and Cooling Under Freestream Turbulence
- Creator
- Kanani, Yousef
- Date
- 2020
- Description
The effects of inflow turbulence on the fluid flow and heat transfer of a gas turbine passage flow have been investigated using wall-resolved large eddy simulations. Numerical simulations are conducted in a linear vane cascade at inflow turbulence levels of up to 12.4% and a nominal exit-chord Reynolds number of 500,000. At this Reynolds number and without any inflow turbulence, the boundary layer remains laminar on both sides of the vane. The presence of velocity disturbances at the inlet augments heat transfer on the leading edge and pressure side, triggers transition to turbulence over the suction side, and alters the structure of the secondary flow in the turbine passage.

Detailed analysis of the flow field indicates the formation of large-scale structures that wrap around the large leading edge and extend onto both the suction and pressure sides of the vane. These structures disturb the boundary layer and form streaky structures that augment heat transfer on the pressure side. The perturbed boundary layer on the suction side eventually breaks down to turbulence through the inner-mode secondary instability reported earlier in a handful of studies.

The vane and endwall heat transfer in regions affected by the secondary flows in the turbine passage are also studied in detail. A new account of the origin and evolution of the passage vortex is presented: in the current geometry, the passage vortex originates from the pressure-side passage circulation and not from the pressure leg of the horseshoe vortex at the leading edge. Furthermore, it is observed that the distribution of the heat transfer coefficient on the endwall is significantly altered by changes in the level of freestream turbulence and the approach boundary layer thickness. Finally, the effect of freestream turbulence on the effectiveness of a slot cooling system in a symmetrical airfoil is studied.
These large eddy simulations are conducted at a Reynolds number of 250,000 (based on the approach velocity and the leading edge diameter) and freestream turbulence levels of up to 13.7%. The predictions capture the decay of film cooling effectiveness at higher turbulence levels, caused by increased mixing of the incoming hot gases with the coolant. It has been shown that the arrays of pin fins in the preconditioning section of the slot cooling system play a major role in the near-field film cooling effectiveness and surface temperature distribution.
- Title
- WHY AND WHY-NOT PROVENANCE FOR QUERIES WITH NEGATION
- Creator
- Lee, Seokki
- Date
- 2020
- Description
Explaining why an answer is in the result of a query, or why it is missing from the result, is important for many applications including auditing, debugging data and queries, hypothetical reasoning about data, and data exploration. Both types of questions, i.e., why and why-not provenance, have been studied extensively, but mostly in isolation. A recent study shows that why and why-not provenance can be unified by developing a provenance model for queries with negation. In many complex queries, negation is natural and yields more expressive power; supporting both types of provenance together with negation is therefore useful for, e.g., debugging (missing) data over complex queries. However, why-not provenance and, to a lesser degree, why provenance can be very large, resulting in severe scalability and usability challenges.

In this thesis, we introduce a framework that unifies why and why-not provenance. We develop a graph-based provenance model that is powerful enough to encode the evaluation of queries with negation (first-order queries). We demonstrate that our model generalizes a wide range of provenance models from the literature. Using our model, we present the first practical approach that efficiently generates explanations, i.e., the parts of the provenance that are relevant to the query outputs of interest. Furthermore, we present a novel approximate summarization technique to address the scalability and usability challenges. Our technique efficiently computes pattern-based provenance summaries that balance informativeness, conciseness, and completeness. To achieve scalability, we integrate sampling techniques into provenance capture and summarization. We implement these techniques in our PUG (Provenance Unification through Graphs) system, which runs on top of a relational database.
We demonstrate through extensive experiments that our approach scales to large datasets and produces comprehensive and meaningful (summaries of) provenance.
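To make the why/why-not distinction concrete, the following sketch computes witnesses and failed derivations for one conjunctive query over toy relations. It is a minimal illustration of the underlying idea only, with hypothetical data; it is not the PUG system's graph model, which additionally handles negation and summarization.

```python
# Toy why/why-not provenance for the conjunctive query
#   Q(x, z) :- R(x, y), S(y, z)
# Hypothetical data; a minimal sketch of the idea, not the PUG system.

R = {("a", 1), ("a", 2), ("b", 2)}
S = {(1, "p"), (2, "q")}

def why(answer):
    """Witnesses: pairs of input tuples that jointly derive `answer`."""
    x, z = answer
    return {(r, s) for r in R for s in S
            if r[0] == x and s[1] == z and r[1] == s[0]}

def why_not(answer):
    """Failed derivations: for each candidate join value y, the missing
    input tuples that prevent `answer` from being derived."""
    x, z = answer
    failures = []
    for y in sorted({r[1] for r in R} | {s[0] for s in S}):
        missing = [t for t in [("R", (x, y)), ("S", (y, z))]
                   if (t[0] == "R" and t[1] not in R)
                   or (t[0] == "S" and t[1] not in S)]
        if missing:
            failures.append((y, missing))
    return failures

print(why(("a", "q")))      # one witness: R(a,2) joined with S(2,q)
print(why_not(("b", "p")))  # every candidate y lacks at least one tuple
```

A why-not answer is naturally much larger than a why answer, since it enumerates everything that could have produced the tuple but did not; this is the blow-up that the thesis's summarization technique addresses.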
- Title
- Foregrounding Temporality to Design with Emerging Futures
- Creator
- Heidaripour, Maryam
- Date
- 2020
- Description
The rhetoric of today's economy has framed entrepreneurship as a key contributor to inventing the future, which raises questions about who counts as an insider, how the future is being designed, and for whom. Future-making has too long been concentrated in the hands of a few, given the future's tremendous impact on the many. This dissertation joins a growing body of scholarly explorations of channeling design capacity to transition toward a future with a plural world system, where the economy offers a multiplicity of possibilities. Central to this exploration is rethinking how shaping futures might be done differently, with different people, and in different forms.

By incorporating feminist temporality, I challenge the established mode of design investigation. My empirical chapters demonstrate the ways in which sharpening our temporal sensitivity can change what we study, how we study it, and what we can find. In particular, I rearrange the power dynamics in design activities by opening up the position of knower to emerging collectives. I then introduce the concept of designing a time-space yet to come that makes you wonder: an open invitation to rethink who we are and what we want to become.

While it remains to be seen whether this contribution will have a meaningful impact on design knowledge, I argue that it makes a solid case for incorporating feminism in design. Feminist theory offers the theoretical underpinning for an ontological reframing of design and helps us understand what other forms of design practice are emerging in this era of increasing complexity. I conclude with my take on an emerging design practice whose fundamental element is to enable other ways of knowing to inquire about what they truly want to become.
- Title
- Socially Responsible Investing and Style Investing
- Creator
- He, Di
- Date
- 2020
- Description
This study focuses on two popular investment strategies. The first combines socially responsible investing and factor investing (SRIF): factor-investing portfolios are compared with their corresponding ESG-screened counterparts to determine whether being responsible in factor investing carries an opportunity cost or benefit. An opportunity cost is deemed to exist if the ESG-screened factor portfolios have lower raw returns, Sharpe ratios, and risk-adjusted returns than their respective unscreened factor portfolios. Beyond this comparison, I also build an empirical SRI strategy that achieves genuine outperformance. In the second strategy, investing in R&D-intensive (high-technology) stocks yields significantly positive alpha over 40 years; however, the alphas decrease markedly after the "Tech Bubble," because investors now prefer technology firms that produce real profits. I provide empirical evidence that both risk bearing and investor sentiment play important roles in the positive association between R&D intensity and excess returns.

For the first (SRIF) strategy, five widely accepted factors from the academic literature (value, size, profitability, investment, and momentum) are used to construct single-factor portfolios as benchmarks, which naturally mitigates, to some extent, the benchmark and factor biases of previous studies. To reflect the practices and constraints of the investment industry, the study also covers a multi-factor framework and constructs different long-short positions. After accounting for ESG measurement (ESG_net and ESG_Industry, the latter calibrating for industry bias), sample period (whole period and subperiod), portfolio weighting (equal and capitalization weighted), and excluding undiversified portfolios, there are in total 192 comparisons between factor-investing portfolios and their ESG-screened counterparts for each performance measure. The results suggest that most investors (80%-90%) must bear statistically insignificant opportunity costs if they want to be socially responsible in factor investing. Moreover, the opportunity costs in the subperiod (2004-2017) are markedly smaller than those in the whole period (1992-2017), indicating a clear "time effect": opportunity costs have shrunk in recent years as more and more ESG information is disclosed. To address industry effects empirically, I build a double-sorted factor portfolio on profitability and value, and its ESG-screened version outperforms the single-factor portfolios.

For the second study, R&D expense is a key component of investment, and a long line of literature claims a positive relationship between R&D and stock returns, with mispricing and risk bearing as the two main explanations. Splitting the sample into the periods before and after the "Tech Bubble," this study finds that mispricing is weaker after the bubble than before, while risk bearing is persistent. In addition, excess returns are relatively high before the "Tech Bubble" for technology securities that are highly subjective to value and difficult to arbitrage (small, high-volatility, unprofitable, and non-dividend-paying stocks), but almost vanish afterward; investor sentiment therefore does exist. For technology securities with genuine earnings, by contrast, excess returns are persistent, indicating compensation for risk bearing.
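The double-sort construction described above can be sketched as follows, with a synthetic cross-section and a simple ESG exclusion screen. The breakpoints, return model, and data are illustrative assumptions, not the study's actual design or sample.

```python
import numpy as np

# Hypothetical cross-section of 200 stocks: profitability, value (e.g. book-to-
# market), an ESG score, and a next-period return that loads on both factors.
rng = np.random.default_rng(42)
n = 200
profit = rng.normal(size=n)
value = rng.normal(size=n)
esg = rng.random(n)
ret = 0.02 * profit + 0.02 * value + rng.normal(scale=0.03, size=n)

def double_sort_ls(profit, value, ret, mask=None):
    """Equal-weight long-short return from an independent double sort:
    long the high-profit/high-value corner (top 30% on both), short the
    low/low corner.  `mask` optionally restricts the universe, e.g. to
    names passing an ESG screen.  A sketch, not the study's exact rules."""
    if mask is None:
        mask = np.ones_like(ret, dtype=bool)
    p, v, r = profit[mask], value[mask], ret[mask]
    hi_p, lo_p = p >= np.quantile(p, 0.7), p <= np.quantile(p, 0.3)
    hi_v, lo_v = v >= np.quantile(v, 0.7), v <= np.quantile(v, 0.3)
    return r[hi_p & hi_v].mean() - r[lo_p & lo_v].mean()

base = double_sort_ls(profit, value, ret)
screened = double_sort_ls(profit, value, ret, mask=esg >= np.quantile(esg, 0.3))
```

Comparing `base` with `screened` over many periods is the shape of the opportunity-cost comparison: the screen shrinks the universe, and the question is whether the long-short spread deteriorates significantly.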
- Title
- MEN, WOMEN, AND LEADERS: THE EFFECT OF GENDER-LEADER CATEGORY CONGRUENCE ON SUPERVISOR EVALUATIONS
- Creator
- Lauritsen, Matthew William
- Date
- 2020
- Description
Researchers employing Schein's (1973, 1975) paradigm ubiquitously conclude that the greater conceptual distance between leaders and women, compared to leaders and men, is problematic for women in leadership roles. Six hundred eighty participants were recruited from MTurk to rate men, women, and leaders on agency and communion. Using polynomial regression analysis, the category congruence hypothesis was tested with two theories as interpretive frameworks: implicit leadership theory (ILT) and role congruity theory (RCT). A strict congruence effect was not found for any of the models. The results generally supported ILT: supervisor evaluations were highest when perceived supervisor characteristics exceeded the respondents' leader category expectations. The results did not support RCT's hypothesis about the negative effects of incongruence between the woman and leader categories; supervisor evaluations were highest when respondents held traditional gender stereotypes, not when those stereotypes were congruent with the leader prototype. However, a general incongruence effect was found between male communion stereotypes and leader communion stereotypes, leading to lower evaluations of male supervisors. That is, for male supervisors, the highest ratings were associated with high communion ratings of both the men and leader categories. The results of this study are further discussed in relation to gender-leader category congruence and leadership.
- Title
- HOW DO SECONDARY STUDENTS MAKE DECISIONS ON SOCIOSCIENTIFIC ISSUES: WHAT DO THEY CONSIDER IMPORTANT?
- Creator
- LePretre, Dawnne M
- Date
- 2019
- Description
Linking science and action is the epitome of scientific literacy (Hurd, 1972; Kuhn, 1972; Watson, 1969). Before becoming acting citizens, students need to balance subject matter knowledge, personal values, and societal norms in decision-making (DM) on socioscientific issues (SSI) (Aikenhead, 1985; Grace & Ratcliffe, 2002; Kolstø, 2001; Zeidler, 1984). The existing literature offers a variety of models and strategies for guiding how students should think about SSI topics, rather than beginning with what students are actually thinking about SSI. This study aimed to identify the DM factors students considered across a variety of SSI and to determine whether those factors were common across topics or specific to an SSI. Students in grades 10-12 participated from seven schools and ten regular science classrooms, primarily located in a large Midwest city (n=498). The sample was 50% female and 50% male, with roughly a third of students from each grade level.

Across 60 enacted lessons on six different SSI topics, multiple sources of data were collected, including student artifacts, audiotapes of class discussions and interviews, field notes, and teacher surveys. Students engaged in at least three different SSI topical lessons, implemented over one to nine weeks with an average instructional time of 115 minutes per topic. Decision-making differed across students in various groupings, indicating that secondary students used both general and specific factors when making decisions on SSI. Further, trends emerged indicating that various student groups valued DM factors differently. On several topics, students of different genders, grade levels, ethnicities, and school types considered different DM factors with different levels of support. For example, on the topic of plastics and pollution, 10th grade, female, and Hispanic students tended to identify concern for animals and sea life as their most prominent DM factor.
Another trend was that students in larger classes tended to cite more DM factors on a topic than students in smaller classes engaged with the same topic. Overall, 15 common or shared DM factors emerged that students considered when making decisions across multiple SSI contexts. In addition, each specific SSI context had between one and 15 specific or exclusive DM factors cited directly by students in this study.
- Title
- DATA SHARING WITH PRIVACY AND SECURITY
- Creator
- Qian, Jianwei
- Date
- 2019
- Description
Data is a non-exclusive resource with synergistic effects. Open data sharing would enhance the utilization of big data's value and tremendously boost economic growth and transparency. Data sharing platforms have emerged worldwide, but with very limited services; security is one of the main reasons why most data are not commonly shared. This dissertation aims to tackle several security issues in building a trustworthy data sharing ecosystem. First, I reveal the privacy risks in data sharing by designing de-anonymization and privacy inference attacks. Second, I analyze the relationship between the attacker's knowledge and the privacy risk of data sharing, and attempt to quantify and estimate that risk. Then, I propose anonymization algorithms to protect the privacy of participants in data sharing. Finally, I survey the status quo, privacy and security concerns, and opportunities in data trading. This dissertation covers various data types, with a focus on graph data and speech data, and various forms of data sharing, including collection, publishing, query, and trading.
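As a deliberately simple illustration of the kind of guarantee anonymization algorithms aim for, the sketch below checks k-anonymity over quasi-identifiers. The records are hypothetical, and the dissertation's own algorithms for graph and speech data are substantially more involved; this shows only the basic criterion.

```python
from collections import Counter

def is_k_anonymous(records, quasi_ids, k):
    """True if every combination of quasi-identifier values appears in at
    least k records, so no individual is uniquely re-identifiable from
    those attributes alone.  A textbook criterion, not the dissertation's
    specific algorithm."""
    groups = Counter(tuple(r[q] for q in quasi_ids) for r in records)
    return all(count >= k for count in groups.values())

# Hypothetical generalized records (ZIP truncated, age bucketed).
rows = [
    {"zip": "606**", "age": "30-39", "diag": "flu"},
    {"zip": "606**", "age": "30-39", "diag": "cold"},
    {"zip": "607**", "age": "40-49", "diag": "flu"},
]
print(is_k_anonymous(rows, ["zip", "age"], 2))  # False: the 607** group has one record
```

De-anonymization attacks of the kind the dissertation designs exploit exactly the failures this check exposes: a quasi-identifier group of size one links an "anonymous" record back to a person.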
- Title
- Sustainable Solutions in Complex Spaces of Innovation
- Creator
- Nogueira, André Martins
- Date
- 2019
- Description
Even though the interconnectivity between human activities and the integrity of ecological systems has long been recognized, design practices that account for such interconnectivity are relatively new. Contemporary institutions and their arrangements were not designed in accordance with their potential to promote sustainable and equitable flows of different types of resources; they lack the capability and structure to operate at the speed and scale at which humans dynamically interact with one another and with the natural environment. As the world passes the 7.5 billion mark, this condition generates unintended socio-ecological-technical consequences, amplified by the fast-changing technology industry. New lenses and models for understanding the connectivity of the social, ecological, and technical systems underlying contemporary institutional arrangements are required to advance expertise in redirecting the flow of different types of resources for the sustainability of these systems. However, how humans perceive systems is largely framed by who is included in the discussion and by the experiences and interests they bring to bear. Even though there will always be a discrepancy between the perceived system and the actual system at play, that perception can be expanded by drawing more deeply on systems thinking and the notion of resources. This dissertation advances design knowledge that bridges the gap between theoretical discourses and the pragmatism necessary to intervene in socio-ecological-technical dynamics: it explores how designers might embed principles of sustainability into choice-making processes for innovation, and it proposes a new approach through which designers can advance their practices in enabling more sustainable flows of resources.
- Title
- Nanopore Detection of Heavy Metal Ions
- Creator
- MohammadiRoozbahani, Golbarg
- Date
- 2019
- Description
Nanopore sensing is an emerging analytical technique for measuring single molecules. Under an applied potential bias, analyte molecules are transported through the nanopore and cause ionic current modulations; the fingerprint of the analyte is thus reflected in the signature of the current blockage events. Owing to advantages such as label-free and multi-analyte detection, nanopore sensing has become an attractive and versatile tool for studying a variety of topics, including biosensing of species such as DNA, RNA, proteins, peptides, anions, and metal ions.

Metal ions play a crucial role in human health and environmental safety. Although metal ions are essential for numerous biological processes, the presence of the wrong metal, or of an essential metal in the wrong concentration or location, can lead to undesirable results and serious health concerns, including antibiotic resistance, metabolic disorders, mental retardation, and even cancer. It therefore remains of prime importance to develop highly sensitive and selective sensors for metal ions.

This dissertation first discusses various nanopore sensing strategies for detecting metal ions: a) constructing metal ion binding sites on the nanopore inner surface; b) using a biomolecule as a ligand probe; and c) employing enzymatic reactions. Three projects are then summarized. Two involve the detection of the non-essential metal ions uranyl and thorium, while the third targets an essential element, the zinc ion. Specifically, uranyl and thorium ions are detected by taking advantage of peptide molecules as ligand probes: the event signatures of the peptide molecules in the nanopore differ significantly in the absence and presence of the metal ions, which may be attributed to the conformational change of the biomolecules induced by the metal ion-biomolecule interaction.
Zinc ion, on the other hand, is detected via an enzymatic reaction: without Zn2+, ADAM17 (a zinc-dependent protease) is inactive and cannot cleave peptide substrate molecules; with Zn2+ in the solution, the enzyme is activated, and its cleavage of the peptide substrate produces new types of blockage events with smaller residence times and amplitudes than those of the intact peptide substrate.
- Title
- STAKEHOLDER FEEDBACK ON A NOVEL EMOTION REGULATION INTERVENTION FOR PRESCHOOL-AGE CHILDREN WITH DISRUPTIVE BEHAVIOR PROBLEMS: A THEMATIC ANALYSIS
- Creator
- Lossia, Amanda
- Date
- 2019
- Description
Disruptive behavior disorders are among the most prevalent psychological disorders in preschoolers. Evidence-based treatments exist, but clinically significant behavior problems persist in approximately one-fourth to one-third of children after treatment. These treatments consist of behavioral parenting interventions and are not designed to directly address children's affective dysregulation, a core component of behavior problems. To address this limitation, a manualized intervention was developed to treat disruptive behavior in preschool-age children by specifically targeting their emotion regulation abilities as the mechanism of change, coaching the caregiver to scaffold the child's use of emotion regulation strategies. The purpose of the present study was to further the development of this intervention by obtaining feedback from key stakeholders (i.e., caregivers and therapists) on the intervention's focus, content, and procedures, an essential component of developing a novel psychosocial intervention. A qualitative thematic analysis of in-depth focus group discussions was conducted.
Data were organized into the following broad themes:
- Intervention approach: support for targeting emotion regulation, while ensuring the approach is an appropriate fit and considering the important role of behavioral strategies; additional focus on facilitating a positive caregiver-child relationship; developing some independent regulation skills in the child.
- Intervention structure and session content: making the intervention structure more flexible or modular; retaining the main intervention components with modifications to enhance acceptability, relevance, and developmental appropriateness.
- The caregiver's role: the caregiver's role is of primary importance and should be active throughout all sessions; ensuring adequate caregiver preparation and skill development; additional primary focus on facilitating the caregiver's own emotion regulation; attention to the caregiver's own therapeutic needs.
- Individualized approach: individualizing the content and timing of all sessions to account for individual needs.
- Generalizability: ensuring generalization of skills to home and other settings through effective at-home practice and including other primary caregivers and family members in sessions.
- Learning and skill development: considering individual differences in how children and caregivers learn and modifying activities accordingly.

These themes and stakeholders' specific feedback will guide revisions to the intervention manual prior to pilot testing and further examination of efficacy and effectiveness.
- Title
- The Role of Ethnic Similarity, Perceived Communication Style Deviation, and Cultural Intelligence in Leader-Member Exchange and Trust
- Creator
- Polyashuk, Yelena
- Date
- 2019
- Description
This study examined the factors that contribute to a better working relationship between a leader and a subordinate, or that make that relationship challenging. Specifically, we investigated the effect of the ethnic configuration of the leader-subordinate dyad and of perceived dissimilarity on Leader-Member Exchange (LMX) and trust. Communication style deviation was tested as a mediator between both actual and perceived dissimilarity and relational outcomes. Cultural Intelligence (CQ) was included as a moderator whose presence could ameliorate the negative impact of dissimilarity on LMX and trust. To test these predictions, a survey was administered to 614 participants, all working students at an urban, Midwestern, public university. Results showed that among respondents with low CQ, ethnic dissimilarity had a negative impact on LMX; however, no impact of ethnic similarity or dissimilarity on trust was found. The specific composition of the leader-subordinate dyad had no significant impact on LMX or trust. Finally, communication style deviation partially mediated the relationship between perceived dissimilarity and the two outcome variables, LMX and trust. These findings reveal that to build a high-quality relationship within an ethnically diverse leader-subordinate dyad, both CQ and alignment in communication style are of consequence.
- Title
- Fast Automatic Bayesian Cubature Using Matching Kernels and Designs
- Creator
- Rathinavel, Jagadeeswaran
- Date
- 2019
- Description
Automatic cubatures approximate multidimensional integrals to user-specified error tolerances. In many real-world integration problems, the analytical solution is either unavailable or difficult to compute, so numerical algorithms are used to estimate the value of the integral. For high dimensional integrals, quasi-Monte Carlo (QMC) methods are very popular. QMC methods are equal-weight quadrature rules whose points are chosen deterministically, unlike Monte Carlo (MC) methods, where the points are chosen randomly. Integration lattice nodes and digital nets are the most popular families of quadrature points. These methods treat the integrand as a deterministic function.

An alternative approach, called Bayesian cubature, postulates the integrand to be an instance of a Gaussian stochastic process. For high dimensional problems it is difficult to adaptively change the sampling pattern, but one can automatically determine the sample size, $n$, given a fixed and reasonable sampling pattern. We take this approach from a Bayesian perspective. We assume a Gaussian process parameterized by a constant mean and a covariance function defined by a scale parameter and a function specifying how the integrand values at two different points in the domain are related. These parameters are estimated from integrand values or are given non-informative priors. This leads to a credible interval for the integral, and the sample size, $n$, is chosen to make the credible interval for the Bayesian posterior error no greater than the desired error tolerance. However, the process just outlined typically requires matrix operations with a computational cost of $O(n^3)$. Our innovation is to pair low discrepancy nodes with matching kernels, which lowers the computational cost to $O(n \log n)$.
The thesis begins by introducing the Bayesian approach to computing the posterior cubature error and defines our automatic Bayesian cubature. Although much of this material is known, it develops the necessary foundations. The major contributions of this thesis include the following: 1) the fast Bayesian transform is introduced, generalizing the techniques that speed up Bayesian cubature when the kernel matches the low discrepancy nodes; 2) the fast Bayesian transform approach is demonstrated with two pairings, a) rank-1 lattice sequences with shift-invariant kernels and b) Sobol' sequences with Walsh kernels, both implemented as fast automatic Bayesian cubature algorithms in the Guaranteed Automatic Integration Library (GAIL); and 3) additional numerical implementation techniques are developed: a) rewriting the covariance kernel to avoid cancellation error, b) gradient descent for the hyperparameter search, and c) non-integer kernel order selection.

The thesis concludes by applying our fast automatic Bayesian cubature algorithms to three sample integration problems. We show that our algorithms are faster than basic Bayesian cubature and provide answers within the error tolerance in most cases. The Bayesian cubatures that we develop are guaranteed for integrands belonging to a cone of functions residing in the middle of the sample space; the concept of a cone of functions is also explained briefly.
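A minimal sketch of the $O(n^3)$ "basic" Bayesian cubature that the fast transform accelerates: with a product kernel built from the Bernoulli polynomial $B_2$, the kernel integrates to one over the unit cube, and the posterior mean of the integral reduces to a generalized-least-squares estimate of the constant prior mean. The plain random points, fixed shape parameter, and jitter term are simplifying assumptions; the thesis instead pairs lattice or Sobol' points with matching kernels and estimates the hyperparameters.

```python
import numpy as np

def b2(u):
    """Bernoulli polynomial B2 on [0, 1)."""
    return u * u - u + 1.0 / 6.0

def bayes_cubature(f, n, d, gamma=1.0, seed=0):
    """Posterior mean of the integral of f over [0,1]^d under a GP prior
    with the shift-invariant kernel
        K(x, t) = prod_j (1 + gamma * B2({x_j - t_j})),
    whose integral over t is identically 1.  Cost is O(n^3): this is the
    slow baseline that matched kernels and nodes reduce to O(n log n)."""
    rng = np.random.default_rng(seed)
    x = rng.random((n, d))                 # plain random points (a sketch)
    y = np.array([f(xi) for xi in x])
    diff = (x[:, None, :] - x[None, :, :]) % 1.0
    K = np.prod(1.0 + gamma * b2(diff), axis=2)
    K += 1e-10 * np.eye(n)                 # jitter for conditioning
    kinv_1 = np.linalg.solve(K, np.ones(n))
    kinv_y = np.linalg.solve(K, y)
    # Because the kernel integrates to 1, the posterior mean of the
    # integral equals the GLS estimate of the constant prior mean.
    return kinv_y.sum() / kinv_1.sum()

# Integral of prod_j (1 + 0.1*(x_j - 0.5)) over [0,1]^2 is exactly 1.
est = bayes_cubature(lambda x: np.prod(1.0 + 0.1 * (x - 0.5)), n=64, d=2)
```

The stopping rule described above would wrap this in a loop, growing $n$ until the posterior credible interval is within the tolerance; the `solve` calls are the $O(n^3)$ bottleneck that the fast Bayesian transform replaces with FFT-type operations.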
- Title
- A SYSTEMATIC APPROACH TO UNDERSTANDING ALIGNMENT BETWEEN THE EXISTING AND SELF-ADOPTED ENVIRONMENTAL EDUCATION STANDARDS: UNITED STATES SIXTH TO TWELFTH GRADE ENVIRONMENTAL SCIENCE STANDARDS
- Creator
- Connell, Margaretann Grace
- Date
- 2019
- Description
The purpose of this thesis was to conduct a systematic approach to determining the alignment between existing and self-adopted 6th-12th grade environmental education (EE) science standards for 10 U.S. states (6th-8th: AZ, ID, MA, WY; 9th-12th: NE, NYS, OH, PA, SC, TX). The criteria for state selection were 1) states with SASS (non-NGSS adoption) and 2) demographics: random selection from the 10 U.S. EPA Regions. The Existing Environmental Education Standards (EEES) (GCDEE, Hungerford et al., 1980; NAAEE Guidelines, Simmons, 2010a; Tbilisi, UNESCO, 1978) were aligned with the 10 states. The investigation was conducted using a DCA (Mayring, 2002). Data were analyzed with MAXQDA 2018.1 (VERBI, 2017), judged by a Content Match (La Marca et al., 2000), and measured by adapted criteria for Categorical Concurrence and Range of Knowledge Correspondence (Webb, 1999). The instruments used to score the output were 1) the CEEI (Tbilisi/GCDEE, K-12) and 2) the EEI (NAAEE Guidelines, 6-8 and 9-12). Results for the Content Match of the EEES revealed that 50% of the states were Partly Aligned and the other 50% Not Aligned with the NAAEE Guidelines code coverage. The Content Match with Tbilisi/GCDEE revealed that 20% of the states (OH, PA) were Fully Aligned and the other 80% Partly Aligned. The states' science standards reached appropriate levels of alignment owing to the scientific specificity of those states with implicit EE standards. Moreover, complete alignment is difficult to expect given the socioecological approaches and interdisciplinary nature (Kyburz-Graber, 2013; Simmons, 2010a) of the EEES. It is therefore left to policymakers at the state level to work with stakeholders and reach a consensus in support of EE standards that are relevant, fair, and balanced, with multidisciplinary, socioecological approaches that promote an environmentally literate citizenry.
- Title
- KINETIC MODEL FRAMEWORKS OF ANIMAL CELL CULTURES FOR CONTROL AND OPTIMIZATION
- Creator
- Yilmaz, Denizhan
- Date
- 2019
- Description
-
This dissertation proposes four kinetic model frameworks developed for the optimization and control of monoclonal-antibody-producing mammalian cell cultures, aiming to improve biopharmaceutical production by decreasing the cost of trial-and-error experimentation. The developed models describe the transient metabolic behavior of mammalian cell cultures under different culture conditions and predict cell growth and death, cell metabolism, and monoclonal antibody synthesis and production. The models are formulated as ordinary differential equations under the assumption of a well-mixed reactor. All developed models were calibrated, and their predictive capabilities were tested against experimental reports published in the literature. Good agreement was obtained between model predictions and experimental data. The presented results illustrate that the developed models successfully describe and predict the transient behavior of mammalian cell cultures and can serve as a useful tool for biopharmaceutical production.
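To illustrate the kind of ODE framework the abstract describes, here is a minimal sketch of a batch mammalian cell culture model with Monod-type growth kinetics, integrated by forward Euler under the well-mixed-reactor assumption. The kinetic form and all parameter values are illustrative assumptions, not the dissertation's calibrated models.

```python
# Minimal batch-culture kinetic sketch: Monod growth, substrate
# consumption, and growth-associated antibody production, integrated
# with forward Euler. Parameters are illustrative, not fitted.

def simulate(t_end=120.0, dt=0.01,
             mu_max=0.04, Ks=1.0, kd=0.002,       # growth/death kinetics (1/h, mM)
             Yxs=1.0e6, qp=1.0e-9,                # yield and specific productivity
             Xv0=2.0e5, S0=25.0, P0=0.0):
    """Integrate dXv/dt=(mu-kd)*Xv, dS/dt=-mu*Xv/Yxs, dP/dt=qp*Xv."""
    Xv, S, P = Xv0, S0, P0
    for _ in range(int(t_end / dt)):
        mu = mu_max * S / (Ks + S)        # Monod specific growth rate (1/h)
        dXv = (mu - kd) * Xv              # net growth minus death
        dS = -mu * Xv / Yxs               # substrate uptake (mM/h)
        dP = qp * Xv                      # antibody synthesis (g/L/h)
        Xv += dXv * dt
        S = max(S + dS * dt, 0.0)         # substrate cannot go negative
        P += dP * dt
    return Xv, S, P

Xv, S, P = simulate()
print(f"Xv={Xv:.3e} cells/mL, S={S:.2f} mM, P={P:.4f} g/L")
```

In practice such a model would be calibrated against published experimental time courses, and a stiff ODE solver would replace the fixed-step Euler loop.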
- Title
- TRANSIENT STABILITY SIMULATION OF COMBINED THREE-PHASE UNBALANCED TRANSMISSION AND DISTRIBUTION NETWORKS
- Creator
- Alsharief, Yagoob
- Date
- 2019
- Description
-
Historically, transmission (T) and distribution (D) system analysis has been performed separately. The main reasons are 1) different modeling frameworks, i.e., positive-sequence versus three-phase unbalanced, 2) system size, and 3) lack of dynamic two-way interaction between T&D. A typical power system consists of tens of thousands of transmission buses and thousands of distribution feeders with hundreds of customers per feeder. In the past, distribution networks were largely passive, with relatively little dynamic interaction with the transmission network. However, given the trends the electric grid has witnessed in the last decade with the installation of distributed energy resources (DERs) at the distribution level, such as behind-the-meter generation, energy storage units, and electric vehicles, dynamic simulation tools for combined T&D will become necessary in the near future. These tools will aid system operators and planning engineers in understanding the impact of these new trends on large-scale power systems. Taking advantage of advancements in high-performance and parallel computing could enable accurate, wide-area T&D dynamics simulation. These comprehensive simulation capabilities would dramatically improve our ability to predict the complex interactions among DERs, customer loads, and traditional utility control devices, thereby allowing higher penetrations of renewable energy, electric vehicles, and energy storage.
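The two-way T&D interaction described above is commonly handled by exchanging boundary-bus quantities between the two solvers until they agree. The following toy sketch (not the dissertation's method) shows that fixed-point pattern: a hypothetical one-line transmission model returns the boundary voltage for a given feeder demand, and a constant-impedance feeder model returns demand for a given voltage.

```python
# Toy fixed-point coupling at a T&D boundary bus. The transmission
# model returns the boundary voltage for a given feeder demand, and
# the distribution model returns demand for a given voltage.
# All numbers are illustrative per-unit values, not a real network.

def transmission_voltage(p_load, v_source=1.02, sensitivity=0.05):
    """Boundary voltage sags linearly with feeder demand (toy model)."""
    return v_source - sensitivity * p_load

def distribution_demand(v_boundary, p_nominal=1.0):
    """Constant-impedance feeder: demand scales with voltage squared."""
    return p_nominal * v_boundary ** 2

def cosimulate(tol=1e-8, max_iter=100):
    """Alternate the two solvers until the boundary state converges."""
    p = 1.0                                   # initial demand guess (p.u.)
    for i in range(max_iter):
        v = transmission_voltage(p)           # T-side solve
        p_new = distribution_demand(v)        # D-side solve
        if abs(p_new - p) < tol:
            return v, p_new, i + 1
        p = p_new
    raise RuntimeError("boundary iteration did not converge")

v, p, iters = cosimulate()
print(f"converged in {iters} iterations: V={v:.5f} p.u., P={p:.5f} p.u.")
```

A real combined T&D dynamics tool replaces each toy function with a full positive-sequence transient solver and a three-phase unbalanced feeder solver, exchanging the same boundary quantities at every time step.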
- Title
- DUST MITIGATION OF MICRO-STRUCTURED (GECKO-LIKE) ADHESIVES
- Creator
- Alizadehyazdi, Vahid
- Date
- 2019
- Description
-
Controllable adhesives (i.e., those capable of being turned on and off) are used in a wide range of applications, including robotic grippers and climbing robots. Electromagnets, suction, and microspines have been used to meet this demand, but are typically limited to a specific substrate roughness or material. Microstructured (gecko-like) adhesives, on the other hand, offer the potential to be the most universal among controllable adhesives, since they can work on a wide variety of surfaces. The development of microstructured adhesives has focused almost solely on their adhesive strength. However, for practical applications, especially in real-world environments, the adhesive's long-term performance is arguably equally important. One impediment to long-term viability is the adhesive's susceptibility to contamination, which decreases adhesion significantly. For microstructured adhesives to be practical in real-world environments, the detrimental effect of dust and other contaminants must be dealt with. The first general approach involves removing adhered dust particles. The second approach is to create adhesives that minimize dust adsorption, such that extensive cleaning is not necessary or particles can be removed easily. Regarding the first approach, this research describes the use of electrostatic forces and ultrasonic vibration to repel dust particles. The result is a set of non-destructive, non-contact cleaning methods that can be used in conjunction with other cleaning techniques, many of which rely on physical contact between the fibrillar adhesive and the substrate. Electrostatic cleaning results show that a two-phase square wave with the lowest practically feasible frequency cleans best. Combining electrostatic and ultrasonic cleaning yields far higher efficiency than electrostatic repulsion or ultrasonic vibration alone.
Moreover, I showed that the piezoelectric element in the ultrasonic cleaning method can also be used as a releasing mechanism to turn the adhesive off and as a force/contact sensor. Regarding the second approach, I experimentally explored the effect of the modulus of elasticity, work of separation, and work of adhesion (adhesion energy) on the shear stress and particle-detachment capabilities of microstructured adhesives. Particle removal is evaluated using both non-contact cleaning methods (centripetal force and electrostatic particle repulsion) and a dry contact cleaning method (load-drag-unload test). Results show that for a material with a high work of separation, high elastic modulus, and low work of adhesion, it is possible to create a microstructured adhesive with both high shear strength and low adhesion to dust particles. Results also show that, for dry contact cleaning, shear stress recovery stems mostly from particle rolling rather than particle sliding. Moreover, shear test results show that augmenting the microstructured adhesive with electrostatic adhesion can offset the poor substrate conformability of high-elastic-modulus materials by providing a preload to the microstructured elements. Finally, I applied the aforementioned dust-mitigation methods to two different gecko-like adhesive grippers. The first design was used to pick up flat objects, while the second was designed to grip curved objects of different shapes and sizes. Since the second gripper is flexible and the piezoelectric element is stiff (it can only be applied to rigid backings), only electrostatic dust mitigation is applicable.
- Title
- MULTIVARIABLE SIMULATION PLATFORM FOR TYPE 1 DIABETES AND AUTOMATIC MEAL HANDLING IN ARTIFICIAL PANCREAS SYSTEMS
- Creator
- Samadi, Sediqeh
- Date
- 2019
- Description
-
Artificial pancreas (AP) systems are designed to automate glucose control in type 1 diabetes mellitus (T1DM). Multivariable AP systems have evolved to incorporate additional physiological measurements beyond conventional continuous glucose monitoring so as to better integrate information on the patient's metabolic state as it affects glycemic dynamics. Changes in physiological measurements such as heart rate, energy expenditure, skin temperature, and skin conductance, measured by wearable devices, are indicative of changes in the metabolic state. The controller receives these physiological measurements in a feedforward manner, which accelerates corrective control decisions in response to disturbances. Although various AP systems have been proposed in the literature to accommodate these additional sources of information, the testing and evaluation of such advanced multivariable AP systems are hindered by the need to conduct time-consuming and expensive clinical trials. The development of a simulation platform for rapid prototyping and iterative development of AP systems is one of the main contributions of this study. The simulation platform for T1DM includes a compartmental model generating glucose concentration in response to physical activity in addition to meals and infused insulin. The proposed exercise-glucose-insulin model extends a previously developed glucose-insulin model to capture transient variations in glycemic dynamics caused by physical activity and to improve glucose prediction accuracy. Physiological variables affected by physical activity, such as heart rate, skin temperature, and blood volume pulse, are generated in addition to the glucose concentration. The simulation platform includes several virtual patients, providing a reliable platform for in silico evaluation of different algorithms proposed for the automation of glucose control in T1DM.
The multivariable simulator will accelerate the development of next-generation artificial pancreas systems. The development of a disturbance detection algorithm is the other contribution of this study. Meals are major disturbances to glucose homeostasis, and automated detection of meal consumption and estimation of the carbohydrate content of the consumed meal are critical for fully automated artificial pancreas control systems. In this study, a detection algorithm integrating fuzzy-logic classification and qualitative analysis is proposed; a fuzzy-logic system estimates the carbohydrate content of the meal.
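To make the fuzzy-logic idea concrete, here is an illustrative meal-flagging sketch (not the dissertation's algorithm): triangular membership functions over the CGM rate of change, with a meal flagged when the "rising fast" membership stays high across consecutive samples. The set boundaries and threshold are made-up values.

```python
# Illustrative fuzzy-logic meal flag: triangular membership functions
# over the CGM rate of change (mg/dL per min); a meal is flagged when
# the "rising fast" membership stays high over consecutive samples.
# Set boundaries and the threshold are assumed values for illustration.

def tri(x, a, b, c):
    """Triangular membership function peaking at b on the support [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def meal_score(roc_window):
    """Fuzzy AND (min) of the 'rising fast' membership across the window."""
    return min(tri(r, 0.5, 2.0, 4.0) for r in roc_window)

def detect_meal(glucose, dt_min=5.0, threshold=0.5):
    """Scan CGM samples; return the index where a meal is first flagged."""
    roc = [(glucose[i] - glucose[i - 1]) / dt_min
           for i in range(1, len(glucose))]
    for i in range(2, len(roc)):
        if meal_score(roc[i - 2:i + 1]) >= threshold:
            return i + 1                      # sample index in `glucose`
    return None

# 5-minute CGM trace: flat, then a steep postprandial rise.
cgm = [110, 111, 110, 112, 120, 131, 144, 158, 170]
idx = detect_meal(cgm)
print("meal flagged at sample", idx)
```

A carbohydrate estimator of the kind the abstract describes would then map the sustained membership values (and the qualitative shape of the rise) to a carb amount; that mapping is omitted here.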
- Title
- STRATEGIES TO MAXIMIZE DOSE REDUCTION IN SPECT MYOCARDIAL PERFUSION IMAGING
- Creator
- Juan Ramon, Albert
- Date
- 2019
- Description
-
Radiation exposure in medical imaging has become a topic of major concern, gaining intense attention within the clinical and research communities. In 2009, the National Council on Radiation Protection and Measurements (NCRP) reported that radiation exposure of patients via medical imaging had increased more than sixfold between the 1980s and 2006, with cardiac nuclear medicine, specifically myocardial perfusion imaging (MPI) with single-photon emission computed tomography (SPECT), being the second biggest contributor. The goal of this work is to evaluate several strategies for minimizing radiation dose while maintaining current levels of diagnostic accuracy in the clinic. We achieve dose reduction through optimization of advanced image reconstruction strategies, to obtain higher-quality images at a given dose (noise) level; through a machine learning approach to predict the optimal dose for each patient; and through advanced deep learning (DL) algorithms to improve the quality of reconstructed images. Our ultimate objective is to provide the nuclear cardiology field with a new set of algorithms and guidelines for selecting administered activity levels and image reconstruction procedures in the clinic. The project is based on a clinical study in which imaging and various other data are being collected for a set of patients. The project has the following components. First, we investigate a global dose-reduction approach (i.e., reducing dose by a uniform proportion across all patients) via optimization of image reconstruction strategies. Specifically, we maximize perfusion-defect detection (diagnostic accuracy) over a range of simulated dose levels using clinical data into which we have introduced simulated defects. We measure diagnostic performance using clinically validated model observers from the Quantitative Perfusion SPECT (QPS) software package.
We investigate diagnostic accuracy over a range of dose levels, from those currently used in the clinic down to one-eighth of that level. We consider the following image-reconstruction methods: filtered backprojection (FBP) with no correction for physics effects, and ordered-subsets expectation-maximization (OS-EM) with several combinations of attenuation correction (AC), scatter correction (SC), and resolution correction (RC). Second, we propose a patient-specific ("personalized") dose-reduction approach based on machine learning that aims to predict the minimum radiation dose needed to obtain consistent perfusion-defect detection accuracy for each individual patient. This prediction is based on patient attributes, especially body measurements, and various clinical variables. We compare the diagnostic accuracy produced by the predicted personalized doses to that produced by standard clinical dose levels to validate the predictive models. Third, we verify that the dose-minimization results obtained in the context of perfusion-defect detection also maintain diagnostic accuracy in evaluating cardiac function, as characterized by myocardial motion. Finally, we propose a deep learning (DL) method to denoise SPECT-MPI reconstructed images. The method is a 3D convolutional neural network trained to predict standard-dose images from low-dose images. We quantify the extent to which dose reduction can be achieved using the proposed DL structure when dose is reduced uniformly across patients or by means of our patient-specific approach.
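The dissertation's denoiser is a trained 3D CNN; as a stdlib-only stand-in, the sketch below shows the elementary operation such a network stacks and learns: one 3x3x3 convolution over a volume, here with a fixed uniform (mean-filter) kernel instead of learned weights. The volume size, noise level, and kernel are all illustrative assumptions.

```python
import random

# One 3x3x3 convolution pass over a small volume, the building block a
# 3D CNN denoiser stacks with learned kernels. Here the kernel is a
# fixed mean filter; edge voxels keep their original values.

def conv3d_mean(vol):
    """Apply a 3x3x3 mean filter to the interior of a nested-list volume."""
    nz, ny, nx = len(vol), len(vol[0]), len(vol[0][0])
    out = [[[vol[z][y][x] for x in range(nx)]
            for y in range(ny)] for z in range(nz)]
    for z in range(1, nz - 1):
        for y in range(1, ny - 1):
            for x in range(1, nx - 1):
                s = sum(vol[z + dz][y + dy][x + dx]
                        for dz in (-1, 0, 1)
                        for dy in (-1, 0, 1)
                        for dx in (-1, 0, 1))
                out[z][y][x] = s / 27.0
    return out

def interior_mse(a, b, n):
    """Mean squared error over interior voxels only."""
    vals = [(a[z][y][x] - b[z][y][x]) ** 2
            for z in range(1, n - 1)
            for y in range(1, n - 1)
            for x in range(1, n - 1)]
    return sum(vals) / len(vals)

random.seed(0)
n = 8
clean = [[[1.0] * n for _ in range(n)] for _ in range(n)]          # uniform phantom
noisy = [[[v + random.gauss(0.0, 0.3) for v in row]                # simulated low-dose noise
          for row in plane] for plane in clean]
den = conv3d_mean(noisy)

print("noisy MSE:", interior_mse(noisy, clean, n))
print("denoised MSE:", interior_mse(den, clean, n))
```

A trained network would replace the uniform kernel with many learned kernels per layer and nonlinearities between layers, optimized so low-dose inputs map to standard-dose targets.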
- Title
- SI NANOSTRUCTURED COMPOSITE AS HIGH PERFORMANCE ANODE MATERIAL FOR NEXT GENERATION LITHIUM-ION BATTERIES
- Creator
- He, Qianran
- Date
- 2019
- Description
-
Silicon has attracted significant attention over the last decade as an anode material for Li-ion batteries because its theoretical capacity is ∼10 times that of graphite. However, the practical application of Si is hindered by three major challenges: large volume expansion during cycling (∼300%), low electrical conductivity, and instability of the SEI layer caused by repeated volume changes of the Si material. Our study focused on the novel design and synthesis of Si anodes that address all of these key problems simultaneously. The Si micro-reactors we designed and synthesized contain well-designed internal structures, including (i) nanoscale Si building blocks, (ii) engineered void space, and (iii) a conductive carbon shell. Because of these internal structures and the nitrogen-doped carbon shell, these sub-micrometer-sized Si particles are termed Si micro-reactors and denoted Si@void@C(N). According to our electrochemical results, the as-synthesized Si micro-reactors sustain up to 1000 charge/discharge cycles at high current densities (up to 8 A/g) while still providing a higher specific capacity than state-of-the-art carbonaceous anodes. Our investigation shows that the unique design of Si@void@C(N) has a relatively low specific surface area (SSA), which significantly reduces undesired surface side reactions and increases the initial Coulombic efficiency (ICE) to 91%, while the engineered nano-channel-shaped voids inside the structure accommodate Si volume expansion and keep the structure and SEI layer stable. Furthermore, the porous N-doped carbon shell, along with the nano-channeled voids, allows rapid lithiation of the Si micro-reactor without Li plating during ultrafast charging. As a result, Si@void@C(N) exhibits ultrafast charging capability with high ICE, superior specific capacity, and long cycle life.
- Title
- INDUSTRIALIZED BUILDING CONSTRUCTION MODELS FOR TORNADO AFTERMATH RECOVERY
- Creator
- Alves de Carvalho, Augusto
- Date
- 2019
- Description
-
Some researchers have reported that disasters are expanding in both scale and frequency. Humanity occupies more land today than forty years ago, which leaves existing communities more exposed to disasters. Consequently, the number of natural disasters and the associated losses have increased over time. Recent research indicates that construction of new houses takes the majority of the recovery time; in the aftermath of the Joplin tornado, for example, the development of new houses took the longest part of the recovery (D. J. Smith & Sutter, 2013). The disaster industry treats housing and shelter as a product, procured on an as-needed basis. The product --tents, inter-shelters, trailers, permanent dwellings, or any property for rent-- has to be ready whenever required. Therefore, after calculating the construction capacity in tornado regions, a methodology is proposed to compare four robust industrialized building construction alternatives that keep components, modules, and pieces in stock. The comparison provides information about which format is most appropriate for a for-profit company, or even a public entity, to respond to and recover from a disaster faster.