Search results
(9,561 - 9,580 of 9,629)
Pages
- Title
- A Multi-level Data Integration Approach for the Convergence of HPC and Big Data Systems
- Creator
- Feng, Kun
- Date
- 2020
- Description
-
HPC is moving towards exascale (10^18 operations per second), following a trend that has continued for over half a century. Such compelling computing power brings enormous opportunities for scientists to explore their problems at larger sizes and finer granularity. As a result, the data volume produced and consumed by extreme-scale computing has increased dramatically. To gain useful scientific insights, scientists analyze tremendous amounts of data, which stresses the storage systems and requires efficient data access. Besides the increase in data volume, the variety of I/O subsystems grows as well to meet the drastically different, often conflicting I/O requirements of numerous applications. HPC and big data (BD), as two major camps of extreme-scale computing, have been developed separately for a long time and have diverged in their computing and storage paradigms. However, recent developments have shown that their convergence leads to more efficient scientific output. Hence, unification between these ecosystems is necessary to accelerate extreme-scale computing with the collaboration of applications from both camps. Integrated I/O has therefore become a major issue that needs to be addressed as the extreme-scale computing community moves forward. This study explores improvement by proposing a new integrated data access system for extreme-scale computing. We enhance the BD framework to adapt to integrated data access requirements by enabling direct processing of scientific data from the parallel file system (PFS) at the HPC site. Our framework can perform up to 8x faster than state-of-the-art solutions on representative workloads. We design a new advanced I/O middleware service that utilizes data aggregation resources to facilitate integrated data access in scientific workflows with both HPC and BD applications. Our middleware service reaches up to 10x speedup over the default solution and 133% better performance than existing solutions.
We propose a novel storage integration solution on the storage side to unite all the storage resources, unify the namespace across all the storage systems, and provide an ultimate integrated data access service. The integrated solution can speed up a real workflow with integrated data access requirements by up to 6.86x over existing solutions. The three-level integration at the application, middleware, and storage levels provides a systematic, hierarchical I/O integration. Our implementation results show that the three-level optimized design and implementation is feasible and effective. It improves on state-of-the-art solutions and helps us achieve an enhanced I/O system for extreme-scale computing that supports both HPC and BD applications.
- Title
- Effect of Phosphorus Additions on Polycrystalline Ni-base Superalloys
- Creator
- Li, Linhan
- Date
- 2020
- Description
-
In recent years, advanced polycrystalline Ni-base superalloys have been developed with elevated levels of γ′-forming elements and high levels of refractory elements as solid-solution strengtheners in an effort to extend their temperature capability. Moreover, the properties of the grain boundaries become more important, which necessitates studying the effects of minor additions of interstitial P for grain structure optimization. Due to the increased level of refractory elements employed, powder-processed Ni-base superalloys tend to have a high propensity to form Topologically Close-Packed (TCP) phases, which was found to be further promoted by the addition of P. A systematic study of the phase stability of high-refractory-content powder-processed Ni-base superalloys with three levels of P additions revealed an increased tendency to form Laves phase as a function of P additions. Additions of P were discovered not only to depress the incipient melting temperature, stabilizing the eutectic Laves phase, but also to promote Laves phase formation during the aging heat treatment and the subsequent isothermal exposure. During thermal exposure, excessive formation of Laves phase promoted the formation of a basket-weave structure comprised of an intertwined mixture of Laves and Sigma phases. The stabilization of the Laves phase structure due to P additions was found to be consistent with Density Functional Theory (DFT) calculations and could be rationalized through structure maps that relate valence electron concentration and relative size differences. Additionally, a variation of grain structure obtained via either a sub-solvus or super-solvus solution heat treatment was noted to vary, to some extent, the P segregation level at high-angle grain boundaries, thereby affecting the phase stability.
For a sub-solvus-solutioned grain structure that possessed a high length density of high-angle grain boundaries, Laves phase formation was depressed in alloys with a low level of P addition. However, the phase stability variation associated with Laves phase formation was moderate when high concentrations of P were present. The effect of P addition on the γ′ microstructure variation is limited, which was confirmed by microstructure observations as well as by short-term 0.6%-strain stress relaxation tests at high temperature. Heat treatment variations that modify the secondary and tertiary γ′ microstructures were discovered to exert a much more significant influence on the 0.6%-strain stress relaxation behavior. When a higher initial strain of 2% was applied, the stress relaxation behavior of the powder-processed Ni-base superalloys was found to be microstructure-independent. The creep ductility of Waspaloy was determined to be notably reduced by the P additions due to the enhanced precipitation of M23C6 carbide at the grain boundaries. Excessive precipitation of M23C6 carbide increased the likelihood of brittle fracture when tested under low-temperature/high-stress creep conditions. However, the P addition and the excessive precipitation of M23C6 carbide did not impact the creep behavior when tested under high-temperature/low-stress conditions, as the dominant deformation was transgranular in nature.
- Title
- ATOMIC LAYER DEPOSITION STUDIES OF GOLD AND TUNGSTEN DISULFIDE
- Creator
- Liu, Pengfei
- Date
- 2020
- Description
-
In the last few decades, atomic layer deposition (ALD), as a vapor deposition technique and a powerful thin-film fabrication method, has received increasing attention in many fields. A variety of materials can be made by ALD; however, further progress in ALD applications is still needed. Meanwhile, the interfacial chemistry at play during film fabrication by ALD is interesting and well worth studying. This dissertation mainly describes the exploration of the ALD fabrication of two materials, gold and tungsten disulfide, and the related interfacial chemistry. For the portion applying ALD to gold thin-film deposition, a relatively comprehensive process was explored, studied, analyzed, and discussed. Starting with the synthesis of the gold precursor, Me2Au(S2CNEt2), the synthetic reaction was explored. By modifying the conditions, such as the solvent system, twice the yield previously reported in the literature was achieved. Next, in situ microbalance and infrared spectroscopic techniques illuminated the organometallic chemistry during the thermal ALD of gold with Me2Au(S2CNEt2) and ozone. In situ quartz crystal microbalance (QCM) studies explain the nucleation delay and island growth of gold on a freshly prepared aluminum oxide surface. In situ infrared spectroscopy provides insight into the surface chemistry during the process, supporting an oxidized-gold-surface mechanism. The epitaxy of the gold thin films was explored by X-ray diffraction. Thermal ALD gold on various substrates showed out-of-plane orientation; however, in-plane orientation existed only in the gold film on mica. For the portion applying ALD to tungsten disulfide fabrication, the early work started with studying the effect of interfaces on crystallinity. The sulfurization of indium thin films with different interfaces was explored. The idea of “interfaces” was then brought into the fabrication of tungsten compounds.
Because this “indirect” method, which made tungsten disulfide by sulfurizing ALD-made tungsten compounds (e.g., tungsten oxide and tungsten nitride), could not reduce the reaction temperature of tungsten disulfide synthesis below 400 °C, the “direct” route, which directly utilized a tungsten precursor and H2S in the ALD system, was subsequently tested and explored. With the tungsten precursors developed by our group, tungsten disulfide could finally be fabricated at temperatures as low as 125 °C.
- Title
- Resilience Enhancement of Critical Cyber-Physical Systems with Advanced Network Control
- Creator
- Liu, Xin
- Date
- 2020
- Description
-
Critical infrastructures are systems whose failure would have a debilitating impact on national security, economics, public health or safety, or any combination of those matters. It is important to improve those systems' resilience, i.e., their ability to reduce the magnitude and/or duration of disruptive events. However, today’s critical infrastructures, such as electrical power and transportation systems, are deploying advanced control applications of increasing scale and complexity, which has led their underlying communication infrastructures to migrate from simple, proprietary networks to off-the-shelf network technologies (e.g., IP-based protocols and standards) to handle intensive and heterogeneous traffic flows. On one hand, this migration provides an opportunity for both the academic and industry communities to develop novel ideas on top of existing schemes; on the other hand, it exposes more vulnerabilities to cyber-attacks. Moreover, since a large-scale power system may lease networks from Internet service providers (themselves a critical infrastructure), there exists an interdependency between the power and communication infrastructures: power transmission control requires message delivery services, while the network devices rely on the power supply. These problems raise research challenges in improving the resilience of critical cyber-physical systems. In this thesis, we focus on resilience enhancement of critical infrastructures from the communication network's perspective. The application domains include both power and transportation systems. For power systems, we first apply advanced network control techniques (i.e., software-defined networking (SDN) and the fibbing control scheme) in the transmission grid communication network to improve the grid status restoration process under network failures and cyber-attacks.
We develop a unified system model that contains both the transmission grid monitoring system (i.e., the phasor measurement unit (PMU) network) and the communication network, and formalize a mixed-integer linear programming (MILP) problem that minimizes the recovery time of system observability under both power- and communication-domain constraints. We evaluate the system performance regarding recovery plan generation and installation using IEEE standard systems. However, an advanced network-based control scheme can also introduce problems, since it requires a power supply for the network devices. Thus, we investigate the interdependency between the power grid and the communication network and its impact on system resilience. We conduct a survey that summarizes existing research along two dimensions: objectives (i.e., failure analysis, vulnerability analysis, failure mitigation, and failure recovery) and methodologies (i.e., analytical solutions, co-simulation, and empirical studies). We also identify the limitations of existing work and propose potential research opportunities in this demanding area. Lastly, building on the survey, we conduct research on fast power distribution system restoration that involves interdependency constraints. When a natural disaster happens, both power and communication components may be damaged. Furthermore, since they depend on each other's services to function correctly, failures may propagate to hardware and software that were not affected initially. In this work, we focus on the recovery stage, where the failed components in the system have already been fully detected and isolated. We construct a mathematical model of the co-existing power and communication system and use optimization techniques to produce a crew dispatch plan that restores power as fast as possible by coordinating damage repair, switch operation, and communication supply processes.
We evaluate the restoration efficiency on the IEEE standard system using both analytical analysis and discrete-event simulation. For the second application domain, the railway transportation system, we focus on evaluating the resilience of its communication system, which exchanges control and monitoring messages with both the on-board driver cabin and the remote control center. We use advanced discrete-event simulation techniques to build a high-fidelity model of the network, which makes the evaluation more concrete and realistic. For the Ethernet-based on-board train communication network (TCN), we develop a parallel simulation platform according to the IEC standard and use it to conduct a case study of a double-tagging VLAN attack on this control network. Another component of the railway communication system is the train-to-ground network, which enables communication between the driving system on the train and the control center that issues commands such as movement authority messages. We customize the NS3 network simulator to model the LTE-based protocol with a real high-speed train trace dataset from public sources. We evaluate the resilience of the cellular network specifically on the handover process, which happens when the train travels from one base station to another. Due to the train's high speed, the handover success rate is impacted, and many protocol-based solutions have been proposed in this research area. We use the high-fidelity simulation model to evaluate some of them and compare their pros and cons.
- Title
- The role of fibrillar collagen in tissue function
- Creator
- Ma, Yin
- Date
- 2020
- Description
-
Fibrillar collagen plays an important role in maintaining soft tissue integrity and providing chemical and physical cues for cell fate decisions. Collagen remodeling, which alters the amount, distribution, and biomechanics of collagen, primarily type I (COLI) and type III (COLIII), can change tissue properties. This process is essential not only in biological development but also in pathological processes. Thus, it is meaningful to understand the correlation between collagen remodeling and tissue dysfunction and to investigate cells' responses to fibrous protein matrices. However, current biochemical analyses of collagen and biomechanical studies of tissues are carried out at different scales, which makes it hard to correlate the data and draw solid conclusions. In this thesis research, we used two collagen-disorder-associated pathological conditions, pelvic organ prolapse (POP) and micropapillary serous carcinoma (MPSC) of the fallopian tube, as models to unravel the correlation between tissue dysfunction and an impaired microenvironment with respect to the composition, nanostructure, and biomechanics of collagen fibrils. In the case of POP, we found the collagen fibers in tissues of POP patients to be less abundant but stiffer than those of non-POP individuals, implying a loose and fragile matrix that is weakly integrated with other components of the connective tissue and cannot adequately support the pelvic organs. In addition, the collagen D-period, the characteristic banding feature that signals proper assembly of collagen molecules, decreased in POP tissues. We surmised that these molecular-level changes of collagen in POP accounted for the weak matrix mechanics, which was verified by a systematic in vitro study.
Since cancer metastasis is often related to collagen remodeling, we also examined the collagen matrix alteration in MPSC of the fallopian tube, which is thought to cause ovarian cancer via metastasis. We observed a heterogeneous distribution of COLI and COLIII in the papillae of the tumor tissue. Noticeably, COLI accumulated at the papillae tips, whereas COLIII was dominant at the papillae base. We also observed an absence of collagen matrix between the micropapillary tip and the fibrotic base. Such an uneven collagen distribution implies that the matrix exerts distinctive forces on the tumor cells to regulate their behaviors, including cell migration, directional growth, and shedding from the primary tumor to initiate metastasis. These conclusions are supported by the results of our in vitro experiments. To investigate the effect of the microenvironment on cell behavior, we established and validated an AFM-based method to collect and quantitatively analyze mRNA samples from targeted live cells at the single-cell level. This method overcomes issues with current methods, such as severe cell damage or even cell death and the inability to perform time-dependent and in situ analyses. The application of the method in studying heterogeneous gene expression in single cells and the interaction between cancer cells and cancer-associated fibroblasts was demonstrated. We also demonstrated that this method can potentially be used to quantitatively analyze gene expression changes in a targeted cell in response to the microenvironment.
- Title
- FEARING FORGETTING? DEVELOPMENT OF A SCALE TO ASSESS ATTITUDES ABOUT DEMENTIA IN THE LAY POPULATION
- Creator
- Ogu, Precious N
- Date
- 2020
- Description
-
Individuals with dementia show a progressive decline in cognitive functioning that results in an inability to complete activities of daily living (American Psychiatric Association, 2013). Early diagnosis of dementia is a positive prognostic indicator (World Alzheimer Report, 2011) and is widely regarded as an important pre-condition for improving dementia care (Kim et al., 2015; Vernooij-Dassen et al., 2005). However, negative attitudes and stigma towards dementia could interfere, through label avoidance, with an individual’s willingness to recognize or accept the idea of themselves having the disease. The goal of the present study was to contribute to understanding the perception of dementia by developing a quantitatively derived and psychometrically validated measure that encompasses the positive and negative attitudes towards dementia held by people without dementia. This study also explored the potential association between negative attitudes about dementia and lack of familiarity with dementia, as familiarity with individuals with mental illness is related to stigmatizing attitudes towards mental illness. These goals were achieved through a principal components analysis (PCA) of 56 modified items from extant and well-validated mental illness attitude scales (Community Attitudes to Mental Illness, CAMI, Taylor & Dear, 1981; Social Distance Scale, SDS, Link, 1986; Depression Stigma Scale, DSS, Griffiths et al., 2004). Convergent validity was assessed by examining the relationship between the final derived measure and a construct associated with negative attitudes about mental illness (Mental Retardation Attitude Inventory-Revised, MRAI-R). Discriminant validity was assessed by examining the relationship between the final measure and a construct that should be unrelated to negative attitudes about mental illness (Belief in a Just World Scale, BJW).
Finally, exploratory analyses were conducted to assess whether attitudes measured by the newly created scale are related to participants’ familiarity with dementia (Level of Familiarity Scale, LoFS, Corrigan et al., 2001). A total of 400 adults with no history of dementia were recruited through Amazon’s MTurk and were compensated with a credit to their Amazon account upon completion of the survey. The PCA supported two conceptually distinct (not method-variance) latent components, titled Negative Attitudes and Positive Attitudes. These two components comprise the Attitudes to Dementia Inventory (ADI). Construct validity was partially supported for each component of the ADI. Degree of familiarity with dementia was not associated with negative or positive attitudes about dementia. Overall, this study is an important contribution to dementia attitudes research. Given that Negative Attitudes and Positive Attitudes have been identified as distinct dimensions of dementia attitudes, the ADI can be used to further investigate how negative reactions towards dementia might cause delays in initiating medical intervention and treatment, and also to examine whether positive attitudes provide any protection against the probable effects of negative attitudes on stigma and help-seeking behaviors. Since the early recognition and diagnosis of dementia is widely regarded as an important condition for improving dementia care (Kim et al., 2015; Vernooij-Dassen et al., 2005), the ADI can be used to inform stigma prevention, which will hopefully translate into improved help-seeking behaviors.
- Title
- IMPACT OF DATA SHAPE, FIDELITY, AND INTER-OBSERVER REPRODUCIBILITY ON CARDIAC MAGNETIC RESONANCE IMAGE PIPELINES
- Creator
- Obioma, Blessing Ngozi
- Date
- 2020
- Description
-
Artificial Intelligence (AI) holds great promise in healthcare. It provides a variety of advantages in clinical diagnosis, disease prediction, and treatment, with such interest intensifying in the medical imaging field. AI can automate various cumbersome data processing tasks in medical imaging, such as segmentation of left ventricular chambers and image-based classification of diseases. However, full clinical implementation and adoption of emerging AI-based tools face challenges due to the inherently opaque nature of AI algorithms based on Deep Neural Networks (DNN), whose computer-trained bias is not only difficult for physician users to detect but also difficult to safely design against in software development. In this work, we examine AI application in Cardiac Magnetic Resonance (CMR) imaging using an automated image classification task, and thereby propose an AI quality control framework design that differentially evaluates the black-box DNN via carefully prepared input data with shape and fidelity variations to probe system responses to these variations. Two variants of the 19-layer Visual Geometry Group network (VGG19) were used for classification, with a total of 60,000 CMR images. Findings from this work provide insights into the importance of quality training data preparation and demonstrate the impact of data shape variability. The work also provides a gateway for optimizing computational performance in training and validation time.
- Title
- LOW-COVERAGE GENOMES AS AN EFFECTIVE AND ECONOMICAL APPROACH FOR LEPIDOPTERAN MICROSATELLITE ISOLATION
- Creator
- Liang, Huijia
- Date
- 2020
- Description
-
This study aimed to verify whether a low-coverage genome can work as an effective approach to isolate lepidopteran microsatellites. Microsatellites are a useful tool for studying population genetics, and many lepidopteran agricultural pests cause huge economic damage every year; additionally, Lepidoptera have abundant similar flanking sequences, which makes it difficult to develop reliable microsatellites. However, there are not enough published genomes of Lepidoptera species. If low-coverage lepidopteran genomes can be used to isolate reliable microsatellites, they would provide an effective and economical approach for microsatellite isolation, because low-coverage genome sequencing is much cheaper and less time-consuming than sequencing a full published genome.
- Title
- Photograph of the Aaron Galleries booth at the Art 20 art fair, including Mary Henry's The Chelsea Way, New York, New York, 2006
- Date
- 2006
- Description
-
Photograph of the Aaron Galleries Booth at the Art 20 exhibition, at Park Place Armory in 2006, including Mary Henry's painting The Chelsea Way visible at center. Inscription on verso: "Art 20 - Park Ave. Armory 2006 Mary Henry 'The Chelsea Way' on the aisle Aaron Galleries Booth."
- Collection
- Mary Dill Henry Papers, 1913-2021
- Title
- Photograph of the Aaron Galleries booth at the Art 20 art fair, including Mary Henry's The Chelsea Way, New York, New York, 2006
- Date
- 2006
- Description
-
Photograph of the Aaron Galleries Booth at the Art 20 exhibition, at Park Place Armory in 2006, including Mary Henry's painting The Chelsea Way visible at center right. Inscription on verso: "Art 20 - Park Ave. Armory 2006 Mary Henry 'The Chelsea Way' on the aisle Aaron Galleries Booth."
- Collection
- Mary Dill Henry Papers, 1913-2021
- Title
- Photograph of the Aaron Galleries booth at the Art 20 art fair, including Mary Henry's The Chelsea Way, New York, New York, 2006
- Date
- 2006
- Description
-
Photograph of the Aaron Galleries Booth at the Art 20 exhibition, at Park Place Armory in 2006, including Mary Henry's painting The Chelsea Way visible at right. Inscription on verso: "Art 20 - Park Ave. Armory 2006 Mary Henry 'The Chelsea Way' on the aisle Aaron Galleries Booth."
- Collection
- Mary Dill Henry Papers, 1913-2021
- Title
- Efficient and Practical Cluster Scheduling for High Performance Computing
- Creator
- Li, Boyang
- Date
- 2023
- Description
-
Cluster scheduling plays a crucial role in the high-performance computing (HPC) area. It is responsible for allocating resources and determining the order in which jobs are executed. Existing HPC job schedulers typically leverage simple heuristics to schedule jobs, but such scheduling policies struggle to keep pace with modern changes and technology trends. The study in this dissertation is motivated by two new trends in the HPC community: the rapid growth of heterogeneous system infrastructure and the emergence of artificial intelligence (AI) technologies. First, existing scheduling policies are solely CPU-centric; in contrast, systems have become more complex and heterogeneous, and emerging workloads have diverse resource requirements, such as CPU, burst buffer, power, network bandwidth, and so on. Second, previous heuristic scheduling approaches are manually designed, and such a manual design process prevents adaptive and informed scheduling decisions. A recent trend in HPC is to incorporate AI to better leverage the investment in supercomputers. This embrace of AI provides opportunities to design more intelligent scheduling methods. In this dissertation, we propose an efficient and practical cluster scheduling framework for HPC systems. Our framework leverages AI technologies and considers system heterogeneity. The framework comprises four major components. First, shared-network systems such as dragonfly-based systems are vulnerable to performance variability due to network sharing. To mitigate workload interference on these shared-network systems, we explore a dedicated scheduling policy. Next, emerging workloads in HPC have diverse resource requirements instead of being CPU-centric. To address this, we design an intelligent scheduling agent for multi-resource scheduling in HPC leveraging an advanced multi-objective reinforcement learning (MORL) algorithm.
Subsequently, we address the issues with existing state encoding approaches in RL-driven scheduling, which either lack critical scheduling information or suffer from poor scalability. To this end, we present an efficient and scalable encoding model. Lastly, the lack of interpretability of RL methods poses a significant challenge to deploying RL-driven scheduling in production systems. In response, we provide a simple, deterministic, and easily understandable model for interpreting RL-driven scheduling. The proposed models and algorithms are evaluated with real job traces from production supercomputers. Experimental results show our schemes can effectively improve job scheduling in terms of both user satisfaction and system utilization.
- Title
- Testing actor and partner mediation effects of the mindfulness-relationship satisfaction association in long-distance relationships
- Creator
- Manser, Kelly A.
- Date
- 2023
- Description
-
Long-distance romantic relationships (LDR) have become increasingly common as technology and sociocultural norms have evolved. Individuals in LDR, many of whom are post-secondary students, report LDR-specific experiences and stressors. Nonetheless, romantic relationship satisfaction (RS) appears comparable between LDR and non-LDR relationships, although the underlying mechanisms are not well understood. Mindfulness, which relates positively to RS and negatively to stress, is minimally studied in LDR. Moreover, despite empirical and theoretical support, few studies have tested stress as a mediator of associations between mindfulness and RS at the within-person level (termed actor effects) or the between-person level (partner effects). This study tested a theoretically grounded, empirically supported Actor-Partner Interdependence Mediation Model (APIMeM) in a sample (N = 150; 75 dyads) of post-secondary students and their LDR romantic partners. As hypothesized, a partner-actor indirect effect of T1 actor mindfulness on T2 partner RS emerged through decreased T2 partner stress. Unexpectedly, no direct, total, or indirect effects of T1 actor mindfulness on T2 actor stress or T2 actor RS emerged. Findings suggest that within- and between-person associations among mindfulness, stress, and RS may present uniquely in LDR, with implications for research, clinical practice, and policy.
- Title
- Associations between subjective cognitive decline, neurodegeneration, and vascular neuroimaging markers: Findings from a multiethnic cohort
- Creator
- Gonzalez, Christopher
- Date
- 2023
- Description
-
Mounting evidence suggests that subjective cognitive decline (SCD) may provide a unique target to identify the earliest changes in cognitive function in Alzheimer’s disease (AD). In addition, vascular-related risk factors are linked to an increased risk of clinical expression of AD and independently increase the risk of vascular dementia (VaD). However, most investigations have not explored SCD across a multiethnic population. The study investigated 1) the associations of white matter hyperintensities (WMH) and targeted neuroimaging AD markers (hippocampal volume, cortical thickness of AD regions) with SCD amongst a multiethnic cohort, and 2) whether race moderated the relationship between them. A total of 871 older adults aged 62-96 years (mean age 74.48, SD = 6.11; mean education 12.79 years, SD = 4.53; 62% identifying as female) were recruited from preexisting data from the Washington Heights Inwood Columbia Aging Project (WHICAP). Linear regression models revealed a significant association between WMH and both AD-targeted neuroimaging markers across the total sample. Secondary analyses revealed that race did not moderate the relationship between WMH and AD cortical thickness with SCD but did in fact moderate the relationship between hippocampal volume and SCD. Results suggest that cultural and biological differences exist in Hispanic/Latine individuals compared to non-Hispanic White and non-Hispanic Black individuals.
- Title
- Design for Equivalence: Mutual Learning and Participant Gains in Participatory Design Processes
- Creator
- Geppert, Amanda Anne
- Date
- 2023
- Description
-
The ways in which people are or are not able to participate—whether aware, eligible, invited, required, supported, willing, or forced, among other conditions—in the procedures or experiences that constitute world-making activities—from voting and policymaking to designing algorithms, technologies, products, programs, services, interventions, infrastructures, or systems—that affect their lives are a central issue of our time. This issue demands careful consideration and is of great consequence for whether the worlds we create are equitable, sustainable, and just, such that all people have free and equal standing and a real opportunity to belong and flourish. This study took up this issue in the context of participatory design practice and research and the making of sexual and reproductive health interventions with and for adolescents who are marginalized by race, class, ethnicity, gender, and sexuality, in Lucknow, Uttar Pradesh, India, and Chicago, Illinois, United States. The study advances knowledge in design by exploring how problem-focused, front-end participatory design processes expand or constrain the epistemic authority of less powerful actors, more specifically, systematically excluded individuals and groups. The study was conducted in two parallel phases. First, through theoretical elaboration and critical analysis, it examined the application of Mouffean agonism in recent formulations of participatory design processes that address complex social and political issues with marginalized individuals and groups. The analysis demonstrated that a key construct—the chain of equivalence—is absent, resulting in the failure of these processes to achieve the collective, counter-hegemonic, and emancipatory responses strong enough to counter power as imagined by Chantal Mouffe.
Second, an explanatory embedded multiple case study was conducted on two front-end participatory design workshops to understand what less powerful actors gain by engaging in collaborative processes of design and how practices and processes do or do not support their epistemic authority and matters of care. Thematic analysis suggested how the practices of collective information sharing and gathering—mutual learning and learning—affect participant gains and design process outputs. Additionally, thematic analysis informed a theoretical, conceptual, and practical move to expand beyond the original scope of the Mouffean chain of equivalence to include collaborating actors who may not be equivalently disadvantaged by current power relations, but who are committed to participatory design processes that prioritize the issues and matters of care of less powerful actors. When considered together, findings from both research phases inform the development of design for equivalence, at once a theoretical stance and a methodological framework to inform the selection of approaches, theories, processes, methods, practices, and tools for participatory design processes that support the epistemic authority of participants in challenging social and structural inequalities and creating articulations of the common good strong enough to counter dominant paradigms.
- Title
- Development of Metal Oxide-Based Phosphors for Luminescence Thermometry
- Creator
- Jahanbazi, Forough
- Date
- 2023
- Description
-
Temperature is both a thermodynamic property and a fundamental unit of measurement, one of the seven base quantities of the International System of Units (SI). It can be seen simply as the degree of hotness or coldness, a qualitative definition built on the bodily sensation of heat and cold. Today it is readily defined from the principles of classical thermodynamics as the parameter of state that has the same value for any systems in thermal equilibrium, and from statistical mechanics as a direct measure of the average kinetic energy of noninteracting particles. Temperature is an intensive quantity, meaning that its value does not depend on the amount of the substance for which it is measured. It is important because it is something we feel and because it influences the smallest aspects of our daily life, from how we adjust our housing and clothing to what we eat for supper. It affects the life cycles of plants and animals, governs the rates of chemical reactions, influences tides, and so on. For these reasons, it is by far the most measured physical quantity; temperature sensors account for 80% of all sensors worldwide at present, and they are used across a broad spectrum of human activities, such as medicine, home appliances, meteorology, agriculture, and industrial and military contexts, to mention some of the most significant areas. The market demand for temperature sensors is thus increasing as their applications in human activities expand. Traditional “contact” temperature measurements, which are mainly based on the expansion and contraction of an employed material, encounter difficulties in some emerging technologies and environments, such as nanotechnology and biomedicine. Today, an immediate need exists for “non-contact” thermometry of moving or contact-sensitive objects, difficult-to-access parts, bodies in hazardous locations, objects of nanoscale dimensions, and living cells and organisms.
However, the properties of existing thermometers and sensor platforms limit their use in such environments. Non-contact sensors measure object temperature without physical contact between sensor and object; they are therefore of great interest for hard-to-access objects. Among non-contact thermometry methods, besides pyrometers and radiation thermometers, optical thermometers have drawn extensive attention. Specifically, among all optically based thermometry methods, including Raman scattering, optical interferometry, and near-field scanning optical microscopy, the one that has drawn the most attention is luminescence thermometry, in which temperature detection is based on a luminescent signal and offers acceptable spatial resolution. In the luminescence thermometry method, temperature can be determined from different features of the luminescence. Depending on the temporal nature of these features, the measurement principles are classified as either time-integrated (steady-state) or time-resolved. Temperature measurements based on excitation and emission band positions and bandwidths, emission band intensities, and the luminescence/fluorescence intensity ratio (LIR or FIR, the ratio of the intensities of two emission bands) are classified as time-integrated methods, while measurements based on emission decay or rise times are classified as time-resolved. Temperature readouts from the LIR and the emission lifetime are by far the most exploited methods. Both readouts are self-referenced, so they are not affected by fluctuations in excitation and signal detection. Moreover, the thermal sensing ability of many lanthanide-based luminescent materials is not limited to one readout method: some can operate in dual or multiple modes by combining two or more readout methods for temperature measurement.
Non-contact luminescence thermometry based on the LIR readout method has attracted much attention due to its excellent accuracy and sensitivity. The intensity ratio is independent of many undesirable experimental factors, which makes this form of luminescence thermometry particularly robust. Moreover, the method is self-referencing, which removes the need for a temperature standard. In principle, it can be realized with any combination of emission lines from lanthanide and transition-metal ions with different temperature dependencies, from either single or multiple luminescent centers, and it has been the most reported luminescence thermometric readout method in the past few years. Considerable work has been devoted to developing highly efficient LIR thermometers employing a single emitting center. This ratiometric method mainly relies on the thermally coupled energy levels of the luminescent ion: the electronic distribution between closely separated excited levels of the dopant follows the Boltzmann equation. The two excited levels are thermally coupled with a maximum energy gap of about 2000 cm⁻¹, which is sufficiently small to allow electrons to populate the higher level upon thermal excitation and at the same time large enough to yield distinct electronic populations and high sensitivity. In this case, both the high and low excited states share the electronic population according to the Boltzmann distribution, so the ratio of the number of electrons in the high and low excited levels follows the Boltzmann relation for LIR-based thermometry utilizing single emitting centers. In addition to the LIR between two thermally coupled energy levels of a luminescent ion, in some ions the LIR between two energy levels that are not thermally coupled has been employed to achieve highly sensitive thermometry.
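For reference, the Boltzmann relation for a single emitting center takes the following standard form in the luminescence-thermometry literature (the symbols are the conventional ones; the abstract itself does not reproduce the equation):

```latex
\mathrm{LIR} = \frac{I_{\mathrm{H}}}{I_{\mathrm{L}}}
             = B \exp\!\left(-\frac{\Delta E}{k_{\mathrm{B}} T}\right)
```

where I_H and I_L are the integrated intensities of emission from the upper and lower thermally coupled levels, ΔE is their energy gap, k_B is the Boltzmann constant, T is the absolute temperature, and B is a pre-exponential constant set by the degeneracies and spontaneous emission rates of the two levels.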
The thermometric performance of a temperature probe is quantitatively evaluated by its absolute and relative thermal sensitivities, temperature resolution, and repeatability. The rate of change of the thermometric parameter (denoted Δ) with temperature (∂T) defines the absolute thermal sensitivity (Sa). However, absolute sensitivity is not appropriate for comparing the performance of thermometers based on different materials or physical principles; the relative thermal sensitivity (Sr) is defined to eliminate this problem. The Sr of a luminescent thermometer is one of the most important factors determining its temperature readout accuracy. The smallest temperature change resolvable by a thermometer is its temperature resolution, or temperature uncertainty (δT), which is expressed in kelvin and depends on the characteristics of the measuring system, such as the experimental detection setup and the signal-to-noise ratio. Reproducibility is the closeness of agreement among results of the same measurement performed under changed conditions, such as different methods or devices, while repeatability (R) is the ability of a thermometer to provide the same result under the same conditions. Regarding temperature resolution, most light detection systems, including thermometry systems, suffer from low resolution because of scattering at both the excitation and emission wavelengths. Light scattering by thermometric phosphors is governed by their grain size, shape, and surface roughness; this is a problem particularly associated with conventional phosphors, which typically have micrometer grain sizes. In contrast, light scattering by nanoparticles (NPs) is close to zero, which leads to better resolution in luminescence thermometers using NPs.
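The figures of merit described above are conventionally written as follows (standard forms from the luminescence-thermometry literature, supplied here for reference since the abstract omits the equations):

```latex
S_a = \left|\frac{\partial \Delta}{\partial T}\right|, \qquad
S_r = \frac{1}{\Delta}\left|\frac{\partial \Delta}{\partial T}\right| \times 100\%\,\mathrm{K}^{-1}, \qquad
\delta T = \frac{1}{S_r}\,\frac{\delta \Delta}{\Delta}
```

where Δ is the thermometric parameter (e.g., the LIR) and δΔ/Δ is its relative uncertainty, set chiefly by the signal-to-noise ratio of the detection system.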
Consequently, nanothermometry has emerged as an active research area for high-resolution thermometers in new technological applications. Accordingly, in chapter 1 we discuss a host material, the pyrochlore compound La2Zr2O7, doped with Tb3+ and Eu3+ and synthesized at the nanoscale (~15 nm), which shows great potential for high-resolution LIR temperature sensing based on dual emitting centers. In chapter 2, another sample of this nanopowder host, La2Zr2O7 doped with Pr3+, is introduced and discussed for LIR temperature sensing based on a single emitting center. Besides high-resolution thermometry, a broad temperature sensing range was achieved with the La2Zr2O7:Pr3+ nanopowder; this broad range, obtained using only one LIR readout mode, originates from high-lying charge-transfer states with slow thermal quenching, as will be elaborated in chapter 2. Many materials have been employed for luminescence thermometry, such as organic dyes, quantum dots, and metal–organic complexes and frameworks; among these, lanthanide- and transition-metal-ion-based phosphors are the most promising. The electronic states of the lanthanides are characterized by partially filled 4f orbitals, which fill gradually from 4f0 for La3+ to 4f14 for Lu3+. Their luminescence arises from intraconfigurational f-f transitions, except in some ions such as Eu2+ and Ce3+, which show allowed f-d transition emissions. The partially filled 4f orbitals of lanthanide ions are shielded from the surrounding environment by the filled 5s and 5p subshells, which leads to long lifetimes and narrow-band emission. When excited with UV light, lanthanide-doped materials mostly emit in the visible/near-infrared (NIR) range through a downshift (DS) photoluminescence (PL) mechanism, in which high-energy photons are converted into photons of lower energy.
Overall, excellent repeatability, reproducibility, and photostability, combined with thermally and chemically stable structures, make lanthanide-based materials the favorite choice for luminescent thermometry applications. Their luminescence is easy to identify and to differentiate from that of other materials, and multiplexing is possible thanks to their narrow, easily identifiable emission bands. Host materials also play a crucial role in the thermal sensing properties of thermometric phosphors. Various hosts, such as fluorides, ceramic oxides, nitrides, chalcogenides, and phosphides, have been employed for luminescence thermometry. Ceramic hosts are composed of many elements and thus often require complex synthesis processes, which limits their applicability. Fluoride hosts have a level of toxicity that is harmful to living systems, so they are not environmentally friendly. Nitride compounds are commonly prepared in oxygen- and water-free glove boxes and synthesized under harsh conditions of high pressure and temperature, which restricts their large-scale production. Chalcogenides and phosphides may not be sufficiently stable. Metal oxide phosphors, on the other hand, offer convenient preparation, non-toxicity, excellent chemical stability (withstanding sustained exposure to high temperature), and low cost. Moreover, they are preferable in biomedical luminescence thermometry, where long-wavelength emissions fall in spectral regions in which tissue is optically transparent and less affected by scattering and background luminescence. Considering all these aspects, metal oxide-based phosphors are the more favorable choice for luminescent thermometry. One goal of research in the luminescence thermometry field has been to push the limits of temperature measurement to higher temperatures; however, the development of luminescent phosphors with high thermal stability of emission and high sensing efficiency remains a paramount challenge.
The thermal stability of photoluminescence (PL) is a property related to the chemical composition, electronic structure, and crystal-structure rigidity of a phosphor. The loss of light emission with rising temperature is commonly referred to as positive thermal quenching (TQ). Most phosphors exhibit positive TQ, which stems from a high non-radiative transition probability at elevated temperatures. This phenomenon severely limits the applications of luminescent phosphors and degrades the performance of devices based on them. Several strategies have been reported to compensate for the thermally induced emission loss of phosphors, but, as will be discussed in chapter 3, most have negative impacts on their inherent luminescence properties. From the structural perspective, TQ caused by nonradiative relaxation is closely related to crystal-structure stability: a rigid structural framework with high lattice symmetry reduces nonradiative transitions at elevated temperatures. As one class of rigid hosts, materials possessing negative thermal expansion (NTE) have been explored as suitable hosts for anti-TQ phosphors doped with lanthanides. NTE refers to the rare property of some materials whose volume abnormally contracts with increasing temperature. Among the various reported NTE families, compounds with the general formula A2M3O12, where A is a trivalent rare earth ion and M stands for W6+ or Mo6+, are well known for their broad range of compositions and have been explored for anti-TQ in recent years. Some earlier works reported employing A2M3O12 hosts to obtain thermally enhanced upconversion (UC) emission. However, UC emission is not as widely used, since it is typically weaker and mostly limited to a higher wavelength range than the most applicable visible range.
Thus, thermally enhanced downshift (DS) emission in the visible range from NTE phosphors is not yet strong enough to fulfill practical application. To explore the applicability of the NTE idea to DS-emitting phosphors, we report the anti-TQ performance of singly doped and co-doped samples of Sc2Mo3O12:Eu3+ and Sc2Mo3O12:Tb3+,Eu3+ in chapters 3 and 4, respectively. Specifically, we took advantage of interionic energy transfer in our NTE host to achieve superior anti-TQ performance for DS luminescence that can be employed for efficient thermometry in the high-temperature range. The structural shrinkage with rising temperature shortens the distance between host and activator dopant ions, which enhances host-to-activator energy transfer and consequently the final emission intensity, as elaborated in the last two chapters. Because this is a highly promising strategy, there is an urgent need for more evidence on how the NTE property is associated with the anti-TQ of luminescence, which we sought to provide in this work. We explored these compounds' potential for high-temperature luminescence thermometry, tested both LIR- and lifetime-based temperature sensing, and revealed their great potential for efficient temperature sensing in high-temperature ranges. This study opens a new design strategy and perspective for obtaining phosphors with thermally boosted luminescence based on NTE host materials, meeting the serious demand for broad applications at elevated temperatures and under harsh conditions.
- Title
- Application of Blockchain and Artificial Intelligence Methods in Power System Operation and Control
- Creator
- Farhoumandi, Matin
- Date
- 2023
- Description
-
The proliferation of distributed energy resources (DERs) and the large-scale electrification of transportation infrastructure are driving the ongoing transformation of traditionally passive consumers into prosumers (both consumers and producers) in a coordinated system of power distribution network (PDN) and urban transportation network (UTN). In this new paradigm, peer-to-peer (P2P) energy trading is a promising energy management strategy for dynamically balancing supply and demand in electricity markets. In this thesis, we propose applications of blockchain and artificial intelligence technologies to power system operation and control. First, blockchain (BC) is applied to electric vehicle charging station (EVCS) operations to optimally transact energy in a hierarchical P2P framework. In the proposed framework, a decentralized privacy-preserving clearing mechanism is implemented in the transactive energy market (TEM), in which BC smart contracts are applied to a coordinated PDN and UTN operation. The effectiveness of the proposed TEM and its solution approach is validated via numerical simulations performed on a modified IEEE 123-bus PDN and a modified Sioux Falls UTN. Second, machine learning and deep learning methods are applied to short-term forecasting of non-conforming net load (STFNL). STFNL plays a vital role in enhancing the secure and efficient operation and control of power systems. However, power system consumption is affected by a variety of external factors and thus exhibits high levels of variation. These variations make STFNL a challenging task as more DERs are integrated into the power grid. This thesis applies two commonly used machine learning and deep learning methods, i.e., ensemble bagging and long short-term memory, to STFNL. The advantages, features, and applications of these methods are combined in a proposed fusion forecasting model that improves STFNL accuracy.
Additionally, data engineering and preprocessing options are used to increase the accuracy of the proposed fusion model. A comparative study based on practical load data is performed to demonstrate that the proposed fusion methodology can reach a relatively higher forecasting accuracy with lower error indices. Index Terms—Blockchain, deep learning and machine learning, electric vehicle charging stations, non-conforming net load forecasting, peer-to-peer transactive energy, power distribution and transportation networks, distributed energy resources, behind-the-meter supply resources.
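One simple way to fuse two forecasters, as the abstract describes combining ensemble bagging and LSTM predictions, is to weight each model's output by its inverse validation error. The sketch below is purely illustrative (the weighting scheme, function names, and toy data are assumptions, not the thesis's actual fusion model):

```python
# Illustrative fusion of two net-load forecasts: weight each model's
# prediction by the inverse of its validation-set mean absolute error.
# All data here are made-up placeholders, not real load measurements.

def mae(pred, actual):
    """Mean absolute error between two equal-length series."""
    return sum(abs(p - a) for p, a in zip(pred, actual)) / len(actual)

def fuse_forecasts(pred_a, pred_b, val_a, val_b, val_actual):
    """Combine two forecast series using inverse-MAE weights."""
    err_a, err_b = mae(val_a, val_actual), mae(val_b, val_actual)
    w_a = (1 / err_a) / (1 / err_a + 1 / err_b)
    w_b = 1 - w_a
    return [w_a * a + w_b * b for a, b in zip(pred_a, pred_b)]

# Toy example: model A is more accurate on the validation window,
# so it receives the larger weight in the fused forecast.
val_actual = [100.0, 110.0, 105.0]
val_a = [101.0, 109.0, 106.0]   # MAE = 1.0
val_b = [103.0, 113.0, 102.0]   # MAE = 3.0
fused = fuse_forecasts([102.0, 108.0], [106.0, 112.0], val_a, val_b, val_actual)
print(fused)  # fused values lie closer to model A's predictions
```

In practice the two base forecasters would be the trained bagging ensemble and LSTM models, and the weights would be refreshed as new validation data arrive.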
- Title
- Dynamic Risk and Dynamic Performance Measures Generated by Distortion Functions and Diversification Benefits Optimization
- Creator
- Liu, Hao
- Date
- 2023
- Description
-
This thesis consists of two major parts and contributes to the fields of risk management and optimization. One contribution to risk management is made by developing dynamic risk measures and dynamic acceptability indices that can be characterized by distortion functions. In particular, we prove a representation theorem showing that the class of dynamic coherent risk measures generated by distortion functions coincides with a specific type of dynamic risk measure, the dynamic WV@R. We also investigate thoroughly various types of time consistency for dynamic risk measures and dynamic acceptability indices in terms of distortion functions. Another contribution to risk management is proving the strong consistency and asymptotic normality of two estimators of the dynamic WV@R. In contrast to the existing literature, our results do not rely on distributional assumptions on the random variables; instead, we investigate the asymptotic normality of the estimators in terms of the generating distortion functions. Last but not least, we give a counterexample showing that a sufficient condition for asymptotic normality is not necessary. The contribution to optimization is twofold. On the one hand, we formulate the (scalar) diversification optimization problem as a vector optimization problem (VOP) and show that a set-valued Bellman principle is satisfied by this VOP. On the other hand, we derive an explicit policy gradient formula and implement a deep neural network to solve the diversification optimization problem numerically. This deep learning technique allows us to overcome the computational difficulty caused by the non-convexity of the VOP.
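As a point of reference for distortion-generated risk measures, the standard static representation (sign and direction conventions vary across the literature; the thesis's dynamic WV@R is a conditional extension of such objects, and this is only the familiar static sketch applied to a loss variable X) reads:

```latex
\rho_g(X) = \int_{-\infty}^{0}\left[g\big(\mathbb{P}(X > x)\big) - 1\right]dx
          + \int_{0}^{\infty} g\big(\mathbb{P}(X > x)\big)\,dx
```

where g: [0,1] → [0,1] is a non-decreasing distortion function with g(0) = 0 and g(1) = 1; concavity of g corresponds to coherence of the resulting risk measure.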
- Title
- Toward a Network Model of Executive Functioning
- Creator
- Fuller, Jordan S.
- Date
- 2023
- Description
-
The executive functions are the higher-order mental processes that are responsible for organized, strategic behavior. These functions have been a source of significant controversy since their initial introduction. This study sought to create a model of the executive functions utilizing psychological network analysis. Participants completed six measures reflecting inhibition, task switching, and working memory updating, as well as a fluid intelligence measure. A processing speed index was calculated from non-executive trials of various measures. Four networks were generated, including an executive functions network, an executive functions and intelligence network, an executive functions and processing speed network, and a network with all variables included. The resulting networks contained no stable edges between the executive functioning tasks. Stable edges were identified between the intelligence node and the two nodes reflecting working memory updating. There was an additional edge identified between processing speed and one measure of task switching. Results of the study may indicate that there is relative independence among executive functions. However, the management of task impurity in a psychological network analysis also merits further investigation.
- Title
- Prediction and Control of In-Cylinder Processes in Heavy-Duty Engines Using Alternative Fuels
- Creator
- Pulpeiro Gonzalez, Jorge
- Date
- 2024
- Description
-
This Ph.D. thesis focuses on advancing diagnostic techniques and control-oriented models to enhance the efficiency and performance of internal combustion (IC) engines, particularly heavy-duty engines utilizing alternative fuels. The research contributes to the field of model-based engine control through the development and implementation of innovative methodologies, with primary emphasis on diagnostic methods, control-oriented models, and advanced control strategies for compression ignition engines using alternative fuels. The first key topic explores the determination of the Most Representative Cycle for combustion phasing estimation based on cylinder pressure measurements. The method developed extracts crucial information from experimental data obtained from four distinct engines: a heavy-duty single-cylinder gasoline compression ignition (GCI) engine, a light-duty multi-cylinder diesel engine, a CFR engine, and a single-cylinder light-duty spark ignition (SI) engine. This work lays the foundation for precise estimation of combustion phasing, a critical parameter for engine control. The second major contribution involves the development of control-oriented models for variable geometry turbochargers (VGT) and intercoolers. Two models are established: a data-driven turbocharger model and an empirical intercooler model. These models are meticulously calibrated and validated using experimental data from a multi-cylinder light-duty diesel engine, providing valuable insights into the behavior of these components under varying conditions. The outcomes facilitate predictive control of engine air systems. The third core aspect of the thesis revolves around model predictive control (MPC) of combustion phasing in heavy-duty compression-ignition engines utilizing alternative fuels. A combustion phasing and engine load model is derived from experimental data and incorporated into an MPC framework.
The MPC strategy is subsequently tested in the heavy-duty GCI test cell and compared against a conventional proportional-integral-derivative (PID) control strategy. The results showcase the effectiveness of the MPC approach in achieving precise control of combustion phasing, demonstrating its potential for optimizing engine performance. In summary, this Ph.D. thesis contributes significantly to the field of engine control by advancing diagnostic techniques and control-oriented models and by implementing a cutting-edge MPC-based control strategy for compression ignition engines using alternative fuels. The research findings not only enhance the understanding of in-cylinder processes but also pave the way for more efficient and sustainable heavy-duty engines using alternative fuels.
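Combustion phasing in such work is typically summarized by CA50, the crank angle at which half of the cumulative heat release has occurred. A minimal sketch of extracting CA50 from a cumulative heat-release curve follows (the function name and the synthetic curve are illustrative assumptions, not the thesis's actual data or algorithm):

```python
# Illustrative CA50 extraction: given crank angles and a cumulative
# apparent-heat-release curve, find (by linear interpolation) the
# crank angle at which half of the total heat release has occurred.
# The curve below is a synthetic placeholder, not real engine data.

def ca50(crank_angles, cum_heat_release):
    """Crank angle at 50% of total cumulative heat release."""
    target = 0.5 * cum_heat_release[-1]
    for i in range(1, len(crank_angles)):
        if cum_heat_release[i] >= target:
            # Linear interpolation between the bracketing samples.
            x0, x1 = crank_angles[i - 1], crank_angles[i]
            y0, y1 = cum_heat_release[i - 1], cum_heat_release[i]
            return x0 + (target - y0) * (x1 - x0) / (y1 - y0)
    return crank_angles[-1]

# Synthetic cumulative heat release (J) versus crank angle (deg aTDC).
angles = [-10, -5, 0, 5, 10, 15, 20]
q_cum  = [0.0, 20.0, 120.0, 400.0, 700.0, 780.0, 800.0]
print(ca50(angles, q_cum))  # angle where q_cum crosses half of 800 J
```

In a control loop, a value like this computed per cycle from measured cylinder pressure would serve as the feedback signal that the MPC (or PID) strategy drives toward its setpoint.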