Search results
(1,021 - 1,040 of 1,076)
Pages
- Title
- ATOMIC LAYER DEPOSITION STUDIES OF GOLD AND TUNGSTEN DISULFIDE
- Creator
- Liu, Pengfei
- Date
- 2020
- Description
-
In the last few decades, atomic layer deposition (ALD), as a vapor deposition technique and a powerful thin film fabrication method, has received increasing attention in many fields. A variety of materials can be made by ALD; however, further progress in ALD applications is still needed. Meanwhile, the interfacial chemistry at work during film fabrication by ALD is interesting and well worth studying. This dissertation mainly describes the exploration of the fabrication of two materials, gold and tungsten disulfide, and related topics. For the portion applying ALD to gold thin film deposition, a relatively comprehensive process was explored, studied, analyzed, and discussed. Starting with the synthesis of the gold precursor, Me2Au(S2CNEt2), the synthetic reaction was explored. By modifying the conditions, such as the solvent system, twice the yield previously reported in the literature was achieved. Next, in situ microbalance and infrared spectroscopic techniques illuminate the organometallic chemistry during the thermal ALD of gold with Me2Au(S2CNEt2) and ozone. In situ quartz crystal microbalance (QCM) studies explain the nucleation delay and island growth of gold on a freshly prepared aluminum oxide surface. In situ infrared spectroscopy provides insight into the surface chemistry during the process, supporting an oxidized-gold-surface mechanism. The epitaxy of the gold thin films was explored by X-ray diffraction: thermal ALD gold on various substrates shows out-of-plane orientation; however, in-plane orientation existed only in the gold film on mica. For the portion applying ALD to tungsten disulfide fabrication, the early work studied the effect of interfaces on crystallinity, exploring the sulfurization of indium thin films with different interfaces. The idea of "interfaces" was then brought into the fabrication of tungsten compounds. However, this "indirect" method, which made tungsten disulfide by sulfurizing ALD-grown tungsten compounds (e.g., tungsten oxide and tungsten nitride), could not reduce the reaction temperature of tungsten disulfide synthesis below 400 °C. Subsequently, a "direct" route, which used a tungsten precursor and H2S directly in the ALD system, was tested and explored. With tungsten precursors developed by our group, tungsten disulfide could finally be fabricated at temperatures as low as 125 °C.
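The in situ QCM studies mentioned in this abstract rest on converting resonance-frequency shifts into deposited mass per cycle. As a hedged aside (this is the standard Sauerbrey relation from QCM practice, not an equation quoted from the dissertation):

$$\Delta f = -\frac{2 f_0^{2}}{A\sqrt{\rho_q \mu_q}}\,\Delta m = -C_f\,\Delta m,$$

where $f_0$ is the resonant frequency of the bare quartz crystal, $A$ its active area, $\rho_q$ and $\mu_q$ the density and shear modulus of quartz, and $C_f$ the lumped sensitivity factor; the relation holds for thin, rigid films.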
- Title
- Resilience Enhancement of Critical Cyber-Physical Systems with Advanced Network Control
- Creator
- Liu, Xin
- Date
- 2020
- Description
-
Critical infrastructures are the systems whose failure would have a debilitating impact on national security, economics, public health or safety, or any combination of those matters. It is important to improve those systems' resilience, which is the ability to reduce the magnitude and/or duration of disruptive events. However, today's critical infrastructures, such as the electrical power system and the transportation system, are deploying advanced control applications of increasing scale and complexity, which leads to the migration of their underlying communication infrastructures from simple, proprietary networks to off-the-shelf network technologies (e.g., IP-based protocols and standards) that can handle intensive and heterogeneous traffic flows. On one hand, this migration provides an opportunity for both the academic and industry communities to develop novel ideas on top of existing schemes; on the other hand, it exposes more vulnerabilities to cyber-attacks. Moreover, since a large-scale power system may lease networks from Internet service providers (a critical infrastructure in itself), there exists an interdependency between power and communication infrastructures: power transmission control requires message delivery services, while the network devices rely on the power supply. These problems raise research challenges for improving the resilience of critical cyber-physical systems. In this thesis, we focus on resilience enhancement of critical infrastructures from the communication network's perspective. The application domains include both power and transportation systems. For power systems, we first apply advanced network control techniques (i.e., software-defined networking (SDN) and the fibbing control scheme) in the transmission grid communication network to improve the grid status restoration process under network failures and cyber-attacks. We develop a unified system model that contains both the transmission grid monitoring system (i.e., the phasor measurement unit (PMU) network) and the communication network, and formalize a mixed-integer linear programming (MILP) problem to minimize the recovery time of system observability subject to power- and communication-domain constraints (a toy version is sketched after this abstract). We evaluate the system performance regarding recovery plan generation and installation using IEEE standard systems. However, the advanced network-based control scheme could also introduce problems, since it requires a power supply for the network devices. Thus, we investigate the interdependency between the power grid and the communication network and its impact on system resilience. We conduct a survey that summarizes existing research along two dimensions: objectives (i.e., failure analysis, vulnerability analysis, failure mitigation, and failure recovery) and methodologies (i.e., analytical solutions, co-simulation, and empirical studies). We also identify the limitations of existing works and propose potential research opportunities in this demanding area. Lastly, building on the survey, we conduct research focused on fast power distribution system restoration that involves interdependency constraints. When a natural disaster happens, both power and communication components might be damaged. Furthermore, since they depend on each other's services to function correctly, the failures may propagate to hardware and software that were not affected initially.
In this work, we focus on the recovery stage, where the failed components in the system have already been fully detected and isolated. We construct a mathematical model of the co-existing power and communication systems and use optimization techniques to produce a crew dispatch plan that restores power as fast as possible by coordinating damage repair, switch operation, and communication supply processes. We evaluate the restoration efficiency on an IEEE standard system using both analytical analysis and discrete-event simulation. For the second application domain, the railway transportation system, we focus on evaluating the resilience of its communication system, which exchanges control and monitoring messages with both the on-board driver cabin and the remote control center. We use advanced discrete-event simulation techniques to achieve a high-fidelity model of the network, which makes the evaluation more concrete and realistic. For the Ethernet-based on-board train communication network (TCN), we develop a parallel simulation platform according to the IEC standard and use it to conduct a case study of a double-tagging VLAN attack on this control network. Another component of the railway communication system is the train-to-ground network, which enables communication between the driving system on the train and the control center that issues commands such as movement authority messages. We customize the NS3 network simulator to model the LTE-based protocol with a real high-speed train trace dataset from public sources. We evaluate the resilience of the cellular network specifically on the handover process, which happens when the train travels from one base station to another. Due to the high speeds involved, the handover success rate is impacted, and many protocol-based solutions have been proposed in this research area. We use the high-fidelity simulation model to evaluate some of them and compare their pros and cons.
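As promised above, here is a minimal, hedged sketch of an observability-restoration MILP in the spirit of the one described. This is not the dissertation's formulation: the components, repair times, the PMU-to-link dependency, and the use of PuLP are all invented for illustration.

```python
# Toy restoration MILP: choose when each failed component comes back online,
# minimizing a proxy for total recovery time, subject to an interdependency
# constraint (PMUs need their communication link restored first).
import pulp

components = ["pmu1", "pmu2", "link12"]    # hypothetical failed components
repair_time = {"pmu1": 3, "pmu2": 2, "link12": 1}
T = range(1, 8)                            # discrete recovery horizon (hours)

prob = pulp.LpProblem("min_recovery_time", pulp.LpMinimize)
# done[c][t] = 1 if component c is restored by time t
done = pulp.LpVariable.dicts("done", (components, T), cat="Binary")
makespan = pulp.LpVariable("makespan", lowBound=0)

for c in components:
    for t in list(T)[:-1]:                 # once restored, stays restored
        prob += done[c][t] <= done[c][t + 1]
    for t in T:                            # cannot finish before repair time
        if t < repair_time[c]:
            prob += done[c][t] == 0
    prob += done[c][max(T)] == 1           # everything restored in horizon
    # number of periods spent un-restored: a simple proxy for completion time
    prob += makespan >= pulp.lpSum((1 - done[c][t]) for t in T)

# interdependency: the PMUs need the communication link restored first
for t in T:
    prob += done["pmu1"][t] <= done["link12"][t]
    prob += done["pmu2"][t] <= done["link12"][t]

prob += makespan                           # objective: minimize recovery time
prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({c: [pulp.value(done[c][t]) for t in T] for c in components})
```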
- Title
- The role of fibrillar collagen in tissue function
- Creator
- Ma, Yin
- Date
- 2020
- Description
-
Fibrillar collagen plays an important role in maintaining soft tissue integrity and providing chemical and physical cues for cell fate decisions. Collagen remodeling, which alters the amount, distribution, and biomechanics of collagen, primarily type I (COLI) and type III (COLIII), can change tissue properties. This process is essential not only in biological development but also in pathological processes. Thus, it is meaningful to understand the correlation between collagen remodeling and tissue dysfunction and to investigate cells' responses to fibrous protein matrices. However, current biochemical analyses of collagen and biomechanical studies of tissues are carried out at different scales, making it hard to correlate the data and draw solid conclusions. In this thesis research, we used two collagen-disorder-associated pathological conditions, pelvic organ prolapse (POP) and micropapillary serous carcinoma (MPSC) of the fallopian tube, as models to unravel the correlation between tissue dysfunction and the impaired microenvironment in terms of the composition, nanostructure, and biomechanics of the collagen fibril. In the case of POP, we found that collagen fibers in tissues of POP patients were less abundant but stiffer than those of non-POP individuals, implying a loose and fragile matrix that is too weakly integrated with other components of the connective tissue to provide adequate support for the pelvic organs. Additionally, the collagen D-period, the characteristic banding feature that signals proper assembly of collagen molecules, was decreased in POP tissues. We surmised that these molecular-level changes of collagen in POP accounted for the weak matrix mechanics, as verified by a systematic in vitro study. Since cancer metastasis is often related to collagen remodeling, we also examined the collagen matrix alteration in MPSC of the fallopian tube, which is thought to cause ovarian cancer via metastasis. We observed a heterogeneous distribution of COLI and COLIII in the papillae of the tumor tissue: noticeably, COLI accumulated at the papillae tips, whereas COLIII was dominant at the papillae bases. We also observed the absence of a collagen matrix between the micropapillary tip and the fibrotic base. Such an uneven collagen distribution implies that the matrix exerted distinctive forces on the tumor cells to regulate their behaviors, including cell migration, directional growth, and shedding from the primary tumor to initiate metastasis. These conclusions are supported by the results of our in vitro experiments. To investigate the effect of the microenvironment on cell behavior, we established and validated an AFM-based method to collect and quantitatively analyze mRNA samples from targeted live cells at the single-cell level. This method overcomes issues in current methods, such as severe cell damage or even cell death and the lack of time-dependent and in situ analyses. The application of the method was demonstrated in studying heterogeneous gene expression in single cells and the interaction between cancer cells and cancer-associated fibroblasts. We also demonstrated that this method can potentially be used to quantitatively analyze changes in gene expression levels in a targeted cell in response to the microenvironment.
- Title
- FEARING FORGETTING? DEVELOPMENT OF A SCALE TO ASSESS ATTITUDES ABOUT DEMENTIA IN THE LAY POPULATION
- Creator
- Ogu, Precious N
- Date
- 2020
- Description
-
Individuals with dementia show a progressive decline in cognitive functioning that results in an inability to complete activities of daily living (American Psychiatric Association, 2013). Early diagnosis of dementia is a positive prognostic indicator (World Alzheimer Report, 2011) and is widely regarded as an important precondition for improving dementia care (Kim et al., 2015; Vernooij-Dassen et al., 2005). However, negative attitudes and stigma toward dementia could interfere with an individual's willingness to recognize or accept the idea of having the disease, through label avoidance. The goal of the present study was to contribute to understanding the perception of dementia by developing a quantitatively derived and psychometrically validated measure encompassing the positive and negative attitudes toward dementia held by people without dementia. This study also explored the potential association between negative attitudes about dementia and lack of familiarity with dementia, as familiarity with individuals with mental illness is related to stigmatizing attitudes toward mental illness. These goals were pursued through a principal components analysis (PCA) of 56 modified items from extant, well-validated mental illness attitude scales (Community Attitudes to Mental Illness, CAMI, Taylor & Dear, 1981; Social Distance Scale, SDS, Link, 1986; Depression Stigma Scale, DSS, Griffiths et al., 2004); a toy PCA pipeline is sketched below. Convergent validity was assessed by examining the relationship between the final derived measure and a construct associated with negative attitudes about mental illness (Mental Retardation Attitude Inventory-Revised, MRAI-R). Discriminant validity was assessed by examining the relationship between the final measure and a construct that should be unrelated to negative attitudes about mental illness (Belief in a Just World Scale, BJW). Finally, exploratory analyses were conducted to assess whether attitudes measured by the newly created scale are related to participants' familiarity with dementia (Level of Familiarity Scale, LoFS, Corrigan et al., 2001). Four hundred adults with no history of dementia were recruited through Amazon's MTurk and compensated with a credit to their Amazon account upon completion of the survey. The PCA supported two conceptually different (not method-variance) latent components, titled Negative Attitudes and Positive Attitudes, which together comprise the Attitudes to Dementia Inventory (ADI). Construct validity was partially supported for each component of the ADI. Degree of familiarity with dementia was not associated with negative or positive attitudes about dementia. Overall, this study is an important contribution to dementia attitudes research. Given that Negative Attitudes and Positive Attitudes were identified as distinct dimensions of dementia attitudes, the ADI can be used to further investigate how negative reactions toward dementia might delay the initiation of medical intervention and treatment, and to examine whether positive attitudes provide any protection against the probable effects of negative attitudes on stigma and help-seeking behaviors. Since early recognition and diagnosis of dementia is widely regarded as an important condition for improving dementia care (Kim et al., 2015; Vernooij-Dassen et al., 2005), the ADI can be used to inform stigma prevention, which hopefully translates into improved help-seeking behaviors.
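The toy PCA pipeline referenced above, hedged: this is an illustration of the general technique with scikit-learn, not the study's actual analysis, and the synthetic Likert responses and two-component choice are assumptions.

```python
# Principal components analysis of Likert-type survey items.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_respondents, n_items = 400, 56
# synthetic 5-point Likert responses standing in for the real survey data
responses = rng.integers(1, 6, size=(n_respondents, n_items)).astype(float)

# standardize items, then extract components
z = StandardScaler().fit_transform(responses)
pca = PCA(n_components=2)
scores = pca.fit_transform(z)

print("variance explained:", pca.explained_variance_ratio_)
# loadings: correlations of each item with each retained component
loadings = pca.components_.T * np.sqrt(pca.explained_variance_)
print("first item's loadings:", loadings[0])
```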
- Title
- Efficient and Practical Cluster Scheduling for High Performance Computing
- Creator
- Li, Boyang
- Date
- 2023
- Description
-
Cluster scheduling plays a crucial role in the high-performance computing (HPC) area: it is responsible for allocating resources and determining the order in which jobs are executed. Existing HPC job schedulers typically leverage simple heuristics to schedule jobs, but such scheduling policies struggle to keep pace with modern changes and technology trends. The study in this dissertation is motivated by two new trends in the HPC community: the rapid growth of heterogeneous system infrastructure and the emergence of artificial intelligence (AI) technologies. First, existing scheduling policies are solely CPU-centric, while systems are becoming more complex and heterogeneous and emerging workloads have diverse resource requirements, such as CPU, burst buffer, power, network bandwidth, and so on. Second, previous heuristic scheduling approaches are manually designed, and such a manual design process prevents adaptive and informed scheduling decisions. A recent trend in HPC is to integrate AI to better leverage the investment in supercomputers; this embrace of AI provides opportunities to design more intelligent scheduling methods. In this dissertation, we propose an efficient and practical cluster scheduling framework for HPC systems. Our framework leverages AI technologies, accounts for system heterogeneity, and comprises four major components. First, shared-network systems such as dragonfly-based systems are vulnerable to performance variability due to network sharing; to mitigate workload interference on these systems, we explore a dedicated scheduling policy. Next, emerging workloads in HPC have diverse resource requirements instead of being CPU-centric; to address this, we design an intelligent scheduling agent for multi-resource scheduling in HPC leveraging an advanced multi-objective reinforcement learning (MORL) algorithm. Subsequently, we address the issues with existing state encoding approaches in RL-driven scheduling, which either lack critical scheduling information or suffer from poor scalability; to this end, we present an efficient and scalable encoding model (a toy fixed-size encoding is sketched below). Lastly, the lack of interpretability of RL methods poses a significant challenge to deploying RL-driven scheduling in production systems; in response, we provide a simple, deterministic, and easily understandable model for interpreting RL-driven scheduling. The proposed models and algorithms are evaluated with real job traces from production supercomputers. Experimental results show our schemes can effectively improve job scheduling in terms of both user satisfaction and system utilization.
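The toy encoding mentioned above, hedged: this illustrates the general idea of mapping a variable-length job queue to a fixed-size RL state vector. It is not the dissertation's encoding model; the features, window size, and normalization constants are invented.

```python
import numpy as np

WINDOW = 8           # encode only the first WINDOW waiting jobs
MAX_NODES = 4096     # normalization constant for requested node counts
MAX_WAIT = 86_400.0  # normalization constant for wait/runtime (seconds)

def encode_state(waiting_jobs, free_nodes, total_nodes):
    """Map a variable-length queue to a fixed-length feature vector.

    waiting_jobs: list of (requested_nodes, wait_seconds, est_runtime_seconds)
    """
    features = []
    for nodes, wait, runtime in waiting_jobs[:WINDOW]:
        features += [nodes / MAX_NODES,
                     min(wait / MAX_WAIT, 1.0),
                     min(runtime / MAX_WAIT, 1.0)]
    # zero-pad when fewer than WINDOW jobs are waiting
    features += [0.0] * (3 * (WINDOW - min(len(waiting_jobs), WINDOW)))
    features.append(free_nodes / total_nodes)  # system availability feature
    return np.asarray(features, dtype=np.float32)

state = encode_state([(128, 600.0, 3600.0), (2048, 60.0, 7200.0)], 1024, 4096)
print(state.shape)  # (25,)
```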
- Title
- Design for Equivalence: Mutual Learning and Participant Gains in Participatory Design Processes
- Creator
- Geppert, Amanda Anne
- Date
- 2023
- Description
-
Whether and how people are able to participate (aware, eligible, able, invited, required, supported, willing, and/or forced, among other conditions) in the procedures and experiences that constitute world-making activities affecting their lives (from voting and policymaking to designing algorithms, technologies, products, programs, services, interventions, infrastructures, and systems, among other things) is a central issue of our time. It demands careful consideration and is of great consequence for whether the worlds we create are equitable, sustainable, and just, so that all people have free and equal standing and a real opportunity to belong and flourish. This study took up this issue in the context of participatory design practice and research and the making of sexual and reproductive health interventions with and for adolescents who are marginalized by race, class, ethnicity, gender, and sexuality, in Lucknow, Uttar Pradesh, India, and Chicago, Illinois, United States. The study advances knowledge in design by exploring how problem-focused, front-end participatory design processes expand or constrain the epistemic authority of less powerful actors, more specifically, systematically excluded individuals and groups. The study was conducted in two parallel phases. First, through theoretical elaboration and critical analysis, it examined the application of Mouffean agonism in recent formulations of participatory design processes addressing complex social and political issues with marginalized individuals and groups. The analysis demonstrated that a key construct, the chain of equivalence, is absent, resulting in the failure of these processes to achieve collective, counter-hegemonic, and emancipatory responses strong enough to counter power as imagined by Chantal Mouffe. Second, an explanatory embedded multiple case study was conducted on two front-end participatory design workshops to understand what less powerful actors gain by engaging in collaborative design processes and how practices and processes do or do not support their epistemic authority and matters of care. Thematic analysis suggested how the practices of collective information sharing and gathering (mutual learning and learning) affect participant gains and design process outputs. Additionally, thematic analysis informed a theoretical, conceptual, and practical move to expand beyond the original scope of the Mouffean chain of equivalence to include collaborating actors who may not be equivalently disadvantaged by current power relations but who are committed to participatory design processes that prioritize the issues and matters of care of less powerful actors. Taken together, findings from both research phases inform the development of design for equivalence, at once a theoretical stance and a methodological framework to guide the selection of approaches, theories, processes, methods, practices, and tools for participatory design processes that support the epistemic authority of participants in challenging social and structural inequalities and in creating articulations of the common good strong enough to counter dominant paradigms.
- Title
- Development of Metal Oxide-Based Phosphors for Luminescence Thermometry
- Creator
- Jahanbazi, Forough
- Date
- 2023
- Description
-
Temperature is both a thermodynamic property and a fundamental unit of measurement, one of the seven base quantities of the International System of Units (SI). It can be seen simply as the degree of hotness or coldness, a qualitative definition built on the bodily sensation of heat and cold. Today it is readily defined from the principles of classical thermodynamics as the parameter of state that has the same value for any systems in thermal equilibrium, and from statistical mechanics as a direct measure of the average kinetic energy of noninteracting particles. Temperature is an intensive quantity, meaning that its value does not depend on the amount of the substance for which it is measured. It is important because it is something we feel and because it influences the smallest aspects of our daily life, from how we adjust our housing and clothing to what we eat for supper. It affects the life cycles of plants and animals, governs rates of chemical reactions, influences tides, and so on. For these reasons, it is by far the most measured physical quantity; temperature sensors account for 80% of all sensors worldwide at present, and they are used across a broad spectrum of human activities, such as medicine, home appliances, meteorology, agriculture, and industrial and military contexts, to mention some of the most significant areas. Thus, market demand for temperature sensors is increasing due to their expanding applications. Traditional "contact" temperature measurements, which are mainly based on the expansion and contraction of an employed material, encounter difficulties in some emerging technologies and environments, such as nanotechnology and biomedicine. Today, an immediate need exists for "non-contact" thermometry of moving or contact-sensitive objects, difficult-to-access pieces, bodies in hazardous locations, objects of nanoscale dimensions, and living cells and organisms. However, the properties of existing thermometers and sensor platforms limit their use in such environments. Non-contact sensors measure object temperature without physical contact between sensor and object, and are therefore of great interest for hard-to-access objects. Among non-contact thermometry methods, besides pyrometers and radiation thermometers, optical thermometers have drawn extensive attention. Specifically, among all the optical thermometry methods, including Raman scattering, optical interferometry, and near-field scanning optical microscopy, the one that has drawn the most attention is luminescence thermometry, in which temperature detection is based on the luminescence signal and offers acceptable spatial resolution. In luminescence thermometry, temperature can be determined from different features of the luminescence. Depending on the temporal nature of these features, the measurement principles are classified as either time-integrated (steady-state) or time-resolved. Temperature measurements based on excitation and emission band positions and bandwidths, emission band intensities, and the luminescence/fluorescence intensity ratio (LIR or FIR, the ratio of the intensities of two emission bands) are classified as time-integrated methods; those based on emission decay or rise times are classified as time-resolved. Temperature readouts from the LIR and from the emission lifetime are by far the most exploited methods.
Both readouts are self-referenced, so they are not affected by fluctuations in excitation and signal detection. Moreover, the thermal sensing ability of many lanthanide-based luminescent materials is not limited to only one readout method; some can be used in dual or multiple modes by combining two or more readout methods for temperature measurement. Non-contact luminescence thermometry based on the LIR readout has attracted much attention due to its excellent accuracy and sensitivity. The intensity ratio is independent of undesirable factors, which makes this form of luminescence thermometry more appropriate. Moreover, the method is self-referencing, which removes the need for a temperature standard. In principle, it can be realized with any combination of emission lines from lanthanide and transition metal ions with different temperature dependencies, from either single or multiple luminescent centers. It has been the most reported luminescence thermometric readout method in the past few years, and researchers have done a great deal of work on developing highly efficient LIR thermometers employing a single emitting center. This ratiometric method is mainly based on the principle governing thermally coupled energy levels of the luminescent ions: the electronic distribution between closely separated excited levels of the dopant follows the Boltzmann equation. The two excited levels are thermally coupled with a maximum energy gap of about 2000 cm-1, which is small enough to allow electrons to be promoted to the higher level upon thermal excitation and at the same time large enough to yield distinct electronic populations and a high sensitivity value. In this case, both the high and low excited states share the electronic population according to the Boltzmann distribution, so the ratio of the numbers of electrons in the high and low excited levels defines the LIR for thermometry with single emitting centers (the standard form is reconstructed below). In addition to the LIR between two thermally coupled energy levels of a luminescent ion, the LIR between two levels that are not thermally coupled has been employed in some ions to achieve highly sensitive thermometry. The quantitative evaluation of the thermometric performance of a temperature probe is given by its absolute and relative thermal sensitivities, temperature resolution, and repeatability. The rate of change of the thermometric parameter (denoted Δ) with temperature T is defined as the absolute thermal sensitivity (Sa). However, absolute sensitivity is not appropriate for comparing the performance of thermometers based on different materials or physical principles; the relative thermal sensitivity (Sr) is defined to eliminate this problem, and it is one of the most important factors determining a luminescent thermometer's temperature readout accuracy. The smallest temperature change resolvable by a thermometer is the temperature resolution or temperature uncertainty (denoted δT), which is expressed in kelvin and depends on the characteristics of the measuring system, such as the experimental detection setup and the signal-to-noise ratio (also reconstructed below). Reproducibility is the variation of the same measurement performed under changed conditions, such as different methods or devices.
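The defining equations referenced above did not survive extraction. The following is a hedged reconstruction using the standard forms from the luminescence-thermometry literature, not necessarily the dissertation's exact notation:

$$\mathrm{LIR} = \frac{I_H}{I_L} = B\,\exp\!\left(-\frac{\Delta E}{k_B T}\right)$$

$$S_a = \left|\frac{\partial \Delta}{\partial T}\right|, \qquad S_r = \frac{1}{\Delta}\left|\frac{\partial \Delta}{\partial T}\right| \times 100\%, \qquad \delta T = \frac{1}{S_r}\,\frac{\delta \Delta}{\Delta},$$

where $I_H$ and $I_L$ are the integrated emission intensities from the upper and lower thermally coupled levels, $\Delta E$ their energy gap, $k_B$ the Boltzmann constant, $B$ a constant collecting degeneracies and transition rates, $\Delta$ the thermometric parameter (here the LIR), and $\delta\Delta/\Delta$ the relative uncertainty of its measurement.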
Repeatability (denoted R) is the ability of a thermometer to provide the same result under the same measurement conditions. Regarding temperature resolution, most light detection systems, including thermometry systems, suffer from low resolution because of scattering at both the excitation and emission wavelengths. The light scattering of thermometric phosphors is induced by their grain size, shape, and surface roughness; this is a problem particularly associated with conventional phosphors, which typically have micrometer grain sizes. The light scattering by nanoparticles (NPs), on the other hand, is close to zero, which leads to better resolution for luminescence thermometers using NPs. Consequently, nanothermometry has emerged as a hot research area for new high-resolution technological applications. Accordingly, in Chapter 1 we discuss a host material, the pyrochlore compound La2Zr2O7 doped with Tb3+ and Eu3+ and synthesized at the nanoscale (~15 nm), which showed great potential for high-resolution LIR temperature sensing based on dual emitting centers. In Chapter 2, another sample of this nanopowder host, La2Zr2O7 doped with Pr3+, is introduced and discussed for LIR temperature sensing based on a single emitting center. Besides high-resolution thermometry, a broad temperature sensing range was achieved with the La2Zr2O7:Pr3+ nanopowder; this broad range, obtained with only one LIR readout mode, originated from high-lying charge-transfer states with slow thermal quenching, as elaborated in Chapter 2. Multiple materials have been employed for luminescence thermometry, such as organic dyes, quantum dots, and metal–organic complexes and frameworks, among which lanthanide- or transition-metal-ion-based phosphors are the most promising. The electronic states of lanthanides are characterized by partially filled 4f orbitals, gradually filling from 4f0 for La3+ to 4f14 for Lu3+. Their luminescence arises from intraconfigurational f-f transitions, except in some ions like Eu2+ and Ce3+, which have allowed f-d transition emissions. The partially filled 4f orbitals of lanthanide ions are shielded from the surrounding environment by the 5s and 5p subshells, which leads to long lifetimes and narrowband emission. Once excited with UV light, lanthanide-doped materials mostly emit in the visible/near-infrared (NIR) range through a downshift (DS) photoluminescence (PL) mechanism, in which high-energy photons are converted into photons of lower energy. Overall, excellent repeatability, reproducibility, and photostability, together with thermally and chemically stable structures, make lanthanide-based materials the favorite choices for luminescence thermometry. Their luminescence is easy to identify and differentiate from that of other materials, and multiplexing is possible thanks to their narrow, easily identifiable emission bands. Host materials also play a crucial role in the thermal sensing properties of thermometric phosphors. Various hosts, such as fluorides, ceramic oxides, nitrides, chalcogenides, and phosphides, have been employed for luminescence thermometry. Ceramic hosts are composed of several elements and thus often require complex synthesis processes, which limits their applicability. Fluoride hosts have a level of toxicity that is harmful to living systems, so they are not environmentally friendly.
Nitride compounds are commonly prepared in oxygen- and water-free glove boxes and synthesized under harsh conditions at high pressure and temperature, which restricts their large-scale production. Chalcogenides and phosphides may not be sufficiently stable. Metal oxide phosphors, on the other hand, possess the advantages of convenient preparation, non-toxicity, excellent chemical stability (capable of withstanding sustained exposure to high temperature), and low cost. Moreover, they are preferable for biomedical luminescence thermometry applications that measure long-wavelength emissions, where tissues are optically transparent and less affected by scattering and background luminescence. Considering all these aspects, metal oxide-based phosphors are the more favorable choice for luminescence thermometry. One goal of research in the luminescence thermometry field has been to push the limit of temperature measurement capability to higher temperatures. However, the development of luminescent phosphors with high thermal stability of emission and high sensing efficiency is still a paramount challenge. The thermal stability of photoluminescence (PL) is a property related to the chemical composition, electronic structure, and crystal-structure rigidity of phosphors. The loss of light emission with rising temperature is commonly referred to as positive thermal quenching (TQ). Most phosphors exhibit positive TQ, which stems from a high non-radiative transition probability at elevated temperatures; this phenomenon severely limits the applications of luminescent phosphors and degrades the performance of their devices. Several strategies have been reported to compensate for the thermally induced emission loss of phosphors, but as will be discussed in Chapter 3, most have negative impacts on the inherent luminescence properties. From the structural perspective, TQ caused by nonradiative relaxation is closely related to crystal-structure stability: a rigid structural framework with high lattice symmetry has reduced nonradiative transitions at elevated temperatures. As one class of rigid hosts, materials possessing the negative thermal expansion (NTE) property have been explored as suitable hosts for anti-TQ phosphors doped with lanthanides. NTE refers to the unusual property of some rare materials whose volume abnormally contracts with increasing temperature. Among the various reported NTE families, compounds with the general formula A2M3O12, where A is a trivalent rare earth ion and M stands for W6+ or Mo6+, are well known, span a broad range of compositions, and have been explored for anti-TQ in recent years. Some earlier works employed A2M3O12 hosts to obtain thermally enhanced upconversion (UC) emission. However, UC emission is of limited practical use, as it is weaker and mostly confined to wavelengths beyond the most applicable visible range; thus, NTE phosphors with thermally enhanced, stronger downshift (DS) emission in the visible range are still needed for practical applications. To explore the applicability of the NTE idea for DS-emitting phosphors, we report the anti-TQ performance of singly and co-doped samples of Sc2Mo3O12:Eu3+ and Sc2Mo3O12:Tb3+,Eu3+ in Chapters 3 and 4, respectively.
Specifically, we took advantage of the interionic energy transfer in our NTE host to achieve superior anti-TQ performance for DS luminescence, which can be employed for efficient thermometry in high temperature ranges. The structural shrinkage with rising temperature shortens the distance between the host and the activator dopant ions, which enhances host-to-activator energy transfer (ET) and consequently the final emission intensity, as will be elaborated in the last two chapters. Since this is a highly promising strategy, there is an urgent need for more evidence on how the NTE property is associated with the anti-TQ of luminescence, which we sought to provide in our work. We explored these compounds' potential for high-temperature luminescence thermometry, testing both LIR- and lifetime-based temperature sensing and revealing their great potential for efficient sensing in high temperature ranges. This study opens a new design strategy and perspective for obtaining phosphors with thermally boosted luminescence based on NTE host materials, to meet the serious demand for broad applications at elevated temperatures and in harsh conditions.
- Title
- Application of Blockchain and Artificial Intelligence Methods in Power System Operation and Control
- Creator
- Farhoumandi, Matin
- Date
- 2023
- Description
-
The proliferation of distributed energy resources (DERs) and the large-scale electrification of transportation infrastructure are driving forces behind the ongoing transformation of traditionally passive consumers into prosumers (both consumers and producers) in a coordinated system of power distribution network (PDN) and urban transportation network (UTN). In this new paradigm, peer-to-peer (P2P) energy trading is a promising energy management strategy for dynamically balancing supply and demand in electricity markets. In this thesis, we propose applications of blockchain and artificial intelligence technologies to power system operation and control. First, blockchain (BC) is applied to electric vehicle charging station (EVCS) operations to optimally transact energy in a hierarchical P2P framework. In the proposed framework, a decentralized privacy-preserving clearing mechanism is implemented in the transactive energy market (TEM), in which BC smart contracts are applied in a coordinated PDN and UTN operation. The effectiveness of the proposed TEM and its solution approach is validated via numerical simulations performed on a modified IEEE 123-bus PDN and a modified Sioux Falls UTN. Second, machine learning and deep learning methods are applied to short-term forecasting of non-conforming net load (STFNL). STFNL plays a vital role in enhancing the secure and efficient operation and control of power systems. However, power system consumption is affected by a variety of external factors and thus exhibits high variability; these variations make STFNL a challenging task as more DERs are integrated into the power grid. This thesis applies two commonly used machine learning and deep learning methods, i.e., ensemble bagged and long short-term memory, to STFNL (a minimal LSTM illustration follows this abstract). The advantages, features, and applications of these methods are combined in a proposed fusion forecasting model that improves STFNL accuracy. Additionally, data engineering and preprocessing are used to further increase the accuracy of the proposed fusion model. A comparative study based on practical load data demonstrates that the proposed fusion methodology can reach relatively higher forecasting accuracy with lower error indices. Index Terms—Blockchain, deep learning and machine learning, electric vehicle charging stations, non-conforming net load forecasting, peer-to-peer transactive energy, power distribution and transportation networks, distributed energy resources, behind-the-meter supply resources.
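The minimal LSTM illustration promised above, hedged: this sketches the general technique with Keras, not the thesis's fusion model. The synthetic load series, window length, and layer sizes are assumptions.

```python
# One-step-ahead net-load forecasting with a small LSTM.
import numpy as np
from tensorflow import keras

rng = np.random.default_rng(1)
hours = np.arange(24 * 365)
# synthetic net load: daily cycle plus noise, standing in for real load data
load = (1.0 + 0.3 * np.sin(2 * np.pi * hours / 24)
        + 0.05 * rng.standard_normal(hours.size))

WINDOW = 24  # use the past 24 hours to predict the next hour
X = np.stack([load[i:i + WINDOW] for i in range(load.size - WINDOW)])[..., None]
y = load[WINDOW:]

model = keras.Sequential([
    keras.layers.Input(shape=(WINDOW, 1)),
    keras.layers.LSTM(32),
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=2, batch_size=64, verbose=0)

print("next-hour forecast:", float(model.predict(X[-1:], verbose=0)[0, 0]))
```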
- Title
- Dynamic Risk and Dynamic Performance Measures Generated by Distortion Functions and Diversification Benefits Optimization
- Creator
- Liu, Hao
- Date
- 2023
- Description
-
This thesis consists of two major parts and contributes to the fields of risk management and optimization. One contribution to risk management is made by developing dynamic risk measures and dynamic acceptability indices that can be characterized by distortion functions. In particular, we prove a representation theorem showing that the class of dynamic coherent risk measures generated by distortion functions coincides with a specific type of dynamic risk measure, the dynamic WV@R (the standard static definitions are sketched below). We also investigate thoroughly various types of time consistency for dynamic risk measures and dynamic acceptability indices in terms of distortion functions. Another contribution to risk management is proving the strong consistency and asymptotic normality of two estimators of dynamic WV@R. In contrast to the existing literature, our results do not rely on distributional assumptions on the random variables; instead, we investigate the asymptotic normality of the estimators in terms of the generating distortion functions. Last but not least, we give a counterexample showing that a sufficient condition for asymptotic normality is not necessary. The contribution to optimization is twofold. On the one hand, we formulate the (scalar) diversification optimization problem as a vector optimization problem (VOP) and show that a set-valued Bellman principle is satisfied by this VOP. On the other hand, we derive an explicit policy gradient formula and implement a deep neural network to solve the diversification optimization problem numerically. This deep learning technique overcomes the computational difficulty caused by the non-convexity of the VOP.
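For orientation, here are the standard static definitions from the distortion-risk-measure literature, as a hedged sketch (sign and direction conventions vary, and the thesis's dynamic versions are necessarily richer). For a distortion function $g:[0,1]\to[0,1]$, nondecreasing with $g(0)=0$ and $g(1)=1$, the distortion risk measure of a position $X$ is

$$\rho_g(X) = \int_0^{\infty} g\big(\mathbb{P}(X > x)\big)\,dx + \int_{-\infty}^{0} \Big[g\big(\mathbb{P}(X > x)\big) - 1\Big]\,dx,$$

and the weighted value at risk with weighting measure $\mu$ on $(0,1]$ is

$$\mathrm{WV@R}_\mu(X) = \int_{(0,1]} \mathrm{V@R}_\lambda(X)\,\mu(d\lambda).$$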
- Title
- Toward a Network Model of Executive Functioning
- Creator
- Fuller, Jordan S.
- Date
- 2023
- Description
-
The executive functions are the higher-order mental processes responsible for organized, strategic behavior. These functions have been a source of significant controversy since their initial introduction. This study sought to create a model of the executive functions using psychological network analysis. Participants completed six measures reflecting inhibition, task switching, and working memory updating, as well as a fluid intelligence measure; a processing speed index was calculated from non-executive trials of various measures. Four networks were generated: an executive functions network, an executive functions and intelligence network, an executive functions and processing speed network, and a network with all variables included. The resulting networks contained no stable edges between the executive functioning tasks. Stable edges were identified between the intelligence node and the two nodes reflecting working memory updating, and an additional edge was identified between processing speed and one measure of task switching. The results may indicate relative independence among the executive functions; however, the management of task impurity in psychological network analysis also merits further investigation.
- Title
- Prediction and Control of In-Cylinder Processes in Heavy-Duty Engines Using Alternative Fuels
- Creator
- Pulpeiro Gonzalez, Jorge
- Date
- 2024
- Description
-
This Ph.D. thesis focuses on advancing diagnostic techniques and control-oriented models to enhance the efficiency and performance of internal combustion (IC) engines, particularly heavy-duty engines utilizing alternative fuels. The research contributes to the field of model-based engine control through the development and implementation of innovative methodologies, with primary emphasis on diagnostic methods, control-oriented models, and advanced control strategies for compression ignition engines using alternative fuels. The first key topic explores the determination of the Most Representative Cycle for Combustion Phasing Estimation based on cylinder pressure measurements. The method developed extracts crucial information from experimental data obtained from four distinct engines: a heavy-duty single-cylinder GCI engine, a light-duty multi-cylinder diesel engine, a CFR engine, and a single-cylinder light-duty spark ignition (SI) engine. This work lays the foundation for precise estimation of combustion phasing, a critical parameter for engine control. The second major contribution involves the development of control-oriented models for variable geometry turbochargers (VGTs) and intercoolers. Two models are established: a data-driven turbocharger model and an empirical intercooler model. These models are meticulously calibrated and validated using experimental data from a multi-cylinder light-duty diesel engine, providing valuable insights into the behavior of these components under varying conditions; the outcomes help facilitate predictive control of engine air systems. The third core aspect of the thesis revolves around model predictive control (MPC) of combustion phasing in heavy-duty compression-ignition engines utilizing alternative fuels. A combustion phasing and engine load model is derived from experimental data and incorporated into an MPC framework (a toy receding-horizon sketch follows this abstract). The MPC strategy is subsequently tested in the heavy-duty GCI test cell and compared against a conventional proportional-integral-derivative (PID) control strategy. The results showcase the effectiveness of the MPC approach in achieving precise control of combustion phasing, demonstrating its potential for optimizing engine performance. In summary, this Ph.D. thesis contributes to the field of engine control by advancing diagnostic techniques and control-oriented models and by implementing a cutting-edge MPC-based control strategy for compression ignition engines using alternative fuels. The research findings not only enhance the understanding of in-cylinder processes but also pave the way for more efficient and sustainable heavy-duty engines using alternative fuels.
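The toy receding-horizon sketch promised above, hedged: this illustrates the general MPC idea with cvxpy, not the thesis's controller. The first-order model, gains, horizon, and actuator bound are invented for illustration.

```python
# Minimal linear MPC tracking a combustion-phasing setpoint.
import cvxpy as cp

A, B = 0.8, 0.5      # assumed discrete-time model: x[k+1] = A*x[k] + B*u[k]
N = 10               # prediction horizon (engine cycles)
x0 = 2.0             # current phasing error relative to the setpoint

x = cp.Variable(N + 1)
u = cp.Variable(N)   # control move, e.g. an injection-timing adjustment

# penalize phasing error over the horizon plus control effort
cost = cp.sum_squares(x[1:]) + 0.1 * cp.sum_squares(u)
constraints = [x[0] == x0]
for k in range(N):
    constraints += [x[k + 1] == A * x[k] + B * u[k],
                    cp.abs(u[k]) <= 1.0]  # actuator limit

cp.Problem(cp.Minimize(cost), constraints).solve()
# receding horizon: apply only the first move, then re-solve next cycle
print("first control move:", float(u.value[0]))
```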
- Title
- Capital Design: The Role of Design in Institutional Capital Allocation
- Creator
- Ostapchuk, Jordan
- Date
- 2024
- Description
-
There is a paradox within the $100 trillion institutional investment industry: the more choices an institutional investor has, the more challenging it becomes to make investment decisions. This paradox is significant because capital is one of the most transformational elements of the 21st century, driven by financialization, universal ownership, and increasing systemic risks. The direction of capital flows significantly influences how we address climate change, aging populations, and the transition to sustainable energy, in addition to the essential physical and social infrastructure underpinned by institutional capital. This research proposes and substantiates a novel hypothesis: design can significantly influence capital allocation in institutional investment contexts. Through an institutional case study, expert interviews, workshops with master's-level design students, and systems-informed reflective practice, this research identifies asset classes as an important and changeable lens through which institutions engage with the future. It explores how these asset classes shape choices in the capital allocation process and identifies eight design capabilities particularly suited to institutional investment contexts. In doing so, it introduces a framework termed Capital Design, which illustrates how design can influence institutional capital allocation by integrating these design capabilities with investment tools through informational lenses within a choice/knowledge map. As a result, Capital Design offers an innovative approach for investors and investees to reorient toward emergent asset categories that directly meet the most urgent societal needs.
- Title
- Effect of Stress Triaxiality and Lode Angle on Ductile Fracture
- Creator
- Nia, Mahan
- Date
- 2023
- Description
-
Although many ductile damage accumulation studies have been conducted in recent years, research toward the development of ductile fracture models remains insufficient, mainly due to the difficulty of performing experiments under different states of multiaxial stress. The goals of this Ph.D. research are to (i) produce much-needed experimental data, (ii) investigate the performance of existing models against these data, and (iii) develop a new predictive ductile fracture model validated by experiments. The new model seeks to predict the fracture strain as a function of the stress triaxiality and the normalized Lode angle. One of the prominent works in this area was done by Bai and Wierzbicki in 2008 by testing 2024-T351 aluminum alloy; they proposed an asymmetric 3D empirical fracture model with six parameters. The Bai method was therefore investigated alongside a new model for predicting ductile fracture. For that purpose, 2139-T8 aluminum alloy was chosen for our experimental program to better evaluate these models, and data extracted from Bai's work served as an additional data set. An extensive experimental program was designed to create different stress states in the material, including tensile tests (with round smooth, four round notched, and plate specimens), torsion, compression (with four smooth and two notched specimens), and shear-compression experiments (in two different sizes). The specimens were machined longitudinally from a block of 2139-T8 aluminum alloy. The combined effects of two variables, stress triaxiality and normalized Lode angle, define a 3D fracture envelope for the fracture strain. A parallel FE simulation (fine-tuned by the experimental results) was performed for each experiment to evaluate the evolution of stress triaxiality and Lode angle in the gauge section of specimens with complicated geometries. Finally, these results were used to develop two predictive fracture models. The first is based on the Bai-Wierzbicki form of fracture. The second is a new model presented in this research: a modification of the Johnson-Cook fracture model that considers the simultaneous effects of Lode angle and stress triaxiality on fracture, whereas the original Johnson-Cook fracture model (1984) does not consider the Lode angle effect (the original form is quoted below for reference). In the end, the errors of the proposed approach to modeling ductile fracture were compared with errors from Bai's work, leading to conclusions and recommendations for future studies.
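For reference, the original Johnson-Cook fracture model that the new model modifies expresses the fracture strain in terms of the stress triaxiality $\eta$, the dimensionless strain rate $\dot{\varepsilon}^*$, and the homologous temperature $T^*$ (the standard published form, quoted for orientation; the dissertation's modified model, which adds normalized-Lode-angle dependence, is not reproduced here):

$$\varepsilon_f = \big[D_1 + D_2\,\exp(D_3\,\eta)\big]\,\big[1 + D_4 \ln \dot{\varepsilon}^*\big]\,\big[1 + D_5\,T^*\big],$$

with material constants $D_1,\dots,D_5$. Note the absence of any $\bar{\theta}$ (Lode angle) dependence, which is precisely the gap the proposed model addresses.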
- Title
- Ground Monitors to Support Navigation Operations of ARAIM and GBAS
- Creator
- Patel, Jaymin Harshadkumar
- Date
- 2023
- Description
-
Receiver Autonomous Integrity Monitoring (RAIM) currently provides safe horizontal navigation guidance to en route civil aircraft using the GPS L1 frequency. As an evolution of RAIM, Advanced RAIM (ARAIM) is being developed to provide vertical guidance in addition to horizontal, using multiple constellations and dual frequencies, thus facilitating precision approach without ground support for civil aircraft. However, navigation guidance during zero-visibility (Category III) precision landing requires additional real-time support from a Ground Based Augmentation System (GBAS). To improve the aircraft navigation solution, GBAS broadcasts differential corrections and monitors for failures on transmitted satellite signals. This dissertation contributes to both ARAIM and GBAS to improve existing navigation operations and enable precision approach and landing. The achievable performance of ARAIM is highly dependent on the assumptions on a constellation's nominal Signal-In-Space (SIS) error models and a priori fault probability. In the ARAIM framework, an Integrity Support Message (ISM) is envisioned to carry the required SIS error-model parameters and fault statistics for users. The ISM is generated and validated through offline monitoring and disseminated along with the navigation message. The first contribution of this dissertation is to provide the necessary satellite positions and clock biases as a truth product for evaluating nominal SIS range errors (SISREs); a common form of the SISRE statistic is sketched below. An estimator is developed to generate accurate ephemeris parameters for these truth products. The estimator's performance is demonstrated for the Global Positioning System (GPS) constellation by utilizing the International GNSS Service (IGS) ground network to collect dual-frequency raw GPS code and carrier phase measurements. The resulting SISREs from the estimator are predicted to have a standard deviation of 0.5 m. When the estimated ephemeris parameters and clock biases are compared with precise IGS orbit and clock products, the resulting SISREs are within ±2σ at all times. In the second contribution, a new approach is proposed to generate the ISM by modeling the ephemeris parameter errors directly. In a preliminary analysis, an ephemeris parameter error model is developed for the broadcast GPS legacy navigation message (LNAV) under nominal conditions; the proposed approach is then demonstrated to provide the nominal bias and standard deviation of GPS SISREs. As part of fault monitoring in the GBAS, a ground monitor is developed to detect ephemeris failures, incorrect broadcast satellite positions, and hazardous ionosphere storms using either single or dual frequencies. The monitor also addresses the challenge of fault-free differential correction when satellites are rising, newly acquired, or re-acquired. The monitor utilizes differential code and carrier phase measurements across multiple reference receiver antennas as the basis for detection. Finally, the analytical performance of the monitor is demonstrated to meet Category III precision approach and landing requirements.
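The common form of the SISRE statistic referenced above, hedged (a sketch of the usual global-average definition from the GNSS literature; the dissertation's exact definition may differ): it combines the orbit errors in the radial ($\Delta R$), along-track ($\Delta A$), and cross-track ($\Delta C$) directions with the clock error ($\Delta\mathrm{clk}$),

$$\mathrm{SISRE} = \sqrt{\big(w_R\,\Delta R - \Delta \mathrm{clk}\big)^2 + w_{A,C}^{2}\big(\Delta A^{2} + \Delta C^{2}\big)},$$

where $w_R$ and $w_{A,C}$ are constellation-dependent projection weights (approximately 0.98 and 0.14 for GPS).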
- Title
- Characterization of Novel Concrete Formulations: High-Volume Fly Ash for Precast Industry Use and Non-Proprietary UHPC
- Creator
- Ordillas, Kurt Andrew
- Date
- 2024
- Description
-
The use of high-volume fly ash concretes can be challenging for high-early-strength applications, such as precast construction, largely due to potential delays in strength gain resulting from the relatively lower heats of hydration of the underlying binder formulations. Considering that using higher levels of available fresh or landfilled fly ash as a replacement for ordinary Portland cement (OPC) could yield more sustainable mix designs, a framework was developed to create novel, high-volume fly ash mixes with optimized dosages of commercial-grade gypsum and accelerating admixtures to enhance early-age strength performance. Early-age mechanical properties such as compressive strength, modulus of rupture, and modulus of elasticity were evaluated starting within 24 hours of specimen preparation. Experimental test results were then characterized and analyzed relative to current design provisions to highlight the best-performing trial mixes (with respect to the early-age strength target) and cases where current design provisions are either unconservative or overly conservative with respect to the test data (representative code-type expressions are sketched below). Additionally, the thermal properties of concrete produced with fly ash were tested under two different curing environments, using code equations to determine whether high-volume fly ash provides higher thermal resistance than OPC concrete. The work concludes with cementitious replacement in non-proprietary ultra-high-performance concrete (UHPC) for transportation structures: mixtures were first reproduced to ensure target compressive strength values could be reached, and batch sizes were then scaled up using a large mixer to create full-size specimens.
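As a point of reference for the design-provision comparisons mentioned above, ACI 318-style expressions for normal-weight concrete relate the modulus of rupture and modulus of elasticity to the compressive strength (a hedged illustration in psi units; which specific provisions the thesis evaluated is not stated here):

$$f_r = 7.5\,\lambda\sqrt{f'_c}, \qquad E_c = 57{,}000\sqrt{f'_c},$$

where $f'_c$ is the specified compressive strength in psi and $\lambda$ is the lightweight-concrete modification factor ($\lambda = 1.0$ for normal-weight concrete).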
- Title
- Developing Advanced Materials for Carbon Dioxide Electroreduction to Value-Added Chemicals and Fuels
- Creator
- Esmaeilirad, Mohammadreza
- Date
- 2023
- Description
-
Developing highly efficient electrocatalysts for the carbon dioxide reduction reaction (CO2RR) to value-added fuels and chemicals offers a feasible pathway for renewable energy storage and could help mitigate the ever-increasing carbon dioxide (CO2) emissions from human activities. Different catalysts are known to catalyze the CO2RR in aqueous solutions, but most are only capable of transferring two electrons, along with the needed protons, to CO2, producing either carbon monoxide (CO) or formic acid (HCOOH). Copper (Cu) is the only electrocatalytic material that converts CO2 into a range of hydrocarbon products, and owing to its natural abundance and low cost, it has been intensively studied for the CO2RR for decades. However, the required high input energy (overpotential), low product selectivity towards valuable fuel products, and lack of long-term stability remain major challenges for Cu-based catalysts. This work aims to develop new materials that produce hydrocarbons at lower overpotentials with higher rates and greater selectivity than current copper catalysts. By implementing a process referred to as the electrocatalyst discovery cycle, iterations between predictions, catalyst testing, and active-site characterization allow for the rational design and discovery of new and improved electrocatalysts for the CO2RR. This methodology led to the discovery of several heteroatomic catalysts that electroreduce CO2 to high-energy-density hydrocarbon products at low overpotentials.
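As a brief illustration of how product selectivity is typically quantified in CO2RR studies (a generic sketch, not the dissertation's procedure), Faradaic efficiency relates the charge that ended up in one product to the total charge passed:

```python
FARADAY = 96485.0  # C per mol of electrons

def faradaic_efficiency(n_product_mol, electrons_per_product, total_charge_c):
    """Fraction of the passed charge that went into a given product.
    CO and HCOOH are 2-electron products; CH4 takes 8 electrons,
    C2H4 takes 12."""
    return electrons_per_product * n_product_mol * FARADAY / total_charge_c

# Example: 10 umol of CO detected after passing 4 C of charge
print(f"{faradaic_efficiency(10e-6, 2, 4.0):.1%}")  # ~48.2%
```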
- Title
- Transactive Energy Market for Electric Vehicle Charging Stations in Constrained Power and Transportation Networks
- Creator
- Affolabi, Larissa Arielle Sèfiath
- Date
- 2023
- Description
-
In response to the urgent need for decarbonization, our society is actively working towards reducing carbon emissions across various sectors. These efforts have resulted in the widespread adoption of distributed energy resources (DERs) in the electricity sector and of electric vehicles (EVs) in the transportation sector. The growing popularity of EVs has driven rapid growth of charging infrastructure to meet the increasing demand. Recently, combined efforts across these two sectors have gained popularity with the deployment of EV charging stations (EVCSs) with on-site DERs, such as solar photovoltaics and/or battery energy storage systems, not only to defer or avoid power distribution equipment upgrades but also to advance decarbonization goals. To increase transportation electrification, the charging infrastructure must be expanded further. The key challenge lies in accelerating charging station deployment while ensuring the safe and efficient operation of the power distribution system, where most of this new load will be concentrated. Numerous research efforts have been dedicated to the study of EVCSs, focusing either on optimizing the pricing of charging services or on addressing the energy management challenges from the perspective of system operators. While these aspects are crucial, it is essential to attract private-sector stakeholders to invest in and support the expansion of the EVCS network; relying solely on subsidies is insufficient to finance the scale of EVCS deployment required to accelerate the widespread adoption of EVs. The increasing adoption of EVCSs integrated with on-site DERs highlights the potential for Transactive Energy Market (TEM) operations among EVCSs. However, unlike regular prosumers, EVCS operations are uniquely influenced by both the power distribution and transportation networks. In light of this issue, this dissertation proposes several multi-agent frameworks that leverage on-site DERs at EVCSs to establish a secondary revenue stream through a TEM, and it investigates the technical and economic aspects of these frameworks. At its core, we propose two holistic frameworks to solve the energy management problem of EVCSs within a TEM environment. Modeled as an independent profit-driven entity, each EVCS optimally schedules its operation based on the day-ahead traffic assignment problem solved by the traffic operator agent. For the TEM clearing process, we propose two distinct approaches. The first is a centralized approach in which a single entity assumes both the market operator and grid operator functions; this integrated approach streamlines decision-making and ensures coordinated operation of the market and the power grid. The second is a decentralized approach in which separate entities take on the roles of market operator and grid operator, allowing for more flexibility and distributed decision-making within the TEM. Furthermore, in contrast to many TEM-related studies that overlook the complexity of the power distribution system, we introduce a comprehensive three-phase unbalanced optimal power flow model that incorporates features such as network reconfiguration and tap changers, allowing for a more accurate representation of the power distribution system's operation.
Various case studies are used to demonstrate the effectiveness of the proposed approaches to the EVCSs' day-ahead energy management problem.
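To make the day-ahead scheduling idea concrete, below is a toy linear program, not the dissertation's multi-agent MILP, in which an EVCS with on-site storage buys energy when day-ahead prices are low and sells when they are high; all parameter values and units are illustrative:

```python
import numpy as np
from scipy.optimize import linprog

def day_ahead_storage_lp(prices, e_max=200.0, p_max=50.0, eff=0.9, soc0=0.0):
    """Toy day-ahead LP for on-site storage at an EVCS: choose hourly
    charge c[t] and discharge d[t] to maximize revenue sum(price*(d - c)),
    subject to power limits and state-of-charge (SoC) limits."""
    T = len(prices)
    p = np.asarray(prices, dtype=float)
    # Decision vector x = [c_0..c_{T-1}, d_0..d_{T-1}]; linprog minimizes,
    # so charging costs +price and discharging earns -price.
    cost = np.concatenate([p, -p])
    # SoC after hour t: soc0 + eff*sum(c_0..c_t) - sum(d_0..d_t) in [0, e_max].
    L = np.tril(np.ones((T, T)))
    A_ub = np.block([[eff * L, -L],    # SoC <= e_max
                     [-eff * L, L]])   # SoC >= 0
    b_ub = np.concatenate([np.full(T, e_max - soc0), np.full(T, soc0)])
    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0, p_max)] * (2 * T))
    c, d = res.x[:T], res.x[T:]
    return c, d, float(-res.fun)  # hourly charge, discharge, total revenue

# Example: cheap overnight prices, an expensive evening peak
prices = [20] * 6 + [40] * 10 + [90] * 4 + [35] * 4
c, d, revenue = day_ahead_storage_lp(prices)
print(round(revenue, 1))
```

The dissertation's frameworks add traffic-assignment coupling, TEM clearing, and three-phase network constraints on top of this kind of per-station profit maximization.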
- Title
- High-Entropy Stabilization as a Designing Tool for Li-Ion Electrodes
- Creator
- Bandeira Jovino Marques, Otavio Jose
- Date
- 2023
- Description
-
High-entropy oxides (HEOs) form a new class of materials in which configurational entropy plays the stabilizing role in multicomponent systems at high temperatures. They have recently attracted much attention for energy storage applications, especially Li-ion batteries, where the combination of several different elements in a single solid solution can act synergistically to overcome some of the main drawbacks and improve battery performance. Entropy stabilization opens new boundaries in electrode design by increasing the compositional space available for different structures and compounds. Not long ago, the high-entropy oxide (Mg0.2Co0.2Ni0.2Cu0.2Zn0.2)O demonstrated great potential as an anode material in Li-ion batteries. Its high capacity and long cycling stability raised many questions about the role of the transition metals in the conversion reaction and about the contribution of configurational entropy to the electrochemical reaction and the electrode's stability. To investigate the structural evolution, the role of the multicomponent oxides and structures in battery performance, and the entropic contribution to electrode stability, this research proposes a systematic and robust methodology built around the (Mg0.2Co0.2Ni0.2Cu0.2Zn0.2)O high-entropy oxide (HEO). The project relies heavily on the ability of extended X-ray absorption fine structure (EXAFS) to determine the short-range structure, and on its chemical sensitivity, to isolate the elemental contributions of the compound at different cycling and charging states. First, the role of the different metallic cations in the electrochemical reaction mechanism of the HEO was analyzed through the change in local structure during different charging steps of a Li-ion battery (Chapter 3). Second, the entropy contribution and tunability effects on electrochemical performance were tested in a series of medium- and high-entropy oxides derived from the seminal HEO: Mg, Co, Ni, Cu, and Zn were removed from the HEO's composition one at a time, and each resulting oxide was tested as a Li-ion electrode; Fe was also added to the composition (HEO+Fe) to probe the tunability effects and entropy contribution (Chapter 4). Operando X-ray absorption spectroscopy (XAS) was used to capture the short-lived phases and the transient nature of the conversion reaction, and to explain the origins of the extra storage capacity encountered in entropy-stabilized systems (Chapter 5). Finally, the role of the high-entropy oxide's initial structure was investigated and compared across compositions to assess the versatility of the elements that can be used in a high-entropy system (Chapter 6).
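As a short aside on the "high-entropy" label (a generic sketch, not taken from the dissertation), the ideal configurational entropy of mixing on the cation sublattice is S = -R Σ x_i ln x_i, which for five equimolar cations exceeds the ~1.5R threshold commonly used to define high-entropy systems:

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def config_entropy(fractions):
    """Ideal configurational entropy of mixing, S = -R * sum(x * ln x),
    over the cation site fractions x (which must sum to 1)."""
    return -R * sum(x * math.log(x) for x in fractions if x > 0)

# Five equimolar cations, as in (Mg,Co,Ni,Cu,Zn)O: S = R*ln(5) ~ 1.61R,
# above the ~1.5R "high-entropy" threshold; removing one cation (four at
# x = 0.25) gives R*ln(4) ~ 1.39R, i.e. a "medium-entropy" composition.
print(config_entropy([0.2] * 5) / R)   # ~1.609
print(config_entropy([0.25] * 4) / R)  # ~1.386
```

This is the quantity that drops when one cation is removed from the seminal HEO and rises when Fe is added, which is what the Chapter 4 series probes.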
- Title
- Using High-Pressure Reverse Osmosis Technique to Desalinate Produced Water
- Creator
- Dallalzadeh Atoufi, Hossein
- Date
- 2023
- Description
-
This dissertation presents a comprehensive investigation into the use of high-pressure reverse osmosis (HPRO) to desalinate produced water (PW) in the oil and gas industry, with the aim of developing sustainable water management strategies. The study analyzes fouling mechanisms in HPRO desalination, demonstrating the applicability of Hermia's fouling models to high-salinity waters and highlighting the negligible impact of complete pore blocking and standard pore blocking in crossflow reverse osmosis (RO) desalination. The research also investigates ion transport through commercial polyamide thin-film composite membranes using the solution-friction model, elucidating the influence of factors such as pressure, temperature, and crossflow velocity on the initial flux, while minimal impact on the steady-state flux is observed. An assessment of oil and gas waste discharge into water systems provides insights into potential environmental consequences, and an analysis of the behavior of per- and polyfluoroalkyl substances (PFAS) in contaminated sediments using passive sampling demonstrates the rapid uptake of shorter-chain PFAS compounds, owing to their lower sorption potential and faster diffusion rates. The dissertation contributes to the development of sustainable water management strategies that address the challenges of produced water treatment and environmental contamination in the oil and gas industry. It offers valuable information on fouling mechanisms, ion transport, waste discharge, and PFAS behavior, enabling optimized desalination processes, informed waste management practices, and a better understanding of environmental contamination issues.
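For context on the fouling analysis (a generic sketch; the constant-pressure forms below are the textbook Hermia expressions, not necessarily the exact forms fitted in the dissertation), each of Hermia's blocking laws implies a characteristic flux-decline curve:

```python
import numpy as np

def hermia_flux(t, j0, k, mode):
    """Permeate-flux decline J(t) under Hermia's blocking laws
    (d^2t/dV^2 = k * (dt/dV)^n) in their constant-pressure forms:
    n = 2 complete blocking, n = 1.5 standard blocking,
    n = 1 intermediate blocking, n = 0 cake filtration."""
    t = np.asarray(t, dtype=float)
    if mode == "complete":       # n = 2
        return j0 * np.exp(-k * t)
    if mode == "standard":       # n = 1.5
        return j0 / (1.0 + k * t) ** 2
    if mode == "intermediate":   # n = 1
        return j0 / (1.0 + k * t)
    if mode == "cake":           # n = 0
        return j0 / np.sqrt(1.0 + k * t)
    raise ValueError(mode)

# Example: compare a 2-hour flux decline across the four mechanisms
t = np.linspace(0, 120, 5)  # minutes
for mode in ("complete", "standard", "intermediate", "cake"):
    print(mode, np.round(hermia_flux(t, 50.0, 0.01, mode), 1))
```

Fitting measured flux data to these curves is the usual way to decide which mechanisms dominate; the dissertation's finding is that the complete and standard pore-blocking forms contribute negligibly in crossflow RO of high-salinity water.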
- Title
- Machine Learning (ML) for Extreme Weather Power Outage Forecasting in Power Distribution Networks
- Creator
- Bahrami, Anahita
- Date
- 2023
- Description
-
The Midwest region experiences a diverse range of severe weather conditions throughout the year. During the warmer months, thunderstorms, heavy rain, lightning, tornadoes, and high winds pose a threat, while the colder season brings ice storms, snowstorms, high winds, and sleet storms, all of which can cause significant damage to the environment, properties, transportation systems, and power grids. The average climate in the Midwest is influenced by factors such as latitude, solar input, the typical positions and movements of weather systems, topography, the Great Lakes, and human activities. The combination of these conditions during different seasons contributes to the development of various types of storms. It is therefore crucial to predict the impacts of such atmospheric events on distribution and transmission lines, enabling utilities to assess and implement preventive measures and strategies that minimize the economic losses associated with these disasters. Additionally, the accurate classification of storm modes through an automated system allows operators to study trends in relation to climate change and implement the strategies necessary to ensure grid reliability and resilience.

In recent years, a significant number of power outages have occurred due to extreme ice formation on transmission and distribution networks, posing a threat to the power grid's resilience and reliability. To prepare power providers for snowstorms, extensive research has been conducted on snow accretion on power lines. Over the past two decades, many scientists have turned to machine learning (ML) algorithms for predicting ice accretion on overhead conductors, as ML models demonstrate superior accuracy compared to statistical forecasting models on challenging, fine-grained problems. However, most existing models focus primarily on predicting ice formation on power lines and fail to forecast the resulting damage to the distribution network. This project therefore proposes a model for predicting power outages caused by snow and ice storms in the distribution network, with the goal of aiding disaster-response planning and ensuring the resilience and reliability of the power grid. The proposed outage prediction model incorporates statistical and machine learning techniques, taking into account features related to weather conditions, storm events, and the power network feeders.
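As a hedged sketch of the kind of supervised pipeline such a model implies (synthetic data and illustrative feature names; not the dissertation's features or model), a gradient-boosted regressor can map weather and feeder attributes to outage counts:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
n = 2000
# Illustrative features: wind gust (m/s), ice accretion (mm), snowfall (cm),
# minimum temperature (C), feeder exposure (km of overhead line).
X = np.column_stack([
    rng.gamma(2.0, 5.0, n),     # gust
    rng.exponential(2.0, n),    # ice
    rng.exponential(5.0, n),    # snow
    rng.normal(-5.0, 8.0, n),   # temperature
    rng.uniform(1.0, 30.0, n),  # exposure
])
# Synthetic outage counts: dominated by ice * exposure plus a gust term.
y = rng.poisson(0.02 * X[:, 1] * X[:, 4] + 0.05 * X[:, 0])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
print("MAE:", round(mean_absolute_error(y_te, model.predict(X_te)), 2))
```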