Search results
(9,661 - 9,680 of 9,823)
Pages
- Title
- Colored Pencil Drawings, undated
- Creator
- Henry, Mary Dill, 1913-2009
- Description
- Untitled colored pencil drawings by Mary Henry, date unknown. Inscription on verso: "William Winter Comments, PO Box 817, Sausalito"
- Collection
- Mary Dill Henry Papers, 1913-2021
- Title
- Colored Pencil Drawings, undated
- Creator
- Henry, Mary Dill, 1913-2009
- Description
- Untitled colored pencil drawings by Mary Henry, date unknown. Inscription on verso: "William Winter Comments, PO Box 817, Sausalito"
- Collection
- Mary Dill Henry Papers, 1913-2021
- Title
- Colored pencil drawing, undated
- Creator
- Henry, Mary Dill, 1913-2009
- Description
- Untitled colored pencil drawing by Mary Henry, date unknown.
- Collection
- Mary Dill Henry Papers, 1913-2021
- Title
- Pencil drawing, undated
- Creator
- Henry, Mary Dill, 1913-2009
- Description
- Untitled drawing by Mary Henry, date unknown.
- Collection
- Mary Dill Henry Papers, 1913-2021
- Title
- Ink drawing, undated
- Creator
- Henry, Mary Dill, 1913-2009
- Description
- Untitled drawing by Mary Henry, date unknown.
- Collection
- Mary Dill Henry Papers, 1913-2021
- Title
- Ink drawing, undated
- Creator
- Henry, Mary Dill, 1913-2009
- Description
- Untitled drawing by Mary Henry, date unknown.
- Collection
- Mary Dill Henry Papers, 1913-2021
- Title
- Photograph of the Aaron Galleries booth at the Art 20 art fair, including Mary Henry's The Chelsea Way, New York, New York, 2006
- Date
- 2006
- Description
- Photograph of the Aaron Galleries booth at the Art 20 art fair at the Park Avenue Armory in 2006, with Mary Henry's painting The Chelsea Way visible at center. Inscription on verso: "Art 20 - Park Ave. Armory 2006 Mary Henry 'The Chelsea Way' on the aisle Aaron Galleries Booth."
- Collection
- Mary Dill Henry Papers, 1913-2021
- Title
- Photograph of the Aaron Galleries booth at the Art 20 art fair, including Mary Henry's The Chelsea Way, New York, New York, 2006
- Date
- 2006
- Description
- Photograph of the Aaron Galleries booth at the Art 20 art fair at the Park Avenue Armory in 2006, with Mary Henry's painting The Chelsea Way visible at center right. Inscription on verso: "Art 20 - Park Ave. Armory 2006 Mary Henry 'The Chelsea Way' on the aisle Aaron Galleries Booth."
- Collection
- Mary Dill Henry Papers, 1913-2021
- Title
- Photograph of the Aaron Galleries booth at the Art 20 art fair, including Mary Henry's The Chelsea Way, New York, New York, 2006
- Date
- 2006
- Description
- Photograph of the Aaron Galleries booth at the Art 20 art fair at the Park Avenue Armory in 2006, with Mary Henry's painting The Chelsea Way visible at right. Inscription on verso: "Art 20 - Park Ave. Armory 2006 Mary Henry 'The Chelsea Way' on the aisle Aaron Galleries Booth."
- Collection
- Mary Dill Henry Papers, 1913-2021
- Title
- Pencil drawings, 1977
- Creator
- Henry, Mary Dill, 1913-2009
- Date
- 1977
- Description
- Untitled drawings by Mary Henry, accompanied by mathematical calculations.
- Collection
- Mary Dill Henry Papers, 1913-2021
- Title
- Efficient and Practical Cluster Scheduling for High Performance Computing
- Creator
- Li, Boyang
- Date
- 2023
- Description
- Cluster scheduling plays a crucial role in high-performance computing (HPC). It is responsible for allocating resources and determining the order in which jobs are executed. Existing HPC job schedulers typically rely on simple heuristics to schedule jobs, but such scheduling policies struggle to keep pace with modern changes and technology trends. The study in this dissertation is motivated by two new trends in the HPC community: the rapid growth of heterogeneous system infrastructure and the emergence of artificial intelligence (AI) technologies. First, existing scheduling policies are solely CPU-centric, whereas systems are becoming more complex and heterogeneous, and emerging workloads have diverse resource requirements, such as CPU, burst buffer, power, and network bandwidth. Second, previous heuristic scheduling approaches are manually designed, and this manual design process prevents adaptive and informed scheduling decisions. A recent trend in HPC is to integrate AI to better leverage the investment in supercomputers, and this embrace of AI provides opportunities to design more intelligent scheduling methods. In this dissertation, we propose an efficient and practical cluster scheduling framework for HPC systems. Our framework leverages AI technologies and accounts for system heterogeneity, and it comprises four major components. First, shared-network systems such as dragonfly-based systems are vulnerable to performance variability due to network sharing; to mitigate workload interference on these systems, we explore a dedicated scheduling policy. Next, emerging HPC workloads have diverse resource requirements rather than being CPU-centric; to address this, we design an intelligent scheduling agent for multi-resource scheduling in HPC that leverages an advanced multi-objective reinforcement learning (MORL) algorithm.
Subsequently, we address the issues with existing state-encoding approaches in RL-driven scheduling, which either lack critical scheduling information or suffer from poor scalability, by presenting an efficient and scalable encoding model. Lastly, the lack of interpretability of RL methods poses a significant challenge to deploying RL-driven scheduling in production systems; in response, we provide a simple, deterministic, and easily understandable model for interpreting RL-driven scheduling. The proposed models and algorithms are evaluated with real job traces from production supercomputers. Experimental results show that our schemes can effectively improve job scheduling in terms of both user satisfaction and system utilization.
- Title
- Testing actor and partner mediation effects of the mindfulness-relationship satisfaction association in long-distance relationships
- Creator
- Manser, Kelly A.
- Date
- 2023
- Description
- Long-distance romantic relationships (LDR) have become increasingly common as technology and sociocultural norms have evolved. Individuals in LDR, many of whom are post-secondary students, report LDR-specific experiences and stressors. Nonetheless, romantic relationship satisfaction (RS) appears comparable between LDR and non-LDR relationships, although the underlying mechanisms are not well understood. Mindfulness, which relates positively to RS and negatively to stress, is minimally studied in LDR. Moreover, despite empirical and theoretical support, few studies have tested stress as a mediator of associations between mindfulness and RS at the within-person level (termed actor effects) or between-person level (partner effects). This study tested a theoretically grounded, empirically supported Actor-Partner Interdependence Mediation Model (APIMeM) in a sample (N = 150; 75 dyads) of post-secondary students and their LDR romantic partners. As hypothesized, a partner-actor indirect effect of T1 actor mindfulness on T2 partner RS emerged through decreased T2 partner stress. Unexpectedly, no direct, total, or indirect effects of T1 actor mindfulness on T2 actor stress or T2 actor RS emerged. Findings suggest that within- and between-person associations between mindfulness, stress, and RS may present uniquely in LDR, with implications for research, clinical practice, and policy.
- Title
- Associations between subjective cognitive decline, neurodegeneration, and vascular neuroimaging markers: Findings from a multiethnic cohort
- Creator
- Gonzalez, Christopher
- Date
- 2023
- Description
- Mounting evidence suggests that subjective cognitive decline (SCD) may provide a unique target for identifying the earliest changes in cognitive function in Alzheimer's disease (AD). In addition, vascular-related risk factors are linked to an increased risk of clinical expression of AD and independently increase the risk of vascular dementia (VaD). However, most investigations have not explored SCD across a multiethnic population. This study investigated 1) the associations of white matter hyperintensities (WMH) and targeted AD neuroimaging markers (hippocampal volume, cortical thickness of AD regions) with SCD in a multiethnic cohort, and 2) whether race moderated the relationships between them. A total of 871 older adults aged 62-96 years (mean age 74.48, SD = 6.11; mean education 12.79 years, SD = 4.53; 62% identifying as female) were drawn from preexisting data from the Washington Heights-Inwood Columbia Aging Project (WHICAP). Linear regression models revealed a significant association between WMH and both targeted AD neuroimaging markers across the total sample. Secondary analyses revealed that race did not moderate the relationships of WMH and AD cortical thickness with SCD but did moderate the relationship between hippocampal volume and SCD. Results suggest that cultural and biological differences exist in Hispanic/Latine individuals compared to non-Hispanic White and non-Hispanic Black individuals.
- Title
- Design for Equivalence: Mutual Learning and Participant Gains in Participatory Design Processes
- Creator
- Geppert, Amanda Anne
- Date
- 2023
- Description
- Whether people are or are not able to participate (aware, eligible, invited, required, supported, willing, or forced, among other conditions) in the procedures and experiences that constitute world-making activities (voting, policymaking, or designing algorithms, technologies, products, programs, services, interventions, infrastructures, or systems, among other things) that affect their lives is a central issue of our time. It demands careful consideration and is of great consequence for whether the worlds we create are equitable, sustainable, and just, such that all people have free and equal standing and a real opportunity to belong and flourish. This study took up this issue in the context of participatory design practice and research and the making of sexual and reproductive health interventions with and for adolescents who are marginalized by race, class, ethnicity, gender, and sexuality, in Lucknow, Uttar Pradesh, India, and Chicago, Illinois, United States. The study advances knowledge in design by exploring how problem-focused, front-end participatory design processes expand or constrain the epistemic authority of less powerful actors, specifically, systematically excluded individuals and groups. The study was conducted in two parallel phases. First, through theoretical elaboration and critical analysis, it examined the application of Mouffean agonism in recent formulations of participatory design processes that address complex social and political issues with marginalized individuals and groups. The analysis demonstrated that a key construct, the chain of equivalence, is absent, resulting in the failure of these processes to achieve the collective, counter-hegemonic, and emancipatory responses strong enough to counter power as imagined by Chantal Mouffe.
Second, an explanatory embedded multiple case study was conducted on two front-end participatory design workshops to understand what less powerful actors gain by engaging in collaborative design processes and how practices and processes do or do not support their epistemic authority and matters of care. Thematic analysis suggested how the practices of collective information sharing and gathering (mutual learning and learning) affect participant gains and design process outputs. Additionally, thematic analysis informed a theoretical, conceptual, and practical move to expand beyond the original scope of the Mouffean chain of equivalence to include collaborating actors who may not be equivalently disadvantaged by current power relations but who are committed to participatory design processes that prioritize the issues and matters of care of less powerful actors. Considered together, findings from both research phases inform the development of design for equivalence, at once a theoretical stance and a methodological framework for selecting the approaches, theories, processes, methods, practices, and tools for participatory design processes that support the epistemic authority of participants in challenging social and structural inequalities and in creating articulations of the common good strong enough to counter dominant paradigms.
- Title
- Development of Metal Oxide-Based Phosphors for Luminescence Thermometry
- Creator
- Jahanbazi, Forough
- Date
- 2023
- Description
- Temperature is both a thermodynamic property and a fundamental unit of measurement, one of the seven base quantities of the International System of Units (SI). It can be seen simply as the degree of hotness or coldness, a qualitative definition built on the bodily sensation of heat and cold. Today it is defined from the principles of classical thermodynamics as the parameter of state that has the same value for any systems in thermal equilibrium, and from statistical mechanics as a direct measure of the average kinetic energy of noninteracting particles. Temperature is an intensive quantity, meaning that its value does not depend on the amount of substance for which it is measured. It is important because it is something we feel and because it influences the smallest aspects of our daily life, from how we adjust our housing and clothing to what we eat for supper. It affects the life cycles of plants and animals, governs the rates of chemical reactions, influences tides, and so on. For these reasons, it is by far the most measured physical quantity; temperature sensors account for 80% of all sensors worldwide at present and are used across a broad spectrum of human activities, including medicine, home appliances, meteorology, agriculture, and industrial and military contexts. Thus, market demand for temperature sensors continues to grow as their applications extend. Traditional "contact" temperature measurements, which are mainly based on the expansion and contraction of an employed material, encounter difficulties in some emerging technologies and environments, such as nanotechnology and biomedicine. Today, an immediate need exists for "non-contact" thermometry of moving or contact-sensitive objects, difficult-to-access pieces, bodies in hazardous locations, objects of nanoscale dimensions, and living cells and organisms.
However, the properties of existing thermometers and sensor platforms limit their use in such environments. Non-contact sensors measure object temperature without physical contact between sensor and object, and are therefore of great interest for hard-to-access objects. Among non-contact thermometry methods, besides pyrometers and radiation thermometers, optical thermometers have drawn extensive attention. Specifically, among optical thermometry methods, including Raman scattering, optical interferometry, and near-field scanning optical microscopy, the one that has drawn the most attention is luminescence thermometry, in which temperature detection is based on a luminescent signal with acceptable spatial resolution. In luminescence thermometry, temperature can be determined from different features of the luminescence. Depending on the temporal nature of these features, the measurement principles are classified as either time-integrated (steady-state) or time-resolved. Temperature measurements based on excitation and emission band positions and bandwidths, emission band intensities, and the luminescence/fluorescence intensity ratio (LIR or FIR, the ratio of the intensities of two emission bands) are classified as time-integrated methods; those based on emission decay or rise times are classified as time-resolved. Temperature readouts from LIR and emission lifetime are by far the most exploited methods. Both readouts are self-referenced, so they are not affected by fluctuations in excitation and signal detection. Moreover, the thermal sensing ability of many lanthanide-based luminescent materials is not limited to a single readout method; some can operate in dual or multiple modes by combining two or more readout methods for temperature measurement.
Non-contact luminescence thermometry based on the LIR readout has attracted much attention due to its excellent accuracy and sensitivity. The intensity ratio is independent of undesirable factors, which makes this form of luminescence thermometry more appropriate, and the method is self-referencing, which removes the need for a temperature standard. In principle, it can be realized with any combination of emission lines from lanthanide and transition metal ions with different temperature dependencies, from either single or multiple luminescent centers, and it has been the most reported luminescence thermometric readout method in the past few years. Much work has been done on developing highly efficient LIR thermometers employing a single emitting center. This ratiometric method is mainly based on the principle governing thermally coupled energy levels of the luminescent ions: the electronic distribution between the closely separated excited levels of the doped element follows the Boltzmann equation. The two excited levels are thermally coupled with a maximum energy gap of about 2000 cm-1, which is sufficiently small to allow electrons to transition to the higher level upon thermal excitation and at the same time large enough to yield distinct electronic populations and a high sensitivity value. In this case, the high and low excited states share the electronic population according to the Boltzmann distribution, so the ratio of the number of electrons between the high and low excited levels defines the LIR for thermometry using single emitting centers. In addition to the LIR between two thermally coupled energy levels of a luminescent ion, in some ions the LIR between two levels that are not thermally coupled has been employed to achieve highly sensitive thermometry.
The thermometric performance of a temperature probe is quantitatively evaluated by its absolute and relative thermal sensitivities, temperature resolution, and repeatability. The rate of change of the thermometric parameter (indicated by Δ) with temperature (∂T) is defined as the absolute thermal sensitivity (Sa). However, absolute sensitivity is not appropriate for comparing the performance of thermometers based on different materials or physical principles; the relative thermal sensitivity (Sr) is defined to enable such comparisons, and it is one of the most important factors determining a luminescent thermometer's temperature readout accuracy. The smallest temperature change resolvable by a thermometer is its temperature resolution or temperature uncertainty (indicated by δT), expressed in kelvin, which depends on characteristics of the measuring system such as the experimental detection setup and the signal-to-noise ratio. Repeatability (indicated by R) is the ability of a thermometer to provide the same result when a measurement is repeated under the same conditions, whereas reproducibility refers to agreement between measurements performed under different conditions, such as different methods or devices. Regarding temperature resolution, most light detection systems, including thermometry systems, suffer from low resolution because of scattering at both the excitation and emission wavelengths. Light scattering from thermometric phosphors is determined by their grain size, shape, and surface roughness, a problem particularly associated with conventional phosphors, which typically have micrometer grain sizes. By contrast, light scattering by nanoparticles (NPs) is close to zero, which leads to better resolution in luminescence thermometers using NPs.
Consequently, nanothermometry has emerged as a hot research area for high-resolution thermometers in new technological applications. Accordingly, in chapter 1 we discuss a host material, the pyrochlore compound La2Zr2O7, doped with Tb3+ and Eu3+ and synthesized at the nanoscale (~15 nm), which shows great potential for LIR temperature sensing with high resolution based on dual emitting centers. In chapter 2, another sample of this nanopowder host, La2Zr2O7 doped with Pr3+, is presented for LIR temperature sensing based on a single emitting center. Besides high-resolution thermometry, a broad temperature sensing range was achieved with La2Zr2O7:Pr3+ using only one LIR readout mode; this broad range originates from high-lying charge-transfer states with slow thermal quenching, as elaborated in chapter 2. Multiple classes of materials have been employed for luminescence thermometry, such as organic dyes, quantum dots, and metal-organic complexes and frameworks, among which lanthanide- or transition metal ion-based phosphors are the most promising. The electronic states of lanthanides are characterized by partially filled 4f orbitals, which fill gradually from 4f0 for La3+ to 4f14 for Lu3+. Their luminescence arises from intraconfigurational f-f transitions, except for ions such as Eu2+ and Ce3+, which have allowed f-d transition emissions. The partially filled 4f orbitals of lanthanide ions are shielded from the surrounding environment by the 5s and 5p subshells, which leads to long lifetimes and narrowband emission characteristics. Once excited with UV light, lanthanide-doped materials mostly emit in the visible/near-infrared (NIR) range via a downshift (DS) photoluminescence (PL) mechanism, in which high-energy photons are converted into photons of lower energy.
Overall, excellent repeatability, reproducibility, and photostability, together with thermally and chemically stable structures, make lanthanide-based materials among the most favored choices for luminescent thermometry applications. Their luminescence is easy to identify and distinguish from that of other materials, and their narrow, easily identifiable emission bands make multiplexing possible. Host materials also play a crucial role in the thermal sensing properties of thermometric phosphors. Various hosts, such as fluorides, ceramic oxides, nitrides, chalcogenides, and phosphides, have been employed for luminescence thermometry. Ceramic hosts are composed of different elements and thus often require complex synthesis processes, which limits their applicability. Fluoride hosts have a level of toxicity harmful to living systems, so they are not environmentally friendly. Nitride compounds are commonly prepared in oxygen- and water-free glove boxes under harsh high-pressure and high-temperature synthesis conditions, which restricts their large-scale production. Chalcogenides and phosphides may not be sufficiently stable. Metal oxide phosphors, on the other hand, offer convenient preparation, non-toxicity, excellent chemical stability (capable of withstanding sustained exposure to high temperature), and low cost. Moreover, they are preferable for biomedical luminescence thermometry applications measuring long-wavelength emissions, where tissues are optically transparent and less affected by scattering and background luminescence. Considering all these aspects, metal oxide-based phosphors are the more favorable choice for luminescent thermometry. One goal of research in the luminescence thermometry field has been to push the limit of temperature measurement capability to higher temperatures; however, developing luminescent phosphors with high thermal stability of emission and high sensing efficiency remains a paramount challenge.
Thermal stability of photoluminescence (PL) is a property related to the chemical composition, electronic structure, and crystal-structure rigidity of phosphors. Its loss is commonly referred to as positive thermal quenching (TQ), that is, the loss of light emission with rising temperature. Most phosphors exhibit positive TQ, which stems from a high non-radiative transition probability at elevated temperatures. This phenomenon severely limits the applications of luminescent phosphors and degrades device performance. Several strategies have been reported to compensate for the thermally induced emission loss of phosphors, but, as discussed in chapter 3, most have negative impacts on the inherent luminescence properties. From the structural perspective, TQ caused by nonradiative relaxation is closely related to crystal structure stability: a rigid structural framework with high lattice symmetry reduces nonradiative transitions at elevated temperatures. As one class of rigid hosts, materials possessing negative thermal expansion (NTE), the rare property of volume abnormally contracting with increasing temperature, have been explored as suitable hosts for anti-TQ phosphors doped with lanthanides. Among the reported NTE families, compounds with the general formula A2M3O12, where A is a trivalent rare earth ion and M stands for W6+ or Mo6+, are well known for their broad range of compositions and have been explored for anti-TQ behavior in recent years. Some earlier works employed A2M3O12 hosts to obtain thermally enhanced upconversion (UC) emission. However, UC emissions are comparatively weak and mostly limited to wavelength ranges outside the most applicable visible range, so they are not widely used.
Thus, NTE phosphors with thermally enhanced, stronger downshift (DS) emission in the visible range are not yet advanced enough for practical application. To explore the applicability of the NTE idea to DS-emitting phosphors, we report the anti-TQ performance of singly doped Sc2Mo3O12:Eu3+ and co-doped Sc2Mo3O12:Tb3+,Eu3+ in chapters 3 and 4, respectively. Specifically, we take advantage of interionic energy transfer (ET) in our NTE host to achieve superior anti-TQ performance for DS luminescence that can be employed for efficient thermometry at high temperatures. The structural shrinkage with rising temperature shortens the distance between the host and activator dopant ions, which enhances host-to-activator ET and consequently the final emission intensity, as elaborated in the last two chapters. As this is a highly promising strategy, more evidence is urgently needed on how the NTE property is associated with the anti-TQ of luminescence, which we set out to uncover in this work. We explored these compounds' potential for high-temperature luminescence thermometry, tested both LIR- and lifetime-based temperature sensing, and revealed their great potential for efficient temperature sensing at high temperature ranges. This study opens a new design strategy and perspective for obtaining phosphors with thermally boosted luminescence based on NTE host materials, to meet the serious demand for broad applications at elevated temperatures and in harsh conditions.
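The Boltzmann LIR readout and the sensitivity figure of merit described in the abstract above can be sketched numerically. This is an illustrative sketch, not code from the dissertation; the function names, the unit pre-exponential constant B, and the example values (a 2000 cm-1 gap at 300 K) are assumptions chosen for the example:

```python
import math

K_B = 0.695  # Boltzmann constant in spectroscopic units, cm^-1 per kelvin

def lir(delta_e_cm: float, temperature_k: float, b: float = 1.0) -> float:
    """Intensity ratio I_H / I_L of two thermally coupled levels separated
    by delta_e_cm (in cm^-1), following the Boltzmann distribution."""
    return b * math.exp(-delta_e_cm / (K_B * temperature_k))

def temperature_from_lir(ratio: float, delta_e_cm: float, b: float = 1.0) -> float:
    """Invert the Boltzmann relation: T = dE / (k_B * ln(B / LIR))."""
    return delta_e_cm / (K_B * math.log(b / ratio))

def relative_sensitivity(delta_e_cm: float, temperature_k: float) -> float:
    """S_r = dE / (k_B * T^2), the standard relative sensitivity of a
    Boltzmann LIR thermometer, in fractional change per kelvin."""
    return delta_e_cm / (K_B * temperature_k ** 2)

# A gap near the ~2000 cm^-1 maximum quoted above, at room temperature:
ratio = lir(2000.0, 300.0)
recovered_t = temperature_from_lir(ratio, 2000.0)
```

Because S_r falls off as 1/T^2, Boltzmann-type LIR sensitivity is intrinsically lower at high temperature, which is one reason the anti-thermal-quenching hosts discussed in the abstract matter: preserving emission intensity at elevated temperatures keeps the readout usable where sensitivity is already modest.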
- Title
- Application of Blockchain and Artificial Intelligence Methods in Power System Operation and Control
- Creator
- Farhoumandi, Matin
- Date
- 2023
- Description
- The proliferation of distributed energy resources (DERs) and the large-scale electrification of transportation infrastructure are driving forces behind the ongoing transformation of traditionally passive consumers into prosumers (both consumers and producers) in a coordinated system of power distribution network (PDN) and urban transportation network (UTN). In this new paradigm, peer-to-peer (P2P) energy trading is a promising energy management strategy for dynamically balancing supply and demand in electricity markets. In this thesis, we propose applications of blockchain and artificial intelligence technologies to power system operation and control. First, blockchain (BC) is applied to electric vehicle charging station (EVCS) operations to optimally transact energy in a hierarchical P2P framework. In the proposed framework, a decentralized privacy-preserving clearing mechanism is implemented in the transactive energy market (TEM), in which BC smart contracts are applied in a coordinated PDN and UTN operation. The effectiveness of the proposed TEM and its solution approach is validated via numerical simulations performed on a modified IEEE 123-bus PDN and a modified Sioux Falls UTN. Second, machine learning and deep learning methods are applied to short-term forecasting of non-conforming net load (STFNL). STFNL plays a vital role in enhancing the secure and efficient operation and control of power systems. However, power system consumption is affected by a variety of external factors and thus exhibits high variability, which makes STFNL a challenging task as more DERs are integrated into the power grid. This thesis applies two commonly used machine learning and deep learning methods, ensemble bagging and long short-term memory, to STFNL; the advantages, features, and applications of these methods are combined in a proposed fusion forecasting model that improves STFNL accuracy.
Additionally, data engineering and preprocessing options are used to increase the accuracy of the proposed fusion model. A comparative study based on practical load data demonstrates that the proposed fusion methodology can reach relatively higher forecasting accuracy with lower error indices. Index Terms—Blockchain, deep learning and machine learning, electric vehicle charging stations, non-conforming net load forecasting, peer-to-peer transactive energy, power distribution and transportation networks, distributed energy resources, behind-the-meter supply resources.
- Title
- Dynamic Risk and Dynamic Performance Measures Generated by Distortion Functions and Diversification Benefits Optimization
- Creator
- Liu, Hao
- Date
- 2023
- Description
-
This thesis consists of two major parts and contributes to the fields of risk management and optimization. One contribution to risk management is the development of dynamic risk measures and dynamic acceptability indices that can be characterized by distortion functions. In particular, we prove a representation theorem showing that the class of dynamic coherent risk measures generated by distortion functions coincides with a specific type of dynamic risk measure, the dynamic WV@R. We also investigate thoroughly various types of time consistency for dynamic risk measures and dynamic acceptability indices in terms of distortion functions. Another contribution to risk management is proving the strong consistency and asymptotic normality of two estimators of dynamic WV@R. In contrast to the existing literature, our results do not rely on distributional assumptions on the random variables; instead, we investigate the asymptotic normality of the estimators in terms of the generating distortion functions. Finally, we give a counterexample showing that a sufficient condition for asymptotic normality is not necessary. The contribution to optimization is twofold. On the one hand, we formulate the (scalar) diversification optimization problem as a vector optimization problem (VOP) and show that a set-valued Bellman principle is satisfied by this VOP. On the other hand, we derive an explicit policy gradient formula and implement a deep neural network to solve the diversification optimization problem numerically. This deep learning technique allows us to overcome the computational difficulty caused by the non-convexity of the VOP.
- Title
- Toward a Network Model of Executive Functioning
- Creator
- Fuller, Jordan S.
- Date
- 2023
- Description
-
The executive functions are the higher-order mental processes responsible for organized, strategic behavior. These functions have been a source of significant controversy since they were first introduced. This study sought to create a model of the executive functions using psychological network analysis. Participants completed six measures reflecting inhibition, task switching, and working memory updating, as well as a fluid intelligence measure. A processing speed index was calculated from non-executive trials of the various measures. Four networks were generated: an executive functions network, an executive functions and intelligence network, an executive functions and processing speed network, and a network with all variables included. The resulting networks contained no stable edges between the executive functioning tasks. Stable edges were identified between the intelligence node and the two nodes reflecting working memory updating, and an additional edge was identified between processing speed and one measure of task switching. The results may indicate relative independence among the executive functions; however, the management of task impurity in psychological network analysis also merits further investigation.
- Title
- Prediction and Control of In-Cylinder Processes in Heavy-Duty Engines Using Alternative Fuels
- Creator
- Pulpeiro Gonzalez, Jorge
- Date
- 2024
- Description
-
This Ph.D. thesis focuses on advancing diagnostic techniques and control-oriented models to enhance the efficiency and performance of internal combustion (IC) engines, particularly heavy-duty engines using alternative fuels. The research contributes to the field of model-based engine control through the development and implementation of innovative methodologies, with emphasis on diagnostic methods, control-oriented models, and advanced control strategies for compression ignition engines using alternative fuels. The first key topic explores the determination of the most representative cycle for combustion phasing estimation based on cylinder pressure measurements. The method extracts crucial information from experimental data obtained from four distinct engines: a heavy-duty single-cylinder gasoline compression ignition (GCI) engine, a light-duty multi-cylinder diesel engine, a Cooperative Fuel Research (CFR) engine, and a single-cylinder light-duty spark ignition (SI) engine. This work lays the foundation for precise estimation of combustion phasing, a critical parameter for engine control. The second major contribution is the development of control-oriented models for variable geometry turbochargers (VGT) and inter-coolers. Two models are established: a data-driven turbocharger model and an empirical inter-cooler model. Both are calibrated and validated with experimental data from a multi-cylinder light-duty diesel engine, providing insight into the behavior of these components under varying conditions and facilitating predictive control of engine air systems. The third core aspect of the thesis is model predictive control (MPC) of combustion phasing in heavy-duty compression ignition engines using alternative fuels. A combustion phasing and engine load model is derived from experimental data and incorporated into an MPC framework. The MPC strategy is then tested in the heavy-duty GCI test cell and compared against a conventional proportional-integral-derivative (PID) control strategy. The results demonstrate the effectiveness of the MPC approach in achieving precise control of combustion phasing and its potential for optimizing engine performance. In summary, this thesis contributes to the field of engine control by advancing diagnostic techniques and control-oriented models and by implementing an MPC-based control strategy for compression ignition engines using alternative fuels. The findings enhance the understanding of in-cylinder processes and pave the way for more efficient and sustainable heavy-duty engines running on alternative fuels.
- Title
- Capital Design: The Role of Design in Institutional Capital Allocation
- Creator
- Ostapchuk, Jordan
- Date
- 2024
- Description
-
There is a paradox within the $100 trillion institutional investment industry: the more choices an institutional investor has, the more challenging it becomes to make investment decisions. This paradox matters because capital is one of the most transformational elements of the 21st century, shaped by financialization, universal ownership, and increasing systemic risks. The direction of capital flows strongly influences how societies address climate change, aging populations, and the transition to sustainable energy, in addition to underpinning the essential physical and social infrastructure funded by institutional capital. This research proposes and substantiates a novel hypothesis: design can significantly influence capital allocation in institutional investment contexts. Through an institutional case study, expert interviews, workshops with master's-level design students, and systems-informed reflective practice, the research identifies asset classes as an important and changeable lens through which institutions engage with the future. It explores how these asset classes shape choices in the capital allocation process and identifies eight design capabilities particularly suited to institutional investment contexts. In doing so, it introduces a framework termed Capital Design, which illustrates how design can influence institutional capital allocation by integrating these design capabilities with investment tools through informational lenses within a choice/knowledge map. As a result, Capital Design offers an innovative approach for investors and investees to reorient toward emergent asset categories that directly meet the most urgent societal needs.