Search results
(9,181 - 9,200 of 10,083)
Pages
- Title
- Agency and Pathway Thinking as Mediators of the Relationship Between Caregiver Burden and Life Satisfaction Among Family Caregivers of People With Parkinson's Disease: An Application of Snyder's Hope Theory
- Creator
- Springer, Jessica Gabrielle
- Date
- 2024
- Description
- In the United States, there are 47.9 million caregivers providing care to family members with disabilities. Those providing care to someone who has Parkinson's Disease (PD), a complex degenerative movement disorder, may have a unique caregiving experience, given that disease-related factors (e.g., motor and non-motor symptoms) can contribute to worsening caregiver burden and life satisfaction (LS). PD has an increasing incidence of 90,000 new cases per year, likely resulting in an increased need for caregivers. Caregiving research frequently focuses on the mediators between caregiver burden and LS, including social support, coping skills, and appraisals. Research that has specifically focused on caregivers of people with PD (Pw/PD) is significantly limited. Hope is a "positive motivational characteristic comprised of agency and pathways thinking that can help facilitate drive towards one's goal while also serving as a buffer against negative events" (Snyder et al., 1991). The goal of this study is to understand Snyder's hope theory as it relates to caregiver burden and LS for caregivers of Pw/PD. Specifically, we hypothesized that (a) caregiver burden would be negatively correlated with agency thinking, pathways thinking, and LS among caregivers of Pw/PD, while pathways thinking and agency thinking would be positively associated with LS, and (b) agency thinking and pathways thinking would mediate the relationship between caregiver burden and LS among caregivers of Pw/PD. The study sample consisted of 249 caregivers of Pw/PD who completed an anonymous online questionnaire. Correlations between agency and pathways thinking, LS, caregiver burden, and sociodemographic factors were evaluated. A parallel mediation analysis was run to evaluate the mediating roles of pathways and agency thinking in the relationship between caregiver burden and LS. Results indicated that LS was significantly and negatively correlated with caregiver burden. LS was significantly and positively correlated with both pathways and agency thinking. Pathways thinking had no indirect effect on the relationship of caregiver burden on LS. Agency thinking had a negative indirect effect on the relationship, suggesting that agency thinking partially mediated the relationship between caregiver burden and LS. Clinical implications and future directions are discussed.
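As a rough illustration of the parallel (two-mediator) mediation analysis this abstract describes, the sketch below estimates the two indirect effects with OLS and a percentile bootstrap. The variable names and the simulated data are hypothetical stand-ins, not the study's data or software.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 249  # sample size matching the abstract; the data below are simulated

# Hypothetical standardized scores.
burden = rng.normal(size=n)
agency = -0.4 * burden + rng.normal(size=n)      # mediator 1
pathways = -0.2 * burden + rng.normal(size=n)    # mediator 2
ls = 0.5 * agency + 0.3 * pathways - 0.2 * burden + rng.normal(size=n)

def indirect_effects(x, m1, m2, y):
    """a*b paths of a parallel (two-mediator) mediation model via OLS."""
    X1 = np.column_stack([np.ones_like(x), x])
    a1 = np.linalg.lstsq(X1, m1, rcond=None)[0][1]   # x -> m1
    a2 = np.linalg.lstsq(X1, m2, rcond=None)[0][1]   # x -> m2
    X2 = np.column_stack([np.ones_like(x), x, m1, m2])
    _, _, b1, b2 = np.linalg.lstsq(X2, y, rcond=None)[0]
    return a1 * b1, a2 * b2

# Percentile bootstrap CIs for the two indirect effects.
boot = np.array([indirect_effects(*(v[idx] for v in (burden, agency, pathways, ls)))
                 for idx in (rng.integers(0, n, n) for _ in range(2000))])
for name, col in zip(["agency", "pathways"], boot.T):
    lo, hi = np.percentile(col, [2.5, 97.5])
    print(f"indirect via {name}: {col.mean():+.3f}, 95% CI [{lo:+.3f}, {hi:+.3f}]")
```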
- Title
- Predictive energy efficient control framework for connected and automated vehicles in heterogeneous traffic environments
- Creator
- Vellamattathil Baby, Tinu
- Date
- 2023
- Description
- Within the automotive industry, there is a significant emphasis on enhancing fuel efficiency and mobility, and reducing emissions. In this context, connected and automated vehicles (CAVs) represent a significant advancement, as they can optimize their acceleration pattern to improve their fuel efficiency. However, when CAVs coexist with human-driven vehicles (HDVs) on the road, suboptimal conditions arise, which adversely affect the performance of CAVs. This research analyzes the automation capabilities of production vehicles to identify scenarios where their performance is suboptimal, and proposes a merge-aware modification of the adaptive cruise control (ACC) method for highway merging situations. The proposed algorithm addresses the issue of sudden gap and velocity changes in relation to the preceding vehicle, thereby reducing substantial braking during merging events and improving energy efficiency. This research also presents a data-driven model for predicting the velocity and position of the preceding vehicle, as well as a robust model predictive control (MPC) strategy that optimizes fuel consumption while accounting for prediction inaccuracies. Another focus of this research is a novel suggestion-based control framework for interactive mixed traffic environments that leverages the emerging connectivity between vehicles and with infrastructure. It is based on MPC to optimize the fuel efficiency of CAVs in heterogeneous or mixed traffic environments (i.e., including both CAVs and HDVs). In this suggestion-based control framework, the CAVs provide non-binding velocity and lane-change suggestions for the HDVs to follow, improving the fuel efficiency of both the CAVs and the HDVs. To achieve this, the host CAV must devise its own fuel-efficient control solution and determine the recommendations to convey to its preceding HDV. It is assumed that the CAVs can communicate with the HDVs via Vehicle-to-Vehicle (V2V) communication, while Signal Phase and Timing (SPaT) information is accessed via Vehicle-to-Infrastructure (V2I) communication. These velocity suggestions remain constant for a predefined period, allowing the driver to adjust their speed accordingly. The suggestions are non-binding, i.e., a driver can choose not to follow the suggested velocity. For this control framework to function, we present a velocity prediction model based on experimental data that captures the response of an HDV to different suggested velocities, and a robust approach to ensure collision avoidance. The velocity prediction's accuracy is validated with experimental data (on a table-top drive simulator), and the results are presented. In cases of low CAV penetration, a CAV needs to provide suggestions to multiple surrounding HDVs, and incorporating the suggestions to all the HDVs as decision variables in the optimal control problem can be computationally expensive. Hence, a suggestion-based hierarchical energy-efficient control framework is also proposed in which a CAV takes into account the interactive nature of the environment by jointly planning its own trajectory and evaluating the suggestions to the surrounding HDVs. Joint planning requires solving the problem in the joint state- and action-space, and this research develops a Monte Carlo Tree Search (MCTS)-based trajectory planning approach for the CAV. Since the joint action- and state-space grows exponentially with the number of agents and can be computationally expensive, an adaptive action-space is proposed that prunes the action-space of each agent so that actions resulting in unsafe trajectories are eliminated. The trajectory planning approach is followed by a low-level model predictive control (MPC)-based motion controller, which aims at tracking the reference trajectory in an optimal fashion. Simulation studies demonstrate the proposed control strategy's efficacy compared to existing baseline methods.
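As a loose sketch of the action-space pruning idea mentioned in this abstract (the time step, gap parameters, and constant-velocity safety check below are illustrative assumptions, not the dissertation's model), unsafe accelerations can be eliminated by rolling each candidate forward against a predicted preceding vehicle:

```python
import numpy as np

DT, HORIZON = 0.5, 10          # step (s) and lookahead steps; illustrative values
MIN_GAP, T_HEADWAY = 5.0, 1.5  # standstill gap (m) and time headway (s)

def prune_actions(gap0, ego_vel, lead_vel, actions=np.arange(-3.0, 2.1, 0.5)):
    """Keep accelerations whose rollout maintains a time-headway-based safe
    gap, assuming the preceding vehicle holds its current speed."""
    safe = []
    for a in actions:
        v, gap, ok = ego_vel, gap0, True
        for _ in range(HORIZON):
            v = max(0.0, v + a * DT)
            gap += (lead_vel - v) * DT
            if gap < MIN_GAP + T_HEADWAY * v:  # unsafe spacing -> prune action
                ok = False
                break
        if ok:
            safe.append(float(a))
    return safe

print(prune_actions(gap0=30.0, ego_vel=25.0, lead_vel=20.0))
```

Pruning each agent's action set this way keeps the joint MCTS search space from growing with clearly infeasible branches.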
- Title
- Estimation of Platinum Oxide Degradation in Proton Exchange Membrane Fuel Cells
- Creator
- Ahmed, Niyaz Afnan
- Date
- 2024
- Description
- The performance and durability of Proton Exchange Membrane Fuel Cells (PEMFCs) can be significantly hampered by the degradation of the platinum catalyst. The production of platinum oxide is a major cause of degradation of the fuel cell system, negatively affecting its performance and durability. In order to predict and prevent this degradation, this research examines a novel method to estimate degradation due to platinum oxide formation and to predict the level of platinum oxide coverage over time. Mechanisms of platinum oxide formation are outlined, and methods are compared for platinum oxide estimation: linear regression and two Artificial Neural Network (ANN) models, a Recurrent Neural Network (RNN) and a Feed-forward Back Propagation Neural Network (FFBPNN). The estimation model takes into account the influence of cell temperature and relative humidity. Evaluation of relative errors (RE) and root mean square error (RMSE) illustrates the superior performance of the RNN in contrast to GT-Suite and the FFBPNN. Both the RNN and GT-Suite show an average error rate below 5%, while the FFBPNN had a higher error rate of approximately 7%. The RNN mostly shows a lower RMSE than the FFBPNN and GT-Suite; however, at 50% training data, GT-Suite shows the lowest RMSE. These findings indicate that GT-Suite can be a valuable tool for estimating platinum oxide in fuel cells with a relatively low RE, but the RNN model may be more suitable for real-time estimation of platinum oxide degradation in PEM fuel cells, due to its accurate predictions and shorter computational time. This comprehensive approach provides crucial insights for optimizing fuel cell efficiency and implementing effective maintenance strategies.
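As a hedged sketch of the recurrent-network estimator described above (the architecture, feature set, and hyperparameters here are assumptions for illustration; the thesis's models are not reproduced in this listing), a minimal PyTorch RNN mapping sequences of cell temperature and relative humidity to an oxide-coverage estimate might look like:

```python
import torch
import torch.nn as nn

class PtOxRNN(nn.Module):
    """Sequence regressor: (temperature, relative humidity) -> oxide coverage."""
    def __init__(self, n_features=2, hidden=32):
        super().__init__()
        self.rnn = nn.RNN(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):             # x: (batch, time, 2)
        out, _ = self.rnn(x)
        return self.head(out[:, -1])  # coverage estimate at the last time step

model = PtOxRNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Synthetic stand-in data: 64 sequences, 50 time steps each.
x = torch.randn(64, 50, 2)
y = torch.rand(64, 1)                 # fractional coverage in [0, 1]
for _ in range(100):                  # toy training loop
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
```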
- Title
- Algorithms for Discrete Data in Statistics and Operations Research
- Creator
- Schwartz, William K.
- Date
- 2021
- Description
- This thesis develops mathematical background for the design of algorithms for discrete-data problems, two in statistics and one in operations research. Chapter 1 gives some background on what chapters 2 to 4 have in common. It also defines some basic terminology that the other chapters use. Chapter 2 offers a general approach to modeling longitudinal network data, including exponential random graph models (ERGMs), that vary according to certain discrete-time Markov chains (the abstract of chapter 2 borrows heavily from the abstract of Schwartz et al., 2021). It connects conditional and Markovian exponential families, permutation-uniform Markov chains, various (temporal) ERGMs, and statistical considerations such as dyadic independence and exchangeability. Markovian exponential families are explored in depth to prove that they, and only they, have exponential-family finite-sample distributions with the same parameter as that of the transition probabilities. Many new statistical and algebraic properties of permutation-uniform Markov chains are derived. We introduce exponential random ?-multigraph models, motivated by our result on replacing ? observations of a permutation-uniform Markov chain of graphs with a single observation of a corresponding multigraph. Our approach simplifies analysis of some network and autoregressive models from the literature. Removing models' temporal dependence, but not their interpretability, permitted us to offer closed-form expressions for maximum likelihood estimators that previously had no closed-form expression available. Chapter 3 designs novel, exact, conditional tests of statistical goodness-of-fit for mixed membership stochastic block models (MMSBMs) of networks, both directed and undirected. The tests employ a χ²-like statistic from which we define p-values for the general null hypothesis that the observed network's distribution is in the MMSBM, as well as for the simple null hypothesis that the distribution is in the MMSBM with specified parameters. For both tests the alternative hypothesis is that the distribution is unconstrained, and both assume we have observed the block assignments. As exact tests that avoid asymptotic arguments, they are suitable for both small and large networks. Further, we provide and analyze a Monte Carlo algorithm to compute the p-value for the simple null hypothesis. In addition to our rigorous results, simulations demonstrate the validity of the test and the convergence of the algorithm. As a conditional test, it requires that the algorithm sample the fiber of a sufficient statistic. In contrast to the Markov chain Monte Carlo samplers common in the literature, our algorithm is an exact simulation, so it is faster, more accurate, and easier to implement. Computing the p-value for the general null hypothesis remains an open problem because it depends on an intractable optimization problem. We discuss the two schools of thought evident in the literature on how to deal with such problems, and we recommend a future research program to bridge the gap between those two schools. Chapter 4 investigates an auctioneer's revenue maximization problem in combinatorial auctions. In combinatorial auctions bidders express demand for discrete packages of multiple units of multiple, indivisible goods. The auctioneer's NP-complete winner determination problem (WDP) is to fit these packages together within the available supply to maximize the bids' sum. To shorten the path practitioners traverse from legalese auction rules to computer code, we offer a new WDP formalism to reflect how government auctioneers sell billions of dollars of radio-spectrum licenses in combinatorial auctions today. It models common tie-breaking rules by maximizing a sum of bid vectors lexicographically. After a novel pre-solving technique based on package bids' marginal values, we develop an algorithm for the WDP. In developing the algorithm's branch-and-bound part adapted to lexicographic maximization, we discover a partial explanation of why the classical WDP has been successful in using the linear programming relaxation: it equals the Lagrangian dual. We adapt the relaxation to lexicographic maximization. The algorithm's dynamic-programming part retrieves already-computed partial solutions from a novel data structure suited specifically to our WDP formalism. Finally we show that the data structure can "warm start" a popular algorithm for solving for opportunity-cost prices.
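As an illustrative sketch of lexicographic maximization over stacked linear objectives (a generic sequential-LP approach, offered as an assumption about the flavor of the idea rather than the thesis's branch-and-bound algorithm), one can maximize the first objective, pin its optimum as a constraint, then maximize the next:

```python
import numpy as np
from scipy.optimize import linprog

def lexicographic_max(objectives, A_ub, b_ub, bounds):
    """Maximize a list of linear objectives lexicographically over
    {x : A_ub @ x <= b_ub}. Each solved level is pinned (within a small
    tolerance) as an extra constraint before optimizing the next level."""
    A, b, x = A_ub.copy(), b_ub.copy(), None
    for c in objectives:
        res = linprog(-c, A_ub=A, b_ub=b, bounds=bounds)  # linprog minimizes
        assert res.success, res.message
        x = res.x
        # Pin this level: c @ x >= optimum - tol  <=>  -c @ x <= -(opt - tol)
        A = np.vstack([A, -c])
        b = np.append(b, -(c @ x) + 1e-9)
    return x

# Toy example: two items, one supply constraint; maximize the first bid
# vector, then break ties with the second.
A_ub = np.array([[1.0, 1.0]])
b_ub = np.array([1.0])
objectives = [np.array([1.0, 1.0]), np.array([0.0, 1.0])]
print(lexicographic_max(objectives, A_ub, b_ub, bounds=[(0, 1), (0, 1)]))
```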
- Title
- A Kernel-Free Boundary Integral Method for Two-Dimensional Magnetostatics Analysis
- Creator
- Jin, Zichao
- Date
- 2023
- Description
- Performing magnetostatic analysis accurately and efficiently is crucial for the multi-objective optimization of electromagnetic device designs. Therefore, an accurate and computationally efficient method is essential. The kernel-free boundary integral method (KFBIM) is a numerical method that can accurately and efficiently solve partial differential equations. Unlike traditional boundary integral or boundary element methods, KFBIM does not require an analytical form of Green's function for evaluating integrals via numerical quadrature. Instead, KFBIM computes integrals by solving an equivalent interface problem on a Cartesian mesh. Compared with traditional finite difference methods for solving the governing PDEs directly, KFBIM produces a well-conditioned linear system. Therefore, the numerical solution of KFBIM is not sensitive to computer round-off errors, and KFBIM requires only a fixed number of iterations when an iterative method (e.g., GMRES) is applied to solve the linear system. In this research, KFBIM is introduced for solving magnetic computations in a toroidal core geometry in 2D. This study is very relevant to designing and optimizing toroidal inductors or transformers used in electrical systems, where lighter weight, higher inductance, higher efficiency, and lower leakage flux are required. The results are then compared with a commercial finite element solver (ANSYS), which shows excellent agreement. It should be noted that, compared with FEM, KFBIM does not require a body-fitted mesh and can achieve high accuracy with a coarse mesh. In particular, the magnetic potential and tangential field intensity calculations on the boundaries are more stable and exhibit almost no oscillations. Furthermore, although KFBIM is accurate and computationally efficient, sharp corners can be a significant problem for KFBIM. Therefore, an inverse discrete Fourier transform (DFT)-based geometry reconstruction is explored to overcome this challenge by smoothing sharp corners. A toroidal core with an airgap (C-core) is modeled to show the effectiveness of the proposed approach in addressing the sharp corner problem. A numerical example demonstrates that the method works for the variable-coefficient PDE. In addition, magnetostatic analysis for homogeneous and nonhomogeneous material is presented for the reconstructed geometry, and results obtained from KFBIM are compared with the results of FEM analysis for the original geometry to show the differences and the potential of the proposed method.
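As a rough illustration of DFT-based corner smoothing for a closed boundary (a generic Fourier-descriptor truncation, offered as an assumption about the flavor of the approach rather than the dissertation's exact reconstruction):

```python
import numpy as np

def smooth_boundary(x, y, keep=8):
    """Treat a closed boundary as complex samples z = x + iy, keep only the
    `keep` lowest-frequency Fourier modes (plus their conjugates), and
    invert the DFT; truncating high frequencies rounds off sharp corners."""
    z = x + 1j * y
    Z = np.fft.fft(z)
    mask = np.zeros(Z.size)
    mask[:keep + 1] = 1      # DC and low positive frequencies
    mask[-keep:] = 1         # matching negative frequencies
    zs = np.fft.ifft(Z * mask)
    return zs.real, zs.imag

# Toy test: a square boundary (sharp corners) sampled at 256 points.
n = 64
top    = np.column_stack([np.linspace(-1, 1, n, endpoint=False),  np.ones(n)])
right  = np.column_stack([ np.ones(n), np.linspace(1, -1, n, endpoint=False)])
bottom = np.column_stack([np.linspace(1, -1, n, endpoint=False), -np.ones(n)])
left   = np.column_stack([-np.ones(n), np.linspace(-1, 1, n, endpoint=False)])
pts = np.vstack([top, right, bottom, left])
xs, ys = smooth_boundary(pts[:, 0], pts[:, 1])   # corners are now rounded
```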
- Title
- Heterogeneous Workloads Study towards Large-scale Interconnect Network Simulation
- Creator
- Wang, Xin
- Date
- 2023
- Description
- High-bandwidth, low-latency interconnect networks play a key role in the design of modern high-performance computing (HPC) systems. The ever-increasing need for higher bandwidth and higher message rates has driven the design of low-diameter interconnect topologies like variants of dragonfly. As these hierarchical networks become increasingly dominant, interference caused by resource sharing can lead to significant network congestion and performance variability. Meanwhile, with the rapid growth of machine learning applications, the workloads of future HPC systems are anticipated to be a mix of scientific simulation, big data analytics, and machine learning applications. However, little work has been conducted to understand the performance implications of co-running heterogeneous workloads on large-scale dragonfly systems. There is a greater need to study how different interconnect technologies affect workload performance, and how conventional scientific applications interact with emerging big data applications at the underlying interconnect level. In this work, we first present a comparative analysis exploring the communication interference for traditional HPC applications by analyzing the trade-off between localizing communication and balancing network traffic. We conduct trace-based simulations for applications with different communication patterns, using multiple job placement policies and routing mechanisms. Then we develop a scalable workload manager that provides an automatic framework to facilitate hybrid workload simulation. We investigate various hybrid workloads and navigate various application-system configurations for a deeper understanding of the performance implications of a diverse mix of workloads on current and future supercomputers. Finally, we propose a scalable framework, Union+, that enables simulation of communication and I/O simultaneously. By combining different levels of abstraction, Union+ is able to efficiently co-model the communication and I/O traffic on HPC systems that are equipped with flash-based storage. We conduct experiments with different system configurations, showing how Union+ can help system designers assess the usefulness of future technologies in next-generation HPC machines.
- Title
- Retrospective Quantitative T1 Imaging to Examine Characteristics of Multiple Sclerosis Lesions
- Creator
- Young, Griffin James
- Date
- 2024
- Description
- Quantitative MRI plays an essential role in assessing tissue abnormality and disease progression in multiple sclerosis (MS). Specifically, T1 relaxometry is gaining popularity, as elevated T1 values have been shown to correlate with increased inflammation, demyelination, and gliosis. The predominant issue is that relaxometry requires parametric mapping through advanced imaging techniques not commonly included in standard clinical protocols. This leaves an information gap in large clinical datasets from which quantitative mapping could have been performed. We introduce T1-REQUIRE, a retrospective T1 mapping method that approximates T1 values from a single T1-weighted MR image. This method has already been shown to be accurate within 10% of a clinically available reference standard in healthy controls but is further validated here in MS cohorts. We also aim to determine T1-REQUIRE's statistical significance as a unique biomarker for the assessment of MS lesions as they relate to clinical disability and disease burden. A 14-subject comparison between T1-REQUIRE maps derived from 3D T1-weighted turbo field echoes (3D T1w TFE) and an inversion-recovery fast field echo (IR-FFE) revealed a whole-brain voxel-wise Pearson's correlation of r = 0.89 (p < 0.001) and a mean bias of 3.99%. In MS white matter lesions, r = 0.81, R² = 0.65 (p < 0.001, N = 159), bias = 10.07%, and in normal-appearing white matter (NAWM), r = 0.82, R² = 0.67 (p < 0.001), bias = 9.48%. Mean lesional T1-REQUIRE and MTR correlated significantly (r = -0.68, p < 0.001, N = 587), similar to previously published literature. Median lesional MTR correlated significantly with EDSS (rho = -0.34, p = 0.037), and lesional T1-REQUIRE exhibited significant correlations with global brain tissue atrophy as measured by brain parenchymal fraction (BPF) (r = -0.41, p = 0.010, N = 38). Multivariate linear regressions showed that T1-REQUIRE in NAWM had a meaningful statistical relationship with EDSS (β = 0.03, p = 0.027, N = 38), as did mean MTR values in the thalamus (β = -0.27, p = 0.037, N = 38). A new spoiled gradient echo variation of T1-REQUIRE was assessed as a proof of concept in a small 5-subject MS cohort compared with IR-FFE T1 maps, with a whole-brain voxel-wise correlation of r = 0.88, R² = 0.77 (p < 0.001), and bias = 0.19%. Lesional T1 comparisons reached a correlation of r = 0.75, R² = 0.56 (p < 0.001, N = 42), and bias = 10.81%. The significance of these findings is the potential to provide supplementary quantitative information in clinical datasets where quantitative protocols were not implemented. Large MS data repositories previously containing only structural T1-weighted images may now be used in big-data relaxometric studies, with the potential to lead to new findings in newly uncovered datasets. Furthermore, T1-REQUIRE has the potential for immediate use in clinics where standard T1 mapping sequences cannot readily be implemented.
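As a quick illustration of the agreement metrics quoted above (voxel-wise Pearson correlation and mean percent bias between two T1 maps; the array names and synthetic volumes are hypothetical):

```python
import numpy as np

def agreement(t1_estimate, t1_reference, mask):
    """Voxel-wise Pearson r and mean percent bias inside a brain mask."""
    a, b = t1_estimate[mask], t1_reference[mask]
    r = np.corrcoef(a, b)[0, 1]
    bias = 100.0 * np.mean((a - b) / b)
    return r, bias

# Synthetic stand-in volumes (64^3 voxels), reference T1 around 800-1400 ms.
rng = np.random.default_rng(1)
ref = rng.uniform(800, 1400, size=(64, 64, 64))
est = ref * rng.normal(1.04, 0.05, size=ref.shape)   # ~4% biased estimate
mask = np.ones(ref.shape, dtype=bool)
print(agreement(est, ref, mask))
```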
- Title
- Independence and Graphical Models for Fitting Real Data
- Creator
- Cho, Jason Y.
- Date
- 2023
- Description
- Given some real-life dataset where the attributes take on categorical values, with a corresponding r(1) × r(2) × … × r(m) contingency table with nonzero rows or nonzero columns, we test the goodness-of-fit of various independence models to the dataset using a variation of Metropolis-Hastings that uses Markov bases as a tool to get a Monte Carlo estimate of the p-value. This variation of Metropolis-Hastings can be found in Algorithm 3.1.1. Next we consider the problem: "out of all possible undirected graphical models, each associated to some graph with m vertices, that we test to fit on our dataset, which one best fits the dataset?" Here, the m attributes are labeled as vertices of the graph. We would have to conduct 2^(m choose 2) goodness-of-fit tests, since there are 2^(m choose 2) possible undirected graphs on m vertices. Instead, we consider a backwards-selection likelihood-ratio test algorithm. We first start with the complete graph G = K(m), and call the corresponding undirected graphical model ℳ(G) the parent model. Then for each edge e in E(G), we repeatedly apply the likelihood-ratio test to test the relative fit of the model ℳ(G-e), the child model, vs. ℳ(G), the parent model, where ℳ(G-e) ⊆ ℳ(G). More details on this iterative process can be found in Algorithm 4.1.3. For our dataset, we use the alcohol dataset found at https://www.kaggle.com/datasets/sooyoungher/smoking-drinking-dataset, where the four attributes we use are "Gender" (male, female), "Age", "Total cholesterol (mg/dL)", and "Drinks alcohol or not?". After testing the goodness-of-fit of three independence models corresponding to the independence statements "Gender vs. Drink or not?", "Age vs. Drink or not?", and "Total cholesterol vs. Drink or not?", we found the data to be consistent with the two independence models corresponding to "Age vs. Drink or not?" and "Total cholesterol vs. Drink or not?". And after applying the backwards-selection likelihood-ratio method on the alcohol dataset, we found the data to be consistent with the undirected graphical model associated to the complete graph minus the edge {"Total cholesterol", "Drink or not?"}.
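As a small sketch of the Markov-basis Metropolis-Hastings idea for a two-way table (the classic ±1 basis moves that preserve row and column sums, with a chi-square statistic; this generic version is an assumption about the flavor of Algorithm 3.1.1, which is not reproduced in this listing):

```python
import numpy as np
from math import lgamma

rng = np.random.default_rng(0)

def chi2_stat(table):
    """Pearson chi-square statistic against the independence model."""
    exp = np.outer(table.sum(1), table.sum(0)) / table.sum()
    return ((table - exp) ** 2 / exp).sum()

def mc_pvalue(table, steps=20000):
    """Estimate the exact-test p-value by a Markov-basis walk over all
    tables with the observed margins (uniform proposal, Metropolis ratio
    for the conditional hypergeometric target)."""
    def log_weight(t):  # log of 1/prod(n_ij!), hypergeometric up to a constant
        return -sum(lgamma(v + 1) for v in t.ravel())
    cur, s0, hits = table.astype(int).copy(), chi2_stat(table), 0
    for _ in range(steps):
        (i, k) = rng.choice(cur.shape[0], 2, replace=False)
        (j, l) = rng.choice(cur.shape[1], 2, replace=False)
        move = rng.choice([-1, 1])
        nxt = cur.copy()
        nxt[i, j] += move; nxt[k, l] += move   # basis move: +e_ij +e_kl
        nxt[i, l] -= move; nxt[k, j] -= move   #             -e_il -e_kj
        if nxt.min() >= 0 and np.log(rng.random()) < log_weight(nxt) - log_weight(cur):
            cur = nxt
        hits += chi2_stat(cur) >= s0
    return hits / steps

obs = np.array([[20, 30], [35, 15]])
print(mc_pvalue(obs))
```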
- Title
- Investigation in the Uncertainty of Chassis Dynamometer Testing for the Energy Characterization of Conventional, Electric and Automated Vehicles
- Creator
- Di Russo, Miriam
- Date
- 2023
- Description
- For conventional and electric vehicles tested in a standard chassis dynamometer environment, precise regulations on the evaluation of their energy performance exist. However, the regulations do not include requirements on the confidence value to associate with the results. As vehicles become more and more efficient to meet stricter regulatory mandates on emissions, fuel, and energy consumption, traditional testing methods may become insufficient to validate these improvements and may need revision. Without information about the accuracy associated with the results of those procedures, however, adjustments and improvements are not possible, since no frame of reference exists. For connected and automated vehicles, there are no standard testing procedures, and researchers are still in the process of determining whether current evaluation methods can be extended to test intelligent technologies and which metrics best represent their performance. For these vehicles it is even more important to determine the uncertainty associated with these experimental methods and how it propagates to the final results. The work presented in this dissertation focuses on the development of a systematic framework for the evaluation of the uncertainty associated with the energy performance of conventional, electric, and automated vehicles. The framework is based on a known statistical method to determine the uncertainty associated with the different stages and processes involved in experimental testing, and to evaluate how the accuracy of each parameter involved impacts the final results. The results demonstrate that the framework can be successfully applied to existing testing methods, provides a trustworthy value of accuracy to associate with energy performance results, and can be easily extended to connected-automated vehicle testing to evaluate how novel experimental methods impact the accuracy and the confidence of the outputs. The framework can easily be implemented in an existing laboratory environment to incorporate the uncertainty evaluation among the results analyzed at the end of each test, and provides a reference for researchers to evaluate the actual benefits of new algorithms and optimization methods and understand margins for improvement, and for regulators to assess which parameters to enforce to ensure compliance and projected benefits.
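As a generic sketch of first-order propagation of independent input uncertainties through a measurement chain (a GUM-style root-sum-of-squares combination, offered as an assumption about the kind of statistical method the abstract references; the example quantity and numbers are illustrative):

```python
import numpy as np

def propagate(f, x, u):
    """First-order (GUM-style) standard uncertainty of y = f(x) for
    independent inputs x with standard uncertainties u:
    u_y^2 = sum_i (df/dx_i * u_i)^2, gradients by central differences."""
    x = np.asarray(x, float)
    grad = np.empty_like(x)
    for i in range(x.size):
        h = 1e-6 * max(1.0, abs(x[i]))
        xp, xm = x.copy(), x.copy()
        xp[i] += h; xm[i] -= h
        grad[i] = (f(xp) - f(xm)) / (2 * h)
    return float(np.sqrt(np.sum((grad * np.asarray(u, float)) ** 2)))

# Toy example: energy consumption per distance, E = P * t / d.
f = lambda v: v[0] * v[1] / v[2]     # power (kW), time (h), distance (km)
x = [30.0, 0.5, 25.0]                # nominal values
u = [0.3, 0.001, 0.05]               # standard uncertainties of the inputs
print(propagate(f, x, u), "kWh/km standard uncertainty")
```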
- Title
- Extremal and Enumerative Problems on DP-Coloring of Graphs
- Creator
- Sharma, Gunjan
- Date
- 2024
- Description
- Graph coloring is the mathematical model for studying problems related to conflict-free allocation of resources. DP-coloring (also known as correspondence coloring) of graphs is a vast generalization of classic graph coloring and of many other coloring concepts studied in the past 150+ years. We study problems in DP-coloring of graphs that combine questions and ideas from extremal, structural, probabilistic, and enumerative aspects of graph coloring. In particular, we study (i) DP-coloring of Cartesian products of graphs using the DP color function, the DP-coloring counterpart of the chromatic polynomial, and robust criticality, a new notion of graph criticality; (ii) the Shameful Conjecture on the mean number of colors used in a graph coloring, in the context of list coloring and DP-coloring; and (iii) asymptotic bounds on the difference between the chromatic polynomial and the DP color function, as well as the difference between the dual DP color function and the chromatic polynomial, in terms of the cycle structure of a graph. These results respectively give an upper bound and a lower bound on the chromatic polynomial in terms of DP colorings of a graph.
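For context (a standard fact in this literature, stated here as background rather than as a result of the thesis), the DP color function lower-bounds the list color function, which in turn lower-bounds the chromatic polynomial, for every graph G and every number of colors m:

```latex
% Background ordering of counting functions: every list assignment induces
% a cover, and the constant assignment is a list assignment, so
P_{DP}(G,m) \;\le\; P_{\ell}(G,m) \;\le\; P(G,m) \qquad \text{for all } m \ge 1 .
```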
- Title
- Voxel Transformer with Density-Aware Deformable Attention for 3D Object Detection
- Creator
- Kim, Taeho
- Date
- 2023
- Description
- The Voxel Transformer (VoTr) is a prominent model in the field of 3D object detection, employing a transformer-based architecture to capture long-range voxel relationships through self-attention. However, despite its expanded receptive field, VoTr's flexibility is constrained because that receptive field is predefined. In this paper, we present a Voxel Transformer with Density-Aware Deformable Attention (VoTr-DADA), a novel approach to 3D object detection. VoTr-DADA leverages density-guided deformable attention for a more adaptable receptive field. It efficiently identifies key areas in the input using density features, combining the strengths of both VoTr and Deformable Attention. We introduce the Density-Aware Deformable Attention (DADA) module, which is specifically designed to focus on these crucial areas while adaptively extracting more informative features. Experimental results on the KITTI dataset and the Waymo Open Dataset show that our proposed method outperforms the baseline VoTr model in 3D object detection while maintaining a fast inference speed.
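As a rough sketch of the deformable-attention mechanism underlying such modules (a generic single-head 2D stand-in for the paper's 3D voxel setting; the module name, K, dimensions, and offset scale are illustrative assumptions, and the density guidance is omitted): each query predicts sampling offsets and attention weights, gathers features at the offset locations, and mixes them.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DeformableAttention2D(nn.Module):
    """Minimal single-head deformable attention over a 2D feature map."""
    def __init__(self, dim=32, k=4):
        super().__init__()
        self.k = k
        self.offsets = nn.Linear(dim, 2 * k)   # (dx, dy) per sampling point
        self.weights = nn.Linear(dim, k)       # attention weight per point
        self.proj = nn.Linear(dim, dim)

    def forward(self, query, feat, ref):
        # query: (B, N, C); feat: (B, C, H, W); ref: (B, N, 2) in [-1, 1]
        B, N, _ = query.shape
        off = self.offsets(query).view(B, N, self.k, 2)
        w = self.weights(query).softmax(-1)                      # (B, N, K)
        loc = (ref[:, :, None, :] + 0.1 * off.tanh()).clamp(-1, 1)
        sampled = F.grid_sample(feat, loc, align_corners=False)  # (B, C, N, K)
        out = (sampled * w[:, None]).sum(-1).transpose(1, 2)     # (B, N, C)
        return self.proj(out)

attn = DeformableAttention2D()
q = torch.randn(2, 100, 32)
fmap = torch.randn(2, 32, 16, 16)
ref = torch.rand(2, 100, 2) * 2 - 1
print(attn(q, fmap, ref).shape)   # torch.Size([2, 100, 32])
```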
- Title
- Quantification of Imaging Markers at Different MRI Contrast Weightings, Vasculature, and Across Field Strengths
- Creator
- Nguyen, Vivian S.
- Date
- 2024
- Description
- Quantitative MRI measures physical characteristics of tissue, which creates a set scale with units that allows longitudinal monitoring and cross-patient and cross-center studies. It enables earlier detection of disease, complements biopsy, and provides a clear numeric scale for differentiation of disease states. However, quantitative MRI acquisitions and post-processing are not trivial, which makes them hard to implement in the clinical setting. This, along with the variability in clinically used acquisitions and post-processing techniques, leads to difficulty in establishing reliable, consistent, and accurate quantitative information. There is a critical need for rigorous validation of quantitative imaging biomarkers, both for current and novel quantitative imaging techniques. This dissertation seeks to both validate current quantitative MR imaging techniques and develop new ones in the heart and brain by: 1) examining the data variability and the loss in tag fidelity that occur when quantitative cardiac tagging is incorrectly run post-Gadolinium injection; 2) quantifying the negative impact of unexpected relaxometric behavior observed in low-field MR imaging at low inversion times during T1 mapping; 3) validating retrospectively calculated T1 as a biomarker for Multiple Sclerosis progression; and 4) prototyping an oxygen extraction fraction (OEF) mapping technique for the purpose of stroke prediction and the establishment of a numeric scale of tissue health for stroke patients. Assessment of pre-Gadolinium and post-Gadolinium cardiac tag quality showed that post-Gadolinium tags are less saturated (p = 0.012) and have a wider range of saturation, contrast, and sharpness. This results in a loss of information in the late cardiac cycle and impedes quantification of myocardial function. Investigation of 64 mT T1 mapping revealed unique relaxometric behavior: at low inversion times (<250 ms), the signal response curve displayed an increase or a plateau in signal intensity depending on T1 relaxation time. Inclusion of this increase or plateau in signal intensity negatively impacted T1 fitting algorithms, leading to their failure or to incorrectly calculated T1 values. The peak in signal intensity before the null point was found to occur at 210 ms, which impacts current low-field T1 mapping protocols that use an initial inversion time of 80-110 ms. Validation of retrospectively calculated T1 as a biomarker in Multiple Sclerosis revealed that T1 of normal-appearing brain tissue correlates with measures of Multiple Sclerosis progression (EDSS, BPF, and disease duration), with normal-appearing white matter T1 correlating with BPF (r = -0.49, p = 0.0018); putamen T1 correlating with EDSS (r = 0.48, p = 2.40e-03), with BPF (r = 0.69, p = 2.04e-06), and with disease duration (r = -0.37, p = 0.02); and globus pallidus T1 correlating with disease duration (r = -0.42, p = 0.0093). Lesion T1 is reflective of MS severity whereas MTR is not. Finally, development of an oxygen extraction fraction (OEF) mapping technique showed that application of independent component analysis (ICA) to cardiac-gated spiral-trajectory phase images yielded components that feature the stenosis features observed in magnitude images. These ICA components form the basis of OEF mapping from phase images. This dissertation presents four studies that seek to improve current quantitative MR imaging protocols in the heart, or to develop and validate new quantitative MR imaging techniques in the brain, for the purpose of monitoring disease progression or predicting disease.
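For reference, inversion-recovery T1 mapping fits the measured signal to the standard magnitude inversion-recovery model, shown here in its simple long-TR form as background (the dissertation's exact fitting model is not given in this listing):

```latex
% Magnitude inversion-recovery signal as a function of inversion time TI
S(\mathrm{TI}) = \left| S_0 \left( 1 - 2\, e^{-\mathrm{TI}/T_1} \right) \right| ,
\qquad
\mathrm{TI}_{\text{null}} = T_1 \ln 2
```

The null point at TI = T1 ln 2 is why the signal behavior at short inversion times matters so much for the fitting algorithms discussed above.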
- Title
- SEISMIC DESIGN STUDY OF STEEL PLATE SHEAR WALL
- Creator
- Moshiri, Ali
- Date
- 2012-04-20, 2012-05
- Description
- Steel plate shear walls (SPSWs) are an innovative lateral load-resisting system capable of effectively and economically bracing a building against both wind and earthquake forces. The system consists of infill steel plates connected to boundary beams and columns over the full height of the framed bay. Beam-to-column connections can be rigid or shear connections, and the infill plates can be either stiffened or unstiffened, depending on the design philosophy of the infill plates. The view of some structural designers is to use heavy stiffeners to reinforce and increase the buckling capacity of shear walls, whereas if the walls are left unstiffened and allowed to buckle, their energy absorption will increase significantly due to the post-buckling capacity. The performance of a 9-story SPSW with moment-resisting beam-to-column connections was studied under quasi-static loading conditions and 10 earthquake records recorded in Los Angeles, by developing nonlinear dynamic explicit finite element models in ABAQUS. All the models were validated with experimental results. The effects of the stiffness of the boundary elements (VBE and HBE) and of the plate thickness on the general behavior of the structure were also investigated. In the design of SPSWs, vertical boundary elements play a major role in increasing the capacity of the system. In high seismic zones there is always a chance of plastic hinge formation in the boundary elements, especially columns, in any intermediate floor. It is recommended that SPSWs not be used for medium- to high-rise buildings in high seismic regions until the lack of capacity design requirements for this type of SPSW is rectified.
Ph.D. in Structural Engineering, May 2012
- Title
- EXPLOITING NETWORK CODING IN DIFFERENT WIRELESS NETWORKS
- Creator
- Guo, Bin
- Date
- 2012-07-06, 2012-07
- Description
- Wireless communication networks have been incorporated into our daily life and provide convenience anytime and anywhere. However, the wireless medium is unreliable and unpredictable, and current wireless networks suffer from low throughput, low reliability, etc. Network coding, an alternative approach, has attracted growing interest and has emerged as an important technology in wireless networks. It can provide significant potential throughput improvements and a high degree of robustness. This dissertation is built on the theory of network coding, and in it different network coding protocols are designed for varied wireless networks. The first part of this dissertation proposes a novel coding-aware routing protocol for wireless mesh networks. In particular, a generalized coding condition is formally established to identify coding opportunities. Based on the general coding condition analysis, a novel routing metric, FORM (Free-ride Optimal Routing Metric), and the corresponding routing protocol are developed with the objective of exploiting coding opportunities and maximizing the benefit of "free-ride" in order to reduce the total number of transmissions and consequently increase network throughput. The results show the proposed protocol achieves significant throughput gains over existing approaches. The second part of this dissertation exploits network coding in wireless cooperative networks. First, a Decode-and-Forward Network Coded (DFNC) protocol is proposed for multi-user cooperative communication systems. In particular, DFNC develops an efficient construction method for coding coefficients and a novel decoding algorithm that combines network coding and channel coding. DFNC exploits both temporal and spatial diversity through multiple channels by allowing all the users to generate redundant network-coded packets in a distributed manner, and it helps fully exploit the redundancy provided by network coding to realize error correction. Theoretical analysis and simulation results demonstrate that DFNC outperforms other transmission schemes in terms of Symbol Error Rate (SER) and achieves a higher diversity order. Second, the idea of DFNC is extended and Modified-DFNC (M-DFNC) is introduced for a more practical scenario: not all users will be able to dedicate their resources to providing assistance for others. The throughput analysis shows that M-DFNC outperforms the conventional cooperative protocol in the low-SNR regime, implying that an adaptive cooperation system should be adopted to optimize performance. The simulation results validate the theoretical analysis.
Ph.D. in Electrical Engineering, July 2012
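As a toy illustration of the core network-coding primitive behind such protocols (the textbook two-flow XOR relay example, not a protocol from this dissertation):

```python
def xor(a: bytes, b: bytes) -> bytes:
    """Byte-wise XOR of two equal-length packets."""
    return bytes(x ^ y for x, y in zip(a, b))

# Two nodes exchange packets through a relay. Instead of forwarding p1 and
# p2 separately (2 transmissions), the relay broadcasts p1 XOR p2 (1
# transmission); each node recovers the other's packet using its own.
p1, p2 = b"hello from A", b"hi back fr B"
coded = xor(p1, p2)          # single broadcast from the relay
assert xor(coded, p1) == p2  # node A decodes B's packet
assert xor(coded, p2) == p1  # node B decodes A's packet
```

This "free ride" of one transmission per exchange is the benefit that coding-aware routing metrics such as FORM are designed to seek out.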
- Title
- CONSTITUTIVE BEHAVIOR AND MODELING OF AL-CU ALLOY SYSTEMS
- Creator
- Turkkan, Omer Anil
- Date
- 2013-05-07, 2013-05
- Description
- High-speed deformation events, such as those caused by projectile penetration, fragment impact, and shock/blast loading, are of great importance in designing materials and structures for army applications. In these events, materials are subjected to large strains, high strain rates, and rapid increases in temperature due to thermoplastic heating. In such severe conditions, overall performance is determined by the evolution of flow stress and by failure initiation and propagation, commonly in the form of adiabatic shear banding. Some 2XXX-series aluminum-copper (Al-Cu) alloys are recognized for their decent ballistic properties, and therefore they have been used as armor material for lightweight U.S. Army vehicles. Most recently, an Al-Cu-Mg-Mn-Ag alloy labeled Al 2139-T8 has been developed and is being evaluated by the U.S. Army Research Labs because of its better ballistic properties and higher strength than its predecessors. The underlying microstructure is believed to be the key element in this superior performance. The goal of this study is to explore the effect of composition and microstructural features on overall dynamic material behavior by examining the mechanical and deformation behavior of different Al-Cu material systems. Starting from pure single-crystal and polycrystalline Al structures, and adding a different element to the chemical composition in each step (i.e., Cu, Mg, Mn, Ag), the mechanical response of these different systems has been investigated. For all alloy systems, with the exception of single-crystal Al, mechanical tests have been performed at room and elevated temperatures covering quasi-static and dynamic strain-rate regimes. Shear-compression specimens promoting localized shear deformation have been used to explore the tendency of each of these materials to fail by adiabatic shear banding. In addition to the phenomenological Johnson-Cook Model (JCM), the physics-based Zerilli-Armstrong and Mechanical Threshold Stress models have been studied to model the constitutive response of Al-Cu alloys over a wide range of strain rates and temperatures. An improved ZA model has been developed to better capture the trends in experimental data.
M.S. in Mechanical, Materials, and Aerospace Engineering, May 2013
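For reference, the phenomenological Johnson-Cook flow stress model mentioned above has the standard textbook form (A, B, n, C, and m are material constants; the thesis's fitted values are not given in this listing):

```latex
\sigma = \left( A + B\,\varepsilon_p^{\,n} \right)
         \left( 1 + C \ln \frac{\dot{\varepsilon}}{\dot{\varepsilon}_0} \right)
         \left( 1 - T^{*m} \right) ,
\qquad
T^{*} = \frac{T - T_{\text{room}}}{T_{\text{melt}} - T_{\text{room}}}
```

The three factors separate strain hardening, strain-rate sensitivity, and thermal softening, which is why the model is a common baseline against physics-based alternatives such as Zerilli-Armstrong.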
- Title
- OPTIMAL BIDDING STRATEGY FOR HYDRO UNIT
- Creator
- Zhu, Renchen
- Date
- 2013-04-30, 2013-05
- Description
- The bidding price for renewable energy is very different from that of traditional energy sources such as gas and coal, because the only cost of producing renewable energy is the cost of the generating unit; no fuel has to be purchased for solar, water, or wind power. This raises the question of how to set the energy price for these sources. The bidding price determines the profit of the generating company, and more and more research has been done in this field, since every company wants a good price to earn the highest profit. For renewable energy, however, the price is not determined by the fuel price as it is for traditional energy, so deciding the bidding price of renewable energy is an open question. To solve this problem, this thesis applies the idea of minimizing the imbalance in order to maximize the owner's profit. This idea was first applied to wind units; here it is applied to the hydro unit, both with and without storage. Each case is tested with data, and the results show that this bidding strategy performs better.
M.S. in Electrical Engineering, May 2013
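As a toy illustration of bidding to minimize expected imbalance (under a symmetric per-MWh imbalance penalty, the expected absolute imbalance E|Q − b| is minimized by bidding the median of the production forecast; the penalty structure and numbers below are illustrative assumptions, not the thesis's model):

```python
import numpy as np

rng = np.random.default_rng(0)
inflow_mwh = rng.gamma(shape=4.0, scale=12.0, size=10000)  # forecast samples

def expected_imbalance(bid, production, penalty=1.0):
    """Expected imbalance cost with a symmetric per-MWh penalty."""
    return penalty * np.mean(np.abs(production - bid))

bids = np.linspace(0, inflow_mwh.max(), 500)
costs = [expected_imbalance(b, inflow_mwh) for b in bids]
best = bids[int(np.argmin(costs))]
print(f"best bid ~ {best:.1f} MWh; forecast median = {np.median(inflow_mwh):.1f} MWh")
```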
- Title
- VERIFICATION OF LARGE-SCALE ON-CHIP POWER GRIDS
- Creator
- Xiong, Xuanxing
- Date
- 2013, 2013-05
- Description
- As technology scaling continues, the performance and reliability of integrated circuits become increasingly susceptible to power supply noises, such as IR drops and Ldi/dt noises in the on-chip power grids. Reduced supply voltage levels in the grid can increase gate delay, leading to timing violations and logic failures. In order to ensure a reliable chip design, it is critical to verify that the power grid is robust, i.e., that the power supply noises are acceptable for all possible runtime situations. Hence, power grid verification has become an indispensable step in the modern design flow of integrated circuits. Nowadays, it is common practice to verify power grids by simulation. Typically, an equivalent RC/RLC circuit model of the grid is extracted from the layout, and designers perform simulations to evaluate the power supply noises based on the current waveforms drawn by the circuit. As power grid simulation can only be performed after the circuit design is done, vectorless power grid verification has been introduced to enable early power grid verification with incomplete current specifications, so that the power grid design can be better tuned and optimized at early design stages, thus reducing the design time. Due to the increasing complexity of modern chips, power grid verification has become very challenging. The broad goal of this dissertation is to explore efficient algorithms for verifying large-scale on-chip power grids. Specifically, we study parallel power grid transient simulation, vectorless steady-state verification, and vectorless transient verification. Parallel forward and back substitution algorithms are designed for efficient transient simulation; a set of novel algorithms is developed to incrementally improve the runtime efficiency of vectorless steady-state verification; and an efficient approach is proposed for vectorless transient verification with novel constraint setting.
Ph.D. in Electrical Engineering, May 2013
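As a rough sketch of the vectorless (constraint-based) verification idea, the worst-case IR drop at a node is the maximum of a linear function of the current injections over all currents satisfying local and global constraints, which is a linear program. The two-node resistive model and all bounds below are illustrative assumptions, not the dissertation's algorithms:

```python
import numpy as np
from scipy.optimize import linprog

# Tiny resistive grid model: v_drop = R @ i, with R the inverse of the
# conductance matrix of a 2-node toy grid (values illustrative).
G = np.array([[ 3.0, -1.0],
              [-1.0,  2.0]])        # conductance matrix (S)
R = np.linalg.inv(G)                # v = R @ i for current injections i (A)

i_max = np.array([1.0, 1.5])        # local constraints: 0 <= i_k <= i_max
g_total = 2.0                       # global constraint: sum(i) <= g_total

node = 0                            # verify worst-case IR drop at this node
# maximize R[node] @ i  <=>  minimize -R[node] @ i  (linprog minimizes)
res = linprog(-R[node], A_ub=np.ones((1, 2)), b_ub=[g_total],
              bounds=list(zip([0, 0], i_max)))
print("worst-case IR drop at node 0:", -res.fun, "V")
```

If the computed worst case stays below the noise budget for every node, the grid is robust for all currents the constraints admit, which is what makes the approach usable before the circuit's actual waveforms exist.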
- Title
- MATHEMATICAL MODELING OF POLY(ETHYLENE GLYCOL) DIACRYLATE HYDROGEL SYNTHESIS VIA VISIBLE LIGHT FREE-RADICAL PHOTOPOLYMERIZATION FOR TISSUE ENGINEERING APPLICATIONS
- Creator
- Lee, Chu-yi
- Date
- 2013, 2013-05
- Description
- Crosslinked hydrogels of poly(ethylene glycol) diacrylate (PEGDA) have been extensively used as scaffolds for applications in tissue engineering. In this thesis, PEGDA hydrogels are synthesized using visible-light free-radical photopolymerization (λ = 514 nm) in the presence of the visible-light photosensitive dye Eosin Y, the co-initiator triethanolamine (TEA), a comonomer, N-vinyl pyrrolidone (NVP), a crosslinking agent, PEGDA, and an optional PEG monoacrylate monomer that contains the cell-adhesive ligand YRGDS. The incorporation level of the YRGDS ligand as well as the physical and mechanical properties of these hydrogels dictate cell behavior and tissue regeneration. These hydrogel properties may be tuned through variations in polymerization conditions. The goal of this thesis was to develop a mathematical model for PEGDA hydrogel formation which predicts the incorporation level of YRGDS and the crosslink density of the hydrogel under a variety of polymerization conditions. This model provides insight into the process of hydrogel crosslinking and effectively guides the experimental design of these scaffolds for tissue engineering applications. Two major components comprised the studies of this thesis: the first was an investigation of the visible-light photoinitiation mechanism of Eosin Y and TEA, and the second was the development of a hydrogel synthesis model and its validation. Experiments and modeling were used to determine an expression for the rate of initiation of the Eosin Y/TEA initiation system and to propose a photoinitiation mechanism. In Chapter 2, experimental data and parameter fitting were utilized to obtain an empirical expression for the rate of initiation. However, this empirical expression did not consider the effect of the inhomogeneous light distribution present in this experimental system. The dynamics of light absorption during polymerization were measured under different conditions in order to gain insight into the kinetic photoinitiation mechanism as well as the rate of initiation. In Chapter 3, a mechanism for this photoinitiation was proposed. Using this mechanism, the light absorption dynamics accounting for inhomogeneous light distribution were simulated and found to be in agreement with the light absorption measurements shown in Chapter 2. Further validation of this proposed mechanism was achieved from polyNVP conversion measurements. This photoinitiation mechanism was implemented in the hydrogel model. In Chapter 4, the hydrogel synthesis model was developed based on the kinetic approach of the method of moments combined with the Numerical Fractionation technique. The model was used to predict the dynamics of hydrogel properties such as gel fraction, crosslink density, and RGD incorporation under various polymerization conditions. Model predictions were compared with experimental data. Three sets of experiments were conducted. In the first set of experiments, where hydrogels were formed in the absence of Acryl-PEG-RGD, the total double bond concentration was kept constant while varying the compositions of NVP and PEGDA. The model and the experiments showed a maximum crosslink density for an acrylate-to-double-bond ratio of 0.5 to 0.6. This is related to the synergistic cross-propagation between NVP and PEGDA, which results in an increase in the rate of polymerization, leading to higher crosslink density. In the second set of experiments, hydrogels were formed in the presence of Acryl-PEG-RGD to investigate its incorporation as well as the hydrogel crosslink density. The model showed reasonable agreement with the experimental data, and in some cases the predicted RGD deviated from the experimental measurements due to changes in volume upon swelling; the effect of swelling was not considered by the model. The calculated crosslink densities were compared with the inverse swelling ratios from the experiments. The reduction of free volume due to the space occupied by the unreacted pendant double bonds was not considered by the model. This reduction of free volume affected the apparent swelling ratio obtained from experiments, thus resulting in the observed mismatch between the experimental trends and the crosslink density predicted by the model. In the third set of experiments, additional crosslink density measurements were conducted using a PEGDA macromer of lower molecular weight (MW = 575 Da). The experiments were performed in the absence of Acryl-PEG-RGD. A few cases were not accurately predicted, since the model did not consider the reduction in the concentration of available pendant double bonds when gelation occurs. Across the three sets of experiments, the hydrogel synthesis model offers reasonable predictions for most of the experimental cases. This model can be used as a guide for experimentally designing PEGDA hydrogels with the desired properties for tissue engineering applications.
Ph.D. in Chemical and Biological Engineering, May 2013
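For background, the method of moments used in such kinetic models tracks low-order moments of the chain-length distribution rather than every individual chain-length species. The standard definitions are shown below; the thesis's specific moment balance equations are not reproduced in this listing:

```latex
% k-th moments of the live (R_n) and dead (P_n) chain-length distributions
\lambda_k = \sum_{n=1}^{\infty} n^k\,[R_n] , \qquad
\mu_k = \sum_{n=1}^{\infty} n^k\,[P_n] , \qquad k = 0,\, 1,\, 2
```

Averages of interest then follow from ratios of moments, e.g. the number-average chain length $\bar{X}_n = (\lambda_1 + \mu_1)/(\lambda_0 + \mu_0)$, which is why closing the balances at second order usually suffices for properties such as gel fraction and crosslink density.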