Search results
(2,561 - 2,580 of 2,806)
Pages
- Title
- Investigation in the Uncertainty of Chassis Dynamometer Testing for the Energy Characterization of Conventional, Electric and Automated Vehicles
- Creator
- Di Russo, Miriam
- Date
- 2023
- Description
-
For conventional and electric vehicles tested in a standard chassis dynamometer environment, precise regulations exist for the evaluation of their energy performance. However, the regulations include no requirements on the confidence value to associate with the results. As vehicles become more and more efficient to meet stricter regulatory mandates on emissions, fuel, and energy consumption, traditional testing methods may become insufficient to validate these improvements and may need revision. Without information about the accuracy of the results of those procedures, however, adjustments and improvements are not possible, since no frame of reference exists. For connected and automated vehicles there are no standard testing procedures, and researchers are still determining whether current evaluation methods can be extended to test intelligent technologies and which metrics best represent their performance. For these vehicles it is even more important to determine the uncertainties associated with these experimental methods and how they propagate to the final results. The work presented in this dissertation focuses on the development of a systematic framework for evaluating the uncertainty associated with the energy performance of conventional, electric, and automated vehicles. The framework is based on a known statistical method to determine the uncertainty associated with the different stages and processes involved in experimental testing, and to evaluate how the accuracy of each parameter impacts the final results. The results demonstrate that the framework can be successfully applied to existing testing methods, provides a trustworthy accuracy value to associate with energy performance results, and can be easily extended to connected-automated vehicle testing to evaluate how novel experimental methods affect the accuracy and confidence of the outputs.
The framework can easily be implemented in an existing laboratory environment to incorporate uncertainty evaluation into the results analyzed at the end of each test. It provides a reference for researchers to evaluate the actual benefits of new algorithms and optimization methods and to understand margins for improvement, and for regulators to assess which parameters to enforce to ensure compliance and deliver the projected benefits.
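The per-parameter propagation described above is, in spirit, the standard first-order (GUM-type) uncertainty combination. The abstract names only "a known statistical method," so the formula below and the energy-intensity example are illustrative assumptions on my part, not the author's actual procedure:

```python
import math

def combined_uncertainty(partials, uncertainties):
    """First-order (GUM-style) propagation: u_y = sqrt(sum_i (dy/dx_i * u_i)^2)."""
    return math.sqrt(sum((p * u) ** 2 for p, u in zip(partials, uncertainties)))

# Illustrative quantity (not from the dissertation): energy intensity E = P*t/d.
P, t, d = 30.0, 0.5, 25.0            # power [kW], time [h], distance [km]
u_P, u_t, u_d = 0.3, 0.001, 0.05     # assumed instrument accuracies
partials = [t / d, P / d, -P * t / d ** 2]   # dE/dP, dE/dt, dE/dd
u_E = combined_uncertainty(partials, [u_P, u_t, u_d])   # uncertainty in kWh/km
```

The same pattern extends to any measurement chain once the sensitivity of the final result to each instrument reading is known.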
- Title
- Extremal and Enumerative Problems on DP-Coloring of Graphs
- Creator
- Sharma, Gunjan
- Date
- 2024
- Description
-
Graph coloring is the mathematical model for studying problems related to conflict-free allocation of resources. DP-coloring (also known as correspondence coloring) of graphs is a vast generalization of classic graph coloring and of many other coloring notions studied over the past 150+ years. We study problems in DP-coloring of graphs that combine questions and ideas from the extremal, structural, probabilistic, and enumerative aspects of graph coloring. In particular, we study (i) DP-coloring of Cartesian products of graphs using the DP color function, the DP-coloring counterpart of the chromatic polynomial, and robust criticality, a new notion of graph criticality; (ii) the Shameful Conjecture on the mean number of colors used in a graph coloring, in the context of list coloring and DP-coloring; and (iii) asymptotic bounds on the difference between the chromatic polynomial and the DP color function, as well as the difference between the dual DP color function and the chromatic polynomial, in terms of the cycle structure of a graph. These results respectively give an upper bound and a lower bound on the chromatic polynomial in terms of DP colorings of a graph.
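For readers unfamiliar with the objects in (iii): the chromatic polynomial counts proper colorings, and for a cycle C_n it has the closed form P(C_n, m) = (m-1)^n + (-1)^n (m-1). A small brute-force check of that form (my own illustration; the thesis's results concern the DP color function, which can fall strictly below this count):

```python
from itertools import product

def chromatic_count_cycle(n, m):
    """Count proper m-colorings of the cycle C_n by exhaustive enumeration."""
    count = 0
    for coloring in product(range(m), repeat=n):
        # A coloring is proper iff consecutive cycle vertices get different colors.
        if all(coloring[i] != coloring[(i + 1) % n] for i in range(n)):
            count += 1
    return count

def chromatic_poly_cycle(n, m):
    """Closed form P(C_n, m) = (m-1)^n + (-1)^n * (m-1)."""
    return (m - 1) ** n + (-1) ** n * (m - 1)
```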
- Title
- Agency and Pathway Thinking as Mediators of The Relationship Between Caregiver Burden And Life Satisfaction Among Family Caregivers Of People With Parkinson’s Disease: An Application Of Snyder’s Hope Theory
- Creator
- Springer, Jessica Gabrielle
- Date
- 2024
- Description
-
In the United States, there are 47.9 million caregivers providing care to family members with disabilities. Those providing care to someone who has Parkinson's Disease (PD), a complex degenerative movement disorder, may have a unique caregiving experience, given that disease-related factors (e.g., motor and non-motor symptoms) can contribute to worsening caregiver burden and life satisfaction (LS). PD has an increasing incidence of 90,000 new cases per year, likely resulting in an increased need for caregivers. Caregiving research frequently focuses on mediators between caregiver burden and LS, including social support, coping skills, and appraisals; research specifically focused on caregivers of people with PD (Pw/PD) is limited. Hope is a "positive motivational characteristic comprised of agency and pathways thinking that can help facilitate drive towards one's goal while also serving as a buffer against negative events" (Snyder et al., 1991). The goal of this study is to understand Snyder's hope theory as it relates to caregiver burden and LS for caregivers of Pw/PD. Specifically, we hypothesized that (a) caregiver burden would be negatively correlated with agency thinking, pathways thinking, and LS among caregivers of Pw/PD, while pathways and agency thinking would be positively associated with LS; and (b) agency thinking and pathways thinking would mediate the relationship between caregiver burden and LS. The study sample consisted of 249 caregivers of Pw/PD who completed an anonymous online questionnaire. Correlations between agency and pathways thinking, LS, caregiver burden, and sociodemographic factors were evaluated. A parallel mediation analysis was run to evaluate the mediating roles of pathways and agency thinking in the relationship between caregiver burden and LS. Results indicated that LS was significantly and negatively correlated with caregiver burden.
LS was significantly and positively correlated with both pathways and agency thinking. Pathways thinking had no indirect effect on the relationship between caregiver burden and LS. Agency thinking had a negative indirect effect, suggesting that agency thinking partially mediated the relationship between caregiver burden and LS. Clinical implications and future directions are discussed.
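A mediation analysis of the kind described decomposes the burden-to-LS effect into indirect paths (a x b) through each mediator. The sketch below is a deliberately simplified illustration on simulated data, using simple-regression slopes rather than the study's full parallel-mediation model; all variable values are invented:

```python
import random

def ols_slope(x, y):
    """Simple-regression slope of y on x (least squares)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
           sum((xi - mx) ** 2 for xi in x)

random.seed(0)
burden = [random.gauss(0, 1) for _ in range(500)]
# Simulated mediator: agency thinking decreases with burden (path a < 0).
agency = [-0.5 * b + random.gauss(0, 0.5) for b in burden]
# Simulated outcome: LS rises with agency (path b > 0) plus a direct burden effect.
ls = [-0.3 * b + 0.6 * a + random.gauss(0, 0.5) for b, a in zip(burden, agency)]

a_path = ols_slope(burden, agency)   # estimate of path a (negative)
b_path = ols_slope(agency, ls)       # rough estimate of path b (positive)
indirect = a_path * b_path           # negative indirect effect, as the study reports
```

A real analysis would estimate path b from a multiple regression controlling for burden and bootstrap the indirect effect; this toy only reproduces the signs.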
- Title
- Voxel Transformer with Density-Aware Deformable Attention for 3D Object Detection
- Creator
- Kim, Taeho
- Date
- 2023
- Description
-
The Voxel Transformer (VoTr) is a prominent model in the field of 3D object detection, employing a transformer-based architecture to capture long-range voxel relationships through self-attention. However, even with its expanded receptive field, VoTr's flexibility is constrained because that receptive field is predefined. In this paper, we present the Voxel Transformer with Density-Aware Deformable Attention (VoTr-DADA), a novel approach to 3D object detection. VoTr-DADA leverages density-guided deformable attention for a more adaptable receptive field. It efficiently identifies key areas in the input using density features, combining the strengths of VoTr and Deformable Attention. We introduce the Density-Aware Deformable Attention (DADA) module, which is specifically designed to focus on these crucial areas while adaptively extracting more informative features. Experimental results on the KITTI dataset and the Waymo Open Dataset show that our proposed method outperforms the baseline VoTr model in 3D object detection while maintaining a fast inference speed.
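As rough intuition for the deformable-attention idea: attention is computed only at positions shifted by predicted offsets, rather than over a fixed window. The sketch below is a 1-D scalar toy of that sampling pattern, nothing like the real voxel-based DADA module:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def deformable_attention_1d(query, features, base_idx, offsets):
    """Attend only at base_idx + offsets (clamped to bounds) instead of a
    fixed window; in a real model the offsets are themselves predicted."""
    idxs = [min(max(base_idx + o, 0), len(features) - 1) for o in offsets]
    keys = [features[i] for i in idxs]
    weights = softmax([query * k for k in keys])
    return sum(w * k for w, k in zip(weights, keys))
```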
- Title
- Quantification of Imaging Markers at Different MRI Contrast Weightings, Vasculature, and Across Field Strengths
- Creator
- Nguyen, Vivian S.
- Date
- 2024
- Description
-
Quantitative MRI measures physical characteristics of tissue, creating a set scale with units that allows longitudinal monitoring and cross-patient and cross-center studies. It enables earlier detection of disease, complements biopsy, and provides a clear numeric scale for differentiation of disease states. However, quantitative MRI acquisitions and post-processing are not trivial, which makes them hard to implement in the clinical setting. This, along with the variability in clinically used acquisitions and post-processing techniques, leads to difficulty in establishing reliable, consistent, and accurate quantitative information. There is a critical need for rigorous validation of quantitative imaging biomarkers, both for current and novel quantitative imaging techniques. This dissertation seeks both to validate current quantitative MR imaging techniques and to develop new ones in the heart and brain by: 1) examining the data variability and the loss in tag fidelity that occur when quantitative cardiac tagging is incorrectly run post-Gadolinium injection; 2) quantifying the negative impact of unexpected relaxometric behavior observed in low field MR imaging at low inversion times during T1 mapping; 3) validating retrospectively calculated T1 as a biomarker for Multiple Sclerosis progression; and 4) prototyping an oxygen extraction fraction (OEF) mapping technique for stroke prediction and for establishing a numeric scale of tissue health for stroke patients.
Assessment of pre-Gadolinium and post-Gadolinium cardiac tag quality showed that post-Gadolinium tags are less saturated (p = 0.012) and have a wider range of saturation, contrast, and sharpness. This results in a loss of information in the late cardiac cycle and impedes quantification of myocardial function. Investigation of 64 mT T1 mapping revealed unique relaxometric behavior: at low inversion times (<250 ms), the signal response curve displayed either an increase or a plateau in signal intensity, depending on T1 relaxation time. Inclusion of this increase or plateau negatively impacted T1 fitting algorithms, leading to their failure or to incorrectly calculated T1 values. The maximum peak signal intensity before the null point was found at 210 ms, which impacts current low field T1 mapping protocols that use an initial inversion time of 80-110 ms. Validation of retrospectively calculated T1 as a biomarker in Multiple Sclerosis revealed that T1 of normal appearing brain tissue correlates with measures of Multiple Sclerosis progression (EDSS, BPF, and disease duration): normal appearing white matter T1 correlated with BPF (r = -0.49, p = 0.0018); putamen T1 correlated with EDSS (r = 0.48, p = 2.40e-03), BPF (r = 0.69, p = 2.04e-06), and disease duration (r = -0.37, p = 0.02); and globus pallidus T1 correlated with disease duration (r = -0.42, p = 0.0093). Lesion T1 is reflective of MS severity whereas MTR is not. Finally, development of an oxygen extraction fraction (OEF) mapping technique showed that applying independent component analysis (ICA) to cardiac-gated spiral-trajectory phase images yielded components that capture the stenosis features observed in magnitude images. These ICA components form the basis of OEF mapping from phase images. This dissertation presents four studies that seek either to improve current quantitative MR imaging protocols in the heart or to develop and validate new quantitative MR imaging techniques in the brain for monitoring disease progression or predicting disease.
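The T1 mapping discussed above fits an inversion-recovery curve; in the ideal magnitude model the signal nulls at TI = T1 ln 2, and extra signal at short TI, as reported here, breaks such fits. A sketch of the textbook model (that this exact form is what the dissertation fits is my assumption):

```python
import math

def ir_signal(ti, t1, s0=1.0):
    """Ideal magnitude inversion-recovery signal |S0 * (1 - 2*exp(-TI/T1))|."""
    return abs(s0 * (1.0 - 2.0 * math.exp(-ti / t1)))

def null_time(t1):
    """TI at which the ideal IR signal crosses zero: TI_null = T1 * ln 2."""
    return t1 * math.log(2.0)
```

A fitting routine that assumes this monotonic approach to the null will fail if the measured curve instead rises or plateaus below TI = 250 ms, which is the failure mode the abstract reports.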
- Title
- SEISMIC DESIGN STUDY OF STEEL PLATE SHEAR WALL
- Creator
- Moshiri, Ali
- Date
- 2012-04-20, 2012-05
- Description
-
Steel plate shear walls (SPSWs) are an innovative lateral load-resisting system capable of effectively and economically bracing a building against both wind and earthquake forces. The system consists of infill steel plates connected to boundary beams and columns over the full height of the framed bay. Beam-to-column connections can be rigid or shear connections, and the infill plates can be either stiffened or unstiffened, depending on the design philosophy. Some structural designers favor heavy stiffeners to reinforce the plates and increase the buckling capacity of the shear walls, whereas if the walls are left unstiffened and allowed to buckle, their energy absorption increases significantly due to the post-buckling capacity. The performance of a 9-story SPSW with moment-resisting beam-to-column connections was studied under quasi-static loading and under 10 earthquake records recorded in Los Angeles, by developing nonlinear dynamic explicit finite element models in ABAQUS. All models were validated against experimental results. The effects of the stiffness of the boundary elements (VBE and HBE) and of the plate thickness on the general behavior of the structure were also investigated. In the design of SPSWs, vertical boundary elements play a major role in increasing the capacity of the system. In high seismic zones there is always a chance of plastic hinge formation in the boundary elements, especially the columns, at any intermediate floor. It is recommended that SPSWs not be used for medium- to high-rise buildings in high seismic regions until the lack of capacity design requirements for this type of SPSW is rectified.
Ph.D. in Structural Engineering, May 2012
- Title
- EXPLOITING NETWORK CODING IN DIFFERENT WIRELESS NETWORKS
- Creator
- Guo, Bin
- Date
- 2012-07-06, 2012-07
- Description
-
Wireless communication networks have been incorporated into our daily life and provide convenience anytime and anywhere. However, the wireless medium is unreliable and unpredictable, and current wireless networks suffer from low throughput and low reliability. Network coding, an alternative approach, has attracted growing interest and has emerged as an important technology in wireless networks: it offers significant potential throughput improvements and a high degree of robustness. This dissertation is built on the theory of network coding and designs different network coding protocols for varied wireless networks. The first part of this dissertation proposes a novel coding-aware routing protocol for wireless mesh networks. In particular, a generalized coding condition is formally established to identify coding opportunities. Based on this condition, a novel routing metric, FORM (Free-ride Optimal Routing Metric), and the corresponding routing protocol are developed with the objective of exploiting coding opportunities and maximizing the benefit of the "free ride," in order to reduce the total number of transmissions and consequently increase network throughput. The results show the proposed protocol achieves significant throughput gains over existing approaches. The second part of this dissertation exploits network coding in wireless cooperative networks. First, a Decode-and-Forward Network Coded (DFNC) protocol is proposed for multi-user cooperative communication systems. In particular, DFNC develops an efficient construction method for coding coefficients and a novel decoding algorithm that combines network coding and channel coding. DFNC exploits both temporal and spatial diversity across multiple channels by allowing all users to generate redundant network-coded packets in a distributed manner, fully exploring the redundancy provided by network coding to realize error correction.
Theoretical analysis and simulation results demonstrate that DFNC outperforms other transmission schemes in terms of Symbol Error Rate (SER) and achieves a higher diversity order. Second, the idea of DFNC is extended and Modified-DFNC (M-DFNC) is introduced for a more practical scenario in which not all users are able to dedicate their resources to assisting others. The throughput analysis shows that M-DFNC outperforms the conventional cooperative protocol in the low-SNR regime, implying that an adaptive cooperation system should be adopted to optimize performance. The simulation results validate the theoretical analysis.
Ph.D. in Electrical Engineering, July 2012
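The "free ride" exploited in the mesh-routing part comes from a relay XOR-ing packets so one coded broadcast replaces two unicast forwards. The classic two-flow example below is the generic network-coding textbook illustration, not the FORM protocol itself:

```python
def xor_bytes(a, b):
    """XOR two equal-length byte strings (the simplest network code)."""
    return bytes(x ^ y for x, y in zip(a, b))

# Alice -> relay -> Bob and Bob -> relay -> Alice: the relay broadcasts
# p_a XOR p_b once instead of forwarding p_a and p_b separately.
p_a, p_b = b"hello", b"world"
coded = xor_bytes(p_a, p_b)
# Each endpoint decodes with the packet it already holds.
recovered_at_bob = xor_bytes(coded, p_b)     # Bob knows p_b, recovers p_a
recovered_at_alice = xor_bytes(coded, p_a)   # Alice knows p_a, recovers p_b
```

Three transmissions serve both flows instead of four, which is the per-hop saving a coding-aware routing metric tries to maximize.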
- Title
- CONSTITUTIVE BEHAVIOR AND MODELING OF AL-CU ALLOY SYSTEMS
- Creator
- Turkkan, Omer Anil
- Date
- 2013-05-07, 2013-05
- Description
-
High speed deformation events such as those caused by projectile penetration, fragment impact, and shock/blast loading are of great importance in designing materials and structures for army applications. In these events, materials are subjected to large strains, high strain rates, and rapid temperature increases due to thermoplastic heating. In such severe conditions, overall performance is determined by the evolution of flow stress and by failure initiation and propagation, commonly in the form of adiabatic shear banding. Some 2XXX-series aluminum-copper (Al-Cu) alloys are recognized for their decent ballistic properties and have therefore been used as armor material for lightweight U.S. Army vehicles. Most recently, an Al-Cu-Mg-Mn-Ag alloy labeled Al 2139-T8 has been developed and is being evaluated by the U.S. Army Research Labs because of its better ballistic properties and higher strength than its predecessors. The underlying microstructure is believed to be the key element of this superior performance. The goal of this study is to explore the effect of composition and microstructural features on overall dynamic material behavior by examining the mechanical and deformation behavior of different Al-Cu material systems. Starting from pure single crystal and polycrystalline Al structures, and adding a different element to the chemical composition at each step (i.e., Cu, Mg, Mn, Ag), the mechanical response of these systems has been investigated. For all alloy systems except single crystal Al, mechanical tests have been performed at room and elevated temperatures covering the quasi-static and dynamic strain rate regimes. Shear-compression specimens promoting localized shear deformation have been used to explore each material's tendency to fail by adiabatic shear banding.
In addition to the phenomenological Johnson-Cook Model (JCM), the physics-based Zerilli-Armstrong and Mechanical Threshold Stress models have been studied to model the constitutive response of Al-Cu alloys over a wide range of strain rates and temperatures. An improved ZA model has been developed to better capture the trends in the experimental data.
M.S. in MECHANICAL, MATERIALS, AND AEROSPACE ENGINEERING, May 2013
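The phenomenological Johnson-Cook model named above writes flow stress as a product of strain hardening, rate sensitivity, and thermal softening terms, sigma = (A + B*eps^n)(1 + C ln(rate ratio))(1 - T*^m). A direct transcription with illustrative parameter values (these are placeholders, not the fitted values from this thesis):

```python
import math

def johnson_cook(strain, strain_rate, T, A=352.0, B=440.0, n=0.42,
                 C=0.0083, m=1.0, eps0=1.0, T_room=293.0, T_melt=916.0):
    """Johnson-Cook flow stress in MPa.

    strain: equivalent plastic strain; strain_rate: 1/s; T: kelvin.
    A, B, n, C, m are material constants (values here are illustrative only);
    eps0 is the reference strain rate; T* is the homologous temperature.
    """
    t_star = (T - T_room) / (T_melt - T_room)
    return (A + B * strain ** n) * \
           (1.0 + C * math.log(strain_rate / eps0)) * \
           (1.0 - t_star ** m)
```

The model's appeal for armor applications is that each bracket isolates one physical effect, so constants can be fitted from separate quasi-static, dynamic, and heated tests.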
- Title
- OPTIMAL BIDDING STRATEGY FOR HYDRO UNIT
- Creator
- Zhu, Renchen
- Date
- 2013-04-30, 2013-05
- Description
-
The bidding price for renewable energy differs from that of traditional energy sources such as gas and coal: once the generating unit is built, producing renewable energy incurs essentially no fuel cost, since no one pays for sun, water, or wind. This raises the question of how to set the energy price for these sources. The bidding price determines the generating company's profit, and a growing body of research addresses this field, since every company wants a price that maximizes profit. Traditional energy prices are driven by fuel prices, so renewable bidding calls for a different approach and is a worthwhile research question. To address this problem, I apply the idea of minimizing the imbalance in order to maximize the owner's profit. This idea was first applied to wind units; in my thesis I apply it to hydro units both with and without storage. For each case I test the strategy against data, and the results show that this bidding strategy performs better.
M.S. in Electrical Engineering, May 2013
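Minimizing imbalance to maximize profit has the flavor of a newsvendor problem: pick the bid that minimizes expected penalty given a forecast of production. The sketch below is my own toy formulation (scenario-based, searching only over the scenario values, with asymmetric under/over-delivery penalties), not the thesis's actual strategy:

```python
def optimal_bid(scenarios, price_short, price_long):
    """Pick the bid (restricted to scenario values) minimizing expected
    imbalance cost over equally likely production scenarios [MWh].

    price_short: penalty per MWh under-delivered (bid > actual production)
    price_long:  penalty per MWh over-delivered  (bid < actual production)
    """
    def expected_cost(bid):
        cost = 0.0
        for p in scenarios:
            cost += price_short * max(bid - p, 0.0) + \
                    price_long * max(p - bid, 0.0)
        return cost / len(scenarios)
    return min(scenarios, key=expected_cost)
```

With a heavier under-delivery penalty the optimal bid shifts toward the low end of the forecast, which is the qualitative behavior an imbalance-minimizing hydro bid should show.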
- Title
- VERIFICATION OF LARGE-SCALE ON-CHIP POWER GRIDS
- Creator
- Xiong, Xuanxing
- Date
- 2013, 2013-05
- Description
-
As technology scaling continues, the performance and reliability of integrated circuits become increasingly susceptible to power supply noise, such as IR drops and Ldi/dt noise in the on-chip power grids. Reduced supply voltage levels in the grid can increase gate delay, leading to timing violations and logic failures. In order to ensure a reliable chip design, it is critical to verify that the power grid is robust, i.e., that the power supply noise is acceptable for all possible runtime situations. Hence, power grid verification has become an indispensable step in the modern design flow of integrated circuits. Nowadays, it is common practice to verify power grids by simulation. Typically, an equivalent RC/RLC circuit model of the grid is extracted from the layout, and designers perform simulations to evaluate the power supply noise based on the current waveforms drawn by the circuit. As power grid simulation can only be performed after the circuit design is done, vectorless power grid verification has been introduced to enable early verification with incomplete current specifications, so that the power grid design can be better tuned and optimized at early design stages, reducing design time. Due to the increasing complexity of modern chips, power grid verification has become very challenging. The broad goal of this dissertation is to explore efficient algorithms for verifying large-scale on-chip power grids. Specifically, we study parallel power grid transient simulation, vectorless steady-state verification, and vectorless transient verification. Parallel forward and back substitution algorithms are designed for efficient transient simulation; a set of novel algorithms is developed to incrementally improve the runtime efficiency of vectorless steady-state verification; and an efficient approach with novel constraint setting is proposed for vectorless transient verification.
Ph.D. in Electrical Engineering, May 2013
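The transient simulation above repeatedly solves triangular systems arising from a factorization of the grid's conductance matrix. A plain sequential version of the forward and back substitution kernels (illustrative; the dissertation's contribution is parallelizing these steps):

```python
def forward_substitution(L, b):
    """Solve L y = b for lower-triangular L (list-of-rows, nonzero diagonal)."""
    n = len(b)
    y = [0.0] * n
    for i in range(n):
        y[i] = (b[i] - sum(L[i][j] * y[j] for j in range(i))) / L[i][i]
    return y

def back_substitution(U, y):
    """Solve U x = y for upper-triangular U (list-of-rows, nonzero diagonal)."""
    n = len(y)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]
    return x
```

The loop-carried dependence (row i needs all earlier rows) is exactly what makes these kernels hard to parallelize on large grids.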
- Title
- MATHEMATICAL MODELING OF POLY(ETHYLENE GLYCOL) DIACRYLATE HYDROGEL SYNTHESIS VIA VISIBLE LIGHT FREE-RADICAL PHOTOPOLYMERIZATION FOR TISSUE ENGINEERING APPLICATIONS
- Creator
- Lee, Chu-yi
- Date
- 2013, 2013-05
- Description
-
Crosslinked hydrogels of poly(ethylene glycol) diacrylate (PEGDA) have been extensively used as scaffolds for applications in tissue engineering. In this thesis, PEGDA hydrogels are synthesized using visible light free-radical photopolymerization (λ = 514 nm) in the presence of the visible light photosensitive dye EosinY, the co-initiator triethanolamine (TEA), a comonomer, N-vinyl pyrrolidone (NVP), the crosslinking agent PEGDA, and an optional PEG monoacrylate monomer that contains the cell adhesive ligand YRGDS. The incorporation level of the YRGDS ligand as well as the physical and mechanical properties of these hydrogels dictate cell behavior and tissue regeneration, and these properties may be tuned through variations in polymerization conditions. The goal of this thesis was to develop a mathematical model for PEGDA hydrogel formation which predicts the incorporation level of YRGDS and the crosslink density of the hydrogel across a variety of polymerization conditions. This model provides insight into the process of hydrogel crosslinking and effectively guides the experimental design of these scaffolds for tissue engineering applications. Two major components comprised the studies of this thesis: the first involved an investigation of the visible light photoinitiation mechanism of EosinY and TEA, and the second involved the development of a hydrogel synthesis model and its validation. Experiments and modeling were used to determine an expression for the rate of initiation of the EosinY/TEA initiation system and to propose a photoinitiation mechanism. In Chapter 2, experimental data and parameter fitting were utilized to obtain an empirical expression for the rate of initiation. However, this empirical expression did not consider the effect of the inhomogeneous light distribution present in this experimental system.
The dynamics of light absorption during polymerization were measured under different conditions in order to gain insight into the kinetic photoinitiation mechanism as well as the rate of initiation. In Chapter 3, a mechanism for this photoinitiation was proposed. Using this mechanism, the light absorption dynamics accounting for inhomogeneous light distribution were simulated and found to be in agreement with the light absorption measurements shown in Chapter 2. Further validation of the proposed mechanism was achieved from polyNVP conversion measurements. This photoinitiation mechanism was implemented in the hydrogel model. In Chapter 4, the hydrogel synthesis model was developed based on the kinetic approach of the method of moments combined with the Numerical Fractionation technique. The model was used to predict the dynamics of hydrogel properties such as gel fraction, crosslink density, and RGD incorporation under various polymerization conditions, and model predictions were compared with experimental data. Three sets of experiments were conducted. In the first set, where hydrogels were formed in the absence of Acryl-PEG-RGD, the total double bond concentration was kept constant while varying the compositions of NVP and PEGDA. The model and the experiments showed a maximum crosslink density for an acrylate-to-double-bond ratio of 0.5 to 0.6. This is related to the synergistic cross-propagation between NVP and PEGDA, which increases the rate of polymerization, leading to higher crosslink density. In the second set, hydrogels were formed in the presence of Acryl-PEG-RGD to investigate its incorporation as well as the hydrogel crosslink density. The model showed reasonable agreement with the experimental data; in some cases the predicted RGD deviated from the experimental measurements due to changes in volume upon swelling, an effect not considered by the model.
The calculated crosslink densities were compared with the inverse swelling ratios from the experiments. The reduction of free volume due to the space occupied by the unreacted pendant double bonds was not considered by the model. This reduction of free volume affected the apparent swelling ratio obtained from experiments, resulting in the observed mismatch between the experimental trends and the crosslink density predicted by the model. In the third set, additional crosslink density measurements were conducted using a PEGDA macromer of lower molecular weight (MW = 575 Da), again in the absence of Acryl-PEG-RGD. A few cases were not accurately predicted since the model did not consider the reduction in the concentration of available pendant double bonds when gelation occurs. Across the three sets of experiments, the hydrogel synthesis model offers reasonable predictions for most of the experimental cases and can be used as a guide for experimentally designing PEGDA hydrogels with desired properties for tissue engineering applications.
Ph.D. in Chemical and Biological Engineering, May 2013
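The "inhomogeneous light distribution" discussed above is commonly modeled with Beer-Lambert attenuation through the gel depth, which makes the local photoinitiation rate depth-dependent. A sketch of that standard relationship (the mechanism in Chapters 2-3 is more detailed than this, and the parameter values are placeholders):

```python
import math

def transmitted_intensity(z, I0, eps, conc):
    """Beer-Lambert attenuation with depth z [cm]:
    I(z) = I0 * 10**(-eps * conc * z), eps [L/(mol cm)], conc [mol/L]."""
    return I0 * 10.0 ** (-eps * conc * z)

def initiation_rate(z, I0, eps, conc, phi=1.0):
    """Local initiation rate proportional to light absorbed per unit depth,
    -dI/dz = alpha * I(z), with quantum yield phi (illustrative form)."""
    alpha = eps * conc * math.log(10.0)   # natural absorption coefficient
    return phi * alpha * transmitted_intensity(z, I0, eps, conc)
```

Because the rate decays with depth, fitting a single depth-averaged rate of initiation (the Chapter 2 approach) can disagree with depth-resolved absorption measurements, motivating the Chapter 3 mechanism.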
- Title
- EL CABAÑAL, A SUSTAINABLE NEIGHBORHOOD FOR THE TWENTY-FIRST CENTURY
- Creator
- Peris, Blanca
- Date
- 2013, 2013-07
- Description
-
Every generation builds its own city in terms of the social, economic, technological, and cultural conditions of its time. We have the opportunity to put forward a new model of urban development that responds to the new conditions of habitability at the start of the 21st century. The fact is that we no longer live in a compact metropolis but in a discontinuous metapolis, an extensive territory criss-crossed by road and rail transport routes and occupied by kernels of population, logistics centers, industry parks, and shopping and leisure centers, around which people (local, national, and foreign) move according to their needs. In this situation it is as necessary to propose strategies for the renewal and compaction of the urban centers as for the integration and protection of the elements that constitute the natural and geographical landscape of our environment. The challenge of constructing a new neighborhood on the boundary between the city of Valencia, in Spain, and its orchard (the famous 'Huerta') will enable us to explore this open and dynamic new hybrid condition of the territory and to propose a new model for the construction of the urban fringes. I would like to address the challenge of integrating the landscape that surrounds the city of Valencia, the landscape we have inherited from our ancestors. To do this it is necessary to reformulate the very concepts with which urbanism has traditionally operated in the city. The word 'urbanism' was coined by Ildefons Cerdà to designate the science of urban growth, a process based on the implanting of a rational grid, superposed on an agricultural layout, in which the owners of a plot of agricultural land had transferred to them the ownership of a plot of urban land eligible for development.
It is this principle that has informed and overseen the urban expansions of the 19th century and the modern city of the 20th, the typically North American low-density city, and the historical revivalism of the end of the last century. But the challenge facing urbanism now is to manage to make the city grow while integrating into our developments the anthropological and cultural elements of the landscape that surrounds us: constructing and conserving are accomplished in the same act. As against the old city-country dichotomy, I now propose to bring about an intelligent transition between these two formerly antagonistic modes of dwelling, an integration that lets us recognize the social and cultural value of, in this case, the landscape of the Huerta of Valencia and incorporate it into the urban fabric by means of appropriate management strategies. Given an increasingly uniform global society, we need to recognize the specific cultural and landscape values of each territory as fundamental to the quality of life of the people who live there and to reaffirm a distinct identity that can provide a competitive advantage. Because of this, in contrast to the town planning of the 20th century, conceived on the basis of the speed of the car, I would like to propose a new model of 'urban-agricultural' development that guarantees the creation of a high-quality local environment. More than designing a city, I would like to create habitable environments that effectively resolve the different factors that give people the assurance of habitability at different scales: the neighborhood, the landscape, and the home.
M.S. in Architecture, July 2013
- Title
- EXPLORING THE SHEAR-AND-TIME DEPENDENT DEGRADATION OF VON WILLEBRAND FACTOR UNDER VENTRICULAR ASSIST DEVICE-RELATED FLOW CONDITIONS
- Creator
- Yang, Shuo
- Date
- 2015, 2015-12
- Description
-
Abnormalities in VWF can cause impaired blood coagulation, which results in a higher bleeding tendency in patients with this disorder. Alteration in VWF is characteristic in subjects with failing hearts implanted with ventricular assist devices (VADs). The nature of the abnormalities produced and the conditions that produce them are not fully understood. The studies in this thesis quantitatively investigate the effects of VADs and VAD-related flow conditions on VWF degradation. This thesis consists of three studies: 1) an in vitro VAD loop study, which investigated the degradation effects of three VADs, one under preclinical development (VAD I) and two commercially available (VAD II & III); 2) a viscometer shear study, which investigated a variety of factors under the controlled conditions of a modified Couette viscometer, namely shear stress, exposure time, pulsatile frequency and protease function, with respect to VWF degradation; and 3) a tubular shear study, which investigated the relative degradation effects of shear stress versus exposure time under more VAD-related shear stresses (10 - 100 times higher than physiological levels) and exposure times of milliseconds. In the VAD flow loop, significant VWF degradation induced by VADs was observed, with an approximately 95% loss of high molecular weight VWF by 60 minutes. In the viscometer and tubular studies, the factors studied enhanced VWF degradation in the following manner: increased shear stress above physiological levels, prolonged exposure time and higher pulsatile shear frequency were associated with greater degradation; shear stress was a more dominant factor than exposure time with respect to the degradation; and certain shear stress regions demonstrated maximal degradation effects. In addition, calcium-dependent protease function was a necessity for VWF degradation at all shear stress levels investigated.
The studies also revealed that the unfolding of VWF to expose the cleavage sites appeared to take more time under shear than the refolding to re-cover those sites under static conditions. Critical shear regions may be important for unfolding and degrading VWF multimers of various sizes.
Ph.D. in Biomedical Engineering, December 2015
- Title
- AN INTEGRATED DATA ACCESS SYSTEM FOR BIG COMPUTING
- Creator
- Yang, Xi
- Date
- 2016, 2016-07
- Description
-
Big data has entered every corner of science and engineering and has become a part of human society. Scientific research and commercial practice increasingly depend on the combined power of high-performance computing (HPC) and high-performance data analytics. Due to its importance, several commercial computing environments have been developed in recent years to support big data applications. MapReduce is a popular mainstream paradigm for large-scale data analytics. MapReduce-based data analytic tools commonly rely on underlying MapReduce file systems (MRFS), such as the Hadoop Distributed File System (HDFS), to manage massive amounts of data. At the same time, conventional scientific applications usually run in HPC environments, such as the Message Passing Interface (MPI), and their data are kept in parallel file systems (PFS), such as Lustre and GPFS, for high-speed computing and data consistency. As scientific applications become data intensive and big data applications become computing hungry, there is a surging interest in and need for integrating HPC power and data processing power to support HPC on big data, so-called big computing. A fundamental issue of big computing is the integration of data management and interoperability between the conventional HPC ecosystem and the newly emerged data processing/analytics ecosystem. However, data sharing between PFS and MRFS is currently limited, due to semantic mismatches, a lack of communication middleware, and divergent design philosophies and goals. Challenges also exist in cross-platform task scheduling and parallelism. At the application layer, the mismatch between the raw data kept on file systems and the data model of an application's data management software impedes cross-platform data processing as well. To support cross-platform integration, we propose and develop the Integrated Data Access System (IDAS) for big computing.
IDAS extends the accessibility of programming models and integrates the HPC environment with the MapReduce/Hadoop data processing environment. Under IDAS, MPI applications and MapReduce applications can share and exchange data across PFS and MRFS transparently and efficiently. Through this sharing and exchange, MPI and MapReduce applications can collaboratively provide both high-performance computing and data processing power for a given application. IDAS achieves its goal in several steps. First, IDAS enhances MPI-IO so that MPI-based applications can access data stored in HDFS efficiently; here, efficiency means that HDFS itself is enhanced to support MPI-based applications. For instance, we have enhanced HDFS to transparently support N-to-1 file writes for better write concurrency. Second, IDAS enhances the Hadoop framework to enable MapReduce-based applications to process data that resides on PFS transparently. We have carefully chosen the term “enhance” here: MPI-based applications not only can access data stored on HDFS but can also continue to access data stored on PFS, and the same holds for MapReduce-based applications. Through these enhancements, we achieve seamless data sharing. In addition, we have integrated data access with several application tools. In particular, we have integrated image plotting, query, and data subsetting within one application for Earth Science data analysis. Many data centers prefer erasure coding to triplication to achieve data durability, trading data availability for lower storage cost. To this end, we have also investigated performance optimization of the erasure-coded Hadoop system, to enhance the Hadoop system in IDAS.
Ph.D. in Computer Science, July 2016
- Title
- CONSTRUCT AND MEASUREMENT EQUIVALENCE ACROSS GENDER OF THE DYADIC ADJUSTMENT SCALE
- Creator
- Yap, Bonnie Joyce
- Date
- 2012-10-16, 2012-12
- Description
-
The Dyadic Adjustment Scale (DAS) is the most widely used measure of dyadic adjustment for individuals in committed relationships (Spanier, 1976). However, little research has focused on whether the DAS measures the construct of dyadic adjustment in a way that is equivalent and unbiased across genders. The current study utilized matched moderated regression (MMR) to assess each item of the DAS to detect whether gender differences in the relationships between item responses and the construct being measured are due to (a) factors other than the construct and (b) differences in the construct. Archival data were acquired from a previously published study (Eddy, Heyman, & Weiss, 1991). The sample was very large (N = 3322), so it was divided into two replication groups in such a way that no couple was included in both groups. A number of statistically significant differences were found on items in both replication samples; however: (1) many of these items were not consistent across replication groups; (2) even when there was a consistent gender difference in both replication groups, the magnitude was small; (3) when all of the differences were summed across items, bias in the total scale score was minimal because the directions of the biases differed across items and so cancelled out; and (4) a small gender difference may exist in preferences for demonstrations of affection. Findings suggest that there are no substantial gender bias or scale equivalence problems with the DAS. The construct of dyadic adjustment was similar in men and women. These findings are congruent with results from the recent study of South and Kruger (2009) on gender differences in the factor structure of the DAS and lend support to the valid use of the DAS in studies of dyadic adjustment.
M.S. in Psychology, December 2012
- Title
- COMMUNICATION AND COMPUTATION ARCHITECTURES FOR DISTRIBUTED WIRELESS SENSOR NETWORKS AND INTERNET OF THINGS
- Creator
- Yi, Won-jae
- Date
- 2017, 2017-07
- Description
-
Real-time data communication has become pervasive since the smartphone rose to prominence in this decade. All communications from human to human, from device to human, and from device to device are handled over an Internet connection, either through a mobile Internet service provider or Wi-Fi, enabling information exchange such as weather services, road traffic conditions, news alerts, and package tracking notifications. Viewed from different perspectives, the smartphone reveals itself as an ideal device for mobilizing critical user data to construct real-time monitoring applications such as remote healthcare and home automation systems. Not only can the smartphone handle real-time data transmissions, but it can also handle real-time computations on the device itself by utilizing its embedded CPU. This dissertation is a comprehensive investigation, exploration and experimentation on a real-time health monitoring system that can improve quality of life where conventional systems may affect and hamper regular daily activities. The design flow of this system is based on the Internet connection: any device that is communicatively associated with the smartphone can be connected to the Internet. By utilizing the Android smartphone, the system not only gains real-time data transmission capability but also obtains the flexibility to communicate with different types of sensors and platforms through multiple wireless protocols. The system is highly adaptable to the currently trending Internet of Things (IoT) standards, whose anticipated social impact is significant: it can assist populations in rural and distant areas with healthcare, day-to-day activity monitoring, and protection against hazardous conditions for workers.
The system architecture introduced in this research focuses on the reconfigurability and compatibility of wireless sensors, which are independent of any particular platform; sensors are not limited to medical devices but can also detect movement, location, climate conditions or any other aspect of the environment. Four major components are introduced in this research: wireless sensor nodes, a central sensor data processing and communication node, an Android application, and a central database server. They are discussed and explored to seek solutions that improve and enhance features of the fundamental system design. Communication and computation processing capabilities are evaluated for all major components for practical use of the system in different case studies. As a quantitative case study, a posture and fall detection system is presented that determines the patient's activities, medical conditions and the cause of an emergency event through the integration of all system architecture components. Adapting the system to the IoT is also explored in this dissertation by introducing a protocol standard to improve data transmission efficiency and to enable cross-platform compatibility of wireless devices. In addition to improving system efficiency, a study of data security issues and sensor data assessment is presented by applying a proposed security scheme to each major component within the real-time mobile monitoring system. Also, a concept of Quality-of-Service (QoS) for a mobile monitoring system using a wireless sensor network has been investigated to provide a solution for prioritizing sensor data transmissions based on the results obtained from the sensor data assessment application. The proposed solutions can be implemented either on or under the application layer.
Ph.D. in Computer Engineering, July 2017
- Title
- MOBILE ANDROID SENSOR SYSTEM FOR REAL-TIME PATIENT MONITORING AND HEALTHCARE APPLICATION
- Creator
- Yi, Won-jae
- Date
- 2012-04-25, 2012-05
- Description
-
A system using Android devices that collects sensor data, displays it on the screen and streams it to a central server in real time is presented in this research. Common Android devices, such as smartphones and tablets, are considered for this system to demonstrate the flexibility and compatibility of the application on any Android device. Bluetooth and wireless Internet connections are used for data transmission among the devices. Also, using Near Field Communication (NFC) technology on the smartphone, the system provides a more efficient and convenient mechanism for automatic Bluetooth connection and automatic application execution. This system is beneficial for establishing Body Sensor Networks (BSNs) for medical healthcare applications by adding wireless technology. Various types of sensors can be adapted to monitor a patient's status in real time. For demonstration purposes, an accelerometer, a temperature sensor and vital-sign sensors, including electrocardiography (ECG), blood pressure, electroencephalography (EEG) and respiration, are used to perform the experiment and provide the fundamentals of remote patient diagnosis. Raw sensor data are interpreted into either graphical or text notations to be presented on the Android device and the central server. Furthermore, a Java-based central server application is introduced to demonstrate communication with the Android system for data storage and analysis through Internet connections. This system is capable of data transmission in real time without exhausting system resources for data collection and interpretation. The system can also be further extended with additional sensors, such as a sweat sensor, an electromyography (EMG) sensor, a glucose sensor and more, for enhanced patient status diagnosis.
M.S. in Computer Engineering, May 2012
- Title
- IN VITRO STUDIES OF VIRULENCE SUPPRESSION ON P. AERUGINOSA BY PHOSPHATE / POLYPHOSPHATE-LOADED NANOPARTICLES
- Creator
- Yin, Yushu
- Date
- 2015, 2015-07
- Description
-
Critically ill patients harbor multi-drug resistant pathogens that can activate their virulence in response to low-nutrient conditions and host stress-derived factors. It was recently shown that an oversupply of inorganic phosphate to the bacterial environment can profoundly suppress the virulence of pathogens. Here we hypothesized that phosphate- and/or polyphosphate-loaded nanoparticles can serve as a tool to deliver and slowly release phosphate in pathogen-rich niches, thereby suppressing bacterial virulence. In this work, a designed study on the effect of different phosphate levels (including phosphate released from hydrogel nanoparticles) on the virulence of P. aeruginosa is addressed. We developed formulations for preparing hexametaphosphate-loaded nanoparticles on the basis of those for phosphate-loaded nanoparticles, utilizing inverse miniemulsion polymerization in the synthesis. Polyethylene glycol diacrylate (PEGDA, molecular weight 575 Da) and N-vinyl pyrrolidone (molecular weight 111.14 Da) were chosen as the initial monomers because the main crosslinker, polyethylene glycol, is a biocompatible material that has been approved by the U.S. Food and Drug Administration (FDA). Several parameters could be adjusted in the experiment; we selected the monomer mole fraction of PEGDA-575 as our parameter. After the synthesis, a nanoparticle size distribution between 110 nm and 150 nm was obtained, and these nanoparticles were shown to release phosphate and hexametaphosphate as drug molecules. Although there were release bursts in the test of release kinetics, the crosslink density could be adjusted in future research. The second part of this study tests the virulence suppression effect of the nanoparticles in an in vitro experiment on the opportunistic pathogen P. aeruginosa, a gram-negative bacterium that is a common member of the intestinal microbial community.
We presented the strategy of suppressing virulence while containing rather than killing the bacteria. As a result, polyphosphate-loaded nanoparticles proved to be the most effective among the several experimental groups. This result suggests promising directions for further research in several respects, such as in vivo testing in biomedicine and biomedical engineering.
M.S. in Chemical Engineering, July 2015
- Title
- NOVEL METHOD OF MANUFACTURING HYDROGEN STORAGE MATERIALS COMBINING WITH NUMERICAL ANALYSIS BASED ON DISCRETE ELEMENT METHOD
- Creator
- Xuzhe, Zhao
- Date
- 2015, 2015-07
- Description
-
A high-efficiency hydrogen storage method is significant for the development of fuel cell vehicles. Finding a high-energy-density material for the fuel is key to the widespread adoption of fuel cell vehicles. The LiBH4 + MgH2 system is a strong candidate due to its high hydrogen storage density and the reversibility of the reaction between the two hydrides. However, the LiBH4 + MgH2 system usually requires high temperature and hydrogen pressure for the hydrogen release and uptake reactions. In order to reduce these requirements, nanoengineering is a simple and efficient method to improve the thermodynamic properties and reduce the kinetic barrier of the reaction between LiBH4 and MgH2. Based on ab initio density functional theory (DFT) calculations, a previous study indicated that the reaction between LiBH4 and MgH2 can take place at temperatures near 200°C or below. However, these predictions have been shown to be inconsistent with many experiments. Therefore, our experiment using ball milling with aerosol spraying (BMAS) demonstrates for the first time that the reaction between LiBH4 and MgH2 can occur during high-energy ball milling at room temperature. Through this BMAS process we have clearly observed the formation of MgB2 and LiH during ball milling of MgH2 with aerosol spraying of the LiBH4/THF solution. Aerosol nanoparticles from the LiBH4/THF solution lead to the formation of Li2B12H12 during the BMAS process; the Li2B12H12 formed then reacts with MgH2 in situ during ball milling to form MgB2 and LiH. Discrete element modeling (DEM) is a useful tool to describe the operation of various ball milling processes. EDEM is software based on DEM that predicts power consumption, liner and media wear, and mill output. In order to further improve the milling efficiency of the BMAS process, EDEM is used to analyze the complicated ball milling process. The milling speed and the ball filling ratio inside the canister are considered as the variables determining milling efficiency.
The average and maximum speeds of the balls critically affect the collision forces among them. A high collision force can be achieved by applying a large torque on the milling shaft. A high milling speed and a large ball filling ratio increase the torque and the average ball speed. However, a high average speed and large torque lead to non-uniformly milled material. Therefore, an appropriate milling speed and ball filling ratio ought to be selected to obtain better-milled materials. The results of this study support the feasibility of the LiBH4 + MgH2 system for reversible hydrogen storage applications near ambient temperature. Applying an appropriate ball filling ratio and milling speed can improve the milling efficiency of the BMAS method.
M.S. in Material Science Engineering, July 2015
- Title
- WIRELESS COMMUNICATION FOR AN ACTUATED GLOVE FOR HAND REHABILITATION
- Creator
- Yuan, Ning
- Date
- 2016, 2016-12
- Description
-
Stroke survivors often experience long-term upper extremity impairment, which can greatly limit activities of daily living. The eXtension Glove (X-Glove) is a soft robotic device to aid hand therapy. It uses cables serving as external extensor tendons to assist digit extension and control digit flexion, and load cells are located on each motor to measure the force exerted by each finger. This paper provides a way to add a biofeedback function to the X-Glove and updates the microprocessor to a PIC32MX795, so that the X-Glove can establish wireless communication and transmit data to terminals such as a PC. In order to display the biofeedback, a graphical user interface is also developed so that therapists can optimize the therapy for each individual patient in real time.
M.S. in Biomedical Engineering, December 2016
