Search results
(801 - 820 of 1,017)
Pages
- Title
- IMPROVING DEEP LEARNING BASED SEMANTIC SEGMENTATION USING CONTEXT INFORMATION
- Creator
- Xia, Zhengyu
- Date
- 2021
- Description
-
Semantic segmentation is an important but challenging task in computer vision because it aims to assign each pixel a category label accurately. Nowadays, applications such as autonomous driving, path navigation, image search engines, and augmented reality require accurate semantic analysis and efficient segmentation mechanisms. In this thesis, we propose multiple models to improve the performance of semantic segmentation. In the first part, we focus on a single-task network that aims to improve the performance of semantic segmentation. Our research includes exploiting context information using mixed spatial pyramid pooling to extract dense context-embedded features in FCN-based semantic segmentation. We also propose a GAF module that generates a global context-based attention map to guide the shallow-layer feature maps for better pixel localization. In the second part, we focus on a multi-task network that incorporates semantic segmentation to improve other computer vision tasks such as object detection. Specifically, a multi-task network, along with a learning strategy, is designed to let semantic segmentation and object detection assist each other, since the two tasks are highly correlated. We also include weakly-supervised multi-label semantic segmentation learning to deal with the shortage of high-quality training examples and to improve the performance of cross-domain object detection. In the third part, we focus on improving the performance of video panoptic segmentation, a unified network that incorporates semantic segmentation and instance segmentation on video streams. We design a new ConvLSTM pyramid to transmit spatio-temporal contextual information in our video panoptic segmentation network. Specifically, we propose a modified ConvLSTM to generate temporal contextual information, and we design an MSTPP module to obtain mixed spatio-temporal context-embedded feature maps. Experimental results on different datasets show that our proposed methods achieve better performance than state-of-the-art methods.
- Title
- ANALYTIC STUDY OF THE CELLULAR FUNCTIONS OF UBL4A
- Creator
- Zhang, Huaiyuan
- Date
- 2021
- Description
-
Ubiquitin-like protein 4A (Ubl4A) is a small protein encoded by a “housekeeping” gene located on the X chromosome. As a multi-functional protein, it has roles in a variety of cellular events, including anti-tumorigenesis, the response to DNA damage, inhibition of the fusion between autophagosome and lysosome, and docking of tail-anchored proteins to the endoplasmic reticulum. We have previously reported that newborns from Ubl4A-deficient mice had a high rate of mortality due to a defect in AKT-dependent glucose metabolism. At the molecular level, Ubl4A directly binds the actin-related protein (Arp) 2/3 complex to accelerate the build-up of the actin branching network, which in turn promotes the translocation and activation of Akt, a key kinase for multiple cellular processes, from the cytosol to the plasma membrane. In further exploration of the molecular basis of Ubl4A in cell survival, we demonstrate here that Ubl4A is critical for mitochondrial fusion and cell survival under nutrient depletion. In wild-type (WT) cells, the association of Ubl4A and the Arp2/3 complex serves as a primed “pool” of the actin branching network near mitochondria and enables mitochondria to fuse quickly for energy conservation upon starvation insult. However, such a “ready-to-go pool” of mitochondria was significantly decreased in Ubl4A-deficient cells. As a result, the mitochondria became fragmented, exhibited decreased trans-membrane potential, and accumulated reactive oxygen species (ROS), consequently initiating mitochondria-mediated apoptosis. In this study, we also observed that Ubl4A-deficient mice displayed a type II diabetic phenotype under high-fat diet feeding. Preliminary results showed that these Ubl4A-deficient mice were more prone to glucose intolerance than their WT littermates, most likely owing to a delay in glucose uptake and/or insulin secretion, both of which require the Arp2/3-actin branching network. We speculate that Ubl4A might be involved in cellular vesicle formation and/or secretion, but further investigation is needed to prove this hypothesis. Taken together, these findings reveal a novel function of Ubl4A and provide further insight into its multi-functional roles in mammalian cells, as well as a molecular basis for understanding the clinical relevance of Ubl4A in related human diseases.
- Title
- MACHINE VISION NAVIGATION SYSTEM FOR VISUALLY IMPAIRED PEOPLE
- Creator
- Yang, Guojun
- Date
- 2021
- Description
-
Visually impaired people are often challenged in the efficient navigation of complex environments, and helping them navigate intuitively is not a trivial task. Cognitive maps derived from visual cues play a pivotal role in navigation. In this dissertation, we present a sight-to-sound human–machine interface (STS-HMI), a novel machine vision guidance system that enables visually impaired people to navigate with instantaneous and intuitive responses. The proposed system extracts visual context from scenes and converts it into binaural acoustic cues from which users can establish cognitive maps. The development of the STS-HMI system encompasses four major components: (i) a machine vision–based indoor localization system that uses augmented reality (AR) markers to locate the user in GPS-denied environments (e.g., indoors); (ii) a feature-based simultaneous localization and mapping (SLAM) system that tracks the user's movement when AR markers are not visible; (iii) a path-planning system that creates a course towards a destination while avoiding obstacles; and (iv) an acoustic human–machine interface that guides users through complex navigation courses. Each component is analyzed for optimal performance, and the navigation algorithms are used to evaluate the performance of the STS-HMI system in a complicated environment with difficult navigation paths. The experimental results confirm that the STS-HMI system improves the mobility of visually impaired people with minimal effort and high accuracy.
- Title
- EVENT-BASED NONINTRUSIVE LOAD MONITORING
- Creator
- Yan, Lei
- Date
- 2021
- Description
-
Non-Intrusive Load Monitoring (NILM) is an important application that monitors household appliance activities and provides related information to the homeowner and/or the utility company via a single sensor installed at the electrical entry of the house. With this information, utilities can perform tasks such as energy conservation, wiser generation planning, and demand response (DR) studies. Homeowners can understand their bills more clearly and make better budget plans. For researchers, a NILM system is a good foundation for energy management in buildings and can provide valuable power information for smart home design. This dissertation aims to develop and demonstrate a complete and accurate event-based NILM system, which includes (1) an edge-cloud framework for event-based NILM, (2) an adaptive event detection method, (3) a two-stage event-based load disaggregation method, and (4) a high-resolution (50 Hz) NILM dataset.
Event detection is the first step in event-based NILM, and it can provide deterministic transient information to identify appliances. However, existing methods with fixed parameters suffer from unpredictable and complicated changes in smart meter data, such as long transitions, high fluctuation, and near-simultaneous events in both the power and time domains. This dissertation presents an adaptive method that detects events in home appliance load data with a high sampling rate (>1 Hz) by flexibly tuning the parameters according to the data being processed. The proposed method runs fast over the data stream and captures the transient process through multi-timescale searching: the micro-timescale and macro-timescale windows deal with near-simultaneous events and long-transition events, respectively. Transient load signatures are extracted from detected events and stored in a sequential tree structure that can be used for NILM, load reconstruction, and other applications. Case studies on a 20 Hz dataset, the 50 Hz LIFTED dataset, and the 60 Hz BLUED dataset demonstrate that the proposed method works on data of different sampling rates and outperforms other methods in event detection. The extracted load signatures can also improve the efficiency of NILM and help develop other applications.
This dissertation also presents an online transient-based electrical appliance state tracking method for NILM. The proposed Factorial Particle-based Hidden Markov Model (FPHMM) method takes advantage of transient features in high-resolution data to infer states during the transient process and conducts steady-state verification to rectify falsely identified appliances. The FPHMM method can overcome the common feature-similarity problem in NILM by combining the particle filter with Markov Chain Monte Carlo sampling, and by mining the intra-relationships of states within a single appliance and the inter-relationships of states among multiple appliances. The FPHMM method is tested on the LIFTED dataset, which has appliance-level details and high sampling rates; testing results demonstrate that it is effective in resolving the feature-similarity issue. A modified mean-shift algorithm with different bandwidth levels is also proposed to cluster the features extracted from event detection. Based on the clustered features, another solution is proposed to decode appliance states in two stages. The first stage uses a Bayesian Inference Factorial HMM (BI-FHMM) solver to accelerate computation and improve accuracy by integrating load signatures and statistical inference. The second stage then verifies and rectifies the results obtained from the first stage. Test results demonstrate that the proposed approach achieves good performance and can be applied to existing smart meters.
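The event-detection step in the abstract above compares micro- and macro-timescale views of the power stream to locate appliance switching events. As a rough illustration only (not the dissertation's implementation, and with hypothetical parameter values), a single-timescale version can be sketched by comparing short moving averages before and after each sample and flagging large step changes:

```python
def detect_events(power, threshold=30.0, window=3):
    """Flag step changes in a power stream (watts) by comparing short
    moving averages before and after each sample. Illustrative
    single-timescale simplification; threshold and window are
    hypothetical, not values from the dissertation."""
    events = []
    for i in range(window, len(power) - window):
        before = sum(power[i - window:i]) / window
        after = sum(power[i:i + window]) / window
        # A large jump in mean power marks a candidate appliance event.
        if abs(after - before) > threshold:
            # Suppress duplicate detections of the same transient.
            if not events or i - events[-1] > 2 * window:
                events.append(i)
    return events

# 100 W baseline, a 500 W appliance switching on at sample 10.
stream = [100.0] * 10 + [600.0] * 10
print(detect_events(stream))
```

The adaptive method in the dissertation goes further by tuning these parameters to the data and by running a second, longer window to catch slow transitions that a short window splits into fragments.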
- Title
- Two Essays on Corporate Finance
- Creator
- Wang, Bo
- Date
- 2021
- Description
-
This dissertation comprises two essays on finance. In the first chapter, I investigate whether and to what extent unionization influences the compensation of non-executive employees. In the second chapter, I explore how social capital impacts regional innovation by private firms. In the first chapter, I examine the effects of unionization on stock options granted to non-executive employees. Adopting a regression discontinuity design, I find that employees receive more stock options after a union election win. The positive association is more pronounced when unions have more bargaining power and when free-riding problems are less severe. Further, I provide evidence that employees receive more stock options when CEOs are entrenched. Finally, I show that stock options provide risk-taking incentives to non-executive employees. This work offers a potential explanation for the union wage premium puzzle: unions utilize stock options to increase non-executive employees’ total compensation. In the second chapter, I investigate whether and to what extent social capital affects regional innovation by private firms in the U.S. I document that regional social capital is positively associated with the quantity, quality, and novelty of county-level innovation by private firms. This effect is more prominent in regions with a lower supply of financial capital. My findings further suggest that social capital is complementary to investment in research and development. Using a Spatial Durbin Model, I report that regional social capital has significant spillover effects in boosting the innovation of neighboring counties.
- Title
- RESIDENTIAL LOAD DATA COMPRESSION AND LOAD DISAGGREGATION
- Creator
- Xu, Runnan
- Date
- 2021
- Description
-
Non-Intrusive Load Monitoring (NILM) for residential applications aims to disaggregate the total electricity consumption of a household into individual appliance information. On the customer side, users can change their consumption habits and save more electricity; for the utility, generation scheduling becomes more accurate, efficient, and secure. Furthermore, energy management systems, demand response, and fault diagnosis benefit from the real-time information provided by NILM. This dissertation first proposes a data compression method suitable for NILM data; a real-time disaggregation method based on the Kalman filter is then proposed to obtain appliance state information.
A model-free lossless data compression method for time series in smart grids (SGs), namely the Lossless Coding considering Precision (LCP) method, is proposed. The LCP method encodes the current datapoint using only the immediately previous datapoint, via differential coding, XOR coding, and variable-length coding, and transmits the encoded data as soon as it is generated. It does not use the dynamics (e.g., many previous datapoints) or prior knowledge (e.g., mathematical models) of the time series. It considers the patterns, potential applications, and associated precision to preprocess the time series, and it especially suits high-resolution time series with long steady periods. The LCP method features low latency and generalizability, which enables real-time data communication for different time-critical tasks. Sub-metered load profiles in the REDD dataset, the high-resolution LIFTED dataset, the AMPds dataset, and a PMU dataset are used to evaluate the performance of the LCP method. The results show that the LCP method achieves a high compression ratio, low latency, and low complexity compared with the state-of-the-art Resumable Data Compression (RDC) method, DEFLATE (based on LZ77 and Huffman coding), and the Lempel-Ziv-Markov chain Algorithm (LZMA).
An online method based on the transient features of individual appliances and system steady-state characteristics is proposed to estimate the appliances’ working states. It determines the number of states for each appliance via Density-Based Spatial Clustering of Applications with Noise (DBSCAN) and models the transition relationships among different states. The states of working appliances are identified from aggregated power signals by implementing the Kalman filtering method within the Factorial Hidden Markov Model (FHMM) and by verifying system states, which are combinations of the working states of individual appliances. The proposed method is event-based, and the use of transient features extracted from event detection enables fast state inference, making it suitable for online load disaggregation. The proposed method is tested on a high-resolution dataset (LIFTED) and outperforms other related methods, including Segment-wise Integer Quadratic Constraint Programming (SIQCP), Combinatorial Optimization (CO), and the exact FHMM (FHMM_EXACT), in terms of accuracy, F1 score, and computational time.
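The LCP pipeline described above quantizes readings to their associated precision, differentially codes each datapoint against the immediately previous one, and emits the result with variable-length coding, so long steady periods cost about one byte per sample. A minimal sketch of the differential and variable-length stages (the XOR stage is omitted, and this is an illustrative reconstruction, not the dissertation's code; the precision value is hypothetical):

```python
def _zigzag(n):
    # Map signed deltas to unsigned ints so small magnitudes stay small.
    return (n << 1) ^ (n >> 63)

def _unzigzag(u):
    return (u >> 1) ^ -(u & 1)

def lcp_encode(samples, precision=0.5):
    """Delta + zigzag + varint coding of precision-quantized readings."""
    out, prev = bytearray(), 0
    for s in samples:
        v = round(s / precision)          # quantize to meter precision
        u = _zigzag(v - prev)             # differential coding
        prev = v
        while u > 0x7F:                   # variable-length (varint) coding
            out.append((u & 0x7F) | 0x80)
            u >>= 7
        out.append(u)
    return bytes(out)

def lcp_decode(data, precision=0.5):
    vals, prev, acc, shift = [], 0, 0, 0
    for b in data:
        acc |= (b & 0x7F) << shift
        shift += 7
        if not b & 0x80:                  # last byte of this varint
            prev += _unzigzag(acc)
            vals.append(prev * precision)
            acc = shift = 0
    return vals
```

Because each sample depends only on its predecessor, encoded bytes can be transmitted as soon as they are generated, which is the low-latency property the abstract emphasizes.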
- Title
- Testing a pilot intervention aiming to increase transgender allyship among future healthcare providers
- Creator
- Yoder, Wren
- Date
- 2021
- Description
-
Transgender individuals often experience poor health outcomes related to a lack of provider knowledge and comfort around transgender issues. Ally identity development and cultural humility theories have been used to develop interventions shown to improve attitudes, knowledge, and skills related to being an ally to the transgender community. Additionally, healthcare providers have reported a desire for online tools related to transgender healthcare, and online interventions can be more cost-effective than traditional in-person trainings. The current study developed an hour-long online intervention composed of six activities aiming to improve attitudes, knowledge, and skills and to increase identification as an ally to the transgender community. Tests were conducted to assess whether these domains increased significantly from baseline to post-test in the intervention condition compared with the control condition, and whether the increase was maintained at 2-week follow-up. The sample included cisgender (i.e., male or female) students studying a subject related to healthcare, recruited online through Prolific (N = 78). Results indicated that knowledge and skills increased significantly from baseline to post-test in the intervention condition compared with the control condition, and the increases were maintained at 2-week follow-up; however, this was not the case for attitudes and identity. These findings largely replicate existing research on knowledge about transgender individuals and provide new insights into skills, attitudes, and identity related to transgender allyship. The findings can inform future research on transgender allyship intervention design and allyship theory, as well as support improvements in clinical practice and policy related to transgender healthcare services.
- Title
- Drawing on Darwinism: Rewriting the Origin of Louis Sullivan's Idea
- Creator
- Frey, Syan
- Date
- 2021
- Description
-
To observe that the unique architectural ornaments that make up the body of work of Louis Henri Sullivan (1856-1924) emulate nature is to state a reality so obvious that it is both pedantic and droll. To use the double entendre that those natural forms drew on Darwinism, however, is to make several more specific claims. First, it can be credibly established that the system of architectural ornament that was Louis Sullivan's primary contribution to the discipline of architecture was directly inspired by his synthesis of the thesis of natural selection contained within the pages of Asa Gray's botanical manual. Second, the circumstances of that moment of synthesis reveal that the reason for Sullivan's Darwinism was not merely the desire to emulate nature, but rather to signify the end of faith. Finally, Sullivan's synthesis of various Darwinisms drew not only on the thesis for his own artistic inspiration; he also drew on the substance of Darwin's arguments to formulate a secular theory of the nature of inspiration and the technique of design. In the years since, this theory has become the primary technique by which design is taught.
Louis' unique education, which was tied to Darwinism from the very beginning, gave him an unusual perspective on the challenges of architectural design in the industrial age. The economic circumstances of his life as a first-generation immigrant exposed him to just the right education to lead him to explore evolutionary science as the inspiration for design. To be clear, the content of the thesis of natural selection was entirely irrelevant to the theory and practice of architecture in the nineteenth century. Yet by the end of the century, the broad consensus among architects, historians, and theorists alike was that there was a “close and causal relationship” between Darwinism and modern architecture. Sullivan's theory drew on Darwinian ideas to dismiss theological styles as empty formalisms, reveal the racism of ethnographic accounts of architectural forms, and argue for the evolution of an American architecture liberated from its colonialist origins.
The context within which that shift occurred is significant. The justification for nearly every work of architecture in human history prior to the middle of the nineteenth century was some form of god. Mid-nineteenth-century architecture in the United States was composed of a variety of regional ethnic styles intended to represent the ethnic origins, religious affiliations, moral inclinations, and nationalist allegiances of an array of displaced immigrant communities. The Civil War laid bare the reality that such ethnic styles represented a segregationist and racialized idea of the modern world. Over the course of the late nineteenth century, the profession of architecture was forced to abandon theological justifications for the practice of architecture as scientifically invalid, morally corrupt, and motivated by racism. This was Sullivan's full idea: put instinct before reason in priority, and engage in the iterative analysis of various instincts about the situation. Observe the patterns that emerge. Explore those instincts until you find that your patterns merge with universal patterns. Do not fear error, as it makes the work alive. The capacity to capture that living essence is in all of us, individually and collectively, not some external force. The most-right instincts are ones in which the resulting form is a demonstration of its function. To understand what Sullivan meant by this, we must see it as a Darwinian idea. Instinct is an animal property, a capacity which we share with other species; for Darwin, this sharing of instinct is essential for interspecies empathy. The antithesis of instinct is reason, which Sullivan describes as secondary. Reason is cold and lifeless, but also correct. True reason, Sullivan claims, is learned by experiment and example. The greatest art speaks not just to our reason, but to our instinct. This, then, is the task of the designer: to temper instinct with reasoned evaluation. Sullivan argues that it begins with an intuition, an idea he drew from Darwin's Descent of Man.
- Title
- TASK-BASED LOAD FORECASTING AND ROBUST RESOURCE SCHEDULING IN SMART GRID
- Creator
- Han, Jiayu
- Date
- 2021
- Description
-
In microgrids, uncertainty in load and renewables and a lack of generation capacity lead to a wide variety of operational problems in both grid-connected and islanded modes. This motivates the design of a state-of-the-art microgrid master controller for microgrid energy management, load forecasting, and demand response. Uncertainty in renewables and load is a great challenge for microgrid operation, especially in islanded mode, since the microgrid may be small in size and have limited flexible resources. A multi-timescale, two-stage robust dispatch model is proposed to optimize microgrid operation. The proposed model combines hourly and sub-hourly dispatch in a single model, which means the day-ahead hourly dispatch results must also satisfy the sub-hourly conditions. At the same time, the feasibility of the day-ahead dispatch result is verified under the worst-case condition, considering the high level of uncertainty in renewable energy output and load consumption. In addition, a battery energy storage system (BESS) and solar PV units are integrated as a combined solar-storage system in the proposed model, and the output power of the combined solar-storage system remains unchanged on an hourly basis. Furthermore, both the BESS and thermal units provide regulating reserve to manage solar and load uncertainty. The model has been tested in a controller hardware-in-the-loop (CHIL) environment for the Bronzeville Community Microgrid system in Chicago. The simulation results show that the proposed model works effectively in managing the uncertainty in solar PV and load and can provide flexible dispatch in both grid-connected and islanded modes.
When the generation capacity of an islanded microgrid is less than the load demand, load curtailment is inevitable. This dissertation proposes a multi-objective optimization model to minimize load curtailments. Specifically, the proposed model minimizes the generation cost and the total load curtailment, and also minimizes the maximum load curtailment. Furthermore, the impact of the penalty coefficients of total load curtailment and maximum load curtailment is analyzed, which provides a strategy for choosing the values of the two penalty coefficients according to different practical purposes. The proposed model can be used in both microgrid generation scheduling and microgrid planning problems. It was tested on the Bronzeville Community Microgrid system, and the results showed that the proposed model can reduce the total load curtailment and the maximum load curtailment.
Load forecasting is one of the most important and well-studied topics in modern power systems. However, traditional load forecasting is an open-loop process, as it does not consider the end use of the forecasted load. This dissertation proposes a closed-loop, task-based day-ahead load forecasting model, labeled LfEdNet, that combines two layers in one model: a load forecasting layer based on a deep neural network (Lf layer) and a day-ahead stochastic economic dispatch (SED) layer (Ed layer). The training of LfEdNet aims to minimize the cost of the day-ahead SED in the Ed layer by updating the parameters of the Lf layer. Sequential quadratic programming (SQP) is used to solve the day-ahead SED in the Ed layer. The test results demonstrate that the forecasts produced by LfEdNet lead to a lower day-ahead SED cost at the expense of a slight reduction in forecasting accuracy.
- Title
- TOPOLOGY OPTIMIZATION OF SYNCHRONOUS ELECTRIC MACHINES
- Creator
- Guo, Feng
- Date
- 2021
- Description
-
Topology optimization of electric machines is attractive because of the increased design degrees of freedom compared with conventional electric machine design techniques. A topology optimization approach also does not necessarily require a geometric template whose dimensions are controlled by parameters. In this dissertation, a density-based magneto-structural topology optimization approach is developed for the design of synchronous reluctance machine (SynRel), interior permanent magnet synchronous machine (IPMSM), and wound field synchronous machine (WFSM) rotors. Depending on the electric machine type, the optimization problems are divided into single-material and multi-material topology optimizations. A mass thresholding function is introduced to overcome the intermediate-density issue caused by combining the magnetic and structural topology optimization problems. SynRel and IPMSM optimization examples are presented in the single-material topology optimization section. For the multi-material topology optimization, a virtual region calculation approach is proposed in order to properly define the boundary conditions between multiple materials. In the WFSM topology optimization, the copper field winding is represented by a virtual region; the contact and frictionless boundary conditions between the copper field winding and the electrical steel are defined, and the centripetal load of the copper winding is equivalently calculated and applied to the elements of the electrical steel adjacent to the boundary between the copper field winding and the steel of the WFSM pole tip.
In addition to the fully free-form magneto-structural topology optimization, a density-based combined dimensional and topology optimization is developed for the design of IPMSM and WFSM rotors. Both dimensional and topological control variables are integrated to simplify the optimization problem. For IPMSM rotor design, it is preferable to retain the permanent magnet (PM) block shape, for which dimensional optimization can be used. The proposed combined dimensional and topology optimization approach fits this design situation: the PM is designed using dimensional control variables while the rest of the design domain is optimized using topology optimization. To allow the block or rectangular magnet to move and change size, the surrounding design-domain mesh must deform. Laplacian smoothing mesh deformation is used in this approach, and helper lines are connected to allow a greater mesh deformation range and to avoid excessive mesh distortion. In addition to the IPMSM examples, a WFSM example is presented that optimizes the winding region using dimensional optimization and the rotor core using topology optimization. An alternative combined dimensional and topology optimization approach has also been developed, primarily for the design of IPMSM rotors. In this approach, mesh deformation is not required, but there is no explicit geometric boundary between the rectangular permanent magnet and the surrounding electrical steel and air. The PM density is expressed as a Heaviside rectangular function of the dimensional variables, and the function is projected onto the rotor mesh. Modified material penalizations are used, and topology optimization then controls the deposition of electrical steel and air. Three different IPMSM examples are presented with different dimensional control variables, including the PM position, size, and angle.
- Title
- THE EFFECTS OF COMMUNICATION MODALITY ON PRESENCE, COGNITIVE LOAD AND RETENTION IN SECOND LIFE
- Creator
- WILKES, STEPHANY FILIMON
- Date
- 2009-12
- Description
-
This thesis reports findings from a study (N = 60) of the impact of three communication modalities (voice only, text only, and voice and text simultaneously) on cognitive load, as measured by subjective reports of mental effort; on learning, as measured by tests of recall and retention; and on perceptions of presence, as measured by a Presence Questionnaire (Witmer & Singer, 2005). Based on the results of prior empirical research, it is hypothesized that retention scores will be higher for voice and voice-and-text participants than for text-only participants; that cognitive load will be lower for voice participants and higher for the text conditions; that voice will contribute to greater perceptions of presence; and that higher perceptions of presence will not correlate with deeper learning. Study results indicate that communication modality significantly affected cognitive load (F(2, 54) = 4.58, p = .01) and retention (F(2, 54) = 3.53, p = .04), and that experience with, and time spent in, the virtual environment had significant effects on measures of cognitive load, retention, and presence: significant between-subjects effects were found for cognitive load and time (p = .23), for retention and time (p = .21), and for retention and experience (p = .03).
- Title
- QUANTIFYING UNCERTAINTY IN RANDOM ALGEBRAIC OBJECTS USING DISCRETE METHODS
- Creator
- Wilburne, Dane
- Date
- 2018, 2018-05
- Description
-
This thesis consists of two parts. Part 1 is concerned with the study of random algebraic objects, and Part 2 deals with statistical modeling for networks. Part 1 begins with the study of random monomial ideals. We define several models for generating random monomial ideals, illustrate their connection with models of random simplicial complexes, and study the behavior of various algebraic invariants of interest (e.g., Krull dimension and first Betti numbers) in the ER-type model. Next, we consider a model for random numerical semigroups. In order to understand their properties, we introduce a family of simplicial complexes whose algebraic and combinatorial properties encode probabilistic information about random semigroups from the model. In Part 2, we introduce two exponential random graph models. The first is the shell distribution model. The sufficient statistics of this model are related to the k-cores of a network, a graph-theoretic concept designed to capture connectivity information in a more refined way than node degrees. We study the theoretical properties of the shell distribution model, develop an MCMC algorithm for sampling from the model, give an algorithm for sampling from the space of graphs with a fixed shell distribution, and present several simulation studies. The second model is the edge-degeneracy model, whose sufficient statistics are related to the density of edges in the graph. For this model, we prove several theoretical results concerning the model polytope and how it governs the asymptotic behavior of the model as the parameters diverge along infinite rays.
Ph.D. in Applied Mathematics, May 2018
- Title
- Quantification of Vascular Permeability in the Retina Using Fluorescein Videoangiography Data as a Biomarker for Early Diabetic Retinopathy
- Creator
- Kayaalp Nalbant, Elif
- Date
- 2023
- Description
-
Diabetic retinopathy (DR), the most common cause of blindness in the working-age population, affects over one-third of those who have had diabetes for over ten years. High blood sugar (hyperglycemia) damages blood vessels and tight junctions at the blood-retinal barrier (BRB). Chronic inflammation leads to changes in vascular health, and over time blood vessels tend to become damaged and exhibit higher "leakage," or permeability. In the late stage of DR, hemorrhages can occur, leading to irreversible damage to neuronal tissue in the retina and vision loss. In the clinic, some biomarkers and imaging modalities are used to diagnose DR based on its more severe products (e.g., hemorrhage), but there is no non-invasive, highly sensitive method to detect diabetic retinopathy before clinical signs occur, when mitigating therapies could be more effective. In this thesis, indicator dilution theory was explored to model the temporal dynamics of fluorescein in the retina after intravenous injection, with the aim of quantitatively mapping subtle changes in retinal blood flow and vascular permeability that could preempt subsequent irreversible damage. Specifically, a simplified version of indicator dilution theory, the "adiabatic approximation in tissue homogeneity" (AATH) model, was used to estimate physiological parameters such as blood flow (F) and the extraction fraction (E, a parameter coupled with vascular permeability) from retinal fluorescein videoangiography data. The AATH fitting protocol was optimized through simulations using a more complex model (the AATH-vascular heterogeneity model, AATH-VH). It was determined that a two-step least-squares fitting method was more sensitive than single-step least-squares fitting of the AATH model to simulated data for evaluating vascular permeability in early diabetic retinopathy.
The optimized data analysis protocol was then evaluated in an initial clinical study comparing healthy control subjects to those with moderate non-proliferative DR (NPDR). Volumetric blood flow and retinal vascular permeability maps were compared between patient groups, with clear increases in extraction fraction observed in the mild NPDR patients compared to controls. These promising early data have been the foundation of an ongoing 5-year study tracking 100 diabetic patients with no DR to see if early changes in vascular permeability can predict which patients are more likely to progress to DR.
- Title
- High School Mathematics Teachers’ Conceptions of Nature of Mathematics (NOM) and How Prior Learning Environments Affect These Conceptions
- Creator
- Elefteriou, Katherine
- Date
- 2023
- Description
-
Literature shows that Nature of Mathematics Knowledge (NOMK) dates back to the era of Plato and Aristotle (Dossey, 1992). It suggests that mathematics teachers' beliefs, views, conceptions, and preferences about NOM influence the way in which they teach mathematics. It is important to understand how these conceptions are formed, as they may evolve consciously or unconsciously from teachers' experiences. Teachers' experiences as students of mathematics, and their family, school, cultural, and social experiences, influence their behavior, including their decisions, actions, class organization, learning activities, and students' achievement (Beswick, 2012; Ernest, 2008; Thompson, 1984). Yet, there is no NCTM standard on NOM (Gfeller, 1999). The purpose of the present study was to assess high school mathematics teachers' NOMK conceptions and to explore how these conceptions have been influenced by their personal and educational experiences as students learning mathematics. Another objective of this study was to explore whether the teachers' years of mathematics teaching experience and their level of education have any influence on their NOMK beliefs. The sample consisted of 52 high school mathematics teachers who were certified to teach secondary mathematics and who had at least three years of mathematics teaching experience. Two instruments were used to collect the data: 1) the VNOM D instrument, to assess the teachers' beliefs regarding the NOMK aspects, and 2) a demographics instrument, to collect information on the teachers' demographics and on their experiences as students of mathematics. Interviews were also used to enhance the findings. Results showed that participants had strong beliefs regarding their NOMK, and that their years of experience and level of education influenced their NOMK beliefs.
- Title
- Effects of Microstructure Engineering on Laser Powder Bed Fusion Processed Superalloy IN718 through Inoculant Addition
- Creator
- Ho, I-Ting
- Date
- 2023
- Description
-
Additive manufacturing (AM) techniques can now be utilized as innovative tools that provide unlimited design flexibility for the fabrication of geometrically complex metallic structures. For the production of Ni-base superalloy components used in advanced gas turbine engines, laser powder bed fusion (L-PBF), one of the AM techniques, is frequently used, as it allows good metallurgical bonding of the powder feedstock and simultaneously enables the development of ultra-efficient power systems for aerospace propulsion, space exploration, and power generation. One of the major challenges associated with additively manufactured Ni-base superalloy components is that the extreme temperature gradients encountered during processing negatively impact the underlying microstructure and mechanical properties of the material. Although the macroscopic shape and chemistry of an additively fabricated part may be identical to those of a conventionally manufactured part, the resulting properties are usually compromised. In an effort to make Ni-base superalloys more amenable to processing via additive manufacturing, varying levels of benign inoculants that may promote heterogeneous grain nucleation were blended into Inconel 718 (IN718) powder feedstock and used for processing via L-PBF to characterize the microstructural evolution. In the first study, 0.2 wt. % of micron-sized CoAl2O4 flakes was found to effectively change the grain morphology during the L-PBF process, leading to a significant reduction in crystallographic texture and thus in the resulting elastic anisotropy. A dispersion of nano-oxides resulting from the reduction of CoAl2O4 particles also contributed to improved tensile strength and steady-state creep strain rate. It should be noted, however, that the multiple iterations of remelting resulting from the deposition of new layers dissolved the Co-rich particles reduced from the CoAl2O4 inoculants.
Instead of nucleation events being driven by elemental Co, oxide agglomerates formed by Marangoni convection seemed to be the major contributor to grain refinement, by inhibiting heat transfer in the surroundings. On the other hand, the addition of CoAl2O4 particles appeared to generally reduce the melt pool width while increasing the melt pool depth by inhibiting the degree of heat transfer and Marangoni flow. The changes in melt pool dimensions aided in improving the relative density and surface roughness of the bulk samples by generating better metallurgical bonding to the subsequent layers. As a trade-off, however, the changes in melt pool physics also enhanced the tendency for epitaxial growth and hence retarded the columnar-to-equiaxed transition unless oxide agglomerates were present. In addition to CoAl2O4, candidate inoculants including Co, TaCr2, TiB2, and CeO2 particles were also blended with the IN718 powder feedstock. After the L-PBF process, different degrees of microstructural evolution were characterized for the additions of Co, TaCr2, TiB2, and CeO2 particles. It was found that the physical presence of inoculants may change the melt pool geometries, which accounted for a comparatively more columnar-grained structure with <101> texture in samples containing Co and TaCr2 particles, and a relatively equiaxed-grained structure with <001> texture in samples containing TiB2. The comparison between samples containing TiB2 and CeO2 further indicates that phase-transformation-induced agglomeration will also reduce the effectiveness of inoculants by decreasing the nuclei density. Findings from this investigation demonstrate that the resulting grain structure upon L-PBF can be profoundly impacted by both the chemistry and the physical properties of the inoculants. These effects may potentially be harnessed to effectively engineer the microstructure and optimize the properties of L-PBF processed Ni-base superalloys.
- Title
- High-integrity modeling of non-stationary Kalman Filter input error processes and application to aircraft navigation
- Creator
- Gallon, Elisa
- Date
- 2023
- Description
-
Most navigation applications nowadays rely heavily on Global Navigation Satellite Systems (GNSS) and inertial sensors. The two systems are complementary, and their outputs are very often combined in an extended Kalman Filter (KF) to provide a continuous navigation solution that is resistant to poor satellite geometry as well as radio frequency interference. Additionally, recent developments in safety-critical applications (such as aviation) revealed the performance limitations of current algorithms (Advanced Receiver Autonomous Integrity Monitoring, ARAIM) for vertical guidance down to 200 feet above the runway (LPV-200). When nominal constellations are depleted, LPV-200 can only sparsely be achieved. Exploiting satellite motion in ARAIM (for instance, using a KF) could help alleviate those limitations, but would require adequate modeling of the errors, including their time correlation. Power Spectral Density (PSD) bounding is a methodology that provides high-integrity, time-correlated error models, but this approach is currently limited to stationary errors (which is rarely the case with real data) and has never been applied to navigation errors. More generally, no high-integrity, time-correlated error models have ever been derived for navigation errors. As a result, in the first part of this thesis, a methodology for high-integrity modeling of time-correlated errors is introduced. The PSD bounding methodology is extended to both stationary and non-stationary errors. In the second part of this thesis, these methodologies are applied to the three main error sources impacting iono-free GNSS measurements (orbit and clock errors, tropospheric errors, and multipath), as well as to inertial errors. The methodology introduced in this dissertation provides high-integrity, time-correlated error models and is applicable to any type of application where high integrity is required (e.g.,
Differential GNSS (DGNSS), Aircraft-Based Augmentation Systems (ABAS), Ground-Based Augmentation Systems (GBAS), Space-Based Augmentation Systems (SBAS), etc.). Additionally, the error models derived here are not limited to high-integrity applications, but could also be used in applications where the correlation of the errors over time plays an important role (such as any KF integration). In the last part of this dissertation, we focus on a specific safety-critical application: aviation, and in particular ARAIM. The dissertation concludes with an assessment of the performance improvements provided by recursive ARAIM using the bounding dynamic error models, relative to baseline snapshot ARAIM using those same models. Additionally, a sensitivity analysis is performed on each of the error model parameters to assess which of them impacts the KF performance (i.e., covariance) the most.
- Title
- A Novel CNFET SRAM-Based Computing-In-Memory Design and Low Power Techniques for AI Accelerator
- Creator
- Kim, Young Bae
- Date
- 2023
- Description
-
Power consumption and data processing speed of integrated circuits (ICs) are an increasing concern in many emerging Artificial Intelligence (AI) applications, such as autonomous vehicles and the Internet of Things (IoT). In addition, according to the 2020 International Technology Roadmap for Semiconductors (ITRS), the high power consumption trend of AI chips far exceeds the power requirements. As a result, power optimization techniques are highly regarded in today's AI chip designs. There are various low-power methodologies from the system level down to the layout level; in this thesis we focus on the transistor level and the register-transfer level (RTL). We propose a novel ultra-low-power voltage-based computing-in-memory (CIM) design with a new SRAM bit cell structure for an AI accelerator. The basic working principle of CIM is to use the existing internal embedded memory array (e.g., SRAM) instead of external memory, reducing unnecessary accesses to external memory by computing within the embedded memory. Since our proposed SRAM bit cell uses a single bitline for CIM calculation with decoupled read and write operations, it supports much higher energy efficiency. In addition, to separate read and write operations, the stacked structure of the read unit minimizes leakage power consumption. Moreover, the proposed bit cell structure provides better read and write stability due to the isolated read path, write path, and greater pull-up ratio. Compared to the state-of-the-art SRAM-CIM, our proposed SRAM-CIM does not require extra transistors for CIM vector-matrix multiplication. We implemented a 16K (128×128) bit cell array for the computation of 128 neurons, and used 64 binary inputs (0 or 1) and 64×128 binary weights (-1 or +1) for the binary neural networks (BNNs).
Each row of the bit cell array, corresponding to a single neuron, consists of a total of 128 cells: 64 cells for the dot-product and 64 replica cells for the ADC reference. The 64 replica cells consist of 32 cells for the ADC reference and 32 cells for offset calibration. We used a row-by-row ADC for the quantized outputs of each neuron, which supports 1-7 bits of output per neuron. The ADC uses a sweeping method based on the 32 duplicate bit cells, and the sweep cycle is set to 2^(N-1) + 1, where N is the number of output bits. The simulation is performed at room temperature (27 °C) using 32nm CNFET and 20nm FinFET technology via Synopsys HSPICE, and all transistors in the bit cells use the minimum size considering area, power, and speed. The proposed SRAM-CIM reduces power consumption for vector-matrix multiplication by 99.96% compared to the existing state-of-the-art SRAM-CIM. Moreover, because the read unit is separated from the internal node of the latch, there is no feedback from the read access circuit, which makes the read static noise margin (SNM) free of degradation. Furthermore, for the low-power AI accelerator design, we propose a new AI accelerator design method that applies low-power techniques such as bus-specific clock gating (BSCG) and local explicit clock gating (LECG) at the register-transfer level (RTL), and evaluate them on the Xilinx ZCU-102 FPGA SoC hardware platform and in 45nm technology for ASIC, respectively. Dynamic power is measured using a commercial EDA tool, and only a subset of flip-flops is selectively gated based on their switching activities. We achieve up to a 53.21% power reduction in the ASIC implementation and save 32.72% of the dynamic power dissipation in the FPGA implementation. This shows that our RTL low-power schemes have powerful potential for dynamic power reduction when applied to the FPGA and ASIC design flows for the implementation of AI systems.
- Title
- Investigating The Impact of Tall Building Ordinances (TBOs) on the Evolution of Ultra-Tall Buildings Typology: Case Studies in Chicago and Dubai
- Creator
- Alkoud, Amjad
- Date
- 2023
- Description
-
Zoning ordinances are instruments that tangibly and intangibly shape cities; control urban morphology, demography, and visual identity; and determine inhabitants' quality of life, well-being, and comfort. Tall building ordinances (TBOs), in turn, control the vertical growth of cities and the development of tall buildings as distinctive actors in today's built environment. With the recent proliferation of Ultra-tall buildings in cities around the world, ordinances should offer flexibility, adaptability, and responsiveness to the dynamic nature of emerging needs and technological potentials. This dissertation investigates the emergence of Ultra-tall buildings as a new typology in major metropolises and the interaction between building ordinances and the construction of Ultra-tall buildings. The work presented in this dissertation implements two primary research methods, cross-sectional surveys and longitudinal studies, documenting supertall buildings completed in two major cities, Chicago and Dubai. The discussions and findings are supported by structured interviews with architects and engineers actively involved in designing and constructing Ultra-tall buildings. The cross-sectional survey comprises all supertall buildings (i.e., buildings above 1,000 feet in height) completed as of 2022 in Chicago, the cradle of the "modern" high-rise, with 318 towers of 100-plus meters and eight supertall towers of 300-plus meters; and Dubai, the new experimental land of supertall construction, with 298 towers of 100-plus meters and 28 towers of 300-plus meters. The longitudinal case studies provide additional information and knowledge about selected examples in Chicago and Dubai, derived from personal structured interviews conducted in both cities. Several additional survey cases from China, NYC, and London were investigated for their importance and uniqueness in supporting the research discussions and findings.
This research aims, on the one hand, to bridge the gap between the building ordinance literature and Ultra-tall building design practices. On the other hand, it sheds light on the necessity of recognizing Ultra-tall buildings as a distinct typology entitled to its own particular set of ordinances. The research findings are intended to help architects, engineers, policymakers, and planning authorities ensure a sustainable socioeconomic future and mitigate the negative impact of Ultra-tall construction in major cities. This goal is to be achieved by developing a set of recommendations, strategies, and universal criteria to implement a more flexible and responsive approach toward emerging human needs and technologies.
- Title
- Quantifying Localization Safety for State-of-the-Art Mobile Robot Estimation Algorithms
- Creator
- Abdul Hafez, Osama Mutie Fahad
- Date
- 2023
- Description
-
In mobile robotics, localization safety is quantified using the covariance matrix or particle spread. However, such methods are insufficient for mission- or life-critical applications, like Autonomous Vehicles (AVs), because they only reflect nominal sensor noise without considering sensor measurement faults. Sensor faults are unknown deterministic errors that cannot be modeled using a zero-mean Gaussian distribution. Ignoring sensor faults in such applications might result in large localization errors, which in turn deceive other reliant systems, like the controller, leading to catastrophic consequences, such as traffic accidents for AVs. Thus, other techniques are needed to conservatively quantify pose safety. This thesis builds upon previous research in aviation safety, or what is referred to as integrity monitoring, to quantify localization safety for mobile robots that use state-of-the-art state estimators (as localizers). Specifically, this thesis utilizes the localization integrity risk metric as a measure of localization safety, defined as the probability that the robot's pose estimate error lies outside pre-determined acceptable limits while an alarm is not triggered. Unlike open-sky aviation applications, where Global Navigation Satellite System (GNSS) signals are available, mobile robots operate in GNSS-denied, or at best GNSS-degraded, environments, which demands utilizing a more complex set of sensors to guarantee an acceptable level of localization safety.
This thesis provides a conservative measure of localization safety by rigorously upper-bounding the integrity risk while accounting for both nominal lidar noise and unmodeled lidar measurement faults. The contributions of this thesis include the design and analysis of practical integrity monitoring and failure detection procedures for mobile robots utilizing map-based particle filtering; a recursive integrity monitoring method for mobile robots utilizing map-based fixed-lag smoothing, with both solution separation and chi-squared failure detectors; the synthesis of an integrity monitoring procedure for mobile robots utilizing Extended Kalman Filter-based Simultaneous Localization And Mapping (EKF-based SLAM); and a Model Predictive Control (MPC) framework that is capable of planning a mobile robot's trajectory to follow a predefined path while maintaining a predefined minimum level of localization safety. The proposed methodologies are validated using both simulation and experimental results conducted in real-world urban university campus environments.
- Title
- Developing Adaptive and Predictive Modules for the Second Generation of Multivariable Insulin Delivery System for People with Type-1 Diabetes
- Creator
- Askari, Mohammad Reza
- Date
- 2023
- Description
-
In this research, we are developing the second generation of a multivariable automated insulin delivery (mvAID) system for people with Type 1 diabetes (T1D). The AID system is improved by integrating missing sensor data into the system, reconciling outliers in the data, and eliminating the effects of artifacts in signals from wearable devices. Behavioral patterns of individuals with T1D are captured by data-driven models. The model predictive control algorithm of the mvAID uses these patterns to make decisions and to predict future glucose concentrations more accurately. A pipeline algorithm is developed for removing noise and motion artifacts from wristband signals. Then, energy expenditure, physical activity, and acute psychological stress (APS) are estimated from wearable device signals to detect and quantify disturbances affecting the blood glucose concentration. Additionally, different modules were designed for predicting risky glycemic episodes and are used to build the second generation of the mvAID system. The techniques developed are tested with historical data sets from various clinical experiments and free-living data, and with simulations made using our multivariable glucose, insulin, and physiological variables simulator (mGIPsim).