Search results
(9,421 - 9,440 of 9,809)
Pages
- Title
- ESTIMATING PM2.5 INFILTRATION FACTORS FROM REAL-TIME OPTICAL PARTICLE COUNTERS DEPLOYED IN CHICAGO HOMES BEFORE AND AFTER MECHANICAL VENTILATION RETROFITS
- Creator
- Wang, Mingyu
- Date
- 2021
- Description
-
PM2.5 refers to fine inhalable particles that are 2.5 micrometers or smaller in size. Indoor PM2.5 consists of outdoor PM2.5 (ambient PM2.5) that infiltrates into the indoor environment and indoor-generated PM2.5 (non-ambient PM2.5). Because people spend nearly 90% of their lifetimes indoors, with most of that time in their homes, PM2.5 exposure in homes can result in severe health effects such as asthma. One strategy increasingly used to dilute air pollutants generated indoors and improve indoor air quality (IAQ) in homes is the introduction of mechanical ventilation systems. However, mechanical ventilation systems also have the potential to introduce more ambient PM2.5 than relying on infiltration alone, although limited data exist to demonstrate the magnitude of these impacts in occupied homes. The objective of this paper is to estimate the infiltration factor (Finf) of PM2.5 before and after installing mechanical ventilation systems in a subset of occupied homes. The data come from the Breathe Easy Project, a more than 2-year-long study conducted in 40 existing homes in Chicago, IL that explored the effects of three different types of mechanical ventilation system retrofits on IAQ and asthma. An automated algorithm was developed to remove indoor PM2.5 peaks in time-series data collected from optical particle counters deployed inside and outside of each home. Finf was estimated using the resulting indoor/outdoor ratio with indoor peaks removed. Before mechanical ventilation retrofits, the weekly median Finf was 0.29 (summer median = 0.41, fall median = 0.26, winter median = 0.29, spring median = 0.30); after mechanical ventilation retrofits, the median Finf was 0.34 (winter median = 0.28, spring median = 0.45, summer median = 0.54, fall median = 0.20). Differences in Finf between pre- and post-intervention periods were not statistically significant (p = 0.23 from Wilcoxon signed rank tests).
The median PM2.5 infiltration factor increased ~22% (from 0.27 to 0.33) with the installation of balanced ventilation systems with energy recovery ventilators (ERV), although differences were not statistically significant (Wilcoxon signed rank p = 0.35). The median PM2.5 infiltration factor decreased ~4% (from 0.28 to 0.27) after installing intermittent CFIS systems, which intermittently supply ventilation air through the existing central air handling units and associated filters (which were upgraded to a minimum of MERV 10 in all CFIS homes), although differences were not statistically significant (Wilcoxon signed rank p = 0.24). The median PM2.5 infiltration factor increased ~26% (from 0.35 to 0.44) with the installation of continuous exhaust-only systems, and differences were significant (Wilcoxon signed rank p = 0.04). These results suggest that the filtration mechanisms used on the CFIS and balanced systems were adequate for maintaining similar distributions of Finf values pre- and post-intervention, whereas the increased delivery of outdoor air via the building envelope by exhaust-only systems significantly increased Finf following retrofits.
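The estimation procedure described above, a median indoor/outdoor ratio after automated indoor-peak removal followed by a paired Wilcoxon comparison, can be sketched as follows. The simple threshold-based peak-removal rule and the sample values here are illustrative assumptions, not the thesis's actual algorithm or data:

```python
import numpy as np
from scipy.stats import wilcoxon

def estimate_finf(indoor, outdoor, spike_factor=3.0):
    """Estimate the PM2.5 infiltration factor (Finf) as the median
    indoor/outdoor ratio after removing indoor-generated peaks.
    Peaks are flagged (crudely, for illustration) as samples exceeding
    spike_factor times the indoor median."""
    indoor = np.asarray(indoor, float)
    outdoor = np.asarray(outdoor, float)
    keep = indoor <= spike_factor * np.median(indoor)
    return float(np.median(indoor[keep] / outdoor[keep]))

# Hypothetical weekly Finf values for the same homes pre/post retrofit.
pre = np.array([0.29, 0.31, 0.26, 0.30, 0.41, 0.28])
post = np.array([0.34, 0.28, 0.45, 0.54, 0.20, 0.33])
stat, p = wilcoxon(pre, post)  # paired, non-parametric comparison
```

The Wilcoxon signed-rank test is used because weekly Finf values are paired by home and not assumed to be normally distributed.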
- Title
- Architecture as a Communicator of Values and Identity Spaces for Public Safety and Community Benefits
- Creator
- Waidele Arteaga, Nicolas
- Date
- 2022
- Description
-
Urban segregation, violence, and crime are linked to drug trafficking. El Castillo Social Factory is an urban strategy that aims to recover the El Castillo neighborhood and prevent drug trafficking from advancing, understanding that police action is necessary but insufficient. This neighborhood is located on the southern periphery of Santiago in a commune called La Pintana, which aspires to make its neighborhoods “more livable, healthy, and economically viable.” This proposal explores how investing in civic commons can make these goals a reality. First, it is essential to increase the presence of the State, strengthening existing services and adding new ones, with a focus on the care of children and young people. The second step is to recover vacant lots and public spaces in poor or deteriorated condition through an “urban acupuncture” strategy based on the construction of many small and medium-sized projects. Art and sports are fundamental, allowing us to protect children and young people and offer them horizons of recreation and hope. The public buildings, institutions, land, water bodies, and infrastructure inherited from earlier generations are ready for us to see anew, as a robust network of civic assets ready to be activated for the current needs, desires, and dreams of all the people who share and shape them. El Castillo Social Factory offers a fresh look at our community anchors and the vibrant hubs our public spaces can become when we invest in collective urban life. Its vision focuses on positive transformation at the architectural scale, where personal experience and aspirations meet broad, long-range planning efforts, to spark the imagination and spur us to work together toward realizing the abundant potential of what we hold in common.
- Title
- Distinctive Categorization Deficits in Repeated Sorting of Common Household Objects in Hoarding Disorder
- Creator
- Hamilton, Catharine Elizabeth
- Date
- 2022
- Description
-
The present study examines sorting techniques and deficits among individuals with hoarding disorder (n = 34) compared to age- and gender-matched adults (n = 35) from the general population. Performance was compared on the Booklet Category Test (BCT), selected other neuropsychological measures, and an ecologically valid sorting task designed for the study to model the Delis-Kaplan Executive Function System (D-KEFS) Sorting subtest but with common household objects as stimuli. Contrary to predictions, individuals with hoarding disorder did not perform significantly worse than controls on the BCT or on the sorting task designed for the present study. Also contrary to predictions, the hoarding group performed significantly better when initiating their own sorts of the objects than when tasked with naming categories grouped by the researcher. These findings are discussed alongside exploratory analyses suggesting that participants with hoarding put forth more mental effort when sorting the household objects (shoes and mail): they provided significantly more individual responses on the task, with significantly more description errors. IQ and performance on other selected neuropsychological measures did not differ significantly between groups. These findings provide preliminary evidence that there may be specific types of real-life sorting difficulties associated with hoarding disorder that are subtle and beyond what existing neuropsychological tests can measure. Given that current CBT treatments for hoarding presuppose a certain level of competency in sorting (e.g., recognizing and naming different categories of household items to complete a personal organizing plan), it is important to clarify potential sorting and categorization deficits in this group as one possible avenue to help improve treatment response among individuals struggling with hoarding disorder.
- Title
- Machine Learning On Graphs
- Creator
- He, Jia
- Date
- 2022
- Description
-
Deep learning has revolutionized many machine learning tasks in recent years. Successful applications range from computer vision and natural language processing to speech recognition. The success is partially due to the availability of large amounts of data and fast-growing computing resources (i.e., GPUs and TPUs), and partially due to recent advances in deep learning technology. Neural networks, in particular, have been successfully used to process regular data such as images and videos. However, for many applications with graph-structured data, the irregular structure of graphs means that many powerful operations in deep learning cannot be readily applied. In recent years, there has been growing interest in extending deep learning to graphs. We first propose graph convolutional networks (GCNs) for the task of classification or regression on time-varying graph signals, where the signal at each vertex is given as a time series. An important element of GCN design is filter design. We consider filtering signals in either the vertex (spatial) domain or the frequency (spectral) domain. Two basic architectures are proposed. In the spatial GCN architecture, the GCN uses a graph shift operator as the basic building block to incorporate the underlying graph structure into the convolution layer. The spatial filter directly utilizes the graph connectivity information: it defines the filter to be a polynomial in the graph shift operator to obtain convolved features that aggregate the neighborhood information of each node. In the spectral GCN architecture, a frequency filter is used instead. A graph Fourier transform operator or a graph wavelet transform operator first transforms the raw graph signal to the spectral domain; the spectral GCN then uses the coefficients from the graph Fourier transform or graph wavelet transform to compute the convolved features. The spectral filter is defined using the graph's spectral parameters.
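The spatial filter just described, a polynomial in the graph shift operator whose powers aggregate successively wider neighborhoods, can be sketched in a few lines. The path-graph shift operator and filter taps below are illustrative assumptions, not values from the thesis:

```python
import numpy as np

def graph_filter(S, x, h):
    """Apply a spatial graph filter: y = sum_k h[k] * S^k @ x,
    a polynomial in the graph shift operator S (e.g. adjacency or
    Laplacian). Each power of S aggregates one further hop of
    neighborhood information."""
    y = np.zeros_like(x, dtype=float)
    Sk_x = x.astype(float)  # S^0 @ x
    for hk in h:
        y += hk * Sk_x
        Sk_x = S @ Sk_x     # advance to the next power of S
    return y

# 4-node path-graph adjacency as the shift operator (an assumption;
# the thesis also considers spectral variants via graph Fourier and
# wavelet transforms).
S = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], float)
x = np.array([1.0, 0.0, 0.0, 0.0])  # impulse at vertex 0
y = graph_filter(S, x, h=[0.5, 0.25, 0.125])
```

A filter of degree K only mixes information from K-hop neighborhoods, which is what lets the convolution respect the graph's connectivity.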
There are additional challenges in processing time-varying graph signals, as the signal value at each vertex changes over time. The GCNs are designed to recognize different spatiotemporal patterns in high-dimensional data defined on a graph. The proposed models have been tested on simulation data and real data for graph signal classification and regression. For the classification problem, we consider the power line outage identification problem using simulation data. The experiment results show that the proposed models can successfully classify abnormal signal patterns and identify the outage location. For the regression problem, we use the New York City bike-sharing demand dataset to predict station-level hourly demand. The prediction accuracy is superior to other models. We next study graph neural network (GNN) models, which have been widely used for learning graph-structured data. Due to the permutation-invariance requirement of graph learning tasks, a basic element of graph neural networks is the invariant and equivariant linear layers. Previous work by Maron et al. (2019) provided a maximal collection of invariant and equivariant linear layers and a simple deep neural network model, called k-IGN, for graph data defined on k-tuples of nodes. It is shown that the expressive power of k-IGN is equivalent to the k-Weisfeiler-Lehman (WL) algorithm in graph isomorphism tests. However, the dimensions of the invariant layer and equivariant layer are the k-th and 2k-th Bell numbers, respectively. Such high complexity makes k-IGNs computationally infeasible for k > 3. We show that a much smaller dimension for the linear layers is sufficient to achieve the same expressive power. We provide two sets of orthogonal bases for the linear layers, each with only 3(2^k - 1) - k basis elements.
Based on these linear layers, we develop the neural network models GNN-a and GNN-b, and show that for graph data defined on k-tuples of nodes, GNN-a and GNN-b achieve the expressive power of the k-WL algorithm and the (k + 1)-WL algorithm in graph isomorphism tests, respectively. In molecular prediction tasks on benchmark datasets, we demonstrate that low-order neural network models consisting of the proposed linear layers achieve better performance than other neural network models. In particular, order-2 GNN-b and order-3 GNN-a both have 3-WL expressive power, but use a much smaller basis and hence much less computation time than known neural network models. Finally, we study generative neural network models for graphs. Generative models are often used in semi-supervised or unsupervised learning. We address two types of generative tasks. In the first task, we try to generate a component of a large graph, such as predicting whether a link exists between a pair of selected nodes, or predicting the label of a selected node or edge. The encoder embeds the input graph into a latent vector space via vertex embedding, and the decoder uses the vertex embedding to compute the probability of a link or node label. In the second task, we try to generate an entire graph. The encoder embeds each input graph to a point in the latent space; this is called graph embedding. The generative model then generates a graph from a sampled point in the latent space. Different from previous work, we use the proposed equivariant and invariant layers in the inference model for all tasks. The inference model is used to learn vertex/graph embeddings, and the generative model is used to learn the generative distributions. Experiments on benchmark datasets have been performed for a range of tasks, including link prediction, node classification, and molecule generation.
Experiment results show that the high expressive power of the inference model directly improves latent space embedding, and hence the generated samples.
- Title
- X-Ray Diffraction Studies of Activation and Relaxation In Fast and Slow Rat Skeletal Muscle
- Creator
- Gong, Henry M.
- Date
- 2022
- Description
-
The contractile properties of fast-twitch and slow-twitch skeletal muscles are primarily determined by the myosin isoform content and modulated by a variety of sarcomere proteins. X-ray diffraction studies of regulatory mechanisms in muscle contraction have focused predominantly on fast- or mixed-fiber muscle, with slow muscle being much less studied. Here, we used time-resolved x-ray diffraction to investigate the dynamic behavior of the myofilament proteins in relatively pure slow-fiber rat soleus (SOL) and pure fast-fiber rat extensor digitorum longus (EDL) muscle during twitch and tetanic contractions at optimal length (Lo), 95% Lo, and 90% Lo. Before the delivery of stimulation, reduction in muscle length led to a decrease in passive tension. Upon reduction in length, the x-ray reflections showed no transition of the myosin heads from the ordered OFF state, where heads are held close to the thick filament backbone, to disordered ON states, where heads are free to bind to the thin filament, in either muscle. When stimulation was delivered to both muscles for twitch contractions at Lo, x-ray signatures indicating the transition of myosin heads to ON states were observed in EDL but not in soleus muscle. During tetanic contractions, the change in the disposition of myosin heads as active tension develops is a cooperative process in EDL muscle, whereas in soleus muscle this relationship is less cooperative. Moreover, this high cooperativity was maintained in EDL at all lengths tested here, but cooperativity decreased upon reduction in length in soleus. The observed reduced extensibility of the thick filaments in soleus muscle as compared to EDL muscle indicates a molecular basis for this behavior. These data indicate that for the EDL, thick filament activation is a cooperative strain-induced mechano-sensing mechanism, whereas for the soleus, thick filament activation has a more graded response.
Lastly, x-ray data collected at different lengths demonstrated that the effect of length on soleus is more pronounced than on EDL, particularly in the thick filament during the relaxation phase after stimulation ceased. These observations indicate that soleus is more length-dependent than EDL. These different approaches to thick filament regulation in fast- and slow-twitch muscles may be designed to allow for short-duration, strong contractions versus sustained, finely controlled contractions, respectively.
- Title
- Pressure Feedback Control on a UCAS Model in Random Gusts
- Creator
- He, Xiaowei
- Date
- 2021
- Description
-
This research focuses on efficient active flow control (AFC) of the aerodynamic loads on a generic tailless delta wing in various flow/flight conditions, such as flying through atmospheric gusts, fast pitching, and other rapid maneuvers that would cause the aircraft to experience unsteady aerodynamic effects. A feedback control scheme is put forward that uses surface pressure measurements to estimate the actual aerodynamic loads acting on the aircraft, with the hypothesis that a pressure surrogate can replace inertia-based sensors to provide the controller with faster and/or more accurate feedback signals of the real-time aerodynamic load. The control performance of the AFC actuation and conventional elevons was evaluated. Results showed that AFC with a momentum coefficient input of 2% was equivalent to a 27-deg elevon deflection in terms of roll moment change, and that the control derivative of the AFC is at least double that of the elevons. Streamwise and cross-flow gusts were simulated in the Andrew Fejer Unsteady Wind Tunnel at IIT. A spectral feedback approach was tested by generating the horizontal velocity components of the von Karman and Dryden turbulence spectra. The velocity components in the test section were controlled temporally and spatially to generate transverse cross-flow gusts with designated wavelengths and frequencies. Sparse surface pressure measurements on the aircraft surface were used to develop lower-order models that estimate the instantaneous aerodynamic loads using the Sparse Identification of Nonlinear Dynamics (SINDy) algorithm. The pressure-based models acted as surrogates for the aerodynamic loads, providing feedback signals to the closed-loop controller to alleviate gust effects on the wing.
The control results showed that the pressure feedback scheme was sufficient to provide feedback signals to the controller, reducing the roll moment fluctuations caused by the dynamic perturbations to 20%, compared with the 30% to 50% reported in previous studies.
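At the core of SINDy is a sparse regression over a library of candidate functions, typically solved by sequentially thresholded least squares. A minimal sketch of that regression step, recovering a hypothetical one-state system rather than the wind-tunnel pressure data, might look like:

```python
import numpy as np

def stlsq(Theta, dXdt, threshold=0.1, n_iter=10):
    """Sequentially thresholded least squares, the core regression in
    SINDy: solve Theta @ Xi ~= dXdt, then repeatedly zero out small
    coefficients and refit on the surviving library terms."""
    Xi = np.linalg.lstsq(Theta, dXdt, rcond=None)[0]
    for _ in range(n_iter):
        small = np.abs(Xi) < threshold
        Xi[small] = 0.0
        for k in range(dXdt.shape[1]):
            big = ~small[:, k]
            if big.any():
                Xi[big, k] = np.linalg.lstsq(
                    Theta[:, big], dXdt[:, k], rcond=None)[0]
    return Xi

# Recover dx/dt = -2 x from noiseless samples of x(t) = exp(-2 t).
t = np.linspace(0.0, 1.0, 50)
x = np.exp(-2.0 * t)
library = np.column_stack([np.ones_like(x), x, x ** 2])  # [1, x, x^2]
Xi = stlsq(library, (-2.0 * x)[:, None])
```

The thresholding is what yields a lower-order model: only the few library terms that actually explain the dynamics survive, which is why sparse surface-pressure measurements can suffice as inputs.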
- Title
- AI IN MEDICINE: ENABLING INTELLIGENT IMAGING, PROGNOSIS, AND MINIMALLY INVASIVE SURGERY
- Creator
- Getty, Neil
- Date
- 2022
- Description
-
While an extremely rich research field, AI in medicine has been much slower to be applied in real-world clinical settings than other applications of AI such as natural language processing (NLP) and image processing/generation. Often the stakes of failure are more dire, access to private and proprietary data is more costly, and the burden of proof required by expert clinicians is much higher. Beyond these barriers, the typical data-driven approach to validation is interrupted by a need for expertise to analyze results. Whereas the results of a trained Imagenet or machine translation model are easily verified by a computational researcher, analysis in medicine can be far more demanding across disciplines. AI in medicine is motivated by a great demand for progress in healthcare, but an even greater responsibility for high accuracy, model transparency, and expert validation. This thesis develops machine and deep learning techniques for medical image enhancement, patient outcome prognosis, and minimally invasive robotic surgery awareness and augmentation. Each of the works presented was undertaken in direct collaboration with medical domain experts, and the efforts could not have been completed without them. Pursuing medical image enhancement, we worked with radiologists, neuroscientists, and a neurosurgeon. In patient outcome prognosis, we worked with clinical neuropsychologists and a cardiovascular surgeon. For robotic surgery, we worked with surgical residents and a surgeon expert in minimally invasive surgery. Each of these collaborations guided priorities for problem and model design, analysis, and long-term objectives that ground this thesis as a concerted effort toward clinically actionable medical AI. The contributions of this thesis focus on three specific medical domains.
(1) Deep learning for medical brain scans: we developed processing pipelines and deep learning models for image annotation, registration, segmentation, and diagnosis in both traumatic brain injury (TBI) and brain tumor cohorts. A major focus of these works is the efficacy of low-data methods and techniques for validating results without any ground truth annotations. (2) Outcome prognosis for TBI and risk prediction for cardiovascular disease (CVD): we developed feature extraction pipelines and models for TBI and CVD patient clinical outcome prognosis and risk assessment. We design risk prediction models for CVD patients using traditional Cox modeling, machine learning, and deep learning techniques. In these works we conduct exhaustive data and model ablation studies, with a focus on feature saliency analysis, model transparency, and the use of multi-modal data. (3) AI for enhanced and automated robotic surgery: we developed computer vision and deep learning techniques for understanding and augmenting minimally invasive robotic surgery scenes. We developed models to recognize surgical actions from vision and kinematic data. Beyond models and techniques, we also curated novel datasets and prediction benchmarks from simulated and real endoscopic surgeries. We show the potential for self-supervised techniques in surgery, as well as multi-input and multi-task models.
- Title
- Corporate Insider Holdings and Analyst Recommendations
- Creator
- Gogolak, William Peter
- Date
- 2022
- Description
-
I pursued two competing theories about insider stock holding levels and analyst recommendations. The complementary hypothesis states that top management and analysts act in a comparable manner; the contradicting hypothesis states that insiders and analysts exhibit opposite market actions (Hsieh and Ng, 2019). I examined insider stock holding levels and analyst recommendations in a sample of S&P 500 firms from 2011-2020. In this sample, I found that the relationship between insider holding levels and analyst recommendations is opposite in concurrent time periods, thus supporting the contradicting hypothesis. I also analyzed lagged insider holding levels in a Granger causality test. This test supports the idea that top management stock holdings increase when analysts downgrade stocks, and that the opposite effect is true when analysts upgrade stocks. Using the same sample of S&P 500 firms from 2011-2020, I provided support for my hypothesis that aggregated analyst recommendations forecast future aggregate equity returns. Furthermore, I conducted a test supporting my conclusion that changes to insider holding levels should be used to forecast changes in future equity returns, beyond what is already explained by analyst recommendations. I make two compelling additions to the existing body of work on aggregate stock prediction. First, I build upon existing papers by using Bloomberg aggregate analyst recommendations rather than the IBES datasets. Second, I expand upon recent index forecasting papers by incorporating both aggregate analyst recommendations and aggregate insider holding levels into aggregate stock return models.
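A Granger causality test asks whether lagged values of one series improve prediction of another beyond the target's own lags, via an F-test on restricted versus unrestricted regressions. A minimal one-lag version, run on synthetic series rather than the holdings/recommendations data, can be sketched as:

```python
import numpy as np

def granger_f(y, x, lag=1):
    """One-lag Granger-causality sketch: F statistic comparing a
    regression of y on its own lag (restricted) against y on its own
    lag plus the lag of x (unrestricted). A larger F means stronger
    evidence that x Granger-causes y."""
    y, x = np.asarray(y, float), np.asarray(x, float)
    yt, y1, x1 = y[lag:], y[:-lag], x[:-lag]

    def rss(cols):
        A = np.column_stack(cols)
        beta = np.linalg.lstsq(A, yt, rcond=None)[0]
        r = yt - A @ beta
        return float(r @ r), A.shape[1]

    ones = np.ones_like(yt)
    rss_r, k_r = rss([ones, y1])      # restricted: own lag only
    rss_u, k_u = rss([ones, y1, x1])  # unrestricted: plus lag of x
    n = len(yt)
    return ((rss_r - rss_u) / (k_u - k_r)) / (rss_u / (n - k_u))

# Synthetic example: x drives y with a one-step delay.
rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = np.zeros(200)
for t in range(1, 200):
    y[t] = 0.8 * x[t - 1] + 0.1 * rng.normal()
f_xy = granger_f(y, x)  # large: x Granger-causes y
f_yx = granger_f(x, y)  # small: y does not Granger-cause x
```

In practice the dissertation-style test would use multiple lags and report p-values from the F distribution; this sketch only illustrates the restricted-versus-unrestricted comparison at its core.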
- Title
- Deep Learning and Model Predictive Methods for the Control of Fuel-Flexible Compression Ignition Engines
- Creator
- Peng, Qian
- Date
- 2022
- Description
-
Compression-ignited diesel engines are widely used for transportation and power generation because of their high fuel efficiency. However, diesel engines cause significant environmental pollution through their high nitrogen oxide (NOx) and soot emissions. In addition to meeting stringent emission regulations, the demand to reduce greenhouse gas emissions has become urgent due to the more frequent destructive catastrophes caused by global warming in recent decades. In an effort to reduce emissions and improve fuel economy, many techniques have been developed and investigated by researchers. Air handling systems such as exhaust gas recirculation and variable geometry turbochargers are the most widely used techniques on the market for modern diesel engines. Meanwhile, the concept of low temperature combustion is widely investigated by researchers. Low temperature combustion can increase the portion of pre-mixed fuel-air combustion to reduce the peak in-cylinder temperature, so that the formation of NOx is suppressed. Furthermore, the combustion characteristics and performance of bio-derived fuel blends are also studied, to reduce overall greenhouse gas emissions through reduced usage of fossil fuels. All of the above-mentioned systems are complicated because they involve not only chemical reactions but also complex fluid motion and mixing processes. As such, the control of these systems is challenging and limits their commercial application. Currently, most control methods are feed-forward controls based on load condition and engine speed, owing to their simplicity in real-time application. With the development of faster control units and deep learning techniques, the application of more complex control algorithms becomes possible, further improving emissions and fuel economy.
This work focuses on improvements to the control of engine air handling systems and of combustion processes that leverage alternative fuels. Complex air handling systems, featuring technologies such as exhaust gas recirculation (EGR) and variable geometry turbochargers (VGTs), are commonly used in modern diesel engines to meet stringent emissions and fuel economy requirements. The control of diesel air handling systems with EGR and VGTs is challenging because of their nonlinearity and coupled dynamics. In this thesis, artificial neural networks (ANNs) and recurrent neural networks (RNNs) are applied to control the low pressure (LP) EGR valve position and VGT vane position simultaneously on a light-duty multi-cylinder diesel engine. In addition, experimental examination of a low temperature combustion mode based on gasoline compression ignition, as well as its control, has also been studied in this work. This type of combustion has been explored on traditional diesel engines in order to meet increasingly stringent emission regulations without sacrificing efficiency. In this study, a six-cylinder heavy-duty diesel engine was operated in a mixing-controlled gasoline compression ignition mode to investigate the influence of fuels and injection strategies on the combustion characteristics, emissions, and thermal efficiencies. Fuels including ethanol (E), isobutanol (IB), and diisobutylene (DIB) were blended with a gasoline fuel to form E10, E30, IB30, and DIB30 based on volumetric fraction. These four blends along with gasoline formed the five test fuels. With these fuels, three injection strategies were investigated: late pilot injection, early pilot injection, and port fuel injection/direct injection. The impact of moderate exhaust gas recirculation on nitrogen oxide and soot emissions was examined to determine the most promising fuel/injection strategy for emissions reduction.
In addition, first and second law analyses were performed to provide insights into the efficiency, losses, and exergy destruction of the various gasoline fuel blends at low and medium load conditions. Overall, the emissions output, thermal efficiency, and combustion performance of the five fuels were found to be similar, and their differences are modest under most test conditions. While the experimental work showed that low temperature combustion with alternative fuels could be effective, control is still challenging due to not only the properties of different gasoline-type fuels but also the impacts of injection strategies on in-cylinder reactivity. As such, a computationally efficient zero-dimensional combustion model can significantly reduce the cost of control development. In this study, a previously developed zero-dimensional combustion model for gasoline compression ignition was extended to multiple gasoline-type fuel blends and a port fuel injection/direct fuel injection strategy. Tests were conducted on a 12.4-liter heavy-duty engine with five fuel blends. A modification was made to the functional ignition delay model to cover the significantly different ignition delay behavior between conventional and oxygenated fuel blends. The parameters in the model were calibrated with only gasoline data at a load of 14 bar brake mean effective pressure. The results showed that this physics-based model can be applied to the other four fuel blends at three different pilot injection strategies without recalibration. In order to also facilitate the control of emissions, machine learning models were investigated to capture NOx emissions. A kernel-based extreme learning machine (K-ELM) performed best, with a coefficient of determination (R-squared) of 0.998. The combustion and NOx emission models are valid not only for conventional gasoline fuel but also for oxygenated alternative fuel blends at three different pilot injection strategies.
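A kernel-based ELM amounts to kernel ridge regression: the output weights solve (K + I/C) alpha = y, where K is the kernel Gram matrix and 1/C the regularization. The RBF kernel, gamma, C, and toy data below are illustrative assumptions, not the thesis's NOx calibration:

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """RBF (Gaussian) kernel matrix between row-sample matrices A, B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kelm_fit(X, y, gamma=1.0, C=1000.0):
    """K-ELM training: solve (K + I/C) alpha = y; 1/C is the ridge
    regularization that keeps the solve well conditioned."""
    K = rbf_kernel(X, X, gamma)
    return np.linalg.solve(K + np.eye(len(X)) / C, y)

def kelm_predict(X_train, alpha, X_new, gamma=1.0):
    """Predict by weighting kernel similarities to the training set."""
    return rbf_kernel(X_new, X_train, gamma) @ alpha

# Toy regression target y = x^2 on a small grid.
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0.0, 1.0, 4.0, 9.0])
alpha = kelm_fit(X, y)
pred = kelm_predict(X, alpha, X)
```

Because training reduces to a single regularized linear solve, K-ELM is far cheaper to fit than an iteratively trained network, which helps when sweeping fuels and injection strategies.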
In order to track key combustion metrics while keeping noise and emissions within constraints, model predictive control (MPC) was applied to a compression ignition engine operating with a range of potential fuels and fuel injection strategies. The MPC is validated under different scenarios, including a load step change, a fuel type change, and an injection strategy change, with proportional-integral (PI) control as the baseline. The simulation results show that MPC can optimize overall performance by modifying the main injection timing, pilot fuel mass, and exhaust gas recirculation (EGR) fraction.
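The receding-horizon idea behind MPC, optimize a control sequence over a short horizon, apply the first move, then repeat from the new state, can be illustrated on a toy scalar plant. The dynamics, costs, and setpoint below are invented for illustration and are unrelated to the engine model:

```python
import numpy as np

def mpc_step(x, x_ref, horizon=5, candidates=np.linspace(-1, 1, 21),
             a=0.9, b=0.5, q=1.0, r=0.1):
    """Toy receding-horizon MPC for x[t+1] = a*x + b*u: pick the first
    control of the constant-input sequence that minimizes tracking
    cost plus control effort over the horizon (brute force, scalar)."""
    best_u, best_cost = 0.0, np.inf
    for u in candidates:
        xs, cost = x, 0.0
        for _ in range(horizon):
            xs = a * xs + b * u
            cost += q * (xs - x_ref) ** 2 + r * u ** 2
        if cost < best_cost:
            best_u, best_cost = u, cost
    return best_u

# Closed loop: drive the state toward the setpoint from x = 0.
x, x_ref = 0.0, 1.0
for _ in range(30):
    u = mpc_step(x, x_ref)
    x = 0.9 * x + 0.5 * u
```

A real engine MPC optimizes several coupled inputs (injection timing, pilot mass, EGR fraction) under explicit noise and emissions constraints; the enumeration above only shows the predict-optimize-apply loop that distinguishes MPC from a PI baseline.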
- Title
- Integrating Provenance Management and Query Optimization
- Creator
- Niu, Xing
- Date
- 2021
- Description
-
Provenance, information about the origin of data and the queries and/or updates that produced it, is critical for debugging queries and transactions, auditing, establishing trust in data, and many other use cases. While how to model and capture the provenance of database queries has been studied extensively, optimization remains an important problem in provenance management, which encompasses storing, capturing, and querying provenance. However, previous work has focused almost exclusively on compressing provenance to reduce storage cost; little work has addressed optimizing the provenance capture process itself. Many approaches capture database provenance using the SQL query language and represent provenance information as a standard relation. However, even sophisticated query optimizers often fail to produce efficient execution plans for such queries because of their complexity and uncommon structure. To address this problem, we study algebraic equivalences and alternative ways of generating queries for provenance capture. Furthermore, we present an extensible heuristic and cost-based optimization framework utilizing these optimizations. While provenance has been well studied, no database optimizer uses provenance information to optimize query processing. Intuitively, provenance records exactly what data is relevant for a query. We can use this property of provenance to identify and filter out irrelevant input data early, so that query processing is sped up: instead of fully accessing the input dataset, we run the query only on the relevant input data. In this work, we develop provenance-based data skipping (PBDS), a novel approach that generates provenance sketches, which are concise encodings of what data is relevant for a query.
In addition, a provenance sketch captured for one query is used to speed up subsequent queries, possibly by utilizing physical design artifacts such as indexes and zone maps. The work we present in this thesis demonstrates a tight integration between provenance management and query optimization can lead a significant performance improvement of query processing as well as traditional database management task.
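The data-skipping idea described above can be sketched in a few lines. This is a toy illustration, not the thesis's actual implementation: it assumes a table horizontally partitioned into fixed-size fragments, and all names (`provenance_sketch`, `run_with_sketch`) are hypothetical.

```python
# Toy sketch of provenance-based data skipping (PBDS): record which fragments
# of a partitioned table contain rows relevant to a query, then reuse that
# "sketch" to skip irrelevant fragments when running a subsequent query.
FRAGMENT_SIZE = 4

def fragments(table):
    """Split a table (list of rows) into fixed-size fragments."""
    return [table[i:i + FRAGMENT_SIZE] for i in range(0, len(table), FRAGMENT_SIZE)]

def provenance_sketch(table, predicate):
    """Run the query once, recording which fragments held relevant rows."""
    return {frag_id for frag_id, frag in enumerate(fragments(table))
            if any(predicate(row) for row in frag)}

def run_with_sketch(table, predicate, sketch):
    """Re-run a (subsumed) query, skipping fragments the sketch rules out."""
    result = []
    for frag_id, frag in enumerate(fragments(table)):
        if frag_id not in sketch:
            continue  # data skipping: fragment provably irrelevant
        result.extend(row for row in frag if predicate(row))
    return result

# Rows: (id, price)
table = [(i, i * 10) for i in range(16)]
sketch = provenance_sketch(table, lambda r: r[1] >= 120)
# A later, more selective query can reuse the sketch:
out = run_with_sketch(table, lambda r: r[1] >= 140, sketch)
```

In a real system the sketch would be an encoding over physical design artifacts (index ranges, zone maps) rather than explicit fragment ids, but the principle, touching only provably relevant data, is the same.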
- Title
- Extreme Fine-grained Parallelism On Modern Many-Core Architectures
- Creator
- Nookala, Poornima
- Date
- 2022
- Description
-
Processors with hundreds of threads of execution and GPUs with thousands of cores are among the state of the art in high-end computing systems. This transition to many-core computing has required the community to develop new algorithms that overcome significant latency bottlenecks through massive concurrency. Implementing efficient parallel runtimes that can scale to hundreds of threads with extremely fine-grained tasks (less than 100 microseconds) remains a challenge. We propose XQueue, a novel lockless concurrent queueing system that scales to hundreds of threads. We integrate XQueue into LLVM OpenMP and implement X-OpenMP, a library for lightweight tasking on modern many-core systems with hundreds of cores. We show that it is possible to implement a parallel execution model using lock-free techniques that enables applications to scale strongly on many-core architectures. While the fork-join model is suitable for on-node parallelism, its joins and synchronization induce artificial dependencies that can lead to underutilization of resources. Data-flow parallelism overcomes these limitations of fork-join parallelism by specifying dependencies at a finer granularity. It is also crucial for parallel runtime systems to support heterogeneous platforms in order to better utilize the hardware resources available in modern supercomputers. Existing parallel programming environments that support distributed memory either discover the DAG entirely on all processes, which limits scalability, or introduce explicit communication, which increases programming complexity. We implement Template Task Graph (TTG), a novel programming model and its C++ implementation, by marrying the ideas of control-flow and data-flow graph programming.
TTG addresses performance portability without sacrificing scalability or programmability by providing higher-level abstractions than those conventionally offered by task-centric programming systems, without impeding the ability of these runtimes to efficiently manage task creation and execution as well as data and resources. The TTG implementation currently supports distributed-memory execution over two different task runtimes, PaRSEC and MADNESS.
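The contrast drawn above between fork-join and dependency-driven execution can be illustrated with a minimal data-flow scheduler. This is not the TTG or X-OpenMP API, just a sketch using Python's standard-library `graphlib`: each task fires as soon as its declared predecessors have produced their values, with no artificial join barriers.

```python
# Minimal data-flow task-graph executor: tasks declare their predecessors,
# and each task runs as soon as all of its inputs are available.
from graphlib import TopologicalSorter

def run_dataflow(tasks, deps):
    """tasks: name -> fn(inputs dict) -> value; deps: name -> set of predecessors."""
    ts = TopologicalSorter(deps)
    results = {}
    ts.prepare()
    while ts.is_active():
        # All tasks in `ready` are independent and could run concurrently
        # on separate cores; here we run them sequentially for clarity.
        for name in list(ts.get_ready()):
            inputs = {d: results[d] for d in deps.get(name, ())}
            results[name] = tasks[name](inputs)
            ts.done(name)
    return results

tasks = {
    "a":   lambda inp: 2,
    "b":   lambda inp: 3,
    "mul": lambda inp: inp["a"] * inp["b"],  # fires only once a and b are done
}
deps = {"mul": {"a", "b"}}
out = run_dataflow(tasks, deps)
```

Because dependencies are declared per task rather than implied by nested fork-join scopes, independent tasks like `a` and `b` never wait on one another.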
- Title
- Towards a Secure and Resilient Smart Grid Cyberinfrastructure Using Software-Defined Networking
- Creator
- Qu, Yanfeng
- Date
- 2022
- Description
-
To enhance the cyber-resilience and security of the smart grid against malicious attacks and system errors, we present a software-defined networking (SDN)-based communication architecture for smart grid operation. Our design leverages SDN technology, which improves network manageability and provides application-oriented visibility and direct programmability, to deploy multiple SDN-aware applications that enhance grid security and resilience, including optimization-based network management to recover Phasor Measurement Unit (PMU) network connectivity and restore power system observability, and flow-based anomaly detection combined with optimization-based network management to mitigate Manipulation-of-Demand via IoT (MadIoT) attacks. We also developed a prototype system in a cyber-physical testbed and conducted extensive evaluation experiments using the IEEE 30-bus system, the IEEE 118-bus system, and the IIT campus microgrid.
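To make the flow-based anomaly detection idea concrete, here is a deliberately simple stand-in, assuming per-flow byte rates collected from SDN switch counters and flagging flows whose rate deviates sharply from a learned baseline. The z-score rule and all flow names are illustrative; the thesis's actual detection method is not reproduced here.

```python
# Illustrative flow-based anomaly detector: flag flows whose current rate
# deviates more than z_threshold standard deviations from the baseline.
from statistics import mean, stdev

def anomalous_flows(baseline_rates, current, z_threshold=3.0):
    """baseline_rates: rates observed under normal load;
    current: flow name -> current rate. Returns the set of flagged flows."""
    mu, sigma = mean(baseline_rates), stdev(baseline_rates)
    return {flow for flow, rate in current.items()
            if abs(rate - mu) > z_threshold * sigma}

baseline = [100, 104, 98, 101, 99, 103, 97, 102]   # bytes/s, normal operation
current = {"pmu-1": 101, "pmu-2": 99, "iot-botnet": 950}
flagged = anomalous_flows(baseline, current)
```

An SDN controller could feed flagged flows into the optimization-based network management layer, which then installs rules to rate-limit or isolate them.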
- Title
- ROBUST AND EXPLAINABLE RESULTS UTILIZING NEW METHODS AND NON-LINEAR MODELS
- Creator
- Onallah, Amir
- Date
- 2022
- Description
-
This research focuses on the robustness and explainability of new methods and of nonlinear analysis compared with traditional methods and linear analysis. Further, it demonstrates that making assumptions, reducing the data, or simplifying the problem negatively affects outcomes. The study utilizes the U.S. Patent Inventor database and the Medical Innovation dataset. Initially, we employ time-series models to enhance the quality of results for event history analysis (EHA), add insights, and draw meanings, explanations, and conclusions. We then introduce newer machine learning algorithms, including machine learning with a time-to-event element, to offer more robust methods than previous papers and to reach optimal solutions by removing assumptions and simplifications of the problem, combining all data that encompasses the maximum available knowledge, and providing nonlinear analysis.
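The time-to-event element mentioned above rests on survival-analysis machinery; a minimal Kaplan-Meier estimator shows the core computation. This is a generic sketch of the standard estimator, with made-up data, not the study's models or results.

```python
# Minimal Kaplan-Meier survival estimator.
def kaplan_meier(times, events):
    """times: observation times; events: 1 = event occurred, 0 = censored.
    Returns [(t, S(t))] at each distinct event time."""
    s, curve = 1.0, []
    for t in sorted({x for x, e in zip(times, events) if e == 1}):
        at_risk = sum(1 for x in times if x >= t)          # still under observation
        d = sum(1 for x, e in zip(times, events) if x == t and e == 1)
        s *= 1 - d / at_risk                               # survival drops at each event
        curve.append((t, s))
    return curve

# 5 subjects: events at t=2, t=4, t=5; censoring at t=3 and t=5
curve = kaplan_meier([2, 3, 4, 5, 5], [1, 0, 1, 1, 0])
```

The estimator's key property, and the reason censored records need not be discarded, is that censored subjects still count in the at-risk set until they drop out, so no data reduction is required.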
- Title
- Automated Successive Baseline Schedule Model
- Creator
- Patel, Mihir Prakashbhai
- Date
- 2021
- Description
-
Construction projects involve many stakeholders and diverse phases. A construction schedule is usually set up initially as a simple, ideal-case scenario, but during construction the project faces modifications such as delays, acceleration, and changes in logic caused by the project's complexity and inherent risk. To recover the damages caused by these modifications, the parties responsible for them must be identified accurately. Researchers and practitioners have developed and used various delay analysis models to quantify delays, but the choice of model depends on the time of analysis, the available information, and the expertise of the analyst, so the results can be biased. The general problem is that most delay analysis models consider only delays in quantifying impacts, rather than every type of modification that affected the project, including CPM logic changes and the addition or removal of activities during construction. This study proposes a new successive baseline model that enables precise analysis of the impacts of all kinds of modifications occurring during construction. The model can achieve unbiased and accurate results, and the analysis process can be computerized as a web application to improve efficiency and productivity. The fundamental concepts of the various modifications that can occur in a work schedule during construction, and the analysis of their impacts, are presented in this study. Issues related to concurrency, float ownership, type of modification, selection of a delay analysis model, and the challenges of automation are also highlighted to broaden understanding of disagreements between parties to a construction contract. A case example is presented to demonstrate the accuracy and usefulness of the proposed model and web application.
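A successive baseline model repeatedly recomputes the CPM network after each schedule modification; the underlying pass is sketched below. The activities and durations are made up for illustration, and the code assumes activities are listed in precedence order, so this is a generic CPM computation, not the thesis's model.

```python
# CPM forward/backward pass: earliest starts, project finish, and total float.
def cpm(durations, preds):
    """durations: activity -> duration; preds: activity -> set of predecessors.
    Activities must be listed in precedence order."""
    order = list(durations)
    es = {}                                       # earliest start (forward pass)
    for a in order:
        es[a] = max((es[p] + durations[p] for p in preds.get(a, ())), default=0)
    finish = max(es[a] + durations[a] for a in order)
    lf = {}                                       # latest finish (backward pass)
    for a in reversed(order):
        succs = [b for b in order if a in preds.get(b, ())]
        lf[a] = min((lf[b] - durations[b] for b in succs), default=finish)
    total_float = {a: lf[a] - durations[a] - es[a] for a in order}
    return finish, total_float

durations = {"A": 3, "B": 2, "C": 4, "D": 1}
preds = {"C": {"A", "B"}, "D": {"C"}}
finish, tf = cpm(durations, preds)                # critical path: A -> C -> D
```

Rerunning this pass on each successive baseline, after inserting a delay, changing a logic link, or adding/removing an activity, and comparing the resulting finish dates and floats is what lets the impacts of each modification be attributed individually.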
- Title
- New Insights to Thermoelectrics from Fundamental Transport Properties to Potential Materials and Device Design
- Creator
- Pan, Zhenyu
- Date
- 2022
- Description
-
Thermoelectric (TE) materials have been widely studied for their ability to convert directly between heat and electricity. However, their conversion efficiency remains low compared with conventional devices, in power generation and in electrical cooling alike. Most efforts have therefore been directed at improving the zT of TE materials, the commonly accepted metric of TE material performance. Progress is slow because the key parameters governing zT are interrelated, so improving one often comes at the cost of the others, which narrows the range of TE applications. This thesis therefore does not confine itself to searching for high-zT TE materials; it also explores useful phenomena that were buried or overlooked in previous thermoelectric research, from fundamental transport properties to TE device design. First, we reevaluate the photo-Seebeck effect, which has been known for decades, and demonstrate that it is a powerful tool for semiconductor study: it allows the determination of mobilities, photo-carrier densities, even weighted mobilities (and hence effective masses) of both electrons and holes, and the impact of defects, all from a single sample. We then investigate a newly discovered low-dimensional material, 2D tellurene, which has the potential to decouple the interrelated parameters and achieve a high zT. Lastly, we reconsider whether zT is the only figure of merit determining TE device performance. We hope this thesis sheds some light on thermoelectrics, from fundamental transport properties to device design.
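The interrelation described above is visible directly in the standard definition zT = S²σT/κ: the Seebeck coefficient S, electrical conductivity σ, and thermal conductivity κ are coupled through the carrier concentration, so raising σ tends to lower S and raise κ. The snippet below just evaluates the formula for illustrative, roughly Bi2Te3-like room-temperature values; the numbers are textbook ballpark figures, not results from this thesis.

```python
# Thermoelectric figure of merit: zT = S^2 * sigma * T / kappa.
def zT(seebeck_V_per_K, conductivity_S_per_m, thermal_cond_W_per_mK, temp_K):
    """S in V/K, sigma in S/m, kappa in W/(m*K), T in K; zT is dimensionless."""
    return seebeck_V_per_K ** 2 * conductivity_S_per_m * temp_K / thermal_cond_W_per_mK

# S ~ 200 uV/K, sigma ~ 1e5 S/m, kappa ~ 1.5 W/(m*K), T = 300 K
z = zT(200e-6, 1.0e5, 1.5, 300)
```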
- Title
- How Does Self-Stigma Influence Functionality in People with Serious Mental Illness? A Multiple Mediation Model of "Why-Try" Effect, Coping Resources, and Personal Recovery
- Creator
- Qin, Sang
- Date
- 2022
- Description
-
People with serious mental illness (SMI) face self-stigma effects that often undermine their functionality. Functionality herein refers to a person's execution of tasks (i.e., activities) and engagement in life situations (i.e., participation). This study used a path model to examine three mediating factors between self-stigma and functionality: the "why-try" effect, coping resources, and personal recovery. Specifically, the "why-try" effect was viewed as an extension of self-stigma harm that occurs when people suffer a loss of self-esteem and self-efficacy. Coping resources were conceptualized as individuals' strengths and the support available to them to overcome negative stigma outcomes, particularly stigma stress. Endorsement of personal recovery, namely pursuing self-defined life goals despite illness, was expected to buffer and reduce self-stigma. These three mediators were examined simultaneously using an archival dataset. Due to poor internal consistency, coping resources were removed from the model; the revised model achieved a good fit. Results showed that people with SMI experiencing self-stigma had an enhanced "why-try" effect as well as reduced personal recovery, leading to a decline in functionality. Implications of the results and future research directions are discussed.
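The mediation logic of the path model can be sketched with the product-of-coefficients approach: the indirect effect of self-stigma on functionality through a mediator is the product of the stigma-to-mediator path (a) and the mediator-to-outcome path (b). The sketch below uses simple bivariate slopes and fabricated, perfectly linear data purely for illustration; the study's actual path analysis estimates all paths jointly (with b controlling for the predictor).

```python
# Product-of-coefficients mediation sketch: indirect effect = a * b.
def slope(x, y):
    """Least-squares slope of y regressed on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    den = sum((xi - mx) ** 2 for xi in x)
    return num / den

stigma   = [1, 2, 3, 4, 5]
why_try  = [2, 4, 6, 8, 10]   # mediator rises with stigma (a-path)
function = [10, 8, 6, 4, 2]   # functionality falls with the mediator (b-path)

a = slope(stigma, why_try)
b = slope(why_try, function)
indirect = a * b              # negative: stigma lowers functionality via "why-try"
```

A negative product here mirrors the reported finding: higher self-stigma feeds the "why-try" effect, which in turn depresses functionality.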
- Title
- Decreasing Body Dissatisfaction in Male College Athletes: A Pilot Study of the Male Athlete Body Project
- Creator
- Perelman, Hayley
- Date
- 2020
- Description
-
Body dissatisfaction is associated with marked distress and often precipitates disordered-eating symptomatology. Body dissatisfaction in male athletes is an important area to explore, as research in this field often focuses on eating disorders in female athletes. The current literature on male college athletes suggests that they experience pressures associated with both societal muscular ideals and sport performance. While there is a clear association between drive for muscularity and body dissatisfaction in male college athletes, no study to date has evaluated the efficacy of a body dissatisfaction intervention for this population. The present study therefore investigated the efficacy and feasibility of a pilot intervention program targeting body dissatisfaction in male college athletes. Participants were randomized into an adapted version of the Female Athlete Body Project (the Male Athlete Body Project, MABP) or an assessment-only control condition. A total of 79 male college athletes (39 in the treatment condition) completed the study, for a retention rate of 84.9%. Participants in the experimental group attended three 80-minute group sessions, one per week for three weeks. All participants completed measures of body dissatisfaction, internalization of the body ideal, drive for muscularity, negative affect, and sport confidence at three time points: baseline, post-treatment (three weeks after baseline for the control condition), and one-month follow-up. Hierarchical linear modeling was used to assess differences between conditions across time. Participation in the MABP improved men's satisfaction with specific body parts, drive for muscularity, and body-ideal internalization at post-treatment. Men in the MABP also reported improvements in appearance evaluation and overweight preoccupation at post-treatment and one-month follow-up, and in negative affect at one-month follow-up only.
Improvements in drive for muscularity were retained at one-month follow-up. This study provides preliminary evidence for the feasibility and efficacy of the Male Athlete Body Project.
- Title
- SAFETY AND MOBILITY IMPACTS ASSESSMENT OF THE CHICAGO BIKE LANE PROGRAM
- Creator
- Zhao, Yu
- Date
- 2021
- Description
-
In recent years, cycling as a travel mode has become increasingly popular in large U.S. cities. These cities have also found that promoting the bike mode can mitigate traffic congestion, reduce carbon emissions, and improve residents' quality of life. Therefore, many cities have initiated bike-related programs that promote cycling from all angles, such as establishing shared-bike systems and developing bike-related facilities. In particular, bike lane installation is widely used in large cities as a pivotal component of bike promotion programs. Because bike lanes are installed on the existing network, vehicles' safety and mobility performance may be affected by the change in facilities. This study proposes a methodology to quantify the safety and mobility impacts of bike lane installation on vehicles. The proposed method assesses the safety impact by using predicted crashes in conjunction with field-observed crash data in an empirical Bayes (EB) before-after comparison-group analysis. The mobility impact is captured by comparing segment average travel times before and after bike lane installation. Further, vehicle volume information enters a consumer surplus computation to quantify the change in vehicle safety and mobility performance resulting from the installation. A case study is conducted using a real dataset from the City of Chicago bike lane program. The results reveal that the safety and mobility impacts vary mainly with the type of bike lane installed and its location.
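The EB step mentioned above blends a model prediction with the site's observed crash history, weighting by how dispersed the prediction is. The snippet below shows the standard empirical Bayes combination with illustrative numbers only; the study's actual safety performance functions and data are not reproduced here.

```python
# Empirical Bayes (EB) expected crash count: a weighted blend of the safety
# performance function (SPF) prediction and the site's observed count.
def eb_estimate(predicted, observed, overdispersion):
    """w = 1 / (1 + k * predicted), from the negative-binomial SPF's
    overdispersion parameter k; EB = w * predicted + (1 - w) * observed."""
    w = 1.0 / (1.0 + overdispersion * predicted)
    return w * predicted + (1 - w) * observed

# SPF predicts 4 crashes per period; 7 were observed; k = 0.25
eb = eb_estimate(4.0, 7.0, 0.25)
```

A small overdispersion parameter k pulls the estimate toward the model prediction; a large k trusts the site's own history, which is what corrects for regression-to-the-mean in before-after comparisons.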
- Title
- The Development of a Measure of Public Stigma Towards Adults With Autism
- Creator
- Beedle, Robert Brian
- Date
- 2022
- Description
-
Adults with autism (AwA) report experiences of stigma and discrimination. Yet quantitative research suggests that public attitudes are relatively benign. This discrepancy is compounded by the present lack of a stakeholder-informed, theoretically guided measure of stigma towards AwA. The objective of the present study was to develop such a measure following best-practice survey methodology. First, existing related measures were reviewed for candidate items, yielding 36 draft questions related to the stigma of AwA. Next, seven stakeholders in the AwA community were recruited to provide feedback on their experiences of stigma and discrimination, as well as on the draft items. Following this feedback, draft items were edited, added, or removed based on the participants' lived experiences, resulting in a revised measure of 51 candidate items. Finally, these 51 items underwent a quantitative phase with participants recruited through MTurk (N = 357). Exploratory factor analyses were conducted to generate a data-driven factor structure that reflected stigma theory. The end result was a 20-item, four-factor solution measuring numerous components of stigma, including cognitive components of stigma, blame, positive and negative affect, and comfort with close contact. The resulting measurement tool, titled the Public Stigma towards Adults with Autism Scale (PSAWA), demonstrated strong psychometric properties. The tool has utility for further studying stigma towards AwA and for assessing stigma interventions.
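One of the standard checks behind a claim of "strong psychometric properties" is internal consistency, commonly summarized with Cronbach's alpha. The computation is simple enough to show inline; the item responses below are fabricated for illustration and are not PSAWA data.

```python
# Cronbach's alpha: internal consistency of a multi-item scale.
from statistics import pvariance

def cronbach_alpha(items):
    """items: list of per-item response lists (same respondents, same order).
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    k = len(items)
    item_var = sum(pvariance(it) for it in items)
    totals = [sum(resp) for resp in zip(*items)]   # each respondent's total score
    return k / (k - 1) * (1 - item_var / pvariance(totals))

# 3 items, 4 respondents, highly consistent response patterns
alpha = cronbach_alpha([[1, 2, 4, 5], [1, 3, 4, 5], [2, 2, 4, 5]])
```

Values near 1 indicate that the items covary strongly, i.e., they appear to tap the same construct; conventionally alpha above roughly 0.7 to 0.8 is considered acceptable for a new scale.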
- Title
- Reconditioning Dharavi: A Toolkit of Strategies for Incremental Development
- Creator
- Bhogle, Saylee Deepak
- Date
- 2022
- Description
-
The 2003 Global Report on Human Settlements (UN-Habitat, 2003) defines a "slum" as a densely populated metropolitan area distinguished by a variety of low-income settlements, substandard housing, and squalor. Dharavi, however, is far more than a "slum." In the heart of Mumbai, Dharavi is an economically prosperous and socially active informal town. Though it appears to be a slum filled with squatters, it is a thriving settlement with many different realities and patterns. In recent years, however, the region has become a byword for informal settlements and the urban problems of poor hygiene in developing countries. People's misconceptions about Dharavi stem from a failure to recognize its social capital and economic power: the area encompasses a variety of economic networks, production types, income levels, land tenure arrangements, and religious activities and festivities. Dharavi is made up of 85 separate communities with a strong feeling of belonging and high expectations of stability, improved economic position, and better living standards. It is also clear that these residents are capable of building and enhancing their own shelter when they have the resources to do so. To develop all these qualities, Dharavi's social capital must be recognized and promoted as an asset to the city of Mumbai. A community such as Dharavi requires "urban acupuncture," where mediation of the smallest kind has the greatest effect. Dharavi, like any other "informal" city, requires rigorous examination to be fully comprehended. It is a unique place where a large influx of migrants has managed to build jobs and their own city. My underlying attitude to this location is a conflicting desire both to save and to replace it. The desire to save is linked to the aesthetic of informality as well as the intense sociality, diversity, and productivity of the streets and lanes, a fascinating and diversified urban ensemble.
The desire to eliminate stems from the hopeless state of sanitation, ventilation, light, open space, and congestion. As a result, a reliable strategy for combining the two approaches and finding a workable arrangement should be developed. The government has been trying to redevelop this area for the past 50 years without success. In contrast to the existing redevelopment plan, which promotes uniform top-down development, my concept proposes techniques for progressive self-development, including "bottom-up" finance models and architectural approaches. After identifying various patterns and carefully examining behavior patterns, production systems, and existing community facilities, a toolkit of methods can be built that can be applied in various places and "outboxes." Simple homogeneity of solutions for Dharavi's changing conditions has been avoided. Dharavi's current identity and "mixed-use" paradigm have been respected, with the home recognized as an instrument of production. The proposed design has been tested against various environmental factors using different tools for natural lighting and ventilation. The outdoor areas are also analyzed for thermal comfort, since many social activities take place in these areas. Communal areas have been designed to accommodate micro-infrastructure systems while also increasing productivity. As a result, a system of self-development triggers has been created that can improve present conditions while also supporting the community's need for stability. Simultaneously, by focusing on property ownership as an economic driver, the proposed approach can provide a form of "social mobility" for Dharavi's residents.