Search results
(9,181 - 9,200 of 9,434)
Pages
- Title
- Distinctive Categorization Deficits in Repeated Sorting of Common Household Objects in Hoarding Disorder
- Creator
- Hamilton, Catharine Elizabeth
- Date
- 2022
- Description
-
The present study examines sorting techniques and deficits among individuals with hoarding disorder (n = 34) compared to age- and gender-matched adults (n = 35) in the general population. Performance was compared on the Booklet Category Test (BCT), selected other neuropsychological measures, and an ecologically valid sorting task designed for the study to model the Delis-Kaplan Executive Function System (D-KEFS) Sorting subtest but with common household objects as stimuli. Contrary to predictions, individuals with hoarding disorder did not perform significantly worse than controls on the BCT or the sorting task designed for the present study. Also contrary to predictions, the hoarding group performed significantly better when initiating their own sorts of the objects than when tasked with naming categories grouped by the researcher. These findings are discussed, along with exploratory analyses suggesting that participants with hoarding disorder put forth more mental effort when sorting the household objects (shoes and mail): they provided significantly more individual responses on the task, with significantly more description errors. IQ and performance on other selected neuropsychological measures were not significantly different between groups. These findings provide preliminary evidence that there may be specific types of real-life sorting difficulties associated with hoarding disorder that are subtle and beyond what existing neuropsychological tests can measure. Given that current CBT treatments for hoarding presuppose a certain level of competency in sorting (e.g., recognizing and naming different categories of household items to complete a personal organizing plan), it is important to clarify potential sorting and categorization deficits in this group as one possible avenue to help improve treatment response among individuals struggling with hoarding disorder.
- Title
- Machine Learning On Graphs
- Creator
- He, Jia
- Date
- 2022
- Description
-
Deep learning has revolutionized many machine learning tasks in recent years. Successful applications range from computer vision and natural language processing to speech recognition. The success is due partly to the availability of large amounts of data and fast-growing computing resources (i.e., GPUs and TPUs), and partly to recent advances in deep learning technology. Neural networks, in particular, have been successfully used to process regular data such as images and videos. However, for many applications with graph-structured data, the irregular structure of graphs means that many powerful operations in deep learning cannot be readily applied. In recent years, there has been growing interest in extending deep learning to graphs. We first propose graph convolutional networks (GCNs) for the task of classification or regression on time-varying graph signals, where the signal at each vertex is given as a time series. An important element of the GCN design is filter design. We consider filtering signals in either the vertex (spatial) domain or the frequency (spectral) domain. Two basic architectures are proposed. In the spatial GCN architecture, the GCN uses a graph shift operator as the basic building block to incorporate the underlying graph structure into the convolution layer. The spatial filter directly utilizes the graph connectivity information: it defines the filter to be a polynomial in the graph shift operator to obtain convolved features that aggregate the neighborhood information of each node. In the spectral GCN architecture, a frequency filter is used instead. A graph Fourier transform operator or a graph wavelet transform operator first transforms the raw graph signal to the spectral domain; the spectral GCN then uses the coefficients from the graph Fourier transform or graph wavelet transform to compute the convolved features. The spectral filter is defined using the graph's spectral parameters.
There are additional challenges in processing time-varying graph signals, as the signal value at each vertex changes over time. The GCNs are designed to recognize different spatiotemporal patterns from high-dimensional data defined on a graph. The proposed models have been tested on simulation data and real data for graph signal classification and regression. For the classification problem, we consider the power line outage identification problem using simulation data. The experiment results show that the proposed models can successfully classify abnormal signal patterns and identify the outage location. For the regression problem, we use the New York City bike-sharing demand dataset to predict station-level hourly demand; the prediction accuracy is superior to that of other models. We next study graph neural network (GNN) models, which have been widely used for learning graph-structured data. Due to the permutation-invariance requirement of graph learning tasks, a basic element of graph neural networks is the invariant and equivariant linear layers. Previous work by Maron et al. (2019) provided a maximal collection of invariant and equivariant linear layers and a simple deep neural network model, called k-IGN, for graph data defined on k-tuples of nodes. It is shown that the expressive power of k-IGN is equivalent to the k-Weisfeiler-Lehman (WL) algorithm in graph isomorphism tests. However, the dimensions of the invariant layer and equivariant layer are the k-th and 2k-th Bell numbers, respectively. Such high complexity makes k-IGNs computationally infeasible for k > 3. We show that a much smaller dimension for the linear layers is sufficient to achieve the same expressive power. We provide two sets of orthogonal bases for the linear layers, each with only 3(2k − 1) − k basis elements.
Based on these linear layers, we develop neural network models GNN-a and GNN-b, and show that for graph data defined on k-tuples of nodes, GNN-a and GNN-b achieve the expressive power of the k-WL algorithm and the (k + 1)-WL algorithm in graph isomorphism tests, respectively. In molecular prediction tasks on benchmark datasets, we demonstrate that low-order neural network models consisting of the proposed linear layers achieve better performance than other neural network models. In particular, order-2 GNN-b and order-3 GNN-a both have 3-WL expressive power, but use a much smaller basis and hence much less computation time than known neural network models. Finally, we study generative neural network models for graphs. Generative models are often used in semi-supervised or unsupervised learning. We address two types of generative tasks. In the first task, we try to generate a component of a large graph, such as predicting whether a link exists between a pair of selected nodes, or predicting the label of a selected node/edge. The encoder embeds the input graph to a latent vector space via vertex embedding, and the decoder uses the vertex embedding to compute the probability of a link or node label. In the second task, we try to generate an entire graph. The encoder embeds each input graph to a point in the latent space; this is called graph embedding. The generative model then generates a graph from a sampled point in the latent space. Different from previous work, we use the proposed equivariant and invariant layers in the inference model for all tasks. The inference model is used to learn vertex/graph embeddings, and the generative model is used to learn the generative distributions. Experiments on benchmark datasets have been performed for a range of tasks, including link prediction, node classification, and molecule generation. The experiment results show that the high expressive power of the inference model directly improves the latent space embedding, and hence the generated samples.
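The spatial filter this abstract describes, a polynomial in the graph shift operator whose output aggregates each node's neighborhood, can be sketched in a few lines of NumPy. This is an illustrative sketch, not code from the thesis; the toy graph and coefficients are invented for the example.

```python
import numpy as np

def polynomial_graph_filter(S, x, h):
    """Apply the graph filter H = sum_k h[k] * S^k to the signal x.

    S : (N, N) graph shift operator (e.g., adjacency or Laplacian matrix)
    x : (N,) graph signal, one value per vertex
    h : filter coefficients [h0, h1, ..., hK]
    """
    y = np.zeros_like(x, dtype=float)
    Skx = x.astype(float)          # S^0 x
    for hk in h:
        y += hk * Skx              # accumulate h_k * S^k x
        Skx = S @ Skx              # advance to S^{k+1} x
    return y

# Toy 3-node path graph: 0 -- 1 -- 2, with a unit impulse on node 0.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
x = np.array([1.0, 0.0, 0.0])
# h = [0.5, 0.5] mixes each vertex's value with its 1-hop neighborhood.
y = polynomial_graph_filter(A, x, h=[0.5, 0.5])   # -> [0.5, 0.5, 0.0]
```

Each power S^k reaches k hops, so a degree-K polynomial aggregates exactly the K-hop neighborhood of each node, which is the locality property the spatial GCN relies on.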
- Title
- X-Ray Diffraction Studies of Activation and Relaxation In Fast and Slow Rat Skeletal Muscle
- Creator
- Gong, Henry M.
- Date
- 2022
- Description
-
The contractile properties of fast-twitch and slow-twitch skeletal muscles are primarily determined by the myosin isoform content and modulated by a variety of sarcomere proteins. X-ray diffraction studies of regulatory mechanisms in muscle contraction have focused predominantly on fast- or mixed-fiber muscle, with slow muscle being much less studied. Here, we used time-resolved x-ray diffraction to investigate the dynamic behavior of the myofilament proteins in relatively pure slow-fiber rat soleus (SOL) and pure fast-fiber rat extensor digitorum longus (EDL) muscle during twitch and tetanic contractions at optimal length (Lo), 95% Lo, and 90% Lo. Before the delivery of stimulation, reduction in muscle length led to a decrease in passive tension. Upon reduction in length, the x-ray reflections showed no transition of the myosin heads from the ordered OFF state, where heads are held close to the thick filament backbone, to disordered ON states, where heads are free to bind to the thin filament, in either muscle. When stimulation was delivered for twitch contractions at Lo, x-ray signatures indicating the transition of myosin heads to ON states were observed in EDL but not in soleus muscle. During tetanic contractions, the change in the disposition of myosin heads as active tension develops is a cooperative process in EDL muscle, whereas in soleus muscle this relationship is less cooperative. Moreover, this high cooperativity was maintained in EDL at all lengths tested here, but cooperativity decreased upon reduction in length in soleus. The observed reduced extensibility of the thick filaments in soleus muscle as compared to EDL muscle indicates a molecular basis for this behavior. These data indicate that for the EDL thick filament, activation is a cooperative strain-induced mechano-sensing mechanism, whereas for the soleus thick filament, activation has a more graded response.
Lastly, x-ray data collected at different lengths demonstrated that the effect of length on soleus is more pronounced than on EDL, particularly in the thick filament during the relaxation phase after stimulation ceased. These observations indicate that soleus is more length-dependent than EDL. These different approaches to thick filament regulation in fast- and slow-twitch muscles may be designed to allow for short-duration, strong contractions versus sustained, finely controlled contractions, respectively.
- Title
- Pressure Feedback Control on a UCAS Model in Random Gusts
- Creator
- He, Xiaowei
- Date
- 2021
- Description
-
This research focuses on efficient active flow control (AFC) of the aerodynamic loads on a generic tailless delta wing in various flow/flight conditions, such as flying through atmospheric gusts, fast pitching, and other rapid maneuvers that would cause the aircraft to experience unsteady aerodynamic effects. A feedback control scheme that uses surface pressure measurements to estimate the actual aerodynamic loads acting on the aircraft is put forward, with the hypothesis that a pressure surrogate can replace inertia-based sensors and provide the controller with faster and/or more accurate feedback signals of the real-time aerodynamic load. The control performance of the AFC actuation and conventional elevons was evaluated. Results showed that AFC with a momentum coefficient input of 2% was equivalent to a 27-deg elevon deflection in terms of roll moment change, and that the control derivative of the AFC is at least double that of the elevons. Streamwise and cross-flow gusts were simulated in the Andrew Fejer Unsteady Wind Tunnel at IIT. A spectral feedback approach was tested by generating the horizontal velocity components of the von Karman and Dryden turbulence spectra. The velocity components in the test section were controlled temporally and spatially to generate transverse cross-flow gusts with designated wavelengths and frequencies. Sparse surface pressure measurements on the aircraft surface were used to develop lower-order models that estimate the instantaneous aerodynamic loads using the Sparse Identification of Nonlinear Dynamics (SINDy) algorithm. The pressure-based models acted as surrogates of the aerodynamic loads, providing feedback signals to the closed-loop controller to alleviate the gust effects on the wing.
The control results showed that the pressure feedback scheme was sufficient to reduce the roll moment fluctuations caused by the dynamic perturbations down to 20%, compared to 30% to 50% in previous studies.
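The SINDy algorithm used here to build the lower-order pressure models is, at its core, a sequentially thresholded least-squares regression over a library of candidate terms. The following is a minimal sketch of that core loop; the library, data, and threshold are invented for illustration and are not the study's models.

```python
import numpy as np

def stlsq(Theta, target, threshold=0.1, n_iter=10):
    """Sequentially thresholded least squares, the core of SINDy.

    Theta  : (m, p) library of candidate features evaluated on the data
    target : (m,) quantity to explain sparsely (e.g., a measured load)
    """
    Xi = np.linalg.lstsq(Theta, target, rcond=None)[0]
    for _ in range(n_iter):
        small = np.abs(Xi) < threshold       # prune small coefficients
        Xi[small] = 0.0
        big = ~small
        if big.any():                        # refit on the surviving terms
            Xi[big] = np.linalg.lstsq(Theta[:, big], target, rcond=None)[0]
    return Xi

# Toy problem: target = 2*x - 0.5*x^3, library [1, x, x^2, x^3].
rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, size=200)
Theta = np.column_stack([np.ones_like(x), x, x**2, x**3])
target = 2.0 * x - 0.5 * x**3
Xi = stlsq(Theta, target, threshold=0.2)    # recovers [0, 2, 0, -0.5]
```

The thresholding step is what yields a sparse, interpretable surrogate model, which is why SINDy suits real-time load estimation from a handful of pressure taps.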
- Title
- AI IN MEDICINE: ENABLING INTELLIGENT IMAGING, PROGNOSIS, AND MINIMALLY INVASIVE SURGERY
- Creator
- Getty, Neil
- Date
- 2022
- Description
-
While an extremely rich research field, AI in medicine has been much slower to reach real-world clinical settings than other applications of AI such as natural language processing (NLP) and image processing/generation. Often the stakes of failure are more dire, access to private and proprietary data more costly, and the burden of proof required by expert clinicians much higher. Beyond these barriers, the typical data-driven approach to validation is interrupted by a need for expertise to analyze results. Whereas the results of a trained ImageNet or machine translation model are easily verified by a computational researcher, analysis in medicine can be far more demanding and multi-disciplinary. AI in medicine is motivated by a great demand for progress in healthcare, but an even greater responsibility for high accuracy, model transparency, and expert validation. This thesis develops machine and deep learning techniques for medical image enhancement, patient outcome prognosis, and minimally invasive robotic surgery awareness and augmentation. Each of the works presented was undertaken in direct collaboration with medical domain experts, and the efforts could not have been completed without them. Pursuing medical image enhancement, we worked with radiologists, neuroscientists, and a neurosurgeon. In patient outcome prognosis, we worked with clinical neuropsychologists and a cardiovascular surgeon. For robotic surgery, we worked with surgical residents and a surgeon expert in minimally invasive surgery. Each of these collaborations guided priorities for problem and model design, analysis, and long-term objectives that ground this thesis as a concerted effort towards clinically actionable medical AI. The contributions of this thesis focus on three specific medical domains.
(1) Deep learning for medical brain scans: we developed processing pipelines and deep learning models for image annotation, registration, segmentation, and diagnosis in both traumatic brain injury (TBI) and brain tumor cohorts. A major focus of these works is the efficacy of low-data methods and techniques for validating results without any ground-truth annotations. (2) Outcome prognosis for TBI and risk prediction for cardiovascular disease (CVD): we developed feature extraction pipelines and models for TBI and CVD patient clinical outcome prognosis and risk assessment. We design risk prediction models for CVD patients using traditional Cox modeling, machine learning, and deep learning techniques. In these works we conduct exhaustive data and model ablation studies, with a focus on feature saliency analysis, model transparency, and usage of multi-modal data. (3) AI for enhanced and automated robotic surgery: we developed computer vision and deep learning techniques for understanding and augmenting minimally invasive robotic surgery scenes. We developed models to recognize surgical actions from vision and kinematic data. Beyond models and techniques, we also curated novel datasets and prediction benchmarks from simulated and real endoscopic surgeries. We show the potential for self-supervised techniques in surgery, as well as multi-input and multi-task models.
- Title
- Constellation and Detection Design for Non-orthogonal Multiple Access System
- Creator
- Hao, Xing
- Date
- 2022
- Description
-
It is well known that Non-Orthogonal Multiple Access (NOMA) systems can achieve higher spectral efficiency and massive connectivity. In this thesis, optimized designs for both code-domain and power-domain NOMA systems are studied. The main contributions are as follows. First, we investigate a NOMA system based on combinatorial design with a novel constellation design that eliminates the surjective mapping arising from the linear superposition of multiuser data and lowers the complexity of constellation design and multiuser detection (MUD). To further enlarge connectivity, we propose a low-density code structure that trades off diversity against the number of users multiplexed per resource by expurgating excessive interference in the coding matrices. Our scheme therefore not only provides a one-to-one mapping pattern with a sparser multiple access structure, but can also be adjusted with more flexibility to achieve diversity and serve a large number of users. Second, we propose a constellation mapping scheme based on sub-optimized signal constellation designs that shapes the receiver's constellation so that users can be differentiated by which resolvable points are received, allowing simpler detection and design. Third, a novel uplink NOMA system with time-delayed symbols is investigated, in which a modified Successive Interference Cancellation (SIC) scheme is used at the receiver side. In conventional SIC, when the transmission power allocated to one user differs only trivially from that of the other users, the Bit Error Rate (BER) performance degrades significantly. We therefore evaluate a modified SIC that adds artificial time offsets between users to conventional power-domain NOMA (PD-NOMA), providing higher degrees of freedom for user power allocation and reducing mutual interference.
The added time offsets also provide additional resources for detecting the superimposed signals, and the combination of the users' estimates over all time slots is then used to improve detection. Numerical results demonstrate that the BER performance of our modified SIC outperforms PD-NOMA with other SIC-based schemes. Finally, we propose a new modulation scheme based on polynomial phase signals (PPS) for downlink and uplink NOMA transceivers in both the code and power domains. The PPS leads to outstanding spectral efficiency and BER performance. We also propose a design criterion for CD-NOMA systems that enables the deployment of a large number of users with more flexibility, as well as lower design and detection complexity than traditional CD-NOMA systems such as SCMA and PDMA.
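For readers unfamiliar with the SIC baseline this thesis modifies: in conventional power-domain NOMA, two users' symbols are superimposed with unequal powers, and the receiver detects the stronger user first, subtracts its reconstructed signal, and then detects the weaker user. The sketch below is a generic two-user BPSK illustration (powers, noise level, and signal model are invented for the example), without the time offsets the thesis adds.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10000
# BPSK symbols for a high-power ("far") and low-power ("near") user.
b_far = rng.integers(0, 2, n) * 2 - 1
b_near = rng.integers(0, 2, n) * 2 - 1
p_far, p_near = 0.8, 0.2                      # power allocation (sums to 1)
s = np.sqrt(p_far) * b_far + np.sqrt(p_near) * b_near
y = s + 0.05 * rng.standard_normal(n)         # superimposed signal + noise

# Successive interference cancellation at the receiver:
b_far_hat = np.sign(y)                         # 1) detect the stronger user first
residual = y - np.sqrt(p_far) * b_far_hat      # 2) subtract its reconstructed signal
b_near_hat = np.sign(residual)                 # 3) detect the weaker user

ber_far = np.mean(b_far_hat != b_far)
ber_near = np.mean(b_near_hat != b_near)
```

When the power split collapses toward equality (p_far ≈ p_near), step 1 can no longer separate the users and errors propagate into step 2, which is the failure mode that motivates adding time offsets as an extra degree of freedom.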
- Title
- Fault Detection and Localization in Flying Capacitor Multilevel Converters
- Creator
- Hekmati, Parham
- Date
- 2021
- Description
-
This dissertation addresses fault detection, fault localization, and recovery in different topologies of flying capacitor multilevel converters, to guarantee safe post-fault operation of the system and maintain load supply. The dissertation makes multiple contributions, including techniques for power device open-circuit fault (OCF) detection in stacked multicell converters (SMCs), a window detector circuit to track the output terminal voltage levels and current directions, a fast and straightforward active power device OCF detection and localization technique for the family of flying capacitor multilevel converters (FCMCs), a model-based OCF detection and localization technique for the Buck-FCMC, a new estimator for tracking the voltages of the flying capacitors, and fault detection and localization for interleaved converters. Each of these contributions is summarized below. The first contribution proposes a fast and straightforward technique for power device OCF detection in SMCs. The fault detection concept only needs to sense the converter's output terminal voltage and current; the sensed output terminal voltage is compared to a predicted one to detect and localize the OCF. A front-end routing circuit is then added to the SMC to maintain converter operation post-fault. The second contribution proposes a window detector circuit to track the output terminal voltage levels and current directions; it detects the output terminal voltage level and current direction directly instead of requiring high sample rates and interrupt loops in the controller. The third contribution proposes a fast and straightforward active power device OCF detection and localization technique for the family of FCMCs, including DC-DC FCMCs, single- and multi-phase H-bridge FCMCs, and cascaded H-bridge multilevel converters.
This technique only needs to sense the voltage and current direction at the output terminals of the converter to detect and localize the fault. The method compares the measured and expected terminal voltages while considering the commanded switch states and the terminal current direction. As the switches transition through different states, healthy switches are excluded from the set of possibly faulty switches until only one faulty switch remains. Coordination of the asynchronous operation of the FPGA, DSP, and sensors is addressed for practical implementation. The fourth contribution is a model-based OCF detection and localization technique for the Buck-FCMC using model predictive control. In this technique, state-space equations of the system are developed, and comparison of the measured output inductor current with the one predicted by the state-space model is used for OCF detection and localization. This technique can potentially be applied to other converters of the FCMC family. The fifth contribution is a new estimator for tracking the voltages of the flying capacitors as the internal states of the FCMC. The proposed flying capacitor voltage estimator reduces the number of required sensors compared to conventional model-based methods, while maintaining the overall technique's robustness to dynamic changes, including startup and load changes. The last contribution is open- and short-circuit switch fault detection and localization for interleaved converters using harmonic analysis of the output terminal parameters. With this method, monitoring the electrical parameters of each leg of the interleaved converter is no longer required for fault detection and localization.
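The elimination logic in the third contribution (excluding healthy switches state by state until one faulty switch remains) can be illustrated abstractly. The sketch below is a hypothetical set-intersection toy, not a converter model: `observations` pairs a tuple of switches commanded to conduct with whether the measured terminal voltage matched the expected one, and the switch indices and states are invented for the example.

```python
def localize_ocf(observations, n_switches=4):
    """Narrow down the faulty switch by elimination across switching states.

    observations : list of (conducting_switches, voltage_ok) pairs, where
                   voltage_ok means measured terminal voltage matched expected.
    """
    candidates = set(range(n_switches))
    for conducting, voltage_ok in observations:
        if voltage_ok:
            candidates -= set(conducting)   # these switches conducted correctly
        else:
            candidates &= set(conducting)   # the fault is among the commanded ones
    return candidates

# Hypothetical 4-switch converter where switch 2 has an open-circuit fault:
# every state that commands switch 2 produces a voltage mismatch.
obs = [((0, 1), True), ((1, 2), False), ((2, 3), False), ((0, 3), True)]
faulty = localize_ocf(obs)   # -> {2}
```

As in the dissertation's method, no extra sensors are needed beyond the terminal measurements: the normal progression of switching states supplies the discriminating observations.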
- Title
- Corporate Insider Holdings and Analyst Recommendations
- Creator
- Gogolak, William Peter
- Date
- 2022
- Description
-
I pursued two competing theories about insider stock holding levels and analyst recommendations. The complementary hypothesis states that top management and analysts act in a comparable manner; the contradictory hypothesis states that insiders and analysts exhibit opposite market actions (Hsieh and Ng, 2019). I examined insider stock holding levels and analyst recommendations in a sample of S&P 500 firms from 2011 to 2020. In this sample, I found that insider holding levels and analyst recommendations move in opposite directions in concurrent time periods, supporting the contradictory hypothesis. I also analyzed lagged insider holding levels in a Granger causality test. This test supports the idea that top management stock holdings increase when analysts downgrade stocks, and that the opposite effect is true when analysts upgrade stocks. Using the same sample, I provided support for my hypothesis that aggregated analyst recommendations forecast future aggregate equity returns. Furthermore, I conducted a test supporting my conclusion that changes in insider holding levels can be used to forecast changes in future equity returns beyond what is already explained by analyst recommendations. I make two compelling additions to the existing body of work on aggregate stock prediction. First, I build upon existing papers by using Bloomberg aggregate analyst recommendations as opposed to the IBES datasets. Second, I expand upon recent index forecasting papers by incorporating both aggregate analyst recommendations and aggregate insider holding levels into aggregate stock return models.
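A Granger causality test of the kind mentioned above compares a restricted autoregression (lags of the target only) against an unrestricted one (adding lags of the candidate predictor) via an F-statistic. The following is a simplified bivariate sketch with synthetic data, not the author's specification or data; in practice one would use a statistics package such as statsmodels.

```python
import numpy as np

def granger_f_stat(y, x, lags=1):
    """F-statistic testing whether lags of x help predict y beyond lags of y."""
    n = len(y)
    rows = n - lags
    Y = y[lags:]
    # Restricted model: y_t ~ const + lags of y.
    Xr = np.column_stack([np.ones(rows)] +
                         [y[lags - j - 1 : n - j - 1] for j in range(lags)])
    # Unrestricted model: additionally include lags of x.
    Xu = np.column_stack([Xr] +
                         [x[lags - j - 1 : n - j - 1] for j in range(lags)])
    rss = lambda X: np.sum((Y - X @ np.linalg.lstsq(X, Y, rcond=None)[0]) ** 2)
    rss_r, rss_u = rss(Xr), rss(Xu)
    df = rows - Xu.shape[1]
    return ((rss_r - rss_u) / lags) / (rss_u / df)

# Synthetic example: x leads y by one step, so x Granger-causes y, not vice versa.
rng = np.random.default_rng(0)
x = rng.standard_normal(500)
y = np.empty(500)
y[0] = 0.0
y[1:] = 0.5 * x[:-1] + 0.1 * rng.standard_normal(499)
f_xy = granger_f_stat(y, x, lags=1)   # large: past x helps predict y
f_yx = granger_f_stat(x, y, lags=1)   # small: past y does not help predict x
```

The asymmetry of the two F-statistics is exactly what a lagged-holdings test exploits: a significant statistic in one direction only suggests a lead-lag relationship rather than mere contemporaneous correlation.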
- Title
- Deep Learning and Model Predictive Methods for the Control of Fuel-Flexible Compression Ignition Engines
- Creator
- Peng, Qian
- Date
- 2022
- Description
-
Compression ignited diesel engines are widely used for transportation and power generation because of their high fuel efficiency. However,...
Show moreCompression ignited diesel engines are widely used for transportation and power generation because of their high fuel efficiency. However, diesel engines can cause concerning environmental pollution because of their high nitrogen oxide (NOx) and soot emissions. In addition to meeting the stringent emission regulations, the demand to reduce greenhouse gas emissions has become urgent due to the more frequent destructive catastrophes caused by global warming in recent decades. In an effort to reduce emissions and improve fuel economy, many techniques have been developed and investigated by researchers. Air handling systems like exhaust gas recirculation and variable geometry turbochargers are the most widely used techniques on the market for modern diesel engines. Meanwhile, the concept of low temperature combustion is widely investigated by researchers. Low temperature combustion can increase the portion of pre-mixed fuel-air combustion to reduce the peak in-cylinder temperature so that the formation of NOx can be suppressed. Furthermore, the combustion characteristics and performance of bio-derived fuel blends are also studied to reduce overall greenhouse gas emissions through the reduced usage of fossil fuels. All the above mentioned systems are complicated because they involve not only chemical reactions but also complex fluid motion and mixing processes. As such, the control of these systems is always challenging and limits their commercial application. Currentlymost control methods are feed-forward control based on load condition and engine speed due to the simplicity in real-time application. With the development of faster control unit and deep learning techniques, the application of more complex control algorithms is possible to further improve the emissions and fuel economy. 
This work focuses on improvements to the control of engine air handling systems and combustion processes that leverage alternative fuels.Complex air handling systems, featuring technologies such as exhaust gas recirculation (EGR) and variable geometry turbochargers (VGTs), are commonly used in modern diesel engines to meet stringent emissions and fuel economy requirements. The control of diesel air handling systems with EGR and VGTs is challenging because of their nonlinearity and coupled dynamics. In this thesis, artificial neural networks (ANNs) and recurrent neural networks (RNNs) are applied to control the low pressure (LP) EGR valve position and VGT vane position simultaneously on a light-duty multi-cylinder diesel engine. In addition, experimental examination of a low temperature combustion based on gasoline compression ignition as well as its control has also been studied in this work. This type of combustion has been explored on traditional diesel engines in order to meet increasingly stringent emission regulations without sacrificing efficiency. In this study, a six-cylinder heavy-duty diesel engine was operated in a mixing controlled gasoline compression ignition mode to investigatethe influence of fuels and injection strategies on the combustion characteristics, emissions, and thermal efficiencies. Fuels, including ethanol (E), isobutanol (IB), and diisobutylene (DIB), were blended with a gasoline fuel to form E10, E30, IB30, and DIB30 based on volumetric fraction. These four blends along with gasoline formed the five test fuels. With these fuels, three injections strategies were investigated, including late pilot injection, early pilot injection, and port fuel injection/direct injection. The impact of moderate exhaust gas recirculation on nitrogen oxides and soot emissions was examined to determine the most promising fuel/injection strategy for emissions reduction. 
In addition, first and second law analyses were performed to provide insights into the efficiency, loss, and exergy destruction of the various gasoline fuel blends at low and medium load conditions. Overall, the emission output, thermal efficiency, and combustion performances of the five fuels were found to be similar and their differences are modest under most test conditions.While experimental work showed that low temperature combustion with alternative fuels could be effective, control is still challenging due to not only the properties of different gasoline-type fuels but also the impacts of injection strategies on the in-cylinder reactivity. As such, a computationally efficient zero-dimension combustion model can significantly reduce the cost of control development. In this study, a previously developed zero-dimension combustion model for gasoline compression ignition was extended to multiple gasoline-type fuel blends and a port fuel injection/direct fuel injection strategy. Tests were conducted on a 12.4-liter heavy-duty engine with five fuel blends. A modification was made to the functional ignition delay model to cover the significantly different ignition delay behavior between conventional and oxygenated fuel blends. The parameters in the model were calibrated with only gasoline data at a load of 14 bar brake mean effective pressure. The results showed that this physics-based model can be applied to the other four fuel blends at three differentpilot injection strategies without recalibration. In order to also facilitate the control of emissions, machine learning models were investigated to capture NOx emissions. A kernel-based extreme learning machine (K-ELM) performed best and had a coefficient of correlation (R-squared) of 0.998. The combustion and NOx emission models are valid for not only conventional gasoline fuel but also oxygenated alternative fuel blends at three different pilot injection strategies. 
In order to track key combustion metrics while keeping noise and emissions within constraints, a model predictive control(MPC) was applied for a compression ignition engine operating with a range of potential fuels and fuel injection strategies. The MPC is validated under different scenarios, including a load step change, fuel type change, and injection strategy change, with proportional-integral (PI) control as the baseline. The simulation results show that MPC can optimize the overall performance through modifying the main injection timing, pilot fuel mass, and exhaust gas recirculation (EGR) fraction.
- Title
- Integrating Provenance Management and Query Optimization
- Creator
- Niu, Xing
- Date
- 2021
- Description
-
Provenance, information about the origin of data and the queries and/or updates that produced it, is critical for debugging queries and transactions, auditing, establishing trust in data, and many other use cases. While how to model and capture the provenance of database queries has been studied extensively, optimization is also an important problem in provenance management, which spans storing, capturing, and querying provenance. However, previous work has focused almost exclusively on compressing provenance to reduce storage cost; there is a lack of work on optimizing the provenance capture process itself. Many approaches for capturing database provenance use the SQL query language and represent provenance information as a standard relation. However, even sophisticated query optimizers often fail to produce efficient execution plans for such queries because of their complexity and uncommon structure. To address this problem, we study algebraic equivalences and alternative ways of generating queries for provenance capture. Furthermore, we present an extensible heuristic and cost-based optimization framework utilizing these optimizations. While provenance has been well studied, no database optimizer uses provenance information to optimize query processing. Intuitively, provenance records exactly what data is relevant for a query. We can use this feature of provenance to identify and filter out irrelevant input data early on, so that query processing is sped up: instead of fully accessing the input dataset, we run the query only on the relevant input data. In this work, we develop provenance-based data skipping (PBDS), a novel approach that generates provenance sketches, concise encodings of what data is relevant for a query.
In addition, a provenance sketch captured for one query is used to speed up subsequent queries, possibly by utilizing physical design artifacts such as indexes and zone maps. The work presented in this thesis demonstrates that a tight integration of provenance management and query optimization can lead to significant performance improvements for query processing as well as for traditional database management tasks.
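As a toy illustration of the data-skipping idea, a sketch can be thought of as a bitmap over horizontal partitions marking which ones contain rows relevant to a query; a later query with the same selection can then skip unmarked partitions. This is a drastic simplification (the thesis's sketches are range-based and integrated with a real optimizer); all names and data here are hypothetical.

```python
# Toy provenance-based data skipping: sketch = bitmap over partitions.

def make_partitions(rows, size):
    # Split a table (list of tuples) into fixed-size horizontal partitions.
    return [rows[i:i + size] for i in range(0, len(rows), size)]

def capture_sketch(partitions, predicate):
    # Provenance sketch: which partitions hold rows relevant to the query.
    return [any(predicate(r) for r in part) for part in partitions]

def run_with_sketch(partitions, sketch, predicate):
    # A subsequent query scans only partitions the sketch marks as relevant.
    out, scanned = [], 0
    for part, relevant in zip(partitions, sketch):
        if not relevant:
            continue
        scanned += len(part)
        out.extend(r for r in part if predicate(r))
    return out, scanned
```

When the relevant rows cluster in few partitions (e.g., a selective range predicate on a sorted column), the scanned row count drops by an order of magnitude or more.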
- Title
- Extreme Fine-grained Parallelism On Modern Many-Core Architectures
- Creator
- Nookala, Poornima
- Date
- 2022
- Description
-
Processors with hundreds of threads of execution and GPUs with thousands of cores are among the state of the art in high-end computing systems. This transition to many-core computing has required the community to develop new algorithms that overcome significant latency bottlenecks through massive concurrency. Implementing efficient parallel runtimes that can scale up to hundreds of threads with extremely fine-grained tasks (less than 100 microseconds) remains a challenge. We propose XQueue, a novel lockless concurrent queueing system that can scale up to hundreds of threads. We integrate XQueue into LLVM OpenMP and implement X-OpenMP, a library for lightweight tasking on modern many-core systems with hundreds of cores. We show that it is possible to implement a parallel execution model using lock-less techniques, enabling applications to scale strongly on many-core architectures. While the fork-join model is suitable for on-node parallelism, the use of joins and synchronization induces artificial dependencies that can lead to underutilization of resources. Data-flow parallelism is crucial to overcome the limitations of fork-join parallelism by specifying dependencies at a finer granularity. It is also crucial for parallel runtime systems to support heterogeneous platforms to better utilize the hardware resources available in modern supercomputers. Existing parallel programming environments that support distributed memory either discover the DAG entirely on all processes, which limits scalability, or introduce explicit communication, which increases the complexity of programming. We implement Template Task Graph (TTG), a novel programming model, and its C++ implementation, marrying the ideas of control-flow and data-flow graph programming.
TTG addresses performance portability without sacrificing scalability or programmability by providing higher-level abstractions than conventional task-centric programming systems, without impeding the ability of the underlying runtimes to manage task creation and execution, as well as data and resource management, efficiently. The TTG implementation currently supports distributed-memory execution over two different task runtimes, PaRSEC and MADNESS.
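To make the contrast with fork-join concrete: in a data-flow model each task fires as soon as its inputs are available, with no global joins. The minimal sequential sketch below demonstrates only that firing rule (it is illustrative, with hypothetical task names, and is unrelated to the actual TTG/PaRSEC/MADNESS machinery).

```python
from collections import defaultdict, deque

def run_dataflow(tasks, deps):
    """Execute tasks in data-flow order: a task fires once all of its
    predecessors have produced results. `tasks` maps name -> fn(*inputs);
    `deps` maps name -> list of predecessor names."""
    indeg = {t: len(deps.get(t, [])) for t in tasks}
    children = defaultdict(list)
    for t, ps in deps.items():
        for p in ps:
            children[p].append(t)
    ready = deque(t for t, d in indeg.items() if d == 0)
    results = {}
    while ready:
        t = ready.popleft()
        results[t] = tasks[t](*(results[p] for p in deps.get(t, [])))
        for c in children[t]:
            indeg[c] -= 1
            if indeg[c] == 0:
                ready.append(c)  # all inputs ready: the task can fire
    return results
```

In a real runtime the ready queue is concurrent (this is where a lockless structure like XQueue matters) and results flow directly between tasks rather than through a shared dictionary.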
- Title
- Towards a Secure and Resilient Smart Grid Cyberinfrastructure Using Software-Defined Networking
- Creator
- Qu, Yanfeng
- Date
- 2022
- Description
-
To enhance the cyber-resilience and security of the smart grid against malicious attacks and system errors, we present a software-defined networking (SDN)-based communication architecture for smart grid operation. Our design leverages SDN technology, which improves network manageability and provides application-oriented visibility and direct programmability, to deploy multiple SDN-aware applications that enhance grid security and resilience. These include optimization-based network management to recover Phasor Measurement Unit (PMU) network connectivity and restore power system observability, and flow-based anomaly detection combined with optimization-based network management to mitigate Manipulation of demand via IoT (MadIoT) attacks. We also developed a prototype system in a cyber-physical testbed and conducted extensive evaluation experiments using the IEEE 30-bus system, the IEEE 118-bus system, and the IIT campus microgrid.
- Title
- ROBUST AND EXPLAINABLE RESULTS UTILIZING NEW METHODS AND NON-LINEAR MODELS
- Creator
- Onallah, Amir
- Date
- 2022
- Description
-
This research focuses on the robustness and explainability of new methods and on nonlinear analysis, compared with traditional methods and linear analysis. Further, it demonstrates that making assumptions, reducing the data, or simplifying the problem negatively affects the outcomes. This study utilizes the U.S. Patent Inventor database and the Medical Innovation dataset. Initially, we employ time-series models to enhance the quality of results for event history analysis (EHA), add insights, and draw meanings, explanations, and conclusions. Then, we introduce newer machine learning algorithms, including machine learning with a time-to-event element, to offer more robust methods than previous papers and to reach optimal solutions by removing assumptions and simplifications of the problem, combining all data encompassing the maximum knowledge, and providing nonlinear analysis.
- Title
- Automated Successive Baseline Schedule Model
- Creator
- Patel, Mihir Prakashbhai
- Date
- 2021
- Description
-
Construction projects involve many stakeholders and diverse phases. Usually, a construction schedule is initially set up as a simple, ideal-case scenario, but then, during construction, the project faces modifications such as delays, acceleration, and changes in logic caused by the project's complexity and inherent risk. To recover the damages caused by these modifications, the parties responsible for them should be identified accurately. Researchers and practitioners have developed and used various delay analysis models to quantify delays, but the selection of the model depends on the time of analysis, the available information, and the expertise of the analyst, so the results can be biased. The general problem is that most delay analysis models consider only delays in quantifying impacts, rather than every type of modification that impacted the project, including CPM logic changes and the addition or removal of activities during construction. This study proposes a new successive baseline model to enable precise analysis of the impacts of all sorts of modifications that occur during construction. This model can achieve unbiased and accurate results. The analysis process can also be computerized as a web application to improve efficiency and productivity. The fundamental concepts of the various modifications that can occur in the work schedule during construction, and the analysis of their impacts, are presented in this study. Issues related to concurrency, float ownership, the type of modification, the selection of a delay analysis model, and challenges with automation are also highlighted to broaden understanding of the disagreements between the parties to a construction contract. A case example is presented to demonstrate the accuracy and usefulness of the proposed model and web application.
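At the core of any baseline-versus-baseline comparison is a critical path method (CPM) forward pass over the activity network. The minimal sketch below (hypothetical activities, not the model's full successive-baseline logic) shows how a single duration change propagates into a quantified project delay.

```python
def cpm_duration(activities):
    """CPM forward pass. `activities`: name -> (duration, [predecessors]).
    Returns (project duration, early-finish time per activity)."""
    ef = {}
    def finish(a):
        if a not in ef:
            dur, preds = activities[a]
            # Early finish = own duration after the latest predecessor finishes.
            ef[a] = dur + max((finish(p) for p in preds), default=0)
        return ef[a]
    for a in activities:
        finish(a)
    return max(ef.values()), ef
```

Comparing successive baselines amounts to running this pass before and after each recorded modification and attributing the change in project duration to that modification.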
- Title
- New Insights to Thermoelectrics from Fundamental Transport Properties to Potential Materials and Device Design
- Creator
- Pan, Zhenyu
- Date
- 2022
- Description
-
Thermoelectric (TE) materials have been widely studied for their ability to convert energy directly between heat and electricity. However, the conversion efficiency is still low compared with conventional devices, in both power generation and electrical cooling. Therefore, most efforts have been directed at improving the zT of TE materials, the commonly accepted metric of TE material performance. But progress is slow because the key parameters governing zT are interrelated, so improving one often comes at the cost of the others, leading to a narrow range of TE applications. Thus, this thesis does not confine itself to searching for high-zT TE materials but also explores useful findings that were buried or ignored in previous thermoelectric research, from fundamental transport properties to TE device design. First, we reevaluated the photo-Seebeck effect, which has been known for decades, and demonstrated that it is a powerful tool for semiconductor study, as it allows the determination of mobilities, photo-carrier densities, and even weighted mobilities (hence effective masses) of both electrons and holes, as well as the impact of defects, all from a single sample. We then investigated a newly discovered low-dimensional material, 2D tellurene, which has the potential to decouple the interrelated parameters and achieve a high zT. Lastly, we reconsidered whether zT is the only merit index determining TE device performance. We hope this thesis can shed some light on thermoelectrics, from fundamental transport properties to device design.
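The zT referred to above is the dimensionless figure of merit zT = S²σT/κ, where S is the Seebeck coefficient, σ the electrical conductivity, T the absolute temperature, and κ the thermal conductivity; the interrelation arises because S, σ, and κ all depend on carrier concentration. A quick sanity check with representative room-temperature values of a Bi₂Te₃-like material (illustrative numbers, not data from this thesis):

```python
def figure_of_merit(seebeck_V_per_K, sigma_S_per_m, kappa_W_per_mK, T_K):
    # zT = S^2 * sigma * T / kappa (dimensionless thermoelectric figure of merit)
    return seebeck_V_per_K**2 * sigma_S_per_m * T_K / kappa_W_per_mK

# Example: S = 200 uV/K, sigma = 1e5 S/m, kappa = 1.5 W/(m*K), T = 300 K
zt = figure_of_merit(200e-6, 1e5, 1.5, 300.0)  # ~0.8
```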
- Title
- How Does Self-Stigma Influence Functionality in People with Serious Mental Illness? A Multiple Mediation Model of "Why-Try" Effect, Coping Resources, and Personal Recovery
- Creator
- Qin, Sang
- Date
- 2022
- Description
-
People with serious mental illness (SMI) face self-stigma effects that often undermine their functionality. Functionality herein refers to a person's execution of tasks (i.e., activities) and engagement in life situations (i.e., participation). This study used a path model to examine three mediating factors between self-stigma and functionality: the "why-try" effect, coping resources, and personal recovery. Specifically, the "why-try" effect was viewed as an extension of self-stigma harm that occurs when people suffer a loss of self-esteem and self-efficacy. Coping resources were conceptualized as individuals' strengths and the support they have to overcome negative stigma outcomes, particularly stigma stress. Endorsement of personal recovery, namely pursuing self-defined life goals despite illness, was expected to have a buffering effect reducing self-stigma. These three mediators were examined simultaneously using an archival dataset. Due to poor internal consistency, coping resources were removed from the model; the revised model achieved a good fit. Results showed that people with SMI experiencing self-stigma had an enhanced "why-try" effect as well as reduced personal recovery, leading to a decline in functionality. Implications of the results and future research directions are discussed.
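For readers unfamiliar with mediation analysis, the indirect effect along a single path X → M → Y is the product of two regression slopes: a (M on X) and b (Y on M, controlling for X). The numpy sketch below is a drastic simplification of the study's multiple-mediation model, run on hypothetical synthetic data.

```python
import numpy as np

def indirect_effect(x, m, y):
    """Single-mediator model: a = slope of M ~ X; b = slope of Y ~ M
    controlling for X; the indirect (mediated) effect is a * b."""
    Xa = np.column_stack([np.ones_like(x), x])
    a = np.linalg.lstsq(Xa, m, rcond=None)[0][1]
    Xb = np.column_stack([np.ones_like(x), m, x])
    b = np.linalg.lstsq(Xb, y, rcond=None)[0][1]
    return a * b
```

Testing several mediators simultaneously, as the study does, generalizes this to a system of such equations estimated jointly in a path/SEM framework.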
- Title
- Decreasing Body Dissatisfaction in Male College Athletes: A Pilot Study of the Male Athlete Body Project
- Creator
- Perelman, Hayley
- Date
- 2020
- Description
-
Body dissatisfaction is associated with marked distress and often precipitates disordered eating symptomology. Body dissatisfaction in male athletes is an important area to explore, as research in this field often focuses on eating disorders in female athletes. The current literature on male college athletes suggests that they experience pressures associated with both societal muscular ideals and sport performance. While there is a clear association between drive for muscularity and body dissatisfaction in male college athletes, no study to date has evaluated the efficacy of a body dissatisfaction intervention for this population. Therefore, the present study sought to investigate the efficacy and feasibility of a pilot intervention program targeting body dissatisfaction in male college athletes. Participants were randomized into an adapted version of the Female Athlete Body Project (the Male Athlete Body Project; MABP) or an assessment-only control condition. A total of 79 male college athletes (39 in the treatment condition) completed this study, for a retention rate of 84.9%. Participants in the experimental group attended three 80-minute group sessions, once a week for three weeks. All participants completed measures of body dissatisfaction, internalization of the body ideal, drive for muscularity, negative affect, and sport confidence at three time points: baseline, post-treatment (three weeks after baseline for the control condition), and one-month follow-up. Hierarchical linear modeling was used to assess differences between conditions across time. Participation in the MABP improved men's satisfaction with specific body parts, drive for muscularity, and body-ideal internalization at post-treatment. Men in the MABP also reported improvements in appearance evaluation and overweight preoccupation at post-treatment and one-month follow-up, and in negative affect at one-month follow-up only.
Improvements in drive for muscularity were retained at one-month follow-up. This study provides preliminary evidence for the feasibility and efficacy of the Male Athlete Body Project.
- Title
- INTELLIGENT SOLID STATE CIRCUIT BREAKERS USING WIDE BANDGAP SEMICONDUCTORS
- Creator
- Zhou, Yuanfeng
- Date
- 2021
- Description
-
Electricity, in its predominant form of alternating current (AC), is at the heart of modern civilization. However, direct current (DC) electricity is re-emerging, offering higher transmission efficiency, better system stability, a better match with modern electrical loads, and easier integration of renewable and storage resources than AC. DC power is gaining traction in HVDC and MVDC grids, DC data centers, photovoltaic farms, EV charging infrastructure, and shipboard and aircraft power systems. However, DC fault protection remains a major challenge. Interrupting DC currents is extremely difficult due to the lack of the current zero crossings that are naturally available in AC power systems. Conventional mechanical breakers offer only a very limited DC current interruption capability, even after significant power derating. Hybrid circuit breakers (HCBs) offer relatively low conduction loss but a response time too slow to protect many low-impedance DC grids. Solid state circuit breakers (SSCBs) can interrupt a DC fault current within tens of microseconds but suffer from high conduction losses. Furthermore, it is generally difficult for an SSCB to distinguish between a short circuit fault and a normal inrush current condition during the start-up of a capacitive load. The purpose of this thesis is to develop a tri-mode, intelligent solid-state circuit breaker technology using wide bandgap semiconductors (especially Gallium Nitride transistors), referred to as iBreaker. The iBreaker design methodology includes the use of mΩ-resistance GaN and SiC devices, new circuit topologies and control techniques beyond the commonly used ON/OFF switch configuration, and the integration of intelligent functions without increasing component count. The iBreaker adopts a distinct pulse width modulation (PWM) current limiting (PWM-CL) state, in addition to the conventional ON and OFF states, to facilitate soft startup, fault authentication, and fault location functions.
Key design elements, such as the use of wide bandgap (particularly GaN) switches, tri-mode operation, combined digital and analog control, the bidirectional buck topology, variable PWM frequency control, and a universal hardware/software architecture, are discussed in detail. Multiple iBreaker prototypes, rated at 380 V/20 A and 1000 V/10 A, respectively, are built and tested to validate the proposed SSCB design concept. 99.95% transmission efficiency, passive cooling, and μs-scale response time are demonstrated experimentally.
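The idea behind a current-limiting state can be pictured with a toy simulation: switching the breaker off when branch current exceeds an upper limit and back on below a lower band keeps a would-be fault current bounded well below V/R. The sketch below uses simple hysteresis switching and made-up RL parameters purely for illustration; the actual iBreaker uses PWM with variable frequency control and GaN devices.

```python
def simulate_current_limit(V=380.0, R=1.0, L=1e-3, i_max=20.0, i_min=18.0,
                           dt=1e-6, steps=20000):
    """Hysteresis-style current limiting of an RL branch: the switch opens
    above i_max and re-closes below i_min, so the current stays bounded
    instead of rising toward the prospective fault level V/R (380 A here)."""
    i, on, peak = 0.0, True, 0.0
    for _ in range(steps):
        v = V if on else 0.0            # freewheel path when switch is open
        i += dt * (v - R * i) / L       # forward-Euler step of L di/dt = v - R*i
        peak = max(peak, i)
        if on and i >= i_max:
            on = False
        elif not on and i <= i_min:
            on = True
    return peak
```

The same bounded-current behavior is what lets a breaker ride through capacitive-load inrush (soft startup) while it authenticates whether a genuine fault is present.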
- Title
- SAFETY AND MOBILITY IMPACTS ASSESSMENT OF THE CHICAGO BIKE LANE PROGRAM
- Creator
- Zhao, Yu
- Date
- 2021
- Description
-
In recent years, cycling as a travel mode has become increasingly popular in large U.S. cities. These cities have also found that promoting the bike mode can potentially mitigate traffic congestion, reduce carbon emissions, and improve residents' quality of life. Therefore, many cities have initiated bike-related programs that promote the bike mode from all aspects, such as establishing shared bike systems and developing bike-related facilities. Specifically, bike lane installation is widely seen in large cities as a pivotal component of bike promotion programs. Because bike lanes are installed on the existing network, vehicles' safety and mobility performance may be affected by the change in facilities. This study proposes a methodology to quantify the safety and mobility impacts on vehicles brought by bike lane installation. The proposed method accounts for the safety impact by using predicted crashes in conjunction with field-observed crash data in an empirical Bayes (EB) before-after comparison group analysis. The mobility impact is captured by comparing the segment average travel time before and after the bike lane installation. Further, vehicle volume information is used in a consumer surplus computation to quantify the variation in vehicle safety and mobility performance resulting from the bike lane installation. A case study is conducted using a real dataset from the City of Chicago bike lane program. The results reveal that the safety and mobility impacts vary mainly with the type of bike lane installed and its location.
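In the Hauer-style empirical Bayes approach referenced above, the expected crash count at a site blends a safety-performance-function prediction with the observed count, weighted via the negative binomial overdispersion parameter. The sketch below is schematic, with made-up numbers, and omits the comparison-group adjustment and variance-correction terms of the full analysis.

```python
def eb_expected_crashes(observed_before, predicted_before, overdispersion_k):
    """One common EB form: weight w = 1 / (1 + k * predicted);
    EB estimate = w * predicted + (1 - w) * observed."""
    w = 1.0 / (1.0 + overdispersion_k * predicted_before)
    return w * predicted_before + (1.0 - w) * observed_before

def eb_safety_index(observed_after, expected_after):
    # Index of effectiveness: theta < 1 indicates a crash reduction.
    return observed_after / expected_after
```

The EB weighting pulls a site's noisy observed count toward the prediction for similar sites, which is what corrects for regression-to-the-mean in before-after studies.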
- Title
- The Development of a Measure of Public Stigma Towards Adults With Autism
- Creator
- Beedle, Robert Brian
- Date
- 2022
- Description
-
Adults with autism (AwA) report experiences of stigma and discrimination. Yet quantitative research suggests that public attitudes are relatively benign. This discrepancy is compounded by the present lack of a stakeholder-informed, theoretically guided measure of stigma towards AwA. The objective of the present study was to develop a measure of stigma towards AwA following best-practice survey methodology. First, existing related measures were reviewed for possible candidate items, yielding 36 draft questions related to the stigma of AwA. Next, seven stakeholders in the AwA community were recruited to provide feedback on their experiences of stigma and discrimination, as well as on the draft items. Following stakeholder feedback, draft items were edited, added, or removed based on the participants' lived experiences, resulting in a revised measure of 51 candidate items. Finally, these 51 items underwent a quantitative phase with participants recruited through MTurk (N = 357). Exploratory factor analyses were conducted to generate a data-driven factor structure that reflected stigma theory. The end result was a 20-item, four-factor solution measuring numerous components of stigma, including cognitive components of stigma, blame, positive and negative affect, and comfort with close contact. The resulting measurement tool was titled the Public Stigma towards Adults with Autism Scale (PSAWA) and demonstrated strong psychometric properties. The tool has utility for further studying stigma towards AwA and for assessing stigma interventions.
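As a rough illustration of factor extraction, the principal-component method computes loadings from the eigendecomposition of the item correlation matrix; real EFA pipelines add reduced correlation matrices, rotation, and retention criteria. The synthetic two-factor data below are hypothetical and unrelated to the PSAWA items.

```python
import numpy as np

def factor_loadings(data, n_factors):
    """Crude factor extraction: eigendecomposition of the correlation
    matrix (principal-component method, no rotation)."""
    corr = np.corrcoef(data, rowvar=False)
    vals, vecs = np.linalg.eigh(corr)
    order = np.argsort(vals)[::-1][:n_factors]       # largest eigenvalues first
    return vecs[:, order] * np.sqrt(vals[order])     # scale eigenvectors to loadings
```

Items that covary strongly load together on one factor, which is how a 51-item pool can collapse to a 20-item, four-factor solution.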