Search results
(7,401 - 7,420 of 7,598)
- Title
- Informed Consent in Digital Data Management
- Creator
- Hildt, Elisabeth, Laas, Kelly
- Date
- 2022, 2022-01-03
- Publisher
- Springer, Cham
- Description
- This article discusses the role of informed consent, a well-known concept and standard established in the field of medicine, in ethics codes relating to digital data management. It analyzes the significance allotted to informed consent and informed consent-related principles in ethics codes, policies, and guidelines by presenting the results of a study of 31 ethics codes, policies, and guidelines held as part of the Ethics Codes Collection. The analysis reveals that, to date, there are only a limited number of codes of ethics, policies, and guidelines on digital data management. Informed consent is often a central component in these codes and guidelines. While there are undoubtedly significant similarities between informed consent in medicine and in digital data management, informed consent-related standards in some fields, such as marketing, are weaker and less strict. The article concludes that informed consent is an essential standard in digital data management that can help effectively shape future practices in the field. However, a more detailed reflection on the specific content and role of informed consent and informed consent-related standards in the various areas of digital data management is needed to avoid the weakening and dilution of standards in contexts where there are no clear legal regulations.
- Collection
- Codes of Ethics and Ethical Guidelines: Emerging Technologies and Changing Fields
- Title
- Simulation and Experimental Testing of High-Gradient Dielectric Disk Accelerating Cavities
- Creator
- Weatherly, Sarah K.
- Date
- 2022
- Description
- Structure-based wakefield acceleration can be accomplished using either Collinear Wakefield Acceleration (CWA), where the drive beam and the witness beam are located on the same beamline, or Two Beam Acceleration (TBA), where the RF power generated by the drive beam is extracted and transferred to the witness beam line. A Dielectric Disk Accelerator (DDA) is an accelerating structure used in TBA that employs dielectric disks to improve the structure's shunt impedance and accelerate the witness beam. The dielectric-based accelerators studied in this thesis are X-band structures (working frequency between 8 and 12 GHz) that can operate with any pulse length but in this study use short (<20 ns) traveling-wave pulses. Short pulse lengths are used to decrease breakdown probability and allow for a large gradient. DDAs have a higher group velocity and a larger shunt impedance than traditional metallic accelerating structures while maintaining a large accelerating gradient, making them a strong candidate for use in the Argonne Wakefield Accelerator's 500 MeV Demonstrator. Recent experimental results on a clamped single-cell structure demonstrated a >100 MV/m accelerating gradient with no evidence of breakdown in the RF volume. Additional structures, including a brazed single-cell model and a multicell structure, have been designed and are now being fabricated for high-power testing.
- Title
- A SCALABLE AND CUSTOMIZABLE SIMULATION PLATFORM FOR ACCURATE QUANTUM NETWORK DESIGN AND EVALUATION
- Creator
- Wu, Xiaoliang
- Date
- 2021
- Description
- Recent advances in quantum information science have enabled the development of quantum communication network prototypes and created an opportunity to study full-stack quantum network architectures. The scale and complexity of quantum networks require cost-efficient means for testing and evaluation. Simulators allow hardware, protocols, and applications to be tested cost-effectively before experimental networks are constructed. This work develops SeQUeNCe, a comprehensive, customizable quantum network simulator. We use SeQUeNCe to evaluate quantum communication networks, studying network performance under different hardware configurations and applications. Additionally, we extend SeQUeNCe to a parallel discrete-event simulator using the Message Passing Interface (MPI) and comprehensively analyze the benefits and overhead of parallelization, which significantly increases the scalability of SeQUeNCe. In the future, we would like to improve SeQUeNCe in three respects. First, we plan to continue reducing parallelization overhead and increasing scalability. Second, we plan to investigate means of modeling quantum memory, entanglement protocols, and control protocols to enrich the simulation models in the SeQUeNCe library. Third, we plan to integrate hardware with SeQUeNCe to enable high-fidelity analysis.
- Title
- Development of MIITRA T1w, DTI and FOD templates of the older adult brain in a common space
- Creator
- Wu, Yingjuan
- Date
- 2022
- Description
- Human brain atlases play an important role in neuroimaging studies and are commonly used as references for spatial normalization, tissue segmentation, automated brain parcellation, seed selection for functional connectivity analyses and fiber-tracking, or as standards for algorithm evaluation. A brain atlas typically consists of brain templates of different imaging modalities in a common space and semantic labels that delineate brain regions according to the characteristics of the underlying tissue. High-quality T1-weighted (T1w) and diffusion tensor imaging (DTI) brain templates that are representative of the individuals under study enhance the accuracy of template-based neuroimaging investigations, and when they are also located in a common space they facilitate optimal integration of information on brain morphometry and diffusion characteristics. However, such multimodal templates have not been constructed for the brain of older adults. This thesis introduced an iterative method for constructing multimodal T1w and DTI templates that aims to maximize the quality of each template separately as well as the spatial matching between templates. The performance of the proposed method was evaluated across iterations and compared to state-of-the-art multimodal template construction approaches based on multichannel registration. Using the proposed method, along with other recently developed techniques, high-quality T1w and DTI templates of the older adult brain were developed in a common space at 0.5 mm resolution for the MIITRA atlas. In this thesis, the new templates were compared to other available templates in terms of image quality, the inter-subject and inter-modality spatial normalization accuracy achieved when used as a reference, and the representativeness of the older adult brain.
Furthermore, because the fiber orientation distribution (FOD) model can resolve intravoxel heterogeneity, overcoming the limitations of the DTI model especially in regions with complex neuronal microarchitecture, an FOD template is in high demand to facilitate FOD-based and fixel-based analyses, white matter connectivity studies, and white matter parcellations. In this thesis, several FOD template construction methods were compared and an FOD template was developed at 0.5 mm resolution for the MIITRA atlas. Overall, the present work brought new insights into multimodal template construction, conducted a thorough, quantitative evaluation of available multimodal template construction methods, and generated much-needed high-quality T1w, DTI and FOD templates of the older adult brain in a common space at 0.5 mm resolution.
- Title
- AN EXPLORATION INTO THE EFFECTS OF CHROMATIN STRUCTURAL PROTEINS ON THE DYNAMICS AND ENERGETIC LANDSCAPE OF NUCLEOSOME ARCHITECTURES
- Creator
- Woods, Dustin C
- Date
- 2022
- Description
- Comprised of eight core histones wrapped by at least 147 base pairs of DNA, nucleosomes are the fundamental unit of the chromatin fiber, from which long arrays are built to compact genetic information into the cell nucleus. Structural proteins, such as linker histones (LH) and centromere proteins (CENP), interact with the DNA to dictate the exact architecture of the fiber, which can directly influence the regulation of epigenetic processes. However, the mechanisms by which structural proteins affect these processes are poorly understood. In this thesis, I explore the various ways in which LHs and CENP-N affect nucleosome and, by extension, chromatin fiber dynamics. First, I present a series of simulations of nucleosomes bound to LHs, otherwise known as chromatosomes, with the globular domain of two LH variants, generic H1 (genGH1) and H1.0 (GH1.0), to determine how their differences influence chromatosome structures, energetics and dynamics. These simulations highlight the thermodynamic basis for different LH binding motifs and detail their physical and chemical effects on chromatosomes. Second, I examine how well the findings above translate from mono-nucleosomes to poly-nucleosome arrays. I present a series of molecular dynamics simulations of octa-nucleosome arrays, based on a cryo-EM structure of the 30-nm chromatin fiber, with and without the globular domains of the H1 LH to determine how they influence fiber structures and dynamics. These simulations highlight the effects of LH binding on the internal dynamics and global structure of poly-nucleosome arrays, while providing physical insight into a mechanism of chromatin compaction. Third, I took a brief departure from LHs to study the effects that centromere protein N (CENP-N) has on poly-nucleosome systems. I present a series of molecular dynamics simulations of CENP-N and di-nucleosome complexes based on cryo-EM and crystal structures provided by Keda Zhou and Karolin Luger.
Simulations were conducted with nucleosomes in complex with one, two, and no CENP-Ns. This work, in collaboration with the Karolin Luger Group (University of Colorado – Boulder) and the Aaron Straight Group (Stanford University), represents the first atomistic simulations of this novel complex, providing the foundation for a plethora of future research opportunities exploring centromeric chromatin and the effect that its structure and dynamics have on epigenetics. Lastly, I return to the chromatosome to study how DNA sequence affects the free energy surface and detailed mechanism of LH transitions between binding modes. I used umbrella sampling simulations to produce PMFs of chromatosomes wrapped in three different DNA sequences: Widom 601, poly-AT, and poly-CG. This work, my final in the series, represents a culmination of my studies furthering the understanding of biophysical phenomena surrounding LHs and how they can be extrapolated to epigenetic mechanisms. I report the first PMFs illustrating a previously unknown transition and describe how the transition mechanism depends on DNA sequence.
- Title
- The Studio Practice for Sustainable (Craft) Production
- Creator
- Werdhaningsih, Hendriana
- Date
- 2022
- Description
- Global demand in the craft market is rising. On the other hand, the domination of economic goals in craft production is threatening social systems and the environment. Current craft production does not reflect the sustainable development principles that should be a central concept for this age. Design as practice and method has not yet adequately facilitated craft production to embrace the harmony of social, environmental, and economic systems. Believing that the studio is a core design practice, this research investigated studio practice through interviews, field research, and action research conducted in Indonesia and the US. It developed a model called Studio Practice for Sustainable (Craft) Production, the SP2 Model. The model helps designers, the crafts community, and stakeholders ensure their role in studio practice and determine their goals for sustainable development. The SP2 Model offers alternative practical solutions in craft production and contributes to polycentric discourse and design interventions in sustainable development models.
- Title
- ESTIMATING PM2.5 INFILTRATION FACTORS FROM REAL-TIME OPTICAL PARTICLE COUNTERS DEPLOYED IN CHICAGO HOMES BEFORE AND AFTER MECHANICAL VENTILATION RETROFITS
- Creator
- Wang, Mingyu
- Date
- 2021
- Description
- PM2.5 refers to fine inhalable particles 2.5 micrometers or smaller in size. Indoor PM2.5 consists of outdoor (ambient) PM2.5 that infiltrates into the indoor environment and indoor-generated (non-ambient) PM2.5. Because people spend nearly 90% of their lifetimes indoors, with most of that time in their homes, PM2.5 exposure in homes can result in severe health effects such as asthma. One strategy increasingly being used to dilute air pollutants generated indoors and improve indoor air quality (IAQ) in homes is the introduction of mechanical ventilation systems. However, mechanical ventilation systems also have the potential to introduce more ambient PM2.5 than relying on infiltration alone, although limited data exist to demonstrate the magnitude of these impacts in occupied homes. The objective of this paper is to estimate the infiltration factor (Finf) of PM2.5 before and after installing mechanical ventilation systems in a subset of occupied homes. The data come from the Breathe Easy Project, a more than two-year-long study conducted in 40 existing homes in Chicago, IL that explored the effects of three different types of mechanical ventilation system retrofits on IAQ and asthma. An automated algorithm was developed to remove indoor PM2.5 peaks in time-series data collected from optical particle counters deployed inside and outside of each home. Finf was estimated using the resulting indoor/outdoor ratio with indoor peaks removed. Before mechanical ventilation retrofits, the weekly median Finf was 0.29 (summer median = 0.41, fall median = 0.26, winter median = 0.29, spring median = 0.30); after mechanical ventilation retrofits, the median Finf was 0.34 (winter median = 0.28, spring median = 0.45, summer median = 0.54, fall median = 0.20). Differences in Finf between pre- and post-intervention periods were not statistically significant (p = 0.23, Wilcoxon signed rank test).
The median PM2.5 infiltration factor increased ~22% (from 0.27 to 0.33) with the installation of balanced ventilation systems with energy recovery ventilators (ERV), although differences were not statistically significant (Wilcoxon signed rank p = 0.35). The median PM2.5 infiltration factor decreased ~4% (from 0.28 to 0.27) after installing intermittent CFIS systems, which intermittently supply ventilation air through the existing central air handling units and associated filters (upgraded to a minimum of MERV 10 in all CFIS homes), although differences were not statistically significant (Wilcoxon signed rank p = 0.24). The median PM2.5 infiltration factor increased ~26% (from 0.35 to 0.44) with the installation of continuous exhaust-only systems, and differences were significant (Wilcoxon signed rank p = 0.04). These results suggest that the filtration mechanisms used on the CFIS and balanced systems were adequate for maintaining similar distributions of Finf values pre- and post-intervention, whereas the increased delivery of outdoor air through the building envelope by exhaust-only systems significantly increased Finf following retrofits.
- Title
- A New Control and Decision Support Framework To Avoid Fast-Evolving System Collapse and Cascading Failure
- Creator
- Guha, Bikiran
- Date
- 2022
- Description
- The modern power system is a vast and incredibly complex network, with very large numbers of devices operating around the clock to reliably transport electricity from generators to consumers. However, factors such as aging and faulty equipment, extreme and unpredictable weather, cyber attacks, and increasing amounts of unpredictable renewable generation have made it increasingly vulnerable to cascading failure and wide-area collapse. Considerable work has therefore been done over the years on cascading failure vulnerability analysis and mitigation. However, to the best of our knowledge, the existing literature on this topic focuses on preventive analysis and mitigation, mostly from a planning perspective. There is a lack of decision support schemes that can take real-time preventive action when the system becomes vulnerable to cascading failure while accounting for the various dynamics and uncertainties involved in these types of failures. The only defense in these situations is pre-designed emergency control schemes, but they are effective only against known vulnerabilities and can make matters worse if not accurately designed and calibrated. This research proposes a novel wide-area monitoring, protection and control (N-WAMPAC-20) framework designed to assess the vulnerabilities of the system in real time (when a disturbance happens) and to implement mitigation actions, if necessary. The main contributions of this dissertation focus on the disturbance monitoring, real-time control, and decision-making aspects of this framework. The proposed framework is divided into two major parts: an offline part and an online part. The offline part continuously runs extreme contingency analysis in the background (using combined dynamics and protection simulators) to generate elements that can assess system vulnerabilities and suggest suitable mitigation actions, if necessary.
In this regard, a novel load shedding adjustment scheme is also proposed, which has been shown to be effective against a variety of fast-evolving cascading failure scenarios. The online part consists of real-time disturbance monitoring and decision-making components. The disturbance monitoring component focuses on real-time fault detection and location. If a fault has been identified and located, the real-time decision-making component determines the vulnerability of the system by consulting the elements designed offline. If vulnerabilities are identified, targeted mitigation actions are implemented. The design and applicability of a prototype of N-WAMPAC-20 are presented using a case of voltage collapse and a case of wide-area loss of synchronization on a synthetic model of the Texas grid.
- Title
- Distinctive Categorization Deficits in Repeated Sorting of Common Household Objects in Hoarding Disorder
- Creator
- Hamilton, Catharine Elizabeth
- Date
- 2022
- Description
- The present study examines sorting techniques and deficits among individuals with hoarding disorder (n = 34) compared to age- and gender-matched adults (n = 35) from the general population. Performance was compared on the Booklet Category Test (BCT), selected other neuropsychological measures, and an ecologically valid sorting task designed for the study to model the Delis-Kaplan Executive Function System (D-KEFS) Sorting subtest but with common household objects as stimuli. Contrary to predictions, individuals with hoarding disorder did not perform significantly worse than controls on the BCT or the sorting task designed for the present study. Also contrary to predictions, the hoarding group performed significantly better when initiating their own sorts of the objects than when tasked with naming categories grouped by the researcher. These findings are discussed, along with exploratory analyses suggesting that participants with hoarding disorder put forth more mental effort when sorting the household objects (shoes and mail): they provided significantly more individual responses on the task, with significantly more description errors. IQ and performance on other selected neuropsychological measures did not differ significantly between groups. These findings provide preliminary evidence that there may be specific types of real-life sorting difficulties associated with hoarding disorder that are subtle and beyond what existing neuropsychological tests can measure. Given that current CBT treatments for hoarding presuppose a certain level of competency in sorting (e.g., recognizing and naming different categories of household items to complete a personal organizing plan), it is important to clarify potential sorting and categorization deficits in this group as one possible avenue to help improve treatment response among individuals struggling with hoarding disorder.
- Title
- Machine Learning On Graphs
- Creator
- He, Jia
- Date
- 2022
- Description
- Deep learning has revolutionized many machine learning tasks in recent years. Successful applications range from computer vision and natural language processing to speech recognition. This success is due partly to the availability of large amounts of data and fast-growing computing resources (i.e., GPUs and TPUs), and partly to recent advances in deep learning technology. Neural networks, in particular, have been successfully used to process regular data such as images and videos. However, for many applications with graph-structured data, the irregular structure of graphs means that many powerful operations in deep learning cannot be readily applied. In recent years, there has been growing interest in extending deep learning to graphs. We first propose graph convolutional networks (GCNs) for the task of classification or regression on time-varying graph signals, where the signal at each vertex is given as a time series. An important element of GCN design is filter design. We consider filtering signals in either the vertex (spatial) domain or the frequency (spectral) domain. Two basic architectures are proposed. In the spatial GCN architecture, the GCN uses a graph shift operator as the basic building block to incorporate the underlying graph structure into the convolution layer. The spatial filter directly utilizes the graph connectivity information: it defines the filter as a polynomial in the graph shift operator to obtain convolved features that aggregate the neighborhood information of each node. In the spectral GCN architecture, a frequency filter is used instead. A graph Fourier transform operator or a graph wavelet transform operator first transforms the raw graph signal to the spectral domain; the spectral GCN then uses the coefficients from the graph Fourier transform or graph wavelet transform to compute the convolved features. The spectral filter is defined using the graph's spectral parameters.
There are additional challenges in processing time-varying graph signals, as the signal value at each vertex changes over time. The GCNs are designed to recognize different spatiotemporal patterns from high-dimensional data defined on a graph. The proposed models have been tested on simulation data and real data for graph signal classification and regression. For the classification problem, we consider the power line outage identification problem using simulation data. The experimental results show that the proposed models can successfully classify abnormal signal patterns and identify the outage location. For the regression problem, we use the New York City bike-sharing demand dataset to predict station-level hourly demand; the prediction accuracy is superior to other models. We next study graph neural network (GNN) models, which have been widely used for learning graph-structured data. Due to the permutation-invariance requirement of graph learning tasks, a basic element of graph neural networks is the invariant and equivariant linear layers. Previous work by Maron et al. (2019) provided a maximal collection of invariant and equivariant linear layers and a simple deep neural network model, called k-IGN, for graph data defined on k-tuples of nodes. It is shown that the expressive power of k-IGN is equivalent to the k-Weisfeiler-Lehman (WL) algorithm in graph isomorphism tests. However, the dimensions of the invariant layer and equivariant layer are the k-th and 2k-th Bell numbers, respectively. Such high complexity makes k-IGNs computationally infeasible for k > 3. We show that a much smaller dimension for the linear layers is sufficient to achieve the same expressive power. We provide two sets of orthogonal bases for the linear layers, each with only 3(2k − 1) − k basis elements.
Based on these linear layers, we develop the neural network models GNN-a and GNN-b, and show that for graph data defined on k-tuples of nodes, GNN-a and GNN-b achieve the expressive power of the k-WL algorithm and the (k + 1)-WL algorithm in graph isomorphism tests, respectively. In molecular prediction tasks on benchmark datasets, we demonstrate that low-order neural network models consisting of the proposed linear layers achieve better performance than other neural network models. In particular, order-2 GNN-b and order-3 GNN-a both have 3-WL expressive power, but use a much smaller basis and hence much less computation time than known neural network models. Finally, we study generative neural network models for graphs. Generative models are often used in semi-supervised or unsupervised learning. We address two types of generative tasks. In the first task, we try to generate a component of a large graph, such as predicting whether a link exists between a pair of selected nodes, or predicting the label of a selected node or edge. The encoder embeds the input graph into a latent vector space via vertex embedding, and the decoder uses the vertex embedding to compute the probability of a link or node label. In the second task, we try to generate an entire graph. The encoder embeds each input graph to a point in the latent space; this is called graph embedding. The generative model then generates a graph from a sampled point in the latent space. Different from previous work, we use the proposed equivariant and invariant layers in the inference model for all tasks. The inference model is used to learn vertex/graph embeddings, and the generative model is used to learn the generative distributions. Experiments on benchmark datasets have been performed for a range of tasks, including link prediction, node classification, and molecule generation. Experimental results show that the high expressive power of the inference model directly improves latent space embedding, and hence the generated samples.
- Title
- X-Ray Diffraction Studies of Activation and Relaxation In Fast and Slow Rat Skeletal Muscle
- Creator
- Gong, Henry M.
- Date
- 2022
- Description
- The contractile properties of fast-twitch and slow-twitch skeletal muscles are primarily determined by the myosin isoform content and modulated by a variety of sarcomere proteins. X-ray diffraction studies of regulatory mechanisms in muscle contraction have focused predominantly on fast- or mixed-fiber muscle, with slow muscle being much less studied. Here, we used time-resolved x-ray diffraction to investigate the dynamic behavior of the myofilament proteins in relatively pure slow-fiber rat soleus (SOL) and pure fast-fiber rat extensor digitorum longus (EDL) muscle during twitch and tetanic contractions at optimal length (Lo), 95% Lo, and 90% Lo. Before the delivery of stimulation, reduction in muscle length led to a decrease in passive tension. Upon reduction in length, the x-ray reflections showed no transition of the myosin heads from the ordered OFF state, in which heads are held close to the thick filament backbone, to disordered ON states, in which heads are free to bind to the thin filament, in either muscle. When stimulation was delivered to both muscles for twitch contractions at Lo, x-ray signatures indicating the transition of myosin heads to ON states were observed in EDL but not in soleus muscle. During tetanic contractions, the change in the disposition of myosin heads as active tension develops is a cooperative process in EDL muscle, whereas in soleus muscle this relationship is less cooperative. Moreover, this high cooperativity was maintained in EDL at all lengths tested here, but cooperativity decreased upon reduction in length in soleus. The observed reduced extensibility of the thick filaments in soleus muscle as compared to EDL muscle indicates a molecular basis for this behavior. These data indicate that for the EDL, thick filament activation is a cooperative strain-induced mechano-sensing mechanism, whereas for the soleus, thick filament activation has a more graded response.
Lastly, x-ray data collected at different lengths demonstrated that the effect of length on soleus is more pronounced than on EDL, particularly in the thick filament during the relaxation phase after stimulation ceased. These observations indicate that soleus is more length-dependent than EDL. These different approaches to thick filament regulation in fast- and slow-twitch muscles may be designed to allow for short-duration, strong contractions versus sustained, finely controlled contractions, respectively.
- Title
- Pressure Feedback Control on a UCAS Model in Random Gusts
- Creator
- He, Xiaowei
- Date
- 2021
- Description
-
This research focuses on efficient active flow control (AFC) of the aerodynamic loads on a generic tailless delta wing in various flow/flight...
This research focuses on efficient active flow control (AFC) of the aerodynamic loads on a generic tailless delta wing in various flow/flight conditions, such as flying through atmospheric gusts, fast pitching, and other rapid maneuvers that would cause the aircraft to experience unsteady aerodynamic effects. A feedback control scheme that uses surface pressure measurements to estimate the actual aerodynamic loads acting on the aircraft is put forward, with the hypothesis that a pressure surrogate can replace inertia-based sensors to provide the controller with faster and/or more accurate feedback signals of the real-time aerodynamic load. The control performance of the AFC actuation and of conventional elevons was evaluated. Results showed that AFC with a momentum coefficient input of 2% was equivalent to a 27-deg elevon deflection in terms of roll moment change, and that the control derivative of the AFC is at least double that of the elevons. Streamwise and cross-flow gusts were simulated in the Andrew Fejer Unsteady Wind Tunnel at IIT. A spectral feedback approach was tested by generating the horizontal velocity components of the von Karman and Dryden turbulence spectra. The velocity components in the test section were controlled temporally and spatially to generate transverse cross-flow gusts with designated wavelengths and frequencies. Sparse surface pressure measurements on the aircraft surface were used to develop lower-order models that estimate the instantaneous aerodynamic loads using the Sparse Identification of Nonlinear Dynamics (SINDy) algorithm. The pressure-based models acted as surrogates of the aerodynamic loads, providing feedback signals to the closed-loop controller to alleviate the gust effects on the wing.
The control results showed that the pressure feedback scheme was sufficient to provide feedback signals to the controller and reduce the roll moment fluctuations caused by the dynamic perturbations down to 20%, compared to 30% to 50% in previous studies.
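The pressure-surrogate idea in the abstract above can be illustrated with a minimal sketch: a SINDy-style thresholded least-squares fit selects which pressure taps matter for the roll moment, and the resulting sparse model then serves as the feedback signal for a simple proportional controller. All tap values, coefficients, and the gain below are invented for illustration and are not from the thesis.

```python
# Minimal SINDy-style sketch: fit roll moment as a sparse linear
# combination of two hypothetical pressure taps, prune small
# coefficients, and use the sparse model as a surrogate feedback signal.

def lstsq2(x1, x2, y):
    """Closed-form least squares for y ~ a*x1 + b*x2 (2x2 normal equations)."""
    s11 = sum(v * v for v in x1)
    s22 = sum(v * v for v in x2)
    s12 = sum(u * v for u, v in zip(x1, x2))
    t1 = sum(u * w for u, w in zip(x1, y))
    t2 = sum(v * w for v, w in zip(x2, y))
    det = s11 * s22 - s12 * s12
    return (s22 * t1 - s12 * t2) / det, (s11 * t2 - s12 * t1) / det

p1 = [0.10, 0.25, -0.15, 0.30, -0.05]   # tap 1 (informative, synthetic)
p2 = [0.40, -0.10, 0.20, 0.05, -0.30]   # tap 2 (irrelevant, synthetic)
moment = [1.8 * v for v in p1]          # "measured" roll moment

a, b = lstsq2(p1, p2, moment)
# thresholding step: drop taps with small coefficients, then refit
active = [name for name, coef in (("p1", a), ("p2", b)) if abs(coef) > 0.1]
a = sum(u * w for u, w in zip(p1, moment)) / sum(v * v for v in p1)

def control_command(pressure_tap1, gain=0.5):
    """Proportional controller acting on the pressure-based load estimate."""
    return -gain * a * pressure_tap1
```

In this toy case the fit recovers the coefficient 1.8 on tap 1 exactly and prunes tap 2; the real method fits a library of nonlinear candidate terms over many taps.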
- Title
- AI IN MEDICINE: ENABLING INTELLIGENT IMAGING, PROGNOSIS, AND MINIMALLY INVASIVE SURGERY
- Creator
- Getty, Neil
- Date
- 2022
- Description
-
While an extremely rich research field, AI in medicine has been much slower to reach real-world clinical settings than other applications of AI such as natural language processing (NLP) and image processing/generation. Often the stakes of failure are more dire, access to private and proprietary data is more costly, and the burden of proof required by expert clinicians is much higher. Beyond these barriers, the typical data-driven approach to validation is interrupted by a need for expertise to analyze results. Whereas the results of a trained ImageNet or machine translation model are easily verified by a computational researcher, analysis in medicine can be far more multi-disciplinary and demanding. AI in medicine is motivated by a great demand for progress in health care, but an even greater responsibility for high accuracy, model transparency, and expert validation. This thesis develops machine and deep learning techniques for medical image enhancement, patient outcome prognosis, and minimally invasive robotic surgery awareness and augmentation. Each of the works presented was undertaken in direct collaboration with medical domain experts, and the efforts could not have been completed without them. Pursuing medical image enhancement, we worked with radiologists, neuroscientists, and a neurosurgeon. In patient outcome prognosis, we worked with clinical neuropsychologists and a cardiovascular surgeon. For robotic surgery, we worked with surgical residents and a surgeon expert in minimally invasive surgery. Each of these collaborations guided priorities for problem and model design, analysis, and long-term objectives that ground this thesis as a concerted effort towards clinically actionable medical AI. The contributions of this thesis focus on three specific medical domains.
(1) Deep learning for medical brain scans: we developed processing pipelines and deep learning models for image annotation, registration, segmentation, and diagnosis in both traumatic brain injury (TBI) and brain tumor cohorts. A major focus of these works is the efficacy of low-data methods and techniques for validating results without any ground-truth annotations. (2) Outcome prognosis for TBI and risk prediction for cardiovascular disease (CVD): we developed feature extraction pipelines and models for TBI and CVD patient clinical outcome prognosis and risk assessment. We design risk prediction models for CVD patients using traditional Cox modeling, machine learning, and deep learning techniques. In these works we conduct exhaustive data and model ablation studies, with a focus on feature saliency analysis, model transparency, and usage of multi-modal data. (3) AI for enhanced and automated robotic surgery: we developed computer vision and deep learning techniques for understanding and augmenting minimally invasive robotic surgery scenes. We developed models to recognize surgical actions from vision and kinematic data. Beyond models and techniques, we also curated novel datasets and prediction benchmarks from simulated and real endoscopic surgeries. We show the potential for self-supervised techniques in surgery, as well as multi-input and multi-task models.
- Title
- Constellation and Detection Design for Non-orthogonal Multiple Access System
- Creator
- Hao, Xing
- Date
- 2022
- Description
-
It is well known that Non-Orthogonal Multiple Access (NOMA) systems can achieve higher spectral efficiency and massive connectivity. In this thesis, optimized designs for both code-domain and power-domain NOMA systems are studied. The main contributions are as follows. Firstly, we investigate a NOMA system based on combinatorial design with a novel constellation design that eliminates the surjective mapping caused by linearly adding multiuser data and lowers the complexity of constellation design and Multiuser Detection (MUD). To further enlarge connectivity, we propose a low-density code structure that builds a trade-off between diversity and the number of users multiplexed per resource by expurgating excessive interference in the coding matrices. Our scheme can therefore not only provide a one-to-one mapping pattern with a sparser multiple access structure but also be adjusted with more flexibility to achieve diversity and serve a large number of users. Secondly, we propose a constellation mapping scheme based on sub-optimized signal constellation designs, shaping the receiver's constellation so that users are differentiated by the resolvable points they receive, allowing simpler detection and design. Thirdly, a novel uplink NOMA system with time-delayed symbols is investigated, in which a modified Successive Interference Cancellation (SIC) scheme is used at the receiver side. In conventional SIC, when the transmission power allocated to one user differs only trivially from that of the other users, the Bit Error Rate (BER) performance degrades significantly. Thus, we evaluated a modified SIC that adds artificial time offsets between users to conventional power-domain NOMA (PD-NOMA), which provides higher degrees of freedom for the power allocation of users and reduces mutual interference.
The added time offsets also provide additional resources for detecting the superimposed signals, and combining users' estimates over all time slots yields further detection improvements. Numerical results demonstrate that the BER performance of our modified SIC outperforms PD-NOMA with other SIC-based schemes. Finally, we propose a new modulation scheme based on polynomial phase signals (PPS) for downlink and uplink NOMA transceivers in both the code and power domains. The PPS leads to outstanding spectral efficiency and BER performance. We also propose a design criterion for CD-NOMA systems that enables the deployment of a large number of users with more flexibility as well as lower design and detection complexity than traditional CD-NOMA systems such as SCMA and PDMA.
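The successive interference cancellation step described for PD-NOMA above can be sketched in a few lines: the receiver decodes the higher-power user first while treating the other as interference, subtracts its reconstructed contribution, and then decodes the weaker user. The power split and BPSK signaling below are illustrative assumptions, not the thesis's actual parameters, and the channel is noiseless for clarity.

```python
import math

def sic_decode(y, p_strong, p_weak):
    """Two-user power-domain SIC with BPSK symbols (+1/-1).

    Decode the high-power user treating the other as interference,
    cancel its reconstructed signal, then decode the low-power user.
    """
    s_strong = 1.0 if y >= 0 else -1.0
    residual = y - math.sqrt(p_strong) * s_strong   # interference cancellation
    s_weak = 1.0 if residual >= 0 else -1.0
    return s_strong, s_weak

p1, p2 = 0.8, 0.2   # power allocation (p1 > p2); illustrative values
for b1 in (-1.0, 1.0):
    for b2 in (-1.0, 1.0):
        y = math.sqrt(p1) * b1 + math.sqrt(p2) * b2   # noiseless superposition
        assert sic_decode(y, p1, p2) == (b1, b2)      # both users recovered
```

When the two powers are nearly equal, the first decision becomes unreliable, which is the degradation the modified time-offset SIC in the abstract is designed to relieve.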
- Title
- Fault Detection and Localization in Flying Capacitor Multilevel Converters
- Creator
- Hekmati, Parham
- Date
- 2021
- Description
-
This dissertation addresses fault detection, fault localization, and recovery in different topologies of flying capacitor multilevel converters to guarantee safe post-fault operation of the system and maintain the load supply. The contributions of this dissertation include techniques for device open-circuit fault (OCF) detection in stacked multicell converters (SMCs); a window detector circuit to track the output terminal voltage levels and current directions; a fast and straightforward active power device OCF detection and localization technique for the family of flying capacitor multilevel converters (FCMCs); a model-based OCF detection and localization technique for the Buck-FCMC; a new estimator for tracking the voltages of the flying capacitors; and fault detection and localization for interleaved converters. Each of these contributions is summarized below. The first contribution proposes a fast and straightforward technique for power device OCF detection in SMCs. The fault detection concept only needs to sense the converter's output terminal voltage and current. The sensed output terminal voltage is compared to a predicted one to detect and localize the OCF. A front-end routing circuit is then added to the SMC to maintain the operation of the converter post-fault. The second contribution proposes a window detector circuit to track the output terminal voltage levels and current directions. The window detector circuit detects the output terminal voltage level and current direction instead of requiring high sample rates and interrupt loops in the controller. The third contribution proposes a fast and straightforward active power device OCF detection and localization technique for the family of FCMCs, including DC-DC FCMCs, single- or multi-phase H-bridge FCMCs, and cascaded H-bridge multilevel converters.
This technique only needs to sense the voltage and the direction of current at the output terminals of the converters to detect and localize the fault. The method compares the measured and the expected terminal voltage while considering the commanded switch states and the terminal current direction. As switches transition to different states, healthy switches are excluded from the set of possible faulty switches until only one faulty switch remains. Coordination of the asynchronous operation of the FPGA, DSP, and sensors is addressed for practical implementation. The fourth contribution is a model-based OCF detection and localization technique for the Buck-FCMC using model predictive control. In this technique, state-space equations of the system are developed, and a comparison of the measured output inductor current with the one predicted from the state-space model is used for OCF detection and localization. This technique can potentially be used for other converters of the FCMC family. The fifth contribution is a new estimator for tracking the voltages of the flying capacitors as the internal states of the FCMC. Using the proposed flying capacitor voltage estimator reduces the number of required sensors compared to conventional model-based methods, while the overall technique's robustness to dynamic changes, including startup and load changes, is maintained. The last contribution is open- and short-circuit switch fault detection and localization for interleaved converters using harmonic analysis of the output terminal parameters. With this method, monitoring the electrical parameters of each leg of the interleaved converters is no longer required for fault detection and localization purposes.
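The elimination logic in the third contribution, where healthy switches are excluded from the candidate set as switch states change until one faulty switch remains, can be abstracted into a small sketch. The switch names and observations below are hypothetical; the real method derives each mismatch flag from the measured terminal voltage, the commanded switch states, and the terminal current direction.

```python
def localize_ocf(all_switches, observations):
    """Narrow down the faulty device from (conducting_set, mismatch) pairs.

    mismatch=True means the measured terminal voltage differed from the
    expected one for that commanded state; an open-circuit fault can only
    manifest while the faulty device is commanded to conduct.
    """
    candidates = set(all_switches)
    for conducting, mismatch in observations:
        if mismatch:
            candidates &= set(conducting)   # fault is among conducting devices
        else:
            candidates -= set(conducting)   # these devices behaved correctly
    return candidates

obs = [
    ({"S1", "S2"}, False),   # expected voltage seen: S1, S2 are healthy
    ({"S3", "S4"}, True),    # mismatch: fault is S3 or S4
    ({"S1", "S4"}, False),   # expected voltage seen: S4 is healthy too
]
faulty = localize_ocf({"S1", "S2", "S3", "S4"}, obs)   # -> {'S3'}
```

As the abstract notes, successive switching states shrink the candidate set monotonically, so localization completes without extra sensors.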
- Title
- Corporate Insider Holdings and Analyst Recommendations
- Creator
- Gogolak, William Peter
- Date
- 2022
- Description
-
I pursued two competing theories about insider stock holding levels and analyst recommendations. The complementary hypothesis states that top management and analysts act in a comparable manner; the contradictory hypothesis states that insiders and analysts exhibit opposite market actions (Hsieh and Ng, 2019). I examined insider stock holding levels and analyst recommendations in a sample of S&P 500 firms from 2011-2020. In this sample, I found that the relationship between insider holding levels and analyst recommendations is opposite in concurrent time periods, thus supporting the contradictory hypothesis. I also analyzed lagged insider holding levels in a Granger causality test. This test supports the idea that top management stock holdings increase when analysts downgrade stocks, and that the opposite effect is true when analysts upgrade stocks. Using the same sample of S&P 500 firms from 2011-2020, I provided support for my hypothesis that aggregated analyst recommendations forecast future aggregate equity returns. Furthermore, I conducted a test supporting my conclusion that changes to insider holding levels should be used to forecast changes in future equity returns beyond what is already explained by analyst recommendations. I make two compelling additions to the existing body of work on aggregate stock prediction. First, I build upon existing papers by using Bloomberg aggregate analyst recommendations as opposed to the IBES datasets. Second, I expand upon recent index forecasting papers by incorporating both aggregate analyst recommendations and aggregate insider holding levels into aggregate stock return models.
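The lagged-holdings analysis above is a Granger-style causality check: if adding lagged insider holdings to an autoregression of returns reduces the residual error, holdings help forecast returns beyond returns' own history. A toy version with synthetic data follows (closed-form OLS, one lag, no intercept; all series and numbers are invented, not the dissertation's data):

```python
def rss1(x, y):
    """Residual sum of squares for y ~ a*x (one regressor, through the origin)."""
    a = sum(u * w for u, w in zip(x, y)) / sum(u * u for u in x)
    return sum((w - a * u) ** 2 for u, w in zip(x, y))

def rss2(x1, x2, y):
    """Residual sum of squares for y ~ a*x1 + b*x2 (2x2 normal equations)."""
    s11 = sum(u * u for u in x1); s22 = sum(v * v for v in x2)
    s12 = sum(u * v for u, v in zip(x1, x2))
    t1 = sum(u * w for u, w in zip(x1, y)); t2 = sum(v * w for v, w in zip(x2, y))
    det = s11 * s22 - s12 * s12
    a = (s22 * t1 - s12 * t2) / det; b = (s11 * t2 - s12 * t1) / det
    return sum((w - a * u - b * v) ** 2 for u, v, w in zip(x1, x2, y))

holdings = [1.0, 2.0, -1.0, 3.0, 0.0, -2.0, 1.0, 4.0]   # synthetic series
returns = [0.0] + [0.5 * h for h in holdings[:-1]]       # driven by lagged holdings

y = returns[1:]        # r_t
r_lag = returns[:-1]   # r_{t-1}
h_lag = holdings[:-1]  # h_{t-1}

restricted = rss1(r_lag, y)            # returns explained by their own lag only
unrestricted = rss2(r_lag, h_lag, y)   # ... plus lagged holdings
# unrestricted < restricted: lagged holdings improve the forecast,
# the informal sense in which holdings "Granger-cause" returns here
```

A real test would add an intercept, more lags, and an F-statistic with a significance threshold rather than a raw RSS comparison.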
- Title
- Deep Learning and Model Predictive Methods for the Control of Fuel-Flexible Compression Ignition Engines
- Creator
- Peng, Qian
- Date
- 2022
- Description
-
Compression-ignited diesel engines are widely used for transportation and power generation because of their high fuel efficiency. However, diesel engines cause concerning environmental pollution because of their high nitrogen oxide (NOx) and soot emissions. In addition to meeting stringent emission regulations, the demand to reduce greenhouse gas emissions has become urgent due to the more frequent destructive catastrophes caused by global warming in recent decades. In an effort to reduce emissions and improve fuel economy, many techniques have been developed and investigated by researchers. Air handling systems such as exhaust gas recirculation and variable geometry turbochargers are the most widely used techniques on the market for modern diesel engines. Meanwhile, the concept of low temperature combustion is widely investigated by researchers. Low temperature combustion can increase the portion of pre-mixed fuel-air combustion to reduce the peak in-cylinder temperature, so that the formation of NOx is suppressed. Furthermore, the combustion characteristics and performance of bio-derived fuel blends are also studied to reduce overall greenhouse gas emissions through reduced usage of fossil fuels. All of the above-mentioned systems are complicated because they involve not only chemical reactions but also complex fluid motion and mixing processes. As such, the control of these systems is always challenging and limits their commercial application. Currently, most control methods are feed-forward control based on load condition and engine speed, due to their simplicity in real-time application. With the development of faster control units and deep learning techniques, the application of more complex control algorithms becomes possible to further improve emissions and fuel economy.
This work focuses on improvements to the control of engine air handling systems and of combustion processes that leverage alternative fuels. Complex air handling systems, featuring technologies such as exhaust gas recirculation (EGR) and variable geometry turbochargers (VGTs), are commonly used in modern diesel engines to meet stringent emissions and fuel economy requirements. The control of diesel air handling systems with EGR and VGTs is challenging because of their nonlinearity and coupled dynamics. In this thesis, artificial neural networks (ANNs) and recurrent neural networks (RNNs) are applied to control the low pressure (LP) EGR valve position and the VGT vane position simultaneously on a light-duty multi-cylinder diesel engine. In addition, experimental examination of a low temperature combustion mode based on gasoline compression ignition, as well as its control, has also been studied in this work. This type of combustion has been explored on traditional diesel engines in order to meet increasingly stringent emission regulations without sacrificing efficiency. In this study, a six-cylinder heavy-duty diesel engine was operated in a mixing-controlled gasoline compression ignition mode to investigate the influence of fuels and injection strategies on the combustion characteristics, emissions, and thermal efficiencies. Fuels including ethanol (E), isobutanol (IB), and diisobutylene (DIB) were blended with a gasoline fuel to form E10, E30, IB30, and DIB30 based on volumetric fraction. These four blends along with gasoline formed the five test fuels. With these fuels, three injection strategies were investigated: late pilot injection, early pilot injection, and port fuel injection/direct injection. The impact of moderate exhaust gas recirculation on nitrogen oxide and soot emissions was examined to determine the most promising fuel/injection strategy for emissions reduction.
In addition, first- and second-law analyses were performed to provide insights into the efficiency, loss, and exergy destruction of the various gasoline fuel blends at low and medium load conditions. Overall, the emission output, thermal efficiency, and combustion performance of the five fuels were found to be similar, and their differences are modest under most test conditions. While experimental work showed that low temperature combustion with alternative fuels could be effective, control is still challenging due not only to the properties of different gasoline-type fuels but also to the impacts of injection strategies on the in-cylinder reactivity. As such, a computationally efficient zero-dimensional combustion model can significantly reduce the cost of control development. In this study, a previously developed zero-dimensional combustion model for gasoline compression ignition was extended to multiple gasoline-type fuel blends and a port fuel injection/direct fuel injection strategy. Tests were conducted on a 12.4-liter heavy-duty engine with five fuel blends. A modification was made to the functional ignition delay model to cover the significantly different ignition delay behavior between conventional and oxygenated fuel blends. The parameters in the model were calibrated with only gasoline data at a load of 14 bar brake mean effective pressure. The results showed that this physics-based model can be applied to the other four fuel blends at three different pilot injection strategies without recalibration. To also facilitate the control of emissions, machine learning models were investigated to capture NOx emissions. A kernel-based extreme learning machine (K-ELM) performed best, with a coefficient of determination (R-squared) of 0.998. The combustion and NOx emission models are valid not only for conventional gasoline fuel but also for oxygenated alternative fuel blends at three different pilot injection strategies.
In order to track key combustion metrics while keeping noise and emissions within constraints, model predictive control (MPC) was applied to a compression ignition engine operating with a range of potential fuels and fuel injection strategies. The MPC is validated under different scenarios, including a load step change, a fuel type change, and an injection strategy change, with proportional-integral (PI) control as the baseline. The simulation results show that MPC can optimize overall performance by modifying the main injection timing, pilot fuel mass, and exhaust gas recirculation (EGR) fraction.
- Title
- Integrating Provenance Management and Query Optimization
- Creator
- Niu, Xing
- Date
- 2021
- Description
-
Provenance, information about the origin of data and the queries and/or updates that produced it, is critical for debugging queries and transactions, auditing, establishing trust in data, and many other use cases. While how to model and capture the provenance of database queries has been studied extensively, optimization is also recognized as an important problem in provenance management, which encompasses storing, capturing, and querying provenance. However, previous work has focused almost exclusively on compressing provenance to reduce storage cost; there is a lack of work on optimizing the provenance capture process. Many approaches for capturing database provenance use SQL and represent provenance information as a standard relation. However, even sophisticated query optimizers often fail to produce efficient execution plans for such queries because of their complexity and uncommon structure. To address this problem, we study algebraic equivalences and alternative ways of generating queries for provenance capture. Furthermore, we present an extensible heuristic and cost-based optimization framework utilizing these optimizations. While provenance has been well studied, no database optimizer exploits provenance information to optimize query processing. Intuitively, provenance records exactly what data is relevant for a query. We can use this feature of provenance to identify and filter out irrelevant input data of a query early on, so that query processing is sped up: instead of fully accessing the input dataset, we only run the query over the relevant input data. In this work, we develop provenance-based data skipping (PBDS), a novel approach that generates provenance sketches, which are concise encodings of what data is relevant for a query.
A provenance sketch captured for one query is then used to speed up subsequent queries, possibly by utilizing physical design artifacts such as indexes and zone maps. The work we present in this thesis demonstrates that a tight integration between provenance management and query optimization can lead to significant performance improvements in query processing as well as in traditional database management tasks.
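The provenance-sketch idea above can be illustrated with a toy sketch over horizontal fragments, in the spirit of zone maps: a first run of a selective query records which fragments contributed results, and a later run of the same (or a compatible) query skips the rest. The fragment layout, predicate, and data below are invented for illustration, not PBDS's actual encoding.

```python
def run_query(table, pred, fragments, sketch=None):
    """Scan fragments (row ranges), optionally skipping those outside a sketch.

    Returns (result_rows, touched_fragment_ids); the touched set is a
    toy 'provenance sketch' of which fragments held relevant data.
    """
    results, touched = [], set()
    for frag_id, (lo, hi) in enumerate(fragments):
        if sketch is not None and frag_id not in sketch:
            continue                      # data skipping driven by the sketch
        for row in table[lo:hi]:
            if pred(row):
                results.append(row)
                touched.add(frag_id)
    return results, touched

table = list(range(100))
fragments = [(0, 25), (25, 50), (50, 75), (75, 100)]
pred = lambda r: r in (7, 57)             # a highly selective predicate

first, sketch = run_query(table, pred, fragments)       # full scan, builds sketch
second, _ = run_query(table, pred, fragments, sketch)   # scans only 2 of 4 fragments
```

Both runs return the same rows, but the second touches only the fragments named in the sketch, which is the performance effect the thesis amplifies with indexes and zone maps.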
- Title
- Extreme Fine-grained Parallelism On Modern Many-Core Architectures
- Creator
- Nookala, Poornima
- Date
- 2022
- Description
-
Processors with hundreds of threads of execution and GPUs with thousands of cores are among the state of the art in high-end computing systems. This transition to many-core computing has required the community to develop new algorithms that overcome significant latency bottlenecks through massive concurrency. Implementing efficient parallel runtimes that can scale up to hundreds of threads with extremely fine-grained tasks (less than 100 microseconds) remains a challenge. We propose XQueue, a novel lockless concurrent queueing system that can scale up to hundreds of threads. We integrate XQueue into LLVM OpenMP and implement X-OpenMP, a library for lightweight tasking on modern many-core systems with hundreds of cores. We show that it is possible to implement a parallel execution model using lockless techniques that enables applications to scale strongly on many-core architectures. While the fork-join model is suitable for on-node parallelism, the use of joins and synchronization induces artificial dependencies which can lead to underutilization of resources. Data-flow-based parallelism is crucial to overcome the limitations of fork-join parallelism by specifying dependencies at a finer granularity. It is also crucial for parallel runtime systems to support heterogeneous platforms to better utilize the hardware resources available in modern-day supercomputers. Existing parallel programming environments that support distributed memory either discover the DAG entirely on all processes, which limits scalability, or introduce explicit communication, which increases the complexity of programming. We implement Template Task Graph (TTG), a novel programming model and its C++ implementation, by marrying the ideas of control-flow and data-flow graph programming.
TTG can address performance portability without sacrificing scalability or programmability by providing higher-level abstractions than those conventionally provided by task-centric programming systems, without impeding the ability of these runtimes to manage task creation and execution, as well as data and resource management, efficiently. The TTG implementation currently supports distributed-memory execution over two different task runtimes, PaRSEC and MADNESS.
- Title
- Towards a Secure and Resilient Smart Grid Cyberinfrastructure Using Software-Defined Networking
- Creator
- Qu, Yanfeng
- Date
- 2022
- Description
-
To enhance the cyber-resilience and security of the smart grid against malicious attacks and system errors, we present a software-defined networking (SDN)-based communication architecture design for smart grid operation. Our design utilizes SDN technology, which improves network manageability and provides application-oriented visibility and direct programmability, to deploy multiple SDN-aware applications that enhance grid security and resilience, including optimization-based network management to recover Phasor Measurement Unit (PMU) network connectivity and restore power system observability, and flow-based anomaly detection combined with optimization-based network management to mitigate Manipulation of demand of IoT (MadIoT) attacks. We also developed a prototype system in a cyber-physical testbed and conducted extensive evaluation experiments using the IEEE 30-bus system, the IEEE 118-bus system, and the IIT campus microgrid.