Search results
(1 - 11 of 11)
- Title
- DAMAGE ASSESSMENT OF CIVIL STRUCTURES AFTER NATURAL DISASTERS USING DEEP LEARNING AND SATELLITE IMAGERY
- Creator
- Jones, Scott F
- Date
- 2019
- Description
-
Since 1980, millions of people have been harmed by natural disasters that have cost communities across the world over three trillion dollars. After a natural disaster has occurred, the creation of maps that identify the damage to buildings and infrastructure is imperative. Currently, many organizations perform this task manually, using pre- and post-disaster images and well-trained professionals to determine the degree and extent of damage. This manual task can take days to complete. I propose to perform this task automatically using post-disaster satellite imagery. I used a pre-trained neural network, SegNet, and replaced its last layer with a simple damage classification scheme. This final layer of the network was re-trained using cropped segments of the satellite image of the disaster. The data were obtained from a publicly accessible source, the Copernicus EMS system, which provided three-channel (RGB) reference and damage grading maps that were used to create the images of the ground truth and the damaged terrain. I then retrained the final layer of the network to identify civil structures that had been damaged. The resulting network was 85% accurate at labelling the pixels in an image of the disaster from Typhoon Haiyan. The test results show that it is possible to create these maps quickly and efficiently.
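The 85% figure above is per-pixel labelling accuracy; a minimal sketch of that metric (the array shapes and class codes below are illustrative, not from the thesis):

```python
import numpy as np

def pixel_accuracy(pred, truth):
    """Fraction of pixels whose predicted class matches the ground truth."""
    pred, truth = np.asarray(pred), np.asarray(truth)
    return float(np.mean(pred == truth))

# toy 2x2 damage maps: 0 = undamaged, 1 = damaged (hypothetical labels)
truth = np.array([[0, 1], [1, 1]])
pred  = np.array([[0, 1], [0, 1]])
print(pixel_accuracy(pred, truth))  # 0.75 (3 of 4 pixels agree)
```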
- Title
- Fast mesh based reconstruction for cardiac-gated SPECT and methodology for medical image quality assessment
- Creator
- Massanes Basi, Francesc
- Date
- 2018
- Description
-
In this work, we study two different subjects that are intricately connected. For the first, we consider tools to improve the quality of single photon emission computed tomography (SPECT) imaging. Currently, SPECT images help physicians evaluate perfusion levels within the myocardium, aid in the diagnosis of various types of carcinomas, and measure pulmonary function. The SPECT technique relies on injecting a radioactive material into the patient's body and then detecting the emitted radiation by means of a gamma camera. However, the amount of radioactive material that can be given to a patient is limited by the negative effects that the radiation will have on the patient's health. This causes SPECT images to be highly corrupted by noise. We focus our work on cardiac SPECT, which adds the challenge of the heart's continuous motion during the acquisition process. First, we describe the methodology used in SPECT imaging and reconstruction. Our methodology uses a content-adaptive model, which places more samples in the regions of the body that we want reconstructed more accurately and fewer elsewhere. We then describe our algorithm and our novel implementation that lets us use the content-adaptive model to perform the reconstruction. We show that our implementation outperforms the reconstruction method used in clinical applications. For the second subject, we evaluate tools to measure image quality in the context of medical diagnosis. In signal processing, accuracy is typically measured as the amount of similarity between an original signal and its reconstruction. This similarity is traditionally a numeric metric that does not take into account the intended purpose of the reconstructed images. In the field of medical imaging, a reconstructed image is meant to help a physician perform a diagnostic task. Therefore, the quality of the reconstruction should be measured by how much it helps to perform that task. A model observer is a computer tool that aims to mimic the performance of a human observer, usually a radiologist, at a relevant diagnostic task. In this work we present a linear model observer designed to automatically select the features needed to model a human observer's response. This distinguishes it from the model observers currently used in the medical imaging field, which usually rely on ad hoc chosen features. Our model observer depends only on the resolution of the image, not on the type of imaging technique used to acquire it.
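A linear model observer computes a scalar decision statistic as an inner product between a template and the image. The thesis's automatic feature selection is not specified in the abstract, so the sketch below uses the classic regularized Hotelling template as a generic stand-in; all data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

# toy ensemble: 200 signal-absent and 200 signal-present "images" of 16 pixels
n, d = 200, 16
signal = np.zeros(d)
signal[5:8] = 1.0                                   # hypothetical lesion profile
absent  = rng.normal(0.0, 1.0, (n, d))
present = rng.normal(0.0, 1.0, (n, d)) + signal

# Hotelling template: pooled covariance inverse applied to the mean difference
cov = 0.5 * (np.cov(absent.T) + np.cov(present.T))
w = np.linalg.solve(cov + 1e-6 * np.eye(d),
                    present.mean(axis=0) - absent.mean(axis=0))

# scalar decision statistic per image; larger values favour "signal present"
t_absent, t_present = absent @ w, present @ w
print(t_present.mean() > t_absent.mean())  # True
```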
- Title
- A Complete Machine Learning Approach for Predicting Lithium-Ion Cell Combustion
- Creator
- Almagro Yravedra, Fernando
- Date
- 2020
- Description
-
The objective of this thesis is to develop a functional predictive model able to predict the combustion of a US18650 Sony lithium-ion cell given its current and previous states. To build the model, a realistic electro-thermal model of the cell under study is developed in Matlab Simulink and used to recreate the cell's behavior under a set of real operating conditions. The data generated by the electro-thermal model are used to train a recurrent neural network, which returns the chance of future combustion of the cell. Independently obtained data are used to test and validate the resulting recurrent neural network using advanced metrics.
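The recurrent network architecture is not specified in the abstract; as a sketch of the general idea, a minimal Elman-style recurrent pass over a sequence of cell states ending in a sigmoid combustion probability (the weights and the choice of state variables are illustrative only):

```python
import numpy as np

def rnn_combustion_prob(states, Wx, Wh, wo, b, bo):
    """Minimal recurrent pass over a sequence of cell states
    (here: [current in A, temperature in C] per time step), ending
    in a sigmoid probability of future combustion."""
    h = np.zeros(Wh.shape[0])
    for x in states:
        h = np.tanh(Wx @ x + Wh @ h + b)   # hidden state update
    z = wo @ h + bo                        # scalar read-out
    return 1.0 / (1.0 + np.exp(-z))        # probability in (0, 1)

rng = np.random.default_rng(1)
Wx, Wh = rng.normal(size=(4, 2)), rng.normal(size=(4, 4))
wo, b, bo = rng.normal(size=4), np.zeros(4), 0.0
seq = [np.array([1.2, 25.0]), np.array([1.5, 30.0])]  # toy state sequence
p = rnn_combustion_prob(seq, Wx, Wh, wo, b, bo)
print(0.0 < p < 1.0)  # True
```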
- Title
- IMPACT OF DATA SHAPE, FIDELITY, AND INTER-OBSERVER REPRODUCIBILITY ON CARDIAC MAGNETIC RESONANCE IMAGE PIPELINES
- Creator
- Obioma, Blessing Ngozi
- Date
- 2020
- Description
-
Artificial Intelligence (AI) holds great promise in healthcare. It provides a variety of advantages in clinical diagnosis, disease prediction, and treatment, with interest intensifying in the medical imaging field. AI can automate various cumbersome data processing tasks in medical imaging, such as segmentation of left ventricular chambers and image-based classification of diseases. However, full clinical implementation and adoption of emerging AI-based tools face challenges due to the inherently opaque nature of AI algorithms based on Deep Neural Networks (DNN), in which computer-trained bias is not only difficult for physician users to detect but also difficult to design against safely in software development. In this work, we examine an AI application in Cardiac Magnetic Resonance (CMR) using an automated image classification task, and propose an AI quality control framework that differentially evaluates the black-box DNN via carefully prepared input data with shape and fidelity variations, probing system responses to these variations. Two variants of the Visual Geometry Group network with 19 layers (VGG19) were used for classification, with a total of 60,000 CMR images. Findings from this work provide insight into the importance of quality training data preparation and demonstrate the importance of data shape variability. They also provide a gateway for optimizing computational performance in training and validation time.
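The framework evaluates a black-box classifier by probing it with controlled input variations. A hedged sketch of one such fidelity probe, using a trivial stand-in classifier rather than VGG19 (the stub, noise levels, and image are invented for illustration):

```python
import numpy as np

def probe_fidelity(classify, image, noise_levels, rng):
    """Probe a black-box classifier by degrading input fidelity with
    additive noise and recording whether its label stays stable.
    `classify` is any function image -> label."""
    base = classify(image)
    return [classify(image + rng.normal(0.0, s, image.shape)) == base
            for s in noise_levels]

# stub "classifier": hypothetical, labels an image by its mean intensity
classify = lambda img: int(img.mean() > 0.5)
rng = np.random.default_rng(0)
image = np.full((8, 8), 0.9)                 # toy bright image
stable = probe_fidelity(classify, image, [0.01, 0.1, 2.0], rng)
print(stable)  # label stays stable at low noise, may flip at high noise
```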
- Title
- REVEALING LINGUISTIC BIAS
- Creator
- Karmarkar, Sathyaveer S.
- Date
- 2021
- Description
-
Readers currently face bias in articles by writers who favor partiality toward a person or organization over presenting the real facts. This study aims to detect and reveal such bias and to portray the facts without partiality toward any person or organization. The data were gathered by selecting various articles from Google, especially those containing some bias. Bias was checked by measuring the subjectivity and polarity of each article using libraries such as NLTK. We created a Google Form to collect readers' views, randomly showing each reader either the biased article or the improved article with the bias removed, and gathering their opinions.
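Subjectivity and polarity scoring of the kind described can be illustrated with a toy lexicon-based scorer. The lexicon and scoring rule below are invented for illustration; the study itself used libraries such as NLTK:

```python
# toy opinion lexicon: word -> polarity in [-1, 1] (illustrative only)
POLARITY = {"great": 1.0, "excellent": 1.0, "terrible": -1.0, "bad": -0.8}
SUBJECTIVE = set(POLARITY) | {"believe", "feel", "amazing", "awful"}

def bias_scores(text):
    """Return (polarity, subjectivity): polarity is the mean lexicon score
    of matched words; subjectivity is the share of opinion-laden words."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    hits = [POLARITY[w] for w in words if w in POLARITY]
    polarity = sum(hits) / len(hits) if hits else 0.0
    subjectivity = sum(w in SUBJECTIVE for w in words) / max(len(words), 1)
    return polarity, subjectivity

pol, subj = bias_scores("The senator gave a terrible, bad speech")
print(pol < 0, 0 < subj < 1)  # negative polarity, partly subjective
```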
- Title
- Towards Assisting Human-Human Conversations
- Creator
- Nanaware, Tejas Suryakant
- Date
- 2021
- Description
-
The idea of this research is to understand open-topic conversations and ways to assist humans who face difficulties initiating conversations, helping them overcome social anxiety so that they can talk and have successful conversations. By providing humans with assistive conversational support, we can augment the conversations they carry out. The AdvisorBot can also reduce the time taken to type and convey a message if it is context-aware and capable of providing good responses. There has been significant research on creating conversational chatbots for open-domain conversations, some of which have been claimed to pass the Turing Test and to converse with humans without seeming like a bot. However, if these chatbots can converse like humans, can they provide actual assistance in human conversations? This research study observes and improves advanced open-domain conversational chatbots put into practice for providing conversational assistance. During this thesis research, the chatbots were deployed to provide conversational assistance, and a human study was performed to identify and improve ways to tackle social anxiety by connecting strangers in conversations aided by the AdvisorBot. Through the questionnaires that the research subjects filled out during their participation, and through linguistic analysis, the quality of the AdvisorBot can be improved so that humans achieve better conversational skills and can clearly convey their message while conversing. The results were further enhanced by using transfer learning techniques to quickly improve the quality of the AdvisorBot.
- Title
- A Hybrid Data-Driven Simulation Framework For Integrated Energy-Air Quality (iE-AQ) Modeling at Multiple Urban Scales
- Creator
- Ashayeri, Mehdi
- Date
- 2020
- Description
-
To date, limited work has been done to collectively incorporate two key urban challenges, climate change and air pollution, into the design of sustainable and healthy built environments. The main limitations to doing so include large spatiotemporal gaps in local outdoor air pollution data and the lack of a formal theoretical framework to effectively integrate localized urban air pollution data into sustainable built environment design strategies such as natural ventilation in buildings. This work hypothesizes that emerging advanced computational modeling approaches, including artificial intelligence (AI) and machine learning (ML) techniques, along with big open data set initiatives, can be used to fill some of those gaps. This can be achieved if urban air quality explanatory factors are properly identified and effectively connected to current building performance simulation workflows. Therefore, the primary objective of this dissertation is to develop a hybrid AI-based data-driven simulation framework for integrated Energy-Air Quality (iE-AQ) modeling to quantify the combined energy reduction profiles and health risk implications of sustainable built environment design. This framework (1) incorporates dynamic human-centered factors, including mobility and building occupancy, into the model; (2) interlinks land use regression (LUR), inverse distance weighting (IDW), and building energy simulation (BES) approaches via the R computational platform; and (3) provides a web-based platform and interactive tool for visualizing and communicating the results. A series of novel machine learning approaches are tested within the workflow to improve the efficiency and accuracy of the simulation model. A multi-scale urban air quality model (using PM2.5 concentrations as the end point) and a weather localization model with high spatiotemporal resolution were developed for Chicago, IL using low-cost sensor data. The integrated energy and air quality model was tested on a prototype office building at multiple urban scales in Chicago by applying air pollution-adjusted natural ventilation suitable hours. Results showed that the proposed ML approaches improved model accuracy over traditional simulation and statistical modeling approaches, and that incorporating dynamic building-related factors such as human activity patterns can further improve urban air quality prediction models. The results of the integrated energy and air quality (iE-AQ) analysis highlight that the energy saving potentials of natural ventilation, considering local ambient air pollution and micro-climate data, vary from 5.2% to 17% within Chicago. The proposed framework and tool have the potential to aid architects, engineers, planners, and urban health policymakers in designing sustainable cities and empowering analytical solutions for reducing human health risk.
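Of the interlinked methods, inverse distance weighting (IDW) is compact enough to sketch: a value at an unmonitored point is estimated as a distance-weighted average of nearby sensor readings. A generic implementation (the sensor coordinates and PM2.5 readings below are made up):

```python
import numpy as np

def idw(xy_known, values, xy_query, power=2.0):
    """Inverse distance weighting: estimate a value (e.g. PM2.5) at a
    query point as a distance-weighted average of sensor readings."""
    d = np.linalg.norm(xy_known - xy_query, axis=1)
    if np.any(d == 0):                        # query sits on a sensor
        return float(values[np.argmin(d)])
    w = 1.0 / d ** power
    return float(np.sum(w * values) / np.sum(w))

sensors = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])  # toy locations
pm25 = np.array([10.0, 20.0, 30.0])                        # toy readings
print(idw(sensors, pm25, np.array([0.0, 0.0])))  # 10.0 (exactly on a sensor)
```

The estimate at any interior point always stays within the range of the sensor readings, which is why IDW is a common gap-filler for sparse monitoring networks.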
- Title
- Machine Learning for NDE Imaging Applications
- Creator
- Zhang, Xin
- Date
- 2023
- Description
-
Infrared thermography and ultrasonic imaging of materials are promising non-destructive evaluation (NDE) methods, but their signals are challenging to analyze and characterize due to complex signal patterns and poor signal-to-noise ratios (SNR). Industries such as nuclear energy depend on components produced from high-strength superalloys. These metallic components face challenges to wide deployment because material defects and mechanical conditions need to be evaluated non-destructively to identify potential danger before the components enter service. Low NDE performance and lack of automation, particularly in the complex environments of in-situ NDE and nuclear power plants, present a major challenge to implementing conventional NDE. This study addresses these problems by using machine learning as a signal processing method for infrared thermography and ultrasonic NDE imaging applications. In Pulsed Infrared Thermography (PIT), for quality control of metal additive manufacturing, we proposed an intelligent PIT NDE system and developed innovative unsupervised learning models and thermal tomography 3D imaging algorithms to detect calibrated internal defects (pores) of various sizes and depths in different nuclear-grade metallic structures. Unsupervised learning aims to learn the latent principal patterns (dictionaries) in PIT data to detect defects with minimal human supervision. Difficulties in detecting defects with PIT include thermal imaging noise patterns, uneven heating of the specimen, and micron-scale defects with extremely weak temperature signals. The unsupervised learning methods overcome these barriers, achieving high defect detection accuracies (F-score) of 0.96 for large defects and 0.89 for microscopic defects, and can successfully detect defects with diameters of only 0.101 mm. In addition, we researched and developed innovative unsupervised learning models to compress high-resolution PIT imaging data, achieving an average compression ratio above 30 and a peak of 46 with reconstruction accuracy (peak signal-to-noise ratio, PSNR) above 73 dB while preserving the weak thermal features corresponding to microscopic defects. In ultrasonic NDE imaging, for structural health monitoring of materials, we built a high-performance ultrasonic computational system to inspect the integrity of high-strength metallic materials used in the high-temperature corrosive environments of nuclear reactors. For system automation, we have been developing neural networks with various architectures for grain size estimation by characterizing the ultrasonic backscattered signals with high accuracy and data efficiency. In addition, we introduce a response-based teacher-student knowledge distillation training framework to train neural networks, achieving 99.27% characterization accuracy with a high throughput of 192 images/second in testing. Furthermore, we introduce a reinforcement learning based neural architecture search framework to automatically find optimal neural network designs for ultrasonic flaw detection. Finally, we comprehensively studied the performance of unsupervised learning methods for compressing 3D ultrasonic data, achieving high compression performance using only 4.25% of the acquired experimental data.
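Response-based knowledge distillation, as mentioned above, trains the student to match the teacher's temperature-softened class distribution (Hinton-style). A minimal sketch of that loss with illustrative logits (not the thesis's actual networks or data):

```python
import numpy as np

def kd_loss(student_logits, teacher_logits, T=4.0):
    """Response-based distillation loss: KL divergence between
    temperature-softened teacher and student class distributions,
    scaled by T^2 as is conventional."""
    def softmax(z):
        e = np.exp(z - z.max())               # numerically stable softmax
        return e / e.sum()
    p = softmax(teacher_logits / T)           # softened teacher targets
    q = softmax(student_logits / T)           # softened student outputs
    return float(np.sum(p * np.log(p / q))) * T * T

teacher = np.array([4.0, 1.0, 0.5])
print(kd_loss(teacher, teacher))                          # 0.0 for a perfect match
print(kd_loss(np.array([0.5, 1.0, 4.0]), teacher) > 0.0)  # True for a mismatch
```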
- Title
- Hedge Fund Replication With Deep Neural Networks And Generative Adversarial Networks
- Creator
- Chatterji, Devin Mathew
- Date
- 2022
- Description
-
Hedge fund replication is a means of allowing investors to achieve hedge fund-like returns, which are usually only available to institutions. Hedge funds in total have over $3 trillion in assets under management (AUM). More traditional money managers would like to offer hedge fund-like returns to retail investors by replicating hedge fund performance. There are two primary challenges for existing hedge fund replication methods: difficulty capturing the nonlinear and dynamic exposures of hedge funds with respect to the factors, and difficulty identifying the right factors that reflect those exposures. Previous research has shown that deep neural networks (DNN) outperform other linear and machine learning models in financial applications, owing to the ability of DNNs to model complex relationships between input features, such as non-linearities and interaction effects, without over-fitting. My research investigates DNNs and generative adversarial networks (GAN) to address the challenges of factor-based hedge fund replication. Neither of these methods has previously been applied to the hedge fund replication problem. My research contributes to the literature by showing that the use of DNNs and GANs addresses the existing challenges in hedge fund replication and improves on results in the literature.
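The factor-based replication that the DNN/GAN approach builds on can be illustrated by its classic linear baseline: ordinary least squares of fund returns on factor returns, with the fitted loadings defining the clone portfolio. All data below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)
T, k = 120, 3                                  # 120 months, 3 hypothetical factors
factors = rng.normal(0.0, 0.02, (T, k))        # synthetic monthly factor returns
beta_true = np.array([0.6, -0.3, 0.9])         # "true" exposures to recover
fund = factors @ beta_true + rng.normal(0.0, 0.005, T)  # synthetic fund returns

# classic linear replication: least-squares factor loadings
beta_hat, *_ = np.linalg.lstsq(factors, fund, rcond=None)
clone = factors @ beta_hat                     # replicating portfolio returns
print(np.round(beta_hat, 2))                   # close to the true exposures
```

A static linear fit like this is exactly what struggles with the nonlinear, time-varying exposures described above, which motivates the move to DNNs and GANs.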
- Title
- Machine learning applications to video surveillance camera placement and medical imaging quality assessment
- Creator
- Lorente Gomez, Iris
- Date
- 2022
- Description
-
In this work, we used machine learning techniques and data analysis to approach two applications. The first, in collaboration with the Chicago Police Department (CPD), involves analyzing and quantifying the effect that the installation of cameras had on crime, and developing a predictive model with the goal of optimizing video surveillance camera placement in the streets. While video surveillance has become increasingly prevalent in policing, its intended effect on crime prevention has not been comprehensively studied in major US cities. In this study, we retrospectively analyzed crime activity in the vicinity of 2,021 surveillance cameras installed between 2005 and 2016 in the city of Chicago. Using Difference-in-Differences (DiD) analysis, we examined the daily crime counts that occurred within the fields of view of these cameras over a 12-month period, both before and after the cameras were installed. We also investigated their potential effect on crime displacement and diffusion by examining crime activity in a buffer zone (up to 900 ft) extending from the cameras. The results show that, collectively, there was an 18.6% reduction in crime counts within the direct viewsheds of the study cameras (excluding District 01, where the Loop, Chicago's business center, is located). In addition, we adapted the methodology to quantify the effect of individual cameras. The quantified effect on crime is the prediction target of our two-stage machine learning algorithm, which aims to estimate the effect that installing a video camera in a given location will have on crime. In the first stage, we trained a classifier to predict whether installing a video camera in a given location will result in a statistically significant decrease in crime. If so, the data go through a regression model trained to estimate the quantified effect the camera installation will have. Finally, we propose two strategies, using our two-stage predictive model, to find the optimal locations for camera installations given a budget. Our proposed strategies result in a larger decrease in crime than a baseline strategy based on choosing the locations with the highest crime density. The second application in this thesis belongs to the field of model observers for medical imaging quality assessment. With the advance of medical imaging devices and technology, there is a need to evaluate and validate new image reconstruction algorithms. Image quality is traditionally evaluated using numerical figures of merit that indicate similarity between the reconstruction and the original. In medical imaging, a good reconstruction strategy should be one that helps the radiologist perform a correct diagnosis. For this reason, medical imaging reconstruction strategies should be evaluated with a task-based approach by measuring human diagnostic accuracy. Model observers (MO) are algorithms capable of acting as human surrogates to evaluate reconstruction strategies, significantly reducing the time and cost of organizing sessions with expert radiologists. In this work, we develop a methodology to estimate a deep learning based model observer for a defect localization task, using a synthetic dataset that simulates images with statistical properties similar to trans-axial sections of X-ray computed tomography (CT). In addition, we explore how the models access diagnostic information from the images, using psychophysical methods previously employed to analyze how humans extract that information. Our models are independently trained for five different humans and are able to generalize to images with noise statistics not seen during the model training stage. In addition, our results indicate that the diagnostic information extracted by the models matches that extracted by the humans.
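The two-stage predictor described above can be sketched as a classifier gating a regressor. Both stages below are hypothetical rule stubs standing in for the trained models; the feature name and thresholds are invented:

```python
def two_stage_effect(features, classifier, regressor):
    """Return the predicted crime reduction for a candidate camera site,
    or 0.0 if stage 1 predicts no statistically significant decrease."""
    if not classifier(features):
        return 0.0
    return regressor(features)

# stub stage 1: "significant" if the local daily crime count is high enough
classifier = lambda f: f["daily_crimes"] > 2.0
# stub stage 2: predicted reduction scales with crime density (illustrative)
regressor = lambda f: 0.186 * f["daily_crimes"]

print(two_stage_effect({"daily_crimes": 5.0}, classifier, regressor))  # gated estimate
print(two_stage_effect({"daily_crimes": 1.0}, classifier, regressor))  # 0.0, gate closed
```

Gating keeps the regressor from extrapolating to sites where no significant effect is expected, which is the point of splitting the prediction into two stages.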