Search results
(1 - 16 of 16)
- Title
- SUPPORT VECTOR MACHINE BASED CLASSIFICATION FOR TRAFFIC SIGNS AND ULTRASONIC FLAW DETECTION
- Creator
- Virupakshappa, Kushal
- Date
- 2015, 2015-12
- Description
- The use of machine learning techniques for advanced signal and image processing applications is gaining importance due to gains in accuracy and robustness. The Support Vector Machine (SVM) is a machine learning method used for classification and regression analysis of complex real-world problems that may be difficult to analyze theoretically. In this dissertation, the use of SVM for ultrasonic flaw detection and traffic sign classification is investigated and new methods are introduced. For traffic sign detection, the Bag of Visual Words technique is applied to Speeded Up Robust Features (SURF) descriptors of the traffic signs, and an SVM classifier then categorizes the signs into their respective groups. Experimental results demonstrate that the proposed method can reach an accuracy of 95.2%. For ultrasonic flaw detection, subband decomposition filters are used to generate the feature vectors for the SVM classifier. Experimental results, using A-scan measurements from a steel block, show that very high classification accuracy can be achieved. The classifier's robust performance is due to proper selection of frequency-diverse feature vectors and successful training. SVM regression is also used to locate and amplify the flaw while suppressing clutter noise. The results show that SVM is reliable and effective for both applications.
- M.S. in Electrical Engineering, December 2015
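The SVM classifier at the core of both applications in this abstract can be illustrated with a minimal sketch: a linear SVM trained by sub-gradient descent on the hinge loss, over invented 2-D toy data. This is a didactic stand-in, not the thesis implementation (which would typically use an optimized library such as LIBSVM).

```python
def train_linear_svm(data, labels, lam=0.01, epochs=200, lr=0.1):
    """Train a linear SVM by sub-gradient descent on the regularized hinge loss.
    data: list of feature tuples; labels: +1 / -1."""
    dim = len(data[0])
    w = [0.0] * dim
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(data, labels):
            margin = y * (sum(wi * xi for wi, xi in zip(w, x)) + b)
            if margin < 1:
                # Point is inside the margin: hinge loss is active.
                w = [wi - lr * (lam * wi - y * xi) for wi, xi in zip(w, x)]
                b += lr * y
            else:
                # Correctly classified with margin: only the regularizer acts.
                w = [wi - lr * lam * wi for wi in w]
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else -1

# Toy 2-D problem: two linearly separable clusters.
X = [(1.0, 1.0), (1.5, 2.0), (2.0, 1.5), (-1.0, -1.0), (-1.5, -2.0), (-2.0, -1.5)]
y = [1, 1, 1, -1, -1, -1]
w, b = train_linear_svm(X, y)
```

In the thesis, the feature vectors fed to such a classifier come from SURF/Bag-of-Visual-Words descriptors or subband decomposition filters rather than raw coordinates.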
- Title
- EMBEDDED SYSTEM DESIGN FOR TRAFFIC SIGN RECOGNITION USING MACHINE LEARNING ALGORITHMS
- Creator
- Han, Yan
- Date
- 2016, 2016-12
- Description
- Traffic sign recognition (TSR), an important component of intelligent vehicle systems, has been an active research area investigated vigorously over the last decade. It is an important step toward introducing intelligent vehicles into current road transportation systems. Based on image processing and machine learning technologies, TSR systems are being developed by many manufacturers and have been deployed on vehicles as part of driving assistance systems in recent years. Traffic signs are designed and placed in locations where they are easily identified from their surroundings by the human eye. Hence, an intelligent system that can identify these signs as well as a human does must address many challenges; here, performing "well" means being both accurate and fast. Developing a reliable, real-time, and robust TSR system is therefore the main motivation for this dissertation. Multiple TSR approaches based on computer vision and machine learning are introduced and implemented on different hardware platforms. The proposed TSR algorithms comprise two parts: sign detection based on color and shape analysis, and sign classification based on machine learning techniques including nearest neighbor search, support vector machines, and deep neural networks. Target hardware platforms include the Xilinx ZedBoard FPGA and the NVIDIA Jetson TX1, which provides GPU acceleration. Overall, on a well-known benchmark suite, 96% detection accuracy is achieved while executing at 1.6 frames per second on the GPU board.
- Ph.D. in Computer Engineering, December 2016
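Of the classification techniques this abstract names, nearest neighbor search is simple enough to sketch in a few lines: a k-nearest-neighbor majority vote over toy feature vectors standing in for sign descriptors. The feature values and class names below are invented for illustration.

```python
import math
from collections import Counter

def knn_classify(train, query, k=3):
    """Classify `query` by majority vote among the k nearest training points."""
    neighbors = sorted(train, key=lambda p: math.dist(p[0], query))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

# Toy (feature_vector, class_label) pairs standing in for sign descriptors.
train = [((0.9, 0.1), "stop"), ((0.8, 0.2), "stop"), ((1.0, 0.0), "stop"),
         ((0.1, 0.9), "yield"), ((0.2, 0.8), "yield"), ((0.0, 1.0), "yield")]

print(knn_classify(train, (0.85, 0.15)))  # -> stop
print(knn_classify(train, (0.05, 0.95)))  # -> yield
```

Real TSR pipelines apply this idea to high-dimensional image descriptors, where the exhaustive sort above is replaced by approximate nearest-neighbor structures for speed.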
- Title
- DEEP LEARNING FOR IMAGE PROCESSING WITH APPLICATIONS TO MEDICAL IMAGING
- Creator
- Zarshenas, Amin
- Date
- 2019
- Description
- Deep learning is a subfield of machine learning concerned with algorithms that learn hierarchical data representations. It has proven extremely successful in many computer vision tasks, including object detection and recognition. In this thesis, we develop and design deep-learning models to better perform image processing, tackling three important problems: natural image denoising, computed tomography (CT) dose reduction, and bone suppression in chest radiography ("chest x-ray": CXR). As the first contribution of this thesis, we address some of the most critical design questions for the task of natural image denoising. To this end, we defined a class of deep learning models, called neural network convolution (NNC), and investigated several design modules for applying NNC to image processing. Based on our analysis, we designed a deep residual NNC (R-NNC) for this task. One important challenge in image denoising concerns images with varying noise levels. Our analysis showed that training a single R-NNC on images at multiple noise levels results in a network that cannot handle very high noise levels and sometimes blurs the high-frequency information in less noisy areas. To address this problem, we designed and developed two new deep-learning structures, namely noise-specific NNC (NS-NNC) and a DeepFloat model, for image denoising at varying noise levels. Our models achieved the highest denoising performance compared to state-of-the-art techniques. As the second contribution of the thesis, we tackled CT dose reduction by means of our NNC. Studies have shown that high doses of CT radiation can dramatically increase the risk of radiation-induced cancer in patients; it is therefore very important to reduce the radiation dose as much as possible. For this problem, we introduced a mixture of anatomy-specific (AS) NNC experts. The basic idea is to train multiple NNC models for anatomic segments with different characteristics and merge the predictions based on the segmentations. Our phantom and clinical analysis showed that more than 90% dose reduction can be achieved using our AS NNC model. We then exploited our findings from image denoising and CT dose reduction to tackle the challenging task of bone suppression in CXRs. Most lung nodules that are missed by radiologists, as well as by computer-aided detection systems, overlap with bones in CXRs. Our purpose was to develop an imaging system to virtually separate ribs and clavicles from lung nodules and soft tissue in CXRs. To achieve this, we developed a mixture of anatomy-specific, orientation-frequency-specific (ASOFS) expert deep NNC models. While this model was able to decompose the CXRs, to achieve even higher bone suppression performance we employed our deep R-NNC for the bone suppression application. Our model was able to create bone and soft-tissue images from single CXRs, without requiring specialized equipment or increasing the radiation dose.
- Title
- APPLICATION OF MACHINE LEARNING TO ELECTRICAL DATA ANALYSIS
- Creator
- Bao, Zhen
- Date
- 2017, 2017-05
- Description
- This dissertation is composed of four parts: modeling the demand response capability of internet data centers processing batch computing jobs, cloud-storage-based power consumption management in internet data centers, identifying the hot socket problem in smart meters, and online event detection for non-intrusive load monitoring without known labels. Mathematical models are constructed for each of the four targets, and numerical examples are used to test the effectiveness of the models. The first two parts optimize jobs in data centers to find the best way of utilizing existing computing and storage resources; mixed-integer programming (MIP) is used in the formulation. The purpose of the third part is to identify the hot socket problem in smart meters: machine learning methods are used to locate badly installed smart meters by analyzing their historical data. The fourth part addresses non-intrusive load monitoring for residential loads; signal processing and deep learning methods are used to identify specific loads from high-frequency signals.
- Ph.D. in Electrical Engineering, May 2017
- Title
- Fast mesh based reconstruction for cardiac-gated SPECT and methodology for medical image quality assessment
- Creator
- Massanes Basi, Francesc
- Date
- 2018
- Description
- In this work, we study two different subjects that are intricately connected. For the first, we consider tools to improve the quality of single photon emission computed tomography (SPECT) imaging. Currently, SPECT images assist physicians in evaluating perfusion levels within the myocardium, aid in the diagnosis of various types of carcinomas, and measure pulmonary function. The SPECT technique relies on injecting a radioactive material into the patient's body and then detecting the emitted radiation by means of a gamma camera. However, the amount of radioactive material that can be given to a patient is limited by the negative effects the radiation has on the patient's health, which causes SPECT images to be highly corrupted by noise. We focus our work on cardiac SPECT, which adds the challenge of the heart's continuous motion during the acquisition process. First, we describe the methodology used in SPECT imaging and reconstruction. Our methodology uses a content-adaptive model, which places more samples in the regions of the body that we want reconstructed more accurately and fewer elsewhere. We then describe our algorithm and the novel implementation that lets us use the content-adaptive model to perform the reconstruction. We show that our implementation outperforms the reconstruction method used in clinical applications. For the second subject, we evaluate tools to measure image quality in the context of medical diagnosis. In signal processing, accuracy is typically measured as the amount of similarity between an original signal and its reconstruction. This similarity is traditionally a numeric metric that does not take into account the intended purpose of the reconstructed images. In medical imaging, a reconstructed image is meant to aid a physician in performing a diagnostic task; the quality of the reconstruction should therefore be measured by how much it helps in performing that task. A model observer is a computer tool that aims to mimic the performance of a human observer, usually a radiologist, at a relevant diagnostic task. In this work we present a linear model observer designed to automatically select the features needed to model a human observer's response. This is a departure from the model observers currently used in the medical imaging field, which usually rely on ad hoc chosen features. Our model observer depends only on the resolution of the image, not on the type of imaging technique used to acquire it.
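The "numeric metric" kind of similarity this abstract contrasts with task-based quality assessment can be illustrated by its two most common instances, mean squared error and peak signal-to-noise ratio (a generic sketch with invented pixel values, not code from the thesis):

```python
import math

def mse(original, reconstruction):
    """Pixel-wise mean squared error between two equal-length images."""
    return sum((a - b) ** 2 for a, b in zip(original, reconstruction)) / len(original)

def psnr(original, reconstruction, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means a closer reconstruction."""
    err = mse(original, reconstruction)
    return float("inf") if err == 0 else 10 * math.log10(peak ** 2 / err)

# Tiny invented "image" and its reconstruction, flattened to 1-D.
img   = [10, 50, 200, 90]
recon = [12, 48, 198, 95]
# mse = (4 + 4 + 4 + 25) / 4 = 9.25
```

A model observer replaces this purely numeric comparison with a score tied to a diagnostic task, e.g. how detectable a lesion remains in the reconstruction.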
- Title
- A Complete Machine Learning Approach for Predicting Lithium-Ion Cell Combustion
- Creator
- Almagro Yravedra, Fernando
- Date
- 2020
- Description
- The objective of this thesis is to develop a functional predictive model able to predict the combustion of a Sony US18650 lithium-ion cell given its current and previous states. To build the model, a realistic electro-thermal model of the cell under study is developed in Matlab Simulink and used to recreate the cell's behavior under a set of real operating conditions. The data generated by the electro-thermal model is used to train a recurrent neural network, which returns the chance of future combustion of the cell. Independently obtained data is used to test and validate the developed recurrent neural network using advanced metrics.
- Title
- Reconfigurable High-Performance Computation and Communication Platform for Ultrasonic Applications
- Creator
- Wang, Boyang
- Date
- 2021
- Description
-
In industrial and medical applications, ultrasonic signals are used in nondestructive testing (NDT), medical imaging, navigation, and...
Show moreIn industrial and medical applications, ultrasonic signals are used in nondestructive testing (NDT), medical imaging, navigation, and communication. This study presents the architecture of high-performance computational systems designed for ultrasonic nondestructive testing, data compression using machine learning, and a multilayer perceptron neural network for ultrasonic flaw detection and grain size characterization. We researched and developed a real-time software-defined ultrasonic communication system for transmitting information through highly reverberant and dispersive solid channels. Orthogonal frequency-division multiplexing is explored to combat the severe multipath effect in the solid channels and achieve an optimal bitrate solution. In this study, a reconfigurable, high-performance, low-cost, and real-time ultrasonic data acquisition and signal processing platform is designed based on an all-programmable system-on-chip (APSoC). We designed the unsupervised learning models using wavelet packet transformation optimized by convolutional autoencoder for massive ultrasonic data compression. The proposed learning models can achieve a compression accuracy of 98% by using only 6% of the original data. For ultrasonic signal analysis in NDT applications, we utilized the multilayer perceptron neural network (MLPNN) to detect flaw echoes masked by strong microstructure scattering noise (i.e., about zero dB SNR or less) with detection accuracy above 99%. It is of high interest to characterize materials using ultrasonic scattering properties for grain size estimation and classification. We successfully designed an MLPNN to classify the grain sizes of materials with an accuracy of 99%. Furthermore, a software-defined ultrasonic communication system based on the APSoC is designed for real-time data transmission through solid channels. 
Transducers with a center frequency of 2.5 MHz are used to transmit and receive information-bearing ultrasonic waves in solid channels where the communication bit rate can reach up to 1.5 Mbps.
Show less
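The idea behind the transform-based compression described above (keep only the most significant transform coefficients, discard the rest) can be sketched with a one-level Haar wavelet transform. This is a toy stand-in for the wavelet packet + convolutional autoencoder pipeline in the thesis; the signal values are invented.

```python
def haar_forward(signal):
    """One level of the Haar wavelet transform: pairwise averages, then details."""
    avg = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    det = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return avg + det

def haar_inverse(coeffs):
    """Invert haar_forward: each (average, detail) pair restores two samples."""
    half = len(coeffs) // 2
    out = []
    for a, d in zip(coeffs[:half], coeffs[half:]):
        out += [a + d, a - d]
    return out

def compress(signal, keep):
    """Zero out all but the `keep` largest-magnitude coefficients, then invert."""
    coeffs = haar_forward(signal)
    threshold = sorted(map(abs, coeffs), reverse=True)[keep - 1]
    kept = [c if abs(c) >= threshold else 0.0 for c in coeffs]
    return haar_inverse(kept)

# Piecewise-constant toy signal: 4 of 8 coefficients reconstruct it exactly.
sig = [4.0, 4.0, 8.0, 8.0, 1.0, 1.0, 2.0, 2.0]
approx = compress(sig, keep=4)
```

Smooth or piecewise-constant signals concentrate their energy in few coefficients, which is why a small fraction of the transformed data can reconstruct the original with high accuracy.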
- Title
- AUTOMATION OF ULTRASONIC FLAW DETECTION APPLICATIONS USING DEEP LEARNING ALGORITHMS
- Creator
- Virupakshappa, Kushal
- Date
- 2021
- Description
- The Fourth Industrial Revolution (Industry 4.0) promises to integrate multiple technologies including, but not limited to, automation, cloud computing, robotics, and artificial intelligence. The Non-Destructive Testing (NDT) industry has been shifting towards automation as well. For ultrasound-based NDT, these technological advancements facilitate smart systems hosting complex signal processing algorithms. This thesis therefore introduces the effective use of AI algorithms in challenging NDT scenarios. The first objective is to investigate and evaluate the performance of both supervised and unsupervised machine learning algorithms and optimize them for ultrasonic flaw detection using Amplitude-scan (A-scan) data. Several inference and optimization algorithms are evaluated, and it is observed that a proper choice of features for specific inference algorithms leads to accurate flaw detection. The second objective is the hardware realization of the ultrasonic flaw detection algorithms on embedded systems: the Support Vector Machine algorithm is implemented on a Tegra K1 GPU platform, and supervised machine learning algorithms are implemented on a Zynq FPGA for a comparative study. The third main objective is to introduce new deep learning architectures for more complex flaw detection applications, including classification of flaw types and robust detection of multiple flaws in B-scan data. The proposed deep learning pipeline combines a novel grid-based localization architecture with meta-learning, providing a generalized flaw detection solution wherein additional flaw types can be used for inference without retraining or changing the architecture. Results show that the proposed algorithm performs well in complex scenarios with high clutter noise, is comparable with traditional CNNs, and achieves the goal of generality and robustness.
- Title
- Deep Learning Methods For Wireless Networks Optimization
- Creator
- Zhang, Shuai
- Date
- 2022
- Description
-
The resurgence of deep learning techniques has brought forth fundamental changes to how hard problems could be solved. It used to be held that...
Show moreThe resurgence of deep learning techniques has brought forth fundamental changes to how hard problems could be solved. It used to be held that the solutions to complex wireless network problems require accurate mathematical modeling of the network operation, but now the success of deep learning has shown that a data-driven method could generate powerful and useful representations such that the problem could be solved efficiently with surprisingly competent performance. Network researchers have recognized this and started to capitalize on the learning methods’ prowess. But most works follow the existing black-box learning paradigms without much accommodation to the nature and essence of the underlying network problems. This thesis focuses on a particular type of classical problem: multiple commodity flow scheduling in an interference-limited environment. Though it does not permit efficient exact algorithms due to its NP-hard complexity, we use it as an entry point to demonstrate from three angles how the learning-based methods can help improve the network performance. In the first part, we leverage the graphical neural network (GNN) techniques and propose a two-stage topology-aware machine learning framework, which trains a graph embedding unit and a link usage prediction module jointly to discover links that are likely to be used in optimal scheduling. The second part of the thesis is an attempt to find a learning method that has a closer algorithmic affinity to the traditional DCG method. We make use of reinforcement learning to incrementally generate a better partial solution such that a high quality solution may be found in a more efficient manner. As the third part of the research, we revisit the MCF problem from a novel viewpoint: instead of leaning on the neural networks to directly generate the good solutions, we use them to associate the current problem instance with historical ones that are similar in structure. 
These matched instances’ solutions offer a highly useful starting point to allow efficient discovery of the new instance’s solution.
Show less
- Title
- Defense-in-Depth for Cyber-Secure Network Architectures of Industrial Control Systems
- Creator
- Arnold, David James
- Date
- 2024
- Description
-
Digitization and modernization efforts have yielded greater efficiency, safety, and cost-savings for Industrial Control Systems (ICS). To...
Show moreDigitization and modernization efforts have yielded greater efficiency, safety, and cost-savings for Industrial Control Systems (ICS). To achieve these gains, the Internet of Things (IoT) has become an integral component of network infrastructures. However, integrating embedded devices expands the network footprint and softens cyberattack resilience. Additionally, legacy devices and improper security configurations are weak points for ICS networks. As a result, ICSs are a valuable target for hackers searching for monetary gains or planning to cause destruction and chaos. Furthermore, recent attacks demonstrate a heightened understanding of ICS network configurations within hacking communities. A Defense-in-Depth strategy is the solution to these threats, applying multiple security layers to detect, interrupt, and prevent cyber threats before they cause damage. Our solution detects threats by deploying an Enhanced Data Historian for Detecting Cyberattacks. By introducing Machine Learning (ML), we enhance cyberattack detection by fusing network traffic and sensor data. Two computing models are examined: 1) a distributed computing model and 2) a localized computing model. The distributed computing model is powered by Apache Spark, introducing redundancy for detecting cyberattacks. In contrast, the localized computing model relies on a network traffic visualization methodology for efficiently detecting cyberattacks with a Convolutional Neural Network. These applications are effective in detecting cyberattacks with nearly 100% accuracy. Next, we prevent eavesdropping by applying Homomorphic Encryption for Secure Computing. HE cryptosystems are a unique family of public key algorithms that permit operations on encrypted data without revealing the underlying information. Through the Microsoft SEAL implementation of the CKKS algorithm, we explored the challenges of introducing Homomorphic Encryption to real-world applications. 
Despite these challenges, we implemented two ML models: 1) a Neural Network and 2) Principal Component Analysis. Finally, we hinder attackers by integrating a Cyberattack Lockdown Network with Secure Ultrasonic Communication. When a cyberattack is detected, communication for safety-critical elements is redirected through an ultrasonic communication channel, establishing physical network segmentation with compromised devices. We present proof-of-concept work in transmitting video via ultrasonic communication over an Aluminum Rectangular Bar. Within industrial environments, existing piping infrastructure presents an optimal solution for cost-effectively preventing eavesdropping. The effectiveness of these solutions is discussed within the scope of the nuclear industry.
Show less
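The homomorphic property this abstract describes, computing on ciphertexts without decrypting them, can be demonstrated with a toy Paillier cryptosystem, which is additively homomorphic. This is a didactic stand-in, not the CKKS scheme used via Microsoft SEAL, and the tiny primes make it completely insecure.

```python
import math
import random

# Toy Paillier keypair (tiny primes: illustration only, NOT secure).
p, q = 293, 433
n = p * q                      # public modulus
n2 = n * n
g = n + 1                      # standard generator choice
lam = math.lcm(p - 1, q - 1)   # private key component
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)  # private key component

def encrypt(m):
    """Randomized encryption of integer m (0 <= m < n)."""
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return pow(g, m, n2) * pow(r, n, n2) % n2

def decrypt(c):
    """Paillier decryption: L(c^lam mod n^2) * mu mod n, with L(x) = (x-1)/n."""
    return (pow(c, lam, n2) - 1) // n * mu % n

# Homomorphic addition: multiplying ciphertexts adds the plaintexts.
a, b = 1234, 5678
c_sum = encrypt(a) * encrypt(b) % n2
total = decrypt(c_sum)   # equals a + b, computed without decrypting a or b
```

CKKS differs in supporting approximate arithmetic on encrypted real-valued vectors, which is what makes encrypted ML inference practical, but the core idea of operating on ciphertexts is the same.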
- Title
- Large Language Model Based Machine Learning Techniques for Fake News Detection
- Creator
- Chen, Pin-Chien
- Date
- 2024
- Description
-
With advanced technology, it’s widely recognized that everyone owns one or more personal devices. Consequently, people are evolving into...
Show moreWith advanced technology, it’s widely recognized that everyone owns one or more personal devices. Consequently, people are evolving into content creators on social media or the streaming platforms sharing their personal ideas regardless of their education or expertise level. Distinguishing fake news is becoming increasingly crucial. However, the recent research only presents comparisons of detecting fake news between one or more models across different datasets. In this work, we applied Natural Language Processing (NLP) techniques with Naïve Bayes and DistilBERT machine learning method combing and augmenting four datasets. The results show that the balanced accuracy is higher than the average in the recent studies. This suggests that our approach holds for improving fake news detection in the era of widespread content creation.
Show less
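Of the two models this abstract mentions, Naïve Bayes is simple enough to sketch from scratch: a multinomial Naïve Bayes classifier with Laplace smoothing over a tiny invented corpus. This is illustrative only; the datasets, tokenization, and preprocessing here are not those used in the work.

```python
import math
from collections import Counter, defaultdict

def train_nb(docs):
    """Multinomial Naive Bayes training. docs: list of (token_list, label)."""
    word_counts = defaultdict(Counter)   # label -> token frequency counts
    doc_counts = Counter()               # label -> number of documents
    vocab = set()
    for tokens, label in docs:
        doc_counts[label] += 1
        word_counts[label].update(tokens)
        vocab.update(tokens)
    return word_counts, doc_counts, vocab

def classify(model, tokens):
    """Pick the label maximizing log P(label) + sum of smoothed log P(token|label)."""
    word_counts, doc_counts, vocab = model
    total_docs = sum(doc_counts.values())
    best, best_score = None, float("-inf")
    for label in doc_counts:
        score = math.log(doc_counts[label] / total_docs)
        denom = sum(word_counts[label].values()) + len(vocab)  # Laplace smoothing
        for t in tokens:
            score += math.log((word_counts[label][t] + 1) / denom)
        if score > best_score:
            best, best_score = label, score
    return best

# Tiny invented corpus: headlines labeled fake/real.
docs = [("shocking miracle cure doctors hate".split(), "fake"),
        ("celebrity secret shocking truth exposed".split(), "fake"),
        ("senate passes budget bill today".split(), "real"),
        ("study published in peer reviewed journal".split(), "real")]
model = train_nb(docs)
```

DistilBERT, by contrast, is a pretrained transformer fine-tuned on the task; the two sit at opposite ends of the complexity spectrum, which is what makes their comparison informative.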
- Title
- Adaptive Learning Approach of a Domain-Aware CNN-Based Model Observer
- Creator
- Bogdanovic, Nebojsa
- Date
- 2023
- Description
-
Application of convolutional neural networks (CNNs) for performing defect detection tasks and their use as model observers (MO) has become...
Show moreApplication of convolutional neural networks (CNNs) for performing defect detection tasks and their use as model observers (MO) has become increasingly popular in the medical imaging field. Building upon this use of CNN MOs, we have trained the CNNs to discern between the data it was trained on, and the previously unseen images. We termed this ability domain awareness. To achieve domain awareness, we are simultaneously training a new variation of U-Net CNN to perform defect detection task, as well as to reconstruct a noisy input image. We have shown that the values of the reconstruction mean squared error can be used as a good indicator of how well the algorithm performs in the defect localization task, making a big step towards developing a domain aware CNN MO. Additionally, we have proposed an adaptive learning approach for training these algorithms, and compared them to the non-adaptive learning approach. The main results that we achieved were for the ideal observers, but we also extended these results to human observer data. We have compared different architectures of CNNs with different numbers and sizes of layers, as well as introduced data augmentation to further improve upon our results. Finally, our results show that the proposed adaptive learning approach with introduced data augmentation drastically improves upon the results of a non-adaptive approach in both human and ideal observer cases.
Show less
- Title
- Machine Learning (ML) for Extreme Weather Power Outage Forecasting in Power Distribution Networks
- Creator
- Bahrami, Anahita
- Date
- 2023
- Description
-
The Midwest region experiences a diverse range of severe weather conditions throughout the year. During the warmer months, thunderstorms,...
Show moreThe Midwest region experiences a diverse range of severe weather conditions throughout the year. During the warmer months, thunderstorms, heavy rain, lightning, tornadoes, and high winds pose a threat, while the colder season brings ice storms, snowstorms, high winds, and sleet storms, all of which can cause significant damage to the environment, properties, transportation systems, and power grids. The average climate in the Midwest is influenced by factors such as latitude, solar input, water systems' typical positions and movements, topography, the Great Lakes, and human activities. The combination of these conditions during different seasons contributes to the development of various types of storms. Therefore, it is crucial to predict the impacts of such atmospheric events on distribution and transmission lines, enabling utilities to assess and implement preventive measures and strategies to minimize the economic losses associated with these disasters. Additionally, the accurate classification of storm modes through an automated system allows operators to study trends in relation to climate change and implement necessary strategies to ensure grid reliability and resilience.In recent years, a significant number of power outages have occurred due to extreme ice formation on transmission and distribution networks, posing a threat to the power grid's resilience and reliability. To prepare power providers for snowstorms, extensive research has been conducted on snow accretion on power lines. Over the past two decades, many scientists have turned to machine learning (ML) algorithms for predicting ice accretion on overhead conductors, as ML models demonstrate superior accuracy compared to statistical forecasting models when it comes to forecasting challenging and fine-grained problems. However, most existing models primarily focus on predicting ice formation on power lines and fail to forecast the resulting damage to the distribution network. 
Therefore, this project proposes a model for predicting power outages caused by snow and ice storms in the distribution network. The goal is to aid in the planning process for disaster response and ensure the resilience and reliability of the power grid. The proposed outage prediction model incorporates statistical and machine learning techniques, taking into account features related to weather conditions, storm events, and information about the power network feeders.
Show less