Search results
(1 - 7 of 7)
- Title
- Intraoperative tumor margin detection using nanoparticles: protocol optimization through kinetic modeling
- Creator
- Xu, Xiaochun
- Date
- 2018
- Description
Clear margins (no tumor on the surface of the resected tissue) are essential to minimize tumor recurrence and prolong survival after wide local excision cancer surgeries. However, standard methods of margin assessment cannot be carried out within the time frame of surgery, meaning patients with positive margins must be recommended for call-back surgeries. Intraoperative molecular imaging of cell surface receptors can offer a solution; however, substantial nonspecific diffusion and retention of imaging agents in resected tissues remain a significant challenge to identifying cancer reliably. Recently, "paired-agent" methods, which employ co-administration of a control imaging agent alongside a targeting agent, have been applied to thick-sample staining and rinsing applications to account for background staining. This dissertation aimed to optimize tumor-to-healthy-tissue discrimination in paired-agent molecular imaging through mathematical modeling.
Two simplified mathematical models, the rinsing paired-agent model (RPAM) and the serial staining model (SSM), were derived and tested in accurate simulation models (also developed as a component of this dissertation) and in preclinical cancer models. More specifically, RPAM was demonstrated to provide more accurate estimates of receptor concentration than the more standard "ratiometric" methods (essentially dividing the targeted-agent signal by the control-agent signal), and the model was insensitive to variability in rinsing time from one image to the next. It was noted in experiments, however, that regardless of the approach taken, a very large fraction of the signal was removed upon the first rinse, leaving large gaps in the data available to RPAM. The SSM, on the other hand, could be applied to serial staining data, which exhibited a more gradual change in signal between imaging sessions.
Considering the multidimensional complexity of paired-agent topical tissue molecular imaging (with diffusion, imaging-agent chemical/binding properties, tissue staining, rinsing, imaging, and data-analysis protocols all subject to alteration), thorough optimization of margin-analysis imaging protocols is intractable using experiments alone. Therefore, a salient feature of this dissertation was the development and validation of a "forward" mathematical diffusion-and-binding model for in silico testing of proposed paired-agent staining and rinsing protocols in thick tissue.
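The "ratiometric" baseline this abstract contrasts against reduces to a pixel-wise computation. A minimal Python sketch, assuming late-time-point targeted and control images and the standard paired-agent premise that the control agent tracks nonspecific uptake; the dissertation's RPAM and SSM models are more involved than this:

```python
import numpy as np

def ratiometric_binding_potential(targeted, control, eps=1e-9):
    """Pixel-wise ratiometric estimate: targeted/control - 1.

    Under the usual paired-agent premise that the control agent tracks
    all nonspecific uptake and retention of the targeted agent, this
    ratio approximates binding potential, which scales with receptor
    concentration."""
    targeted = np.asarray(targeted, dtype=float)
    control = np.asarray(control, dtype=float)
    return targeted / (control + eps) - 1.0

# Two tiny synthetic "images" after staining and rinsing.
targeted_img = np.array([[2.0, 1.1], [3.5, 1.0]])
control_img = np.ones((2, 2))
print(ratiometric_binding_potential(targeted_img, control_img))
```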
- Title
- IMPACT OF DATA SHAPE, FIDELITY, AND INTER-OBSERVER REPRODUCIBILITY ON CARDIAC MAGNETIC RESONANCE IMAGE PIPELINES
- Creator
- Obioma, Blessing Ngozi
- Date
- 2020
- Description
Artificial Intelligence (AI) holds great promise in healthcare. It provides a variety of advantages in clinical diagnosis, disease prediction, and treatment, with such interest intensifying in the medical imaging field. AI can automate various cumbersome data-processing tasks in medical imaging, such as segmentation of left ventricular chambers and image-based classification of diseases. However, full clinical implementation and adoption of emerging AI-based tools face challenges due to the inherently opaque nature of AI algorithms based on Deep Neural Networks (DNNs), in which computer-trained bias is not only difficult for physician users to detect but also difficult to design against safely during software development. In this work, we examine an AI application in Cardiac Magnetic Resonance (CMR) using an automated image classification task, and propose an AI quality-control framework that differentially evaluates the black-box DNN via carefully prepared input data with shape and fidelity variations, probing system responses to these variations. Two variants of the Visual Geometry Group network with 19 layers (VGG19) were used for classification, with a total of 60,000 CMR images. Findings from this work provide insights into the importance of quality training-data preparation and demonstrate the importance of data shape variability. The work also provides a gateway for computational performance optimization in training and validation time.
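An input-perturbation probe of the kind the abstract describes can be sketched with an off-the-shelf Keras VGG19. This is an illustrative stand-in, not the thesis's framework: the two trained variants, the 60,000-image CMR dataset, and the specific shape/fidelity perturbations are not reproduced here.

```python
import numpy as np
import tensorflow as tf

def probe_shape_sensitivity(model, image, scales=(0.8, 1.0, 1.2)):
    """Classify rescaled copies of one image and report how the predicted
    class shifts: a crude shape-perturbation probe of a black-box
    classifier."""
    h, w = image.shape[:2]
    preds = {}
    for s in scales:
        x = tf.image.resize(image[None, ...], (int(h * s), int(w * s)))
        x = tf.image.resize(x, (224, 224))  # back to the VGG19 input size
        preds[s] = int(np.argmax(model.predict(x, verbose=0), axis=-1)[0])
    return preds

# Untrained VGG19 stand-in with a binary head (illustration only).
model = tf.keras.applications.VGG19(weights=None, classes=2)
image = np.random.rand(224, 224, 3).astype("float32")  # placeholder frame
print(probe_shape_sensitivity(model, image))
```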
- Title
- Quantification of Vascular Permeability in the Retina Using Fluorescein Videoangiography Data as a Biomarker for Early Diabetic Retinopathy
- Creator
- Kayaalp Nalbant, Elif
- Date
- 2023
- Description
Diabetic retinopathy (DR), the most common cause of blindness in the working-age population, affects over one-third of those who have had diabetes for over ten years. High blood sugar (hyperglycemia) damages blood vessels and tight junctions at the blood-retinal barrier (BRB). Chronic inflammation leads to changes in vascular health, and over time blood vessels tend to become damaged and exhibit higher "leakage," or permeability. In the late stage of DR, hemorrhages can occur, leading to irreversible damage of neuronal tissue in the retina and vision loss. In the clinic, some biomarkers and imaging modalities are used to diagnose DR based on its more severe products (e.g., hemorrhage), but there is no non-invasive, highly sensitive method to detect diabetic retinopathy before clinical signs occur, when mitigating therapies could be more effective. In this thesis, indicator dilution theory was explored to model the temporal dynamics of fluorescein in the retina after intravenous injection, with the aim of quantitatively mapping subtle changes in retinal blood flow and vascular permeability that could preempt subsequent irreversible damage. Specifically, a simplified version of indicator dilution theory, namely the "adiabatic approximation in tissue homogeneity" (AATH) model, was used to estimate physiological parameters such as blood flow (F) and extraction fraction (E, a parameter coupled with vascular permeability) from retinal fluorescein videoangiography data. The AATH fitting protocol was optimized through simulations using a more complex model (the AATH vascular-heterogeneity model, AATH-VH). A two-step least-squares fitting method proved more sensitive than a single-step least-squares fit of AATH to simulated data for evaluating vascular permeability in early diabetic retinopathy. The optimized data-analysis protocol was then evaluated in an initial clinical study comparing healthy control subjects to those with moderate non-proliferative DR (NPDR). Volumetric blood flow and retinal vascular permeability maps were compared between patient groups, with clear increases in extraction fraction observed in the NPDR patients compared to controls. These promising early data have formed the foundation of an ongoing 5-year study tracking 100 diabetic patients with no DR to see if early changes in vascular permeability can predict which patients are more likely to progress to DR.
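A rough sketch of the AATH model and a two-step fit of the kind described, assuming uniform sampling, with illustrative bounds, initial guesses, and nominal fixed values in step 1; none of these choices come from the thesis's optimized protocol:

```python
import numpy as np
from scipy.optimize import least_squares

def aath_residue(t, F, E, Tc, kep):
    """AATH impulse residue: unit plateau during the vascular transit
    time Tc, then washout of the extracted fraction E at rate kep."""
    return F * np.where(t < Tc, 1.0, E * np.exp(-kep * (t - Tc)))

def aath_tissue_curve(t, aif, F, E, Tc, kep):
    """Tissue concentration as the flow-scaled convolution of the
    arterial input function (AIF) with the AATH residue function."""
    dt = t[1] - t[0]  # assumes uniform sampling
    return np.convolve(aif, aath_residue(t, F, E, Tc, kep))[: len(t)] * dt

def fit_two_step(t, aif, tissue):
    # Step 1: fit vascular parameters (F, Tc) on the early curve,
    # with E and kep held at nominal values.
    early = t < t[len(t) // 3]
    res1 = least_squares(
        lambda p: aath_tissue_curve(t, aif, p[0], 0.1, p[1], 0.5)[early]
        - tissue[early],
        x0=[1.0, 5.0], bounds=([0.0, 0.0], [np.inf, 30.0]))
    F, Tc = res1.x
    # Step 2: fit permeability parameters (E, kep) with F, Tc fixed.
    res2 = least_squares(
        lambda p: aath_tissue_curve(t, aif, F, p[0], Tc, p[1]) - tissue,
        x0=[0.2, 0.5], bounds=([0.0, 0.0], [1.0, np.inf]))
    E, kep = res2.x
    return F, E, Tc, kep

# Synthetic round trip with a toy gamma-variate AIF sampled at 1 s.
t = np.arange(0.0, 120.0, 1.0)
aif = (t / 10.0) * np.exp(-t / 10.0)
tissue = aath_tissue_curve(t, aif, F=0.8, E=0.3, Tc=6.0, kep=0.4)
print(fit_two_step(t, aif, tissue))
```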
- Title
- DEVELOPMENT AND EVALUATION OF MRI TEMPLATES OF THE MIITRA ATLAS
- Creator
- RIDWAN, ABDUR RAQUIB
- Date
- 2021
- Description
Digital human brain atlases play a pivotal role in a wide range of neuroimaging studies and are commonly used as references for spatial normalization in voxel-wise analyses, region-of-interest analyses, automated tissue segmentation, functional connectivity analyses, and more. A brain atlas typically consists of MRI-based multi-modal templates and semantic labels delineating brain regions according to the characteristics of the underlying tissue. In recent times there has been a plethora of magnetic resonance imaging (MRI) studies of older adults without dementia, exploring the role of brain characteristics associated with cognitive function in old age with the ultimate goal of developing strategies to prevent cognitive decline. Increasing the accuracy, in terms of sensitivity and specificity, of such neuroimaging studies requires an atlas with a comprehensive set of high-quality templates representative of the brain characteristics typical of older adults and detailed labels accurately mapping brain regions of interest. However, such an atlas has not been constructed for older adults without dementia. Hence this thesis aims to build the high-quality MRI templates that are the cornerstone resources needed for the development of a comprehensive, high-quality, multi-channel, longitudinal, probabilistic digital human brain atlas for older adults, termed the Multi-channel Illinois Institute of Technology and Rush University Aging (MIITRA) atlas. This dissertation focuses on: a) developing and evaluating a high-performing 1 mm isotropic structural T1-weighted brain template; b) investigating the development and evaluation of a spatio-temporally consistent longitudinal structural T1-weighted template of the older adult brain; c) developing and evaluating an unbiased 0.5 mm isotropic super-resolved, high-resolution, detail-preserving structural T1-weighted template of the older adult brain; d) developing unbiased 0.5 mm super-resolved, high-resolution, detail-preserving structural PD-weighted and T2-weighted templates of the older adult brain; e) investigating and providing future directions for the development of a 0.5 mm super-resolved high-resolution DTI template of the older adult brain; and f) constructing a novel approach to the development of MRI templates using both space and frequency information from spatially normalized older adult data.
The thesis, built on the aforementioned foundational points, proceeds as follows. First, it presents the development of a 1 mm isotropic T1-weighted structural template of the older adult brain using the state-of-the-art registration algorithm ANTs, with parameters carefully optimized for older adults, in an iterative groupwise spatial normalization framework. The preprocessing steps were also thoroughly investigated to ensure high-quality data. Systematic comparison of this new template to several other standardized and study-specific T1-weighted templates demonstrated that it a) exhibited high image sharpness, b) allowed higher spatial normalization accuracy and detection of smaller inter-group morphometric differences than other standardized templates, c) performed similarly to study-specific templates, and d) was highly representative of the older adult brain.
Second, with the technical know-how acquired from these findings, a new method was introduced for the construction of a spatio-temporally consistent longitudinal template based on high-quality cross-sectional older adult data from a large cohort. The new template was compared to templates generated with previously published methods in terms of spatio-temporal consistency and image quality and was shown to have superior performance. In addition, a novel approach was introduced for image-quality enhancement of the longitudinal templates using both space and frequency information. Third, the thesis presents a method that involves a) thoroughly refined registration parameters and b) a patch-based, tissue-guided sparse-representation approach in a super-resolved, unbiased, minimum-deformation space, to construct and evaluate an unbiased 0.5 mm isotropic super-resolved, high-resolution, detail-preserving structural T1-weighted template of the older adult brain. This method accounts for misregistration, especially in the cortical regions, ensuring sharp delineation of structures representative of the older adult brain. The new template developed using this approach maintained high anatomical consistency, with sharp and detailed cortical features, exhibited higher image sharpness than other high-resolution standardized templates, and allowed high spatial normalization accuracy when used as a reference for normalization of older adult data. Additionally, this template-building approach was investigated on DTI tensors of older adult participants, and the constructed DTI template was shown to outperform templates developed using the best approach currently in the literature. Finally, the thesis presents the development of unbiased 0.5 mm super-resolved, high-resolution, detail-preserving structural PD-weighted and T2-weighted templates of the older adult brain, from nonlocal super-resolution-based upsampled PD-weighted and T2-weighted older adult participant data, using this new template-building approach.
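Iterative groupwise template construction of the kind described has a simple skeleton: register every subject to the current template, average the warped images, and repeat. A minimal sketch using ANTsPy (the Python wrapper around the ANTs tools); the dissertation's pipeline adds optimized registration parameters, preprocessing, and sharpness-preserving refinements not shown here, and the file names in the usage comment are placeholders.

```python
import numpy as np
import ants  # ANTsPy, the Python wrapper around the ANTs tools

def build_groupwise_template(images, iterations=4, transform="SyN"):
    """Iterative groupwise template: register every subject image to the
    current template, average the warped results, and repeat."""
    template = images[0].clone()
    for _ in range(iterations):
        warped = []
        for img in images:
            reg = ants.registration(fixed=template, moving=img,
                                    type_of_transform=transform)
            warped.append(reg["warpedmovout"].numpy())
        # Voxel-wise mean of the warped images becomes the next template.
        template = template.new_image_like(np.mean(warped, axis=0))
    return template

# Usage (file names are placeholders):
# images = [ants.image_read(p) for p in ("sub01_T1w.nii.gz", "sub02_T1w.nii.gz")]
# template = build_groupwise_template(images)
```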
- Title
- RADIAL MAP ASSESSMENT APPROACH FOR DEEP LEARNING DENOISED CARDIAC MAGNETIC RESONANCE RECONSTRUCTION SHARPNESS
- Creator
- Mo, Fei
- Date
- 2021
- Description
Deep Learning (DL) and Artificial Intelligence (AI) play important roles in computer-aided medical diagnostics and precision medicine, capable of complementing human operators in disease diagnosis and treatment while optimizing and streamlining medical image display. While incredibly powerful, images produced via DL or AI should be analyzed critically, with attention to how the algorithms produce the new image and what the new image represents. One such opportunity arose in the form of a unique collaborative project: the technical development of an image assessment tool to compare outputs between DL-based and non-DL-based Magnetic Resonance Imaging reconstruction methods.
More specifically, we examine the operator-input dependence of the existing reference method in terms of accuracy and precision, and subsequently propose a new metric that preserves the heuristics of the intended quantification, overcomes operator dependence, and provides a relative, comparative scoring approach that may normalize for the angular dependence of examined images. In chapter 2 of this thesis, we provide background on the two imaging-science principles that informed our proposed method and study design. First, if treated naively, the examined linear measurement approach exhibits potential bias with respect to the coordinate lattice of the examined image. Second, the DL-based image reconstruction methods examined in this thesis warrant an elaborate and explicit description of the measured noise and signal present in the reconstructed images. This specific reconstruction approach employs an iterative scheme with an embedded DL-based substep or filter to which we are blinded. In chapters 3 and 4 of this thesis, the imaging and DL-based image reconstruction experiments are described. These experiments employ cardiac MRI datasets from multiple clinical centers. We first outline the clinical and technical background for this approach, and then examine the sharpness of DL-based reconstructed images by two alternative methods: 1) the gold-standard method, which addresses lattice-point irregularity using a 're-gridding' method, and 2) our novel proposed method, inspired by radial MRI k-space sampling, which exploits the mathematical properties of uniform radial sampling to yield the target voxel counts in the 'gridded' polar coordinate system. This new measure of voxel counts is shown to overcome the operator-dependence limitation of the conventional approach. Furthermore, we propose this metric as a relative, comparative index between two alternative reconstructions from the same MRI k-space.
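One way to picture the polar-sampling idea is to score image sharpness along uniformly spaced radial spokes, sidestepping the Cartesian lattice bias the abstract mentions. The sketch below is an illustrative proxy only; the thesis's actual metric is based on voxel counts in the gridded polar coordinate system, which is not reproduced here.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def radial_sharpness(image, center, radius, n_angles=180, n_samples=256):
    """Sample intensity profiles along uniformly spaced radial spokes
    (a polar re-gridding of the image) and score sharpness as the mean
    of each spoke's peak absolute gradient."""
    angles = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    r = np.linspace(0.0, radius, n_samples)
    scores = []
    for a in angles:
        rows = center[0] + r * np.sin(a)
        cols = center[1] + r * np.cos(a)
        profile = map_coordinates(image, [rows, cols], order=1)
        scores.append(np.max(np.abs(np.gradient(profile))))
    return float(np.mean(scores))

# Toy check: a sharp vertical edge yields a nonzero sharpness score.
img = np.zeros((128, 128))
img[:, 64:] = 1.0
print(radial_sharpness(img, center=(64, 64), radius=40))
```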
- Title
- Intraoperative Assessment of Surgical Margins in Head And Neck Cancer Resection Using Time-Domain Fluorescence Imaging
- Creator
- Cleary, Brandon M.
- Date
- 2023
- Description
Rapid and accurate determination of surgical margin depth in fluorescence-guided surgery has been a difficult problem, leading to over- or under-resection of cancerous tissues and follow-up treatments such as 'call-back' surgery and chemotherapy. Current techniques using direct measurement of tumor margins in frozen-section pathology are slow, which can prevent surgeons from acting on the information before a patient is sent home. Other fluorescence techniques require measurement of margins in captured images overlaid with fluorescence data; this approach is flawed, as measuring depth from captured images loses spatial information. Intensity-based fluorescence techniques using tumor-to-background ratios do not decouple the effects of concentration from the acquired depth information. Thus, an objective measurement is necessary to determine the depths of surgical margins. This thesis focuses on the theory, device design, simulation development, and overall viability of time-domain fluorescence imaging as an alternative method of determining surgical margin depths. Characteristic regressions were generated by applying a thresholding method to acquired time-domain fluorescence signals, and these regressions were used to convert time-domain data to depth values. These were applied across an image to generate a depth map of a modeled tissue sample. All modeling was performed on homogeneous media using Monte Carlo simulations, providing high accuracy at the cost of increased computational time. In practice, the imaging process should complete in under 20 minutes for a full tissue sample, rather than 20 minutes for a single slice of the sample. This thesis also explores the effects of different thresholding levels on the accuracy of depth determination, as well as precautions to be taken regarding hardware limitations and signal noise.
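The thresholding step the abstract describes can be sketched as follows: find when the normalized time-domain signal first crosses a threshold, then map that arrival time to depth through a calibrated regression. The linear calibration and its coefficients below are hypothetical placeholders; in the thesis the characteristic regressions come from Monte Carlo simulation.

```python
import numpy as np

def threshold_arrival_time(t, signal, level=0.5):
    """Time at which the peak-normalized time-domain fluorescence signal
    first crosses `level`, with linear interpolation between samples."""
    s = np.asarray(signal, dtype=float)
    s = s / s.max()
    idx = int(np.argmax(s >= level))
    if idx == 0:
        return t[0]
    t0, t1, s0, s1 = t[idx - 1], t[idx], s[idx - 1], s[idx]
    return t0 + (level - s0) * (t1 - t0) / (s1 - s0)

def depth_from_arrival(t_arrival, slope, intercept):
    """Map arrival time to margin depth via a calibration regression;
    the linear form and coefficients are hypothetical placeholders."""
    return slope * t_arrival + intercept

# Toy pulse: later, broader arrivals would indicate deeper fluorophores.
t = np.linspace(0.0, 10.0, 200)  # ns
sig = np.exp(-0.5 * ((t - 4.0) / 0.8) ** 2)
ta = threshold_arrival_time(t, sig, level=0.5)
print(depth_from_arrival(ta, slope=2.0, intercept=-4.0))  # placeholder mm
```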
- Title
- Retrospective Quantitative T1 Imaging to Examine Characteristics of Multiple Sclerosis Lesions
- Creator
- Young, Griffin James
- Date
- 2024
- Description
Quantitative MRI plays an essential role in assessing tissue abnormality and disease progression in multiple sclerosis (MS). Specifically, T1 relaxometry is gaining popularity, as elevated T1 values have been shown to correlate with increased inflammation, demyelination, and gliosis. The predominant issue is that relaxometry requires parametric mapping through advanced imaging techniques not commonly included in standard clinical protocols. This leaves an information gap in large clinical datasets from which quantitative mapping could have been performed. We introduce T1-REQUIRE, a retrospective T1 mapping method that approximates T1 values from a single T1-weighted MR image. This method has already been shown to be accurate within 10% of a clinically available reference standard in healthy controls, and is further validated here in MS cohorts. We also aim to determine T1-REQUIRE's statistical significance as a unique biomarker for the assessment of MS lesions as they relate to clinical disability and disease burden. A 14-subject comparison between T1-REQUIRE maps derived from 3D T1-weighted turbo field echoes (3D T1w TFE) and inversion-recovery fast field echo (IR-FFE) T1 maps revealed a whole-brain voxel-wise Pearson's correlation of r = 0.89 (p < 0.001) and a mean bias of 3.99%. In MS white matter lesions, r = 0.81, R² = 0.65 (p < 0.001, N = 159), bias = 10.07%; in normal-appearing white matter (NAWM), r = 0.82, R² = 0.67 (p < 0.001), bias = 9.48%. Mean lesional T1-REQUIRE and MTR correlated significantly (r = -0.68, p < 0.001, N = 587), similar to previously published literature. Median lesional MTR correlated significantly with EDSS (rho = -0.34, p = 0.037), and lesional T1-REQUIRE exhibited significant correlations with global brain tissue atrophy as measured by brain parenchymal fraction (BPF) (r = -0.41, p = 0.010, N = 38). Multivariate linear regressions showed that T1-REQUIRE in NAWM had a meaningful statistical relationship with EDSS (β = 0.03, p = 0.027, N = 38), as did mean MTR values in the thalamus (β = -0.27, p = 0.037, N = 38). A new spoiled gradient echo variation of T1-REQUIRE was assessed as a proof of concept in a small 5-subject MS cohort against IR-FFE T1 maps, with a whole-brain voxel-wise correlation of r = 0.88, R² = 0.77 (p < 0.001) and bias of 0.19%; lesional T1 comparisons reached r = 0.75, R² = 0.56 (p < 0.001, N = 42), with bias of 10.81%. The significance of these findings is the potential to provide supplementary quantitative information in clinical datasets where quantitative protocols were not implemented. Large MS data repositories previously containing only structural T1-weighted images may now be used in big-data relaxometric studies, with the potential for new findings in previously untapped datasets. Furthermore, T1-REQUIRE has the potential for immediate use in clinics where standard T1 mapping sequences cannot be readily implemented.
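For intuition on retrospective T1 estimation from a single weighted image: if the acquisition is a spoiled gradient echo with known flip angle and TR, the standard signal equation can be inverted for T1 once an effective M0 is known. T1-REQUIRE itself obtains its calibration from internal reference tissues; the sketch below only shows the underlying inversion, with m0 assumed given.

```python
import numpy as np

def t1_from_spgr(signal, m0, flip_angle_deg, tr_ms):
    """Invert the spoiled gradient-echo signal equation
       S = M0*sin(a)*(1 - E1) / (1 - E1*cos(a)),  E1 = exp(-TR/T1),
    for T1 (returned in the same time units as TR)."""
    a = np.deg2rad(flip_angle_deg)
    s = np.asarray(signal, dtype=float)
    e1 = (m0 * np.sin(a) - s) / (m0 * np.sin(a) - s * np.cos(a))
    e1 = np.clip(e1, 1e-6, 1.0 - 1e-6)  # keep the log well-defined
    return -tr_ms / np.log(e1)

# Round trip: T1 = 1000 ms, TR = 8 ms, flip angle 8 degrees, M0 = 1.
e1 = np.exp(-8.0 / 1000.0)
a = np.deg2rad(8.0)
s = np.sin(a) * (1 - e1) / (1 - e1 * np.cos(a))
print(t1_from_spgr(s, m0=1.0, flip_angle_deg=8.0, tr_ms=8.0))  # ~1000 ms
```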