Catalogue Search | MBRL
Explore the vast range of titles available.
37 result(s) for "Goldgof, Dmitry"
Revealing Tumor Habitats from Texture Heterogeneity Analysis for Classification of Lung Cancer Malignancy and Aggressiveness
by Hall, Lawrence; Depeursinge, Adrien; Schabath, Matthew
in 692/4028/67/1612/1350; 692/4028/67/2321; 692/4028/67/2322
2019
We propose an approach for characterizing the structural heterogeneity of lung cancer nodules using Computed Tomography Texture Analysis (CTTA). Measures of heterogeneity were used to test the hypothesis that heterogeneity can serve as a predictor of nodule malignancy and patient survival. To do this, we used the National Lung Screening Trial (NLST) dataset to determine whether heterogeneity can represent differences between nodules in lung cancer and non-lung cancer patients. The training set contained 253 participants and the test set 207. To discriminate cancerous from non-cancerous nodules at the time of diagnosis, a combination of heterogeneity and radiomic features was evaluated, producing a best area under the receiver operating characteristic curve (AUROC) of 0.85 and an accuracy of 81.64%. Second, we tested the hypothesis that heterogeneity can predict patient survival. We analyzed 40 patients diagnosed with lung adenocarcinoma (20 short-term and 20 long-term survivors) using leave-one-out cross-validation for performance evaluation. A combination of heterogeneity and radiomic features produced an AUROC of 0.9 and an accuracy of 85% in discriminating long- from short-term survivors.
Journal Article
Quantitative imaging biomarkers: A review of statistical methods for computer algorithm comparisons
by
Myers, Kyle J
,
Barboriak, Daniel P
,
Barnhart, Huiman X
in
Algorithms
,
Bias
,
Biological markers
2015
Quantitative biomarkers from medical images are becoming important tools for clinical diagnosis, staging, monitoring, treatment planning, and development of new therapies. While there is a rich history of the development of quantitative imaging biomarker (QIB) techniques, little attention has been paid to the validation and comparison of the computer algorithms that implement the QIB measurements. In this paper we provide a framework for QIB algorithm comparisons. We first review and compare various study designs, including designs with the true value (e.g. phantoms, digital reference images, and zero-change studies), designs with a reference standard (e.g. studies testing equivalence with a reference standard), and designs without a reference standard (e.g. agreement studies and studies of algorithm precision). The statistical methods for comparing QIB algorithms are then presented for various study types using both aggregate and disaggregate approaches. We propose a series of steps for establishing the performance of a QIB algorithm, identify limitations in the current statistical literature, and suggest future directions for research.
Journal Article
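Among the study designs the review covers are agreement studies without a reference standard. As a minimal sketch (NumPy only; the measurement values and the Bland-Altman-style summary below are illustrative, not the paper's own procedure), two hypothetical QIB algorithms measuring the same cases can be compared via the bias and 95% limits of agreement of their differences:

```python
import numpy as np

def agreement_stats(measurements_a, measurements_b):
    """Bland-Altman-style agreement summary for two QIB algorithms
    measuring the same cases (no reference standard required)."""
    a = np.asarray(measurements_a, dtype=float)
    b = np.asarray(measurements_b, dtype=float)
    diff = a - b
    bias = diff.mean()                           # mean difference between algorithms
    sd = diff.std(ddof=1)                        # sample SD of the differences
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)   # 95% limits of agreement
    return bias, loa

# Hypothetical tumor-volume measurements (mL) from two algorithms on 6 cases
alg_a = [10.1, 12.3, 9.8, 15.0, 11.2, 13.4]
alg_b = [10.4, 12.0, 10.1, 14.6, 11.5, 13.1]
bias, (lo, hi) = agreement_stats(alg_a, alg_b)
print(f"bias={bias:.2f} mL, 95% LoA=({lo:.2f}, {hi:.2f})")
```

A small bias with narrow limits of agreement suggests the two algorithms are interchangeable for the measurement; a disaggregate analysis would examine the per-case differences directly.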
Ensembles of Convolutional Neural Networks for Survival Time Estimation of High-Grade Glioma Patients from Multimodal MRI
by
Ben Ahmed, Kaoutar
,
Hall, Lawrence O.
,
Gatenby, Robert
in
Accuracy
,
artificial intelligence
,
Brain cancer
2022
Glioma is the most common type of primary malignant brain tumor. Accurate survival time prediction for glioma patients may positively impact treatment planning. In this paper, we develop an automatic survival time prediction tool for glioblastoma patients along with an effective solution to the limited availability of annotated medical imaging datasets. Ensembles of snapshots of three-dimensional (3D) deep convolutional neural networks (CNNs) are applied to Magnetic Resonance Imaging (MRI) data to predict the survival time of high-grade glioma patients. Additionally, multi-sequence MRI images were used to enhance survival prediction performance. A novel way to leverage the potential of ensembles to overcome the limited availability of labeled medical images is shown. This new classification method separates glioblastoma patients into long- and short-term survivors. The BraTS (Brain Tumor Image Segmentation) 2019 training dataset was used in this work. Each patient case consisted of three MRI sequences (T1CE, T2, and FLAIR). Our training set contained 163 cases, while the test set included 46 cases. The best known prediction accuracy for this type of problem, 74%, was achieved on the unseen test set.
Journal Article
1191 Use of Artificial Intelligence for Identification of Celiac and Vascular Lesions on Capsule Endoscopy
by
Morera, Hunter H.
,
Vidyarthi, Gitanjali
,
Singh, Arush
in
Accuracy
,
Artificial intelligence
,
Endoscopy
2019
INTRODUCTION: Capsule endoscopy is an important tool for the noninvasive identification of gastrointestinal pathology. It requires a physician to review images, which can be tedious and time consuming. The use of artificial intelligence has grown in popularity in colonoscopy for the identification of polyps and vascular lesions. We aim to use computer-assisted image analysis with convolutional neural networks (CNNs) for the identification of inflammatory and vascular lesions on capsule endoscopy. METHODS: We examined a total of 2371 publicly available wireless capsule endoscopy (WCE) images obtained using MiroCam® (IntroMedic Co, Seoul, Korea) capsule endoscopes. The capsule images illustrated normal and assorted small bowel findings, including polypoid, vascular, and inflammatory lesions, and were annotated with each finding. Images were divided into "normal" (725 images), "inflammatory" (225 images), and "vascular" (300 images) for the purposes of this analysis. Pre-processing was performed on these images. Augmentation was performed by flipping and rotating each image to obtain multiple views of pathologic and normal images. The machine learning algorithm was trained on the original and augmented images; testing was performed only on original images. RESULTS: Using 5-fold cross-validation, a total testing accuracy of 73.7% was achieved for differentiating normal from inflammatory images (Figure 1) and 70.2% for differentiating normal from vascular capsule endoscopy images. The area under the receiver operating characteristic curve (ROC) for identification of inflammatory images was 0.70 (95% CI 0.664-0.736), compared with 0.68 (95% CI 0.646-0.714) for vascular lesions. CONCLUSION: This system demonstrates that AI can be used to find more subtle inflammatory and vascular lesions. The CNN system detected and localized polyps well within real-time constraints using an ordinary desktop machine with a contemporary graphics-processing unit. It can increase the findings of pathology and decrease the time needed to review capsule endoscopy studies, but requires further validation in large multicenter trials.
Journal Article
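The Methods above describe augmenting the training set by flipping and rotating each image. A minimal sketch of that kind of augmentation (NumPy only; the tiny 3×3 array stands in for a real capsule frame, and the exact set of views used in the study is not specified here):

```python
import numpy as np

def augment(image):
    """Generate flipped and rotated views of one endoscopy frame,
    keeping the original first (one common flip/rotate augmentation set)."""
    views = [image,
             np.fliplr(image),            # horizontal flip
             np.flipud(image)]            # vertical flip
    views += [np.rot90(image, k) for k in (1, 2, 3)]  # 90/180/270 degree rotations
    return views

frame = np.arange(9).reshape(3, 3)        # stand-in for a capsule image
augmented = augment(frame)
print(len(augmented))  # 6 views per original frame
```

Training on original plus augmented views while testing only on originals, as the abstract notes, avoids leaking augmented copies of test images into training.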
Delta radiomic features improve prediction for lung cancer incidence: A nested case–control analysis of the National Lung Screening Trial
2018
Background
Current guidelines for lung cancer screening increased a positive scan threshold to a 6 mm longest diameter. We extracted radiomic features from baseline and follow‐up screens and performed size‐specific analyses to predict lung cancer incidence using three nodule size classes (<6 mm [small], 6‐16 mm [intermediate], and ≥16 mm [large]).
Methods
We extracted 219 features from baseline (T0) nodules and 219 delta features which are the change from T0 to first follow‐up (T1). Nodules were identified for 160 incidence cases diagnosed with lung cancer at T1 or second follow‐up screen (T2) and for 307 nodule‐positive controls that had three consecutive positive screens not diagnosed as lung cancer. The cases and controls were split into training and test cohorts; classifier models were used to identify the most predictive features.
Results
The final models revealed modest improvements for baseline and delta features when compared to only baseline features. The AUROCs for small‐ and intermediate‐sized nodules were 0.83 (95% CI 0.76‐0.90) and 0.76 (95% CI 0.71‐0.81) for baseline‐only radiomic features, respectively, and 0.84 (95% CI 0.77‐0.90) and 0.84 (95% CI 0.80‐0.88) for baseline and delta features, respectively. When intermediate and large nodules were combined, the AUROC for baseline‐only features was 0.80 (95% CI 0.76‐0.84) compared with 0.86 (95% CI 0.83‐0.89) for baseline and delta features.
Conclusions
We found modest improvements in predicting lung cancer incidence by combining baseline and delta radiomics. Radiomics could be used to improve current size‐based screening guidelines.
We demonstrated that combining delta radiomics with baseline radiomics generally improved the performance statistics for predicting lung cancer incidence compared with using only baseline radiomic features. We note inconsistent results in the performance statistics when comparing the overall models with the models based on nodule size. As such, our findings suggest a trade-off in performance between nodule size-specific models and an overall model.
Journal Article
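As the Methods describe, a delta feature is simply the change in a radiomic feature from baseline (T0) to first follow-up (T1). A minimal sketch (the feature names and values below are hypothetical, not drawn from the study's 219-feature set):

```python
def delta_features(t0, t1):
    """Delta radiomics: change from baseline (T0) to follow-up (T1)
    for every feature present at both time points."""
    return {name: t1[name] - t0[name] for name in t0 if name in t1}

# Hypothetical values for two of the 219 extracted features
t0 = {"longest_diameter_mm": 7.0, "mean_hu": -310.0}
t1 = {"longest_diameter_mm": 9.5, "mean_hu": -290.0}
print(delta_features(t0, t1))  # {'longest_diameter_mm': 2.5, 'mean_hu': 20.0}
```

The classifier then sees both the baseline values and these deltas, letting it use nodule growth and density change rather than a single snapshot.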
Synergizing Deep Learning-Enabled Preprocessing and Human–AI Integration for Efficient Automatic Ground Truth Generation
by
Wickline, Samuel A.
,
Hall, Lawrence
,
Collazo, Christopher
in
active deep learning
,
Algorithms
,
Artificial intelligence
2024
The progress of incorporating deep learning into the field of medical image interpretation has been greatly hindered by the tremendous cost and time associated with generating ground truth for supervised machine learning, alongside concerns about the inconsistent quality of acquired images. Active learning offers a potential solution to expanding dataset ground truth by algorithmically choosing the most informative samples for ground truth labeling. Still, this effort incurs the cost of human labeling, which must be minimized. Furthermore, automatic labeling approaches employing active learning often exhibit overfitting tendencies, selecting samples closely aligned with the training set distribution and excluding out-of-distribution samples that could potentially improve the model's effectiveness. We propose that the majority of out-of-distribution instances can be attributed to inconsistencies across images. Since the FDA approved the first whole-slide image system for medical diagnosis in 2017, whole-slide images have provided enriched critical information to advance the field of automated histopathology. Here, we exemplify the benefits of a novel deep learning strategy that utilizes high-resolution whole-slide microscopic images. We quantitatively assess and visually highlight the inconsistencies within the whole-slide image dataset employed in this study. Accordingly, we introduce a deep learning-based preprocessing algorithm designed to normalize unknown samples to the training set distribution, effectively mitigating the overfitting issue. Consequently, our approach significantly increases the amount of automatic region-of-interest ground truth labeling on high-resolution whole-slide images using active deep learning. We accept 92% of the automatic labels generated for our unlabeled data cohort, expanding the labeled dataset by 845%. Additionally, we demonstrate expert time savings of 96% relative to manual expert ground-truth labeling.
Journal Article
Reduction of Video Capsule Endoscopy Reading Times Using Deep Learning with Small Data
by
Vidyarthi, Gitanjali
,
Morera, Hunter
,
Baviriseaty, Niharika
in
Analysis
,
Artificial intelligence
,
Artificial neural networks
2022
Video capsule endoscopy (VCE) is an innovation that has revolutionized care within the field of gastroenterology, but the time needed to read the studies generated has often been cited as an area for improvement. With the aid of artificial intelligence, various fields have improved the efficiency of their core processes by reducing the burden of irrelevant stimuli on their human elements. In this study, we created and trained a convolutional neural network (CNN) capable of significantly reducing capsule endoscopy reading times by eliminating normal parts of the video while retaining abnormal ones. Our model, a variation of ResNet50, reduced VCE video length by 47% on average and captured abnormal segments on VCE with 100% accuracy on three VCE videos, as confirmed by the reading physician. The ability to successfully pre-process VCE footage as we have demonstrated will greatly increase the practicality of VCE technology without the expense of hundreds of hours of physician-annotated videos.
Journal Article
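The core pre-processing idea above, dropping frames a classifier calls normal while keeping abnormal ones, can be sketched in a few lines. This is an illustrative stand-in, not the paper's pipeline: frames are plain values and the "classifier" is a passed-in predicate rather than a ResNet50:

```python
def reduce_video(frames, predict_abnormal):
    """Keep only the frames a classifier flags as abnormal, and report
    what fraction of the footage was eliminated."""
    kept = [f for f in frames if predict_abnormal(f)]
    reduction = 1 - len(kept) / len(frames)
    return kept, reduction

# Stand-in: frames are ints, and "abnormal" frames are marked negative
frames = [1, -2, 3, 4, -5, 6, 7, -8, 9, 10]
kept, reduction = reduce_video(frames, lambda f: f < 0)
print(kept, f"{reduction:.0%} of footage removed")  # [-2, -5, -8] 70% of footage removed
```

In practice a reader-facing tool would likely keep a few frames of context around each flagged segment, since missing an abnormal frame is far more costly than keeping a normal one.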
Large language models robustness against perturbation
2025
Large Language Models (LLMs) have demonstrated impressive performance across various natural language processing (NLP) tasks, including text summarization, classification, and generation. Despite their success, LLMs are primarily trained on curated datasets that lack human-induced errors, such as typos or variations in word choice. As a result, LLMs may produce unexpected outputs when processing text containing such perturbations. In this paper, we investigate the resilience of LLMs to two types of text perturbations: typos and word substitutions. Using two public datasets, we evaluate the impact of these perturbations on text generation using six state-of-the-art models, including GPT-4o and LLaMA3.3-70B. Although previous studies have primarily examined the effects of perturbations in classification tasks, our research focuses on their impact on text generation. The results indicate that LLMs are sensitive to text perturbations, leading to variations in generated outputs, which have implications for their robustness and reliability in real-world applications.
Journal Article
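The two perturbation types the abstract studies, typos and word substitutions, can be sketched as simple text transforms. This is a hedged illustration only: the substitution table is hypothetical and the RNG-seeded character swap is one common way to simulate a typo, not necessarily the paper's procedure:

```python
import random

def add_typo(word, rng):
    """Simulate a typo by swapping two adjacent characters (if long enough)."""
    if len(word) < 2:
        return word
    i = rng.randrange(len(word) - 1)
    chars = list(word)
    chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)

def substitute_words(text, table):
    """Replace words according to a (hypothetical) synonym table."""
    return " ".join(table.get(w, w) for w in text.split())

rng = random.Random(0)  # seeded for reproducibility
print(add_typo("summarization", rng))
print(substitute_words("the model generates a short summary",
                       {"short": "brief", "generates": "produces"}))
```

Feeding original and perturbed versions of the same prompt to an LLM and comparing the generated outputs is the kind of robustness probe the paper describes.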
Future roles of artificial intelligence in early pain management of newborns
2021
The advent of increasingly sophisticated medical technology, surgical interventions, and supportive healthcare measures is raising survival probabilities for babies born premature and/or with life‐threatening health conditions. In the United States, this trend is associated with greater numbers of neonatal surgeries and higher admission rates into neonatal intensive care units (NICU) for newborns at all birth weights. Following surgery, current pain management in NICU relies primarily on narcotics (opioids) such as morphine and fentanyl (about 100 times more potent than morphine) that lead to a number of complications, including prolonged stays in NICU for opioid withdrawal. In this paper, we review current practices and challenges for pain assessment and treatment in NICU and outline ongoing efforts using Artificial Intelligence (AI) to support pain‐ and opioid‐sparing approaches for newborns in the future. A major focus for these next‐generation approaches to NICU‐based pain management is proactive pain mitigation (avoidance) aimed at preventing harm to neonates from both postsurgical pain and opioid withdrawal. AI‐based frameworks can use single or multiple combinations of continuous objective variables, that is, facial and body movements, crying frequencies, and physiological data (vital signs), to make high‐confidence predictions about time‐to‐pain onset following postsurgical sedation. Such predictions would create a therapeutic window prior to pain onset for mitigation with non‐narcotic pharmaceutical and nonpharmaceutical interventions. These emerging AI‐based strategies have the potential to minimize or avoid damage to the neonate's body and psyche from postsurgical pain and opioid withdrawal.
Journal Article