95 result(s) for "Aerts, Hugo J. W. L."
Deep learning for lung cancer prognostication: A retrospective multi-cohort radiomics study
Non-small-cell lung cancer (NSCLC) patients often demonstrate varying clinical courses and outcomes, even within the same tumor stage. This study explores deep learning applications in medical imaging that allow for the automated quantification of radiographic characteristics and could potentially improve patient stratification. We performed an integrative analysis on 7 independent datasets across 5 institutions totaling 1,194 NSCLC patients (median age = 68.3 years [range 32.5-93.3], median survival = 1.7 years [range 0.0-11.7]). Using external validation in computed tomography (CT) data, we identified prognostic signatures using a 3D convolutional neural network (CNN) for patients treated with radiotherapy (n = 771, median age = 68.0 years [range 32.5-93.3], median survival = 1.3 years [range 0.0-11.7]). We then employed a transfer learning approach to achieve the same for surgery patients (n = 391, median age = 69.1 years [range 37.2-88.0], median survival = 3.1 years [range 0.0-8.8]). We found that the CNN predictions were significantly associated with 2-year overall survival from the start of the respective treatment for radiotherapy (area under the receiver operating characteristic curve [AUC] = 0.70 [95% CI 0.63-0.78], p < 0.001) and surgery (AUC = 0.71 [95% CI 0.60-0.82], p < 0.001) patients. The CNN was also able to significantly stratify patients into low and high mortality-risk groups in both the radiotherapy (p < 0.001) and surgery (p = 0.03) datasets. Additionally, the CNN significantly outperformed random forest models built on clinical parameters, including age, sex, and tumor-node-metastasis (TNM) stage, and demonstrated high robustness against test-retest (intraclass correlation coefficient = 0.91) and inter-reader (Spearman's rank-order correlation = 0.88) variations. To gain a better understanding of the characteristics captured by the CNN, we identified the regions contributing most to its predictions and highlighted the importance of tumor-surrounding tissue in patient stratification. We also present preliminary findings on the biological basis of the captured phenotypes, which appear linked to cell-cycle and transcriptional processes. Limitations include the retrospective nature of this study and the opaque, black-box nature of deep learning networks. Our results provide evidence that deep learning networks may be used for mortality-risk stratification based on standard-of-care CT images from NSCLC patients. This evidence motivates future research into better deciphering the clinical and biological basis of deep learning networks, as well as validation in prospective data.
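The core of the pipeline this abstract describes is a 3D CNN that maps a CT tumor volume to a binary 2-year-survival prediction evaluated by AUC. Below is a minimal sketch of such a model; the architecture, the 50×50×50 input patch, and the dummy data are illustrative assumptions, not the authors' published network.

```python
# Minimal sketch of a 3D CNN for binary 2-year-survival prediction from CT
# tumor patches, in the spirit of the pipeline above. Architecture, input
# size, and data are illustrative assumptions, not the authors' model.
import torch
import torch.nn as nn
from sklearn.metrics import roc_auc_score

class Prognosis3DCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),  # global pooling: one 64-dim vector per scan
        )
        self.classifier = nn.Linear(64, 1)  # logit for P(alive at 2 years)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = Prognosis3DCNN()
x = torch.randn(8, 1, 50, 50, 50)                   # 8 dummy CT tumor patches
y = torch.tensor([0., 1., 0., 1., 0., 1., 0., 1.])  # dummy 2-year survival labels
logits = model(x).squeeze(1)
nn.BCEWithLogitsLoss()(logits, y).backward()        # one training step's gradients
print("AUC:", roc_auc_score(y.numpy(), logits.detach().numpy()))
```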
Robust Radiomics Feature Quantification Using Semiautomatic Volumetric Segmentation
Due to advances in the acquisition and analysis of medical imaging, it is currently possible to quantify the tumor phenotype. The emerging field of radiomics addresses this by converting medical images into minable data through the extraction of a large number of quantitative imaging features. One of the main challenges of radiomics is tumor segmentation: whereas manual delineation is time-consuming and prone to inter-observer variability, semi-automated approaches have been shown to be fast and to reduce inter-observer variability. In this study, a semiautomatic region-growing volumetric segmentation algorithm, implemented in the free and publicly available 3D Slicer platform, was investigated in terms of its robustness for quantitative imaging feature extraction. Fifty-six 3D radiomic features, quantifying phenotypic differences based on tumor intensity, shape, and texture, were extracted from the computed tomography images of twenty lung cancer patients. These radiomic features were derived from the 3D tumor volumes defined twice by three independent observers using 3D Slicer, and compared to manual slice-by-slice delineations by five independent physicians in terms of intraclass correlation coefficient (ICC) and feature range. Radiomic features extracted from 3D Slicer segmentations had significantly higher reproducibility (ICC = 0.85 ± 0.15, p = 0.0009) than features extracted from the manual segmentations (ICC = 0.77 ± 0.17). Furthermore, we found that features extracted from 3D Slicer segmentations were more robust, as their range was significantly smaller across observers (p = 3.819 × 10⁻⁷) and overlapped with the feature ranges extracted from manual contouring (lower boundary: p = 0.007, higher: p = 5.863 × 10⁻⁶). Our results show that 3D Slicer-segmented tumor volumes provide a better alternative to manual delineation for feature quantification, as they yield more reproducible imaging descriptors. Therefore, 3D Slicer can be employed for quantitative image feature extraction and image data-mining research in large patient cohorts.
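The robustness criterion in this study is the intraclass correlation coefficient across repeated segmentations. Below is a simplified sketch of a one-way ICC(1,1) on synthetic data; the paper's exact ICC variant and data layout are assumptions here.

```python
# One-way ICC(1,1) for a radiomic feature measured on the same tumors by
# several observers; values near 1 indicate a reproducible feature. The ICC
# variant and the synthetic data are illustrative, not the study's setup.
import numpy as np

def icc_1_1(x):
    """x: (n_tumors, k_observers) matrix of one feature's values."""
    n, k = x.shape
    grand_mean = x.mean()
    ms_between = k * ((x.mean(axis=1) - grand_mean) ** 2).sum() / (n - 1)
    ms_within = ((x - x.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

rng = np.random.default_rng(0)
true_values = rng.normal(100.0, 20.0, size=(20, 1))       # 20 tumors
reads = true_values + rng.normal(0.0, 5.0, size=(20, 3))  # 3 observers' reads
print(f"ICC(1,1) = {icc_1_1(reads):.2f}")                 # near 1 => reproducible
```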
Artificial intelligence in radiology
Artificial intelligence (AI) algorithms, particularly deep learning, have demonstrated remarkable progress in image-recognition tasks. Methods ranging from convolutional neural networks to variational autoencoders have found myriad applications in the medical image analysis field, propelling it forward at a rapid pace. Historically, in radiology practice, trained physicians visually assessed medical images for the detection, characterization and monitoring of diseases. AI methods excel at automatically recognizing complex patterns in imaging data and providing quantitative, rather than qualitative, assessments of radiographic characteristics. In this Opinion article, we establish a general understanding of AI methods, particularly those pertaining to image-based tasks. We explore how these methods could impact multiple facets of radiology, with a general focus on applications in oncology, and demonstrate ways in which these methods are advancing the field. Finally, we discuss the challenges facing clinical implementation and provide our perspective on how the domain could be advanced.
Radiographic prediction of meningioma grade by semantic and radiomic features
The clinical management of meningioma is guided by tumor grade and biological behavior. Currently, the assessment of tumor grade follows surgical resection and histopathologic review. Reliable techniques for pre-operative determination of tumor grade may enhance clinical decision-making. A total of 175 meningioma patients (103 low-grade and 72 high-grade) with pre-operative contrast-enhanced T1-MRI were included. Fifteen radiomic (quantitative) and 10 semantic (qualitative) features were applied to quantify the imaging phenotype. Area under the curve (AUC) and odds ratios (OR) were computed with multiple-hypothesis correction. Random-forest classifiers were developed and validated on an independent dataset (n = 44). Twelve radiographic features (eight radiomic and four semantic) were significantly associated with meningioma grade. High-grade tumors exhibited necrosis/hemorrhage (ORsem = 6.6, AUCrad = 0.62-0.68), intratumoral heterogeneity (ORsem = 7.9, AUCrad = 0.65), non-spherical shape (AUCrad = 0.61), and larger volumes (AUCrad = 0.69) compared to low-grade tumors. Radiomic and semantic classifiers could significantly predict meningioma grade (AUCsem = 0.76 and AUCrad = 0.78), and combining them increased the classification power (AUCradio = 0.86). Clinical variables alone did not effectively predict tumor grade (AUCclin = 0.65) or show complementary value with imaging data (AUCcomb = 0.84). We found a strong association between imaging features of meningioma and histopathologic grade, with ready application to clinical management. Combining qualitative and quantitative radiographic features significantly improved classification power.
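The classifier in this study is a random forest over concatenated quantitative and qualitative features, validated on a held-out set. Here is a hedged sketch of that setup on synthetic stand-in data; the real features, preprocessing, and tuning are not reproduced.

```python
# Random forest on combined radiomic (quantitative) + semantic (qualitative)
# features for binary grade prediction, with a held-out validation split.
# All feature values and labels below are synthetic stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 175
radiomic = rng.normal(size=(n, 15))           # 15 quantitative features
semantic = rng.integers(0, 2, size=(n, 10))   # 10 binary qualitative features
X = np.hstack([radiomic, semantic])           # combined feature set
y = (radiomic[:, 0] + semantic[:, 0] + rng.normal(0, 0.5, n) > 0.8).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=44,
                                          stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)
print("validation AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```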
Deep learning classification of lung cancer histology using CT images
Tumor histology is an important predictor of therapeutic response and outcomes in lung cancer. Tissue sampling for pathologist review is the most reliable method for histology classification; however, recent advances in deep learning for medical image analysis suggest the utility of radiologic data for further describing disease characteristics and for risk stratification. In this study, we propose a radiomics approach to predicting non-small cell lung cancer (NSCLC) tumor histology from non-invasive standard-of-care computed tomography (CT) data. We trained and validated convolutional neural networks (CNNs) on a dataset comprising 311 early-stage NSCLC patients receiving surgical treatment at Massachusetts General Hospital (MGH), with a focus on the two most common histological types: adenocarcinoma (ADC) and squamous cell carcinoma (SCC). The CNNs were able to predict tumor histology with an AUC of 0.71 (p = 0.018). We also found that using machine learning classifiers such as k-nearest neighbors (kNN) and support vector machines (SVM) on CNN-derived quantitative radiomics features yielded comparable discriminative performance, with an AUC of up to 0.71 (p = 0.017). Our best-performing CNN functioned as a robust probabilistic classifier in heterogeneous test sets, with qualitatively interpretable visual explanations for its predictions. Deep learning-based radiomics can identify histological phenotypes in lung cancer. It has the potential to augment existing approaches and serve as a corrective aid for diagnosticians.
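A key design in this abstract is treating the CNN as a feature extractor whose outputs feed classical classifiers. The sketch below illustrates that pattern; the tiny random-weight 2D CNN and dummy slices stand in for the trained histology network and real CT data.

```python
# CNN as a fixed feature extractor, then classical classifiers (SVM, kNN)
# on the deep features. Random weights and synthetic data are placeholders
# for the study's trained network and patient scans.
import torch
import torch.nn as nn
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

torch.manual_seed(0)
extractor = nn.Sequential(              # placeholder for a trained CNN backbone
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),                       # 16-dim deep feature vector per image
)

with torch.no_grad():
    slices = torch.randn(60, 1, 64, 64)                  # dummy CT tumor slices
    feats = extractor(slices).numpy()
labels = (feats[:, 0] > feats[:, 0].mean()).astype(int)  # dummy ADC-vs-SCC labels

svm = SVC().fit(feats[:40], labels[:40])
knn = KNeighborsClassifier(n_neighbors=5).fit(feats[:40], labels[:40])
print("SVM accuracy:", accuracy_score(labels[40:], svm.predict(feats[40:])))
print("kNN accuracy:", accuracy_score(labels[40:], knn.predict(feats[40:])))
```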
Comparison of Texture Features Derived from Static and Respiratory-Gated PET Images in Non-Small Cell Lung Cancer
PET-based texture features have been used to quantify tumor heterogeneity due to their predictive power for treatment outcome. We investigated the sensitivity of texture features to tumor motion by comparing static (3D) and respiratory-gated (4D) PET imaging. Twenty-six patients (34 lesions) received 3D and 4D [18F]FDG-PET scans before chemoradiotherapy. The acquired 4D data were retrospectively binned into five breathing phases to create the 4D image sequence. Texture features, including Maximal correlation coefficient (MCC), Long run low gray (LRLG), Coarseness, Contrast, and Busyness, were computed within the physician-defined tumor volume. The relative difference (δ3D-4D) in each texture between the 3D and 4D PET imaging was calculated. The coefficient of variation (CV) was used to determine the variability in the textures across all 4D-PET phases. Correlations between tumor volume, motion amplitude, and δ3D-4D were also assessed. Compared to 3D-PET, 4D-PET increased LRLG (δ3D-4D = 1%-2%, p < 0.02) and Busyness (δ3D-4D = 7%-19%, p < 0.01), and decreased MCC (δ3D-4D = 1%-2%, p < 7.5 × 10⁻³), Coarseness (δ3D-4D = 5%-10%, p < 0.05), and Contrast (δ3D-4D = 4%-6%, p > 0.08). Nearly negligible variability was found between the 4D phase bins, with CV < 5% for MCC, LRLG, and Coarseness; for Contrast and Busyness, moderate variability was found, with CV = 9% and 10%, respectively. No strong correlation was found between tumor volume and δ3D-4D for the texture features. Motion amplitude had a moderate impact on δ3D-4D for MCC and Busyness and no impact for LRLG, Coarseness, and Contrast. Significant differences were found in MCC, LRLG, Coarseness, and Busyness between 3D and 4D PET imaging. The variability between phase bins for MCC, LRLG, and Coarseness was negligible, suggesting that similar quantification can be obtained from all phases. Texture features blurred out by respiratory motion during 3D-PET acquisition can be better resolved by 4D-PET imaging. 4D-PET textures may have better prognostic value, as they are less susceptible to tumor motion.
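Two summary statistics carry most of this analysis: the relative 3D-to-4D difference δ3D-4D and the coefficient of variation across the five phase bins. A worked sketch follows; the exact sign and denominator convention for δ3D-4D is an assumption, as the abstract does not spell it out.

```python
# Worked example of the two statistics above on illustrative numbers:
# delta_3D-4D, here taken as (4D phase mean - 3D value) / 3D value, and the
# coefficient of variation (CV) of a feature across the five 4D phase bins.
import numpy as np

def relative_difference(f3d, f4d_phases):
    """delta_3D-4D in percent (sign convention assumed, not from the paper)."""
    return 100.0 * (np.mean(f4d_phases) - f3d) / f3d

def coefficient_of_variation(f4d_phases):
    """CV in percent across respiratory phase bins."""
    return 100.0 * np.std(f4d_phases) / np.mean(f4d_phases)

coarseness_3d = 0.044                                # value on the static scan
coarseness_4d = [0.040, 0.039, 0.041, 0.040, 0.039]  # five gated phase bins
print(f"delta_3D-4D = {relative_difference(coarseness_3d, coarseness_4d):+.1f}%")
print(f"CV = {coefficient_of_variation(coarseness_4d):.1f}%")
```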
Large language models to identify social determinants of health in electronic health records
Social determinants of health (SDoH) play a critical role in patient outcomes, yet their documentation is often missing or incomplete in the structured data of electronic health records (EHRs). Large language models (LLMs) could enable high-throughput extraction of SDoH from the EHR to support research and clinical care. However, class imbalance and data limitations present challenges for this sparsely documented yet critical information. Here, we investigated the optimal methods for using LLMs to extract six SDoH categories from narrative text in the EHR: employment, housing, transportation, parental status, relationship, and social support. The best-performing models were fine-tuned Flan-T5 XL for any SDoH mentions (macro-F1 0.71) and Flan-T5 XXL for adverse SDoH mentions (macro-F1 0.70). The effect of adding LLM-generated synthetic data to training varied across models and architectures, but improved the performance of the smaller Flan-T5 models (ΔF1 +0.12 to +0.23). Our best fine-tuned models outperformed ChatGPT-family models in the zero- and few-shot settings, except GPT-4 with 10-shot prompting for adverse SDoH. Fine-tuned models were less likely than ChatGPT to change their predictions when race/ethnicity and gender descriptors were added to the text, suggesting less algorithmic bias (p < 0.05). Our models identified 93.8% of patients with adverse SDoH, while ICD-10 codes captured 2.0%. These results demonstrate the potential of LLMs to improve real-world evidence on SDoH and to assist in identifying patients who could benefit from resource support.
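To make the task concrete, here is a zero-shot sketch of SDoH extraction with an instruction-tuned seq2seq model. The small checkpoint, prompt wording, and example note are illustrative assumptions; the study fine-tuned Flan-T5 XL/XXL on annotated clinical notes rather than prompting a small model this way.

```python
# Zero-shot sketch of SDoH category extraction from a clinical note with an
# instruction-tuned seq2seq model. Checkpoint, prompt, and note are
# illustrative; the study fine-tuned larger Flan-T5 variants on labeled data.
from transformers import pipeline

extract = pipeline("text2text-generation", model="google/flan-t5-small")

CATEGORIES = ["employment", "housing", "transportation",
              "parental status", "relationship", "social support"]
note = ("Patient lives alone, was recently evicted, and relies on a neighbor "
        "for rides to chemotherapy appointments.")
prompt = ("Which of the following social determinants of health are mentioned "
          f"in this clinical note? Options: {', '.join(CATEGORIES)}.\n"
          f"Note: {note}\nAnswer with a comma-separated list:")
print(extract(prompt, max_new_tokens=32)[0]["generated_text"])
```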
Deep convolutional neural networks to predict cardiovascular risk from computed tomography
Coronary artery calcium is an accurate predictor of cardiovascular events. While it is visible on all computed tomography (CT) scans of the chest, this information is not routinely quantified, as doing so requires expertise, time, and specialized equipment. Here, we show a robust and time-efficient deep learning system to automatically quantify coronary calcium on routine cardiac-gated and non-gated CT. As we evaluate in 20,084 individuals from distinct asymptomatic (Framingham Heart Study, NLST) and stable and acute chest pain (PROMISE, ROMICAT-II) cohorts, the automated score is a strong predictor of cardiovascular events independent of risk factors (multivariable-adjusted hazard ratios up to 4.3), shows high correlation with manual quantification, and has robust test-retest reliability. Our results demonstrate the clinical value of a deep learning system for the automated prediction of cardiovascular events. Implementation into clinical practice would address the unmet need of automating proven imaging biomarkers to guide management and improve population health.
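The quantity this system automates is conventionally computed as an Agatston score. As a rough illustration of that downstream arithmetic (not the authors' network), here is a simplified per-slice Agatston calculation on a synthetic lesion; the standard 1 mm² minimum-lesion rule is omitted for brevity.

```python
# Simplified Agatston-style scoring for one axial CT slice: threshold at
# 130 HU, label connected lesions, and sum area x peak-density weight.
# This illustrates the score being automated, not the deep learning model.
import numpy as np
from scipy import ndimage

def agatston_slice_score(hu_slice, pixel_area_mm2):
    mask = hu_slice >= 130                      # candidate calcium pixels
    labeled, n_lesions = ndimage.label(mask)
    score = 0.0
    for i in range(1, n_lesions + 1):
        lesion = labeled == i
        peak = hu_slice[lesion].max()           # peak attenuation sets the weight
        weight = 1 if peak < 200 else 2 if peak < 300 else 3 if peak < 400 else 4
        score += lesion.sum() * pixel_area_mm2 * weight
    return score

slice_hu = np.zeros((64, 64))
slice_hu[30:33, 30:33] = 350                    # one synthetic calcified lesion
print("Agatston (one slice):", agatston_slice_score(slice_hu, pixel_area_mm2=0.25))
```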
Deep Learning for Fully-Automated Localization and Segmentation of Rectal Cancer on Multiparametric MR
Multiparametric magnetic resonance imaging (MRI) can provide detailed information on the physical characteristics of rectal tumours. Several investigations suggest that volumetric analyses of anatomical and functional MRI contain clinically valuable information. However, manual delineation of tumours is a time-consuming procedure, as it requires a high level of expertise. Here, we evaluate deep learning methods for automatic localization and segmentation of rectal cancers on multiparametric MR imaging. MRI scans (1.5 T, T2-weighted, and DWI) of 140 patients with locally advanced rectal cancer were included in our analysis, divided equally between discovery and validation datasets. Two expert radiologists segmented each tumour. A convolutional neural network (CNN) was trained on the multiparametric MRIs of the discovery set to classify each voxel as tumour or non-tumour. On the independent validation dataset, the CNN showed high segmentation accuracy for reader 1 (Dice similarity coefficient [DSC] = 0.68) and reader 2 (DSC = 0.70). The area under the curve (AUC) of the resulting probability maps was very high for both readers (AUC = 0.99, SD = 0.05). Our results demonstrate that deep learning can perform accurate localization and segmentation of rectal cancer in MR imaging in the majority of patients. Deep learning technologies have the potential to improve the speed and accuracy of MRI-based rectal segmentations.
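The evaluation metric here is the Dice similarity coefficient between the CNN's voxel mask and an expert's delineation. A worked sketch on toy boolean masks follows; the masks are synthetic, not MRI segmentations.

```python
# Dice similarity coefficient, DSC = 2|A ∩ B| / (|A| + |B|), for boolean
# voxel masks, as used to score the CNN against each expert reader above.
import numpy as np

def dice(pred, ref):
    pred, ref = pred.astype(bool), ref.astype(bool)
    denom = pred.sum() + ref.sum()
    return 2.0 * np.logical_and(pred, ref).sum() / denom if denom else 1.0

reference = np.zeros((10, 10, 10), dtype=bool)
reference[3:7, 3:7, 3:7] = True       # "expert" tumour volume
prediction = np.zeros_like(reference)
prediction[4:8, 3:7, 3:7] = True      # slightly shifted "CNN" prediction
print(f"DSC = {dice(prediction, reference):.2f}")   # 0.75 for this overlap
```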
Peritumoral radiomics features predict distant metastasis in locally advanced NSCLC
Radiomics provides quantitative tissue-heterogeneity profiling and is an exciting approach to developing imaging biomarkers in the context of precision medicine. Normal-appearing parenchymal tissues surrounding primary tumors can harbor microscopic disease that leads to an increased risk of distant metastasis (DM). This study assesses whether computed tomography (CT) imaging features of such peritumoral tissues can predict DM in locally advanced non-small cell lung cancer (NSCLC). 200 NSCLC patients with adenocarcinoma histology were included in this study. The investigated lung tissues were the tumor rim, defined as 3 mm of tumor and parenchymal tissue on either side of the tumor border, and the exterior region, extending from 3 to 9 mm outside of the tumor. Fifteen stable radiomic features were extracted and evaluated from each of these regions on pre-treatment CT images. For comparison, features from expert-delineated tumor contours were similarly prepared. The patient cohort was separated into training and validation datasets for prognostic-power evaluation. Both univariable and multivariable analyses were performed for each region using the concordance index (CI). Univariable analysis reveals that six of fifteen tumor-rim features were significantly prognostic of DM (p < 0.05), as were ten features from the visible tumor, but only one of the exterior features was. In multivariable analysis, a rim radiomic signature achieved the highest prognostic performance in the independent validation sub-cohort (CI = 0.64, p = 2.4 × 10⁻⁵), significantly outperforming a multivariable clinical model (CI = 0.53), a visible-tumor radiomics model (CI = 0.59), and an exterior-tissue model (CI = 0.55). Furthermore, patient stratification by the combined rim signature and clinical predictor led to a significant improvement over the clinical predictor alone and also outperformed stratification using the combined tumor signature and clinical predictor. We identified peritumoral rim radiomic features significantly associated with DM. This study demonstrates that peritumoral imaging characteristics may provide additional valuable information over visible-tumor features for patient risk stratification due to cancer metastasis.
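The region definitions in this abstract (a ±3 mm rim around the tumor border and a 3-9 mm exterior shell) are naturally derived from a binary tumor mask by morphological erosion and dilation. Below is a hedged sketch under assumed isotropic 1 mm voxels; the toy sphere stands in for a delineated tumor.

```python
# Derive the tumor "rim" (3 mm on either side of the border) and the
# "exterior" shell (3-9 mm outside) from a binary tumor mask via
# morphological erosion/dilation. Voxel spacing and mask are assumptions.
import numpy as np
from scipy import ndimage

def shell_regions(tumor_mask, voxel_mm=1.0):
    inner = ndimage.binary_erosion(tumor_mask, iterations=round(3 / voxel_mm))
    outer3 = ndimage.binary_dilation(tumor_mask, iterations=round(3 / voxel_mm))
    outer9 = ndimage.binary_dilation(tumor_mask, iterations=round(9 / voxel_mm))
    rim = outer3 & ~inner          # +/- 3 mm around the tumor border
    exterior = outer9 & ~outer3    # 3 to 9 mm outside the tumor
    return rim, exterior

z, y, x = np.ogrid[:40, :40, :40]
tumor = (z - 20) ** 2 + (y - 20) ** 2 + (x - 20) ** 2 <= 8 ** 2  # toy sphere
rim, exterior = shell_regions(tumor)
print("rim voxels:", int(rim.sum()), "| exterior voxels:", int(exterior.sum()))
```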