Catalogue Search | MBRL
Explore the vast range of titles available.
105 result(s) for "Oikonomou, Anastasia"
COVID-CT-MD, COVID-19 computed tomography scan dataset applicable in machine learning and deep learning
by Heidarian, Shahin; Enshaei, Nastaran; Naderkhani, Farnoosh (2021)
Novel Coronavirus (COVID-19) has drastically overwhelmed more than 200 countries, affecting millions and claiming almost 2 million lives since its emergence in late 2019. This highly contagious disease spreads easily and, if not controlled in a timely fashion, can rapidly incapacitate healthcare systems. The current standard diagnostic method, Reverse Transcription Polymerase Chain Reaction (RT-PCR), is time-consuming and subject to low sensitivity. Chest Radiography (CXR), the first imaging modality to be used, is readily available and gives immediate results. However, it has notoriously lower sensitivity than Computed Tomography (CT), which can be used efficiently to complement other diagnostic methods. This paper introduces a new COVID-19 CT scan dataset, referred to as COVID-CT-MD, consisting not only of COVID-19 cases but also of healthy participants and participants infected with Community-Acquired Pneumonia (CAP). The COVID-CT-MD dataset, which is accompanied by lobe-level, slice-level, and patient-level labels, has the potential to facilitate COVID-19 research; in particular, COVID-CT-MD can assist in the development of advanced Machine Learning (ML) and Deep Neural Network (DNN)-based solutions.
Measurement(s): Low Dose Computed Tomography of the Chest • viral infectious disease
Technology Type(s): digital curation • image processing technique
Factor Type(s): sex • gender • age group • weight • clinical characteristics • COVID-19 RT-PCR result • follow-up data
Sample Characteristic - Organism: Homo sapiens
Machine-accessible metadata file describing the reported data: https://doi.org/10.6084/m9.figshare.13583015
Journal Article
Predefined and data-driven CT radiomics predict recurrence-free and overall survival in patients with pulmonary metastases treated with stereotactic body radiotherapy
2024
This retrospective study explores two radiomics methods, combined with other clinical variables, for predicting recurrence-free survival (RFS) and overall survival (OS) in patients with pulmonary metastases treated with stereotactic body radiotherapy (SBRT).
111 patients with 163 metastases treated with SBRT were included, with a median follow-up time of 927 days. First-order radiomic features were extracted using two methods: 2D CT texture analysis (CTTA) using TexRAD software, and a data-driven technique, functional principal components analysis (FPCA), using segmented tumoural and peri-tumoural 3D regions.
Using Kaplan-Meier analysis with log-rank tests and multivariate Cox regression analysis, the best radiomic features of both methods were selected: CTTA-based "entropy" and "F1," the FPCA-based first mode of variation of the tumoural CT density histogram. Predictive models combining radiomic variables and age showed a C-index of 0.62 (95% CI: 0.57-0.67). "Clinical indication for SBRT" and "lung primary cancer origin" were strongly associated with RFS and improved the RFS C-index to 0.67 (0.62-0.72) when combined with the best radiomic features. The best multivariate Cox model for predicting OS combined CTTA-based features (skewness and kurtosis) with size and "lung primary cancer origin," with a C-index of 0.67 (0.61-0.74).
In conclusion, concise predictive models including CT density radiomics of metastases, age, clinical indication, and lung primary cancer origin can help identify patients with probable earlier recurrence or death prior to SBRT treatment, so that more aggressive treatment can be applied.
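The C-index reported above summarizes how well a model's predicted risks order the observed survival times. As an illustration only (toy data, not the study's model), Harrell's concordance index for right-censored data can be sketched as:

```python
def c_index(times, events, risks):
    """Harrell's concordance index for right-censored survival data.

    times: observed follow-up times; events: 1 if the event occurred,
    0 if censored; risks: predicted risk scores (higher = earlier event).
    """
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # A pair is comparable only if the earlier time is an event.
            if times[i] < times[j] and events[i] == 1:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1      # risks ordered correctly
                elif risks[i] == risks[j]:
                    concordant += 0.5    # ties count as half
    return concordant / comparable

# Perfect ordering: shorter survival paired with higher risk.
print(c_index([2, 4, 6, 8], [1, 1, 1, 1], [4, 3, 2, 1]))  # 1.0
```

A C-index of 0.5 corresponds to random ordering, which is why values such as 0.62-0.67 represent modest but real predictive signal.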
Journal Article
COVID-rate: an automated framework for segmentation of COVID-19 lesions from chest CT images
by Enshaei, Nastaran; Heidarian, Shahin; Mohammadi, Arash (2022)
Novel Coronavirus disease (COVID-19) is a highly contagious respiratory infection that has had devastating effects on the world. Recently, new COVID-19 variants have been emerging, making the situation more challenging and threatening. Evaluation and quantification of COVID-19 lung abnormalities based on chest Computed Tomography (CT) images can help determine the disease stage, efficiently allocate limited healthcare resources, and inform treatment decisions. During the pandemic, however, visual assessment and quantification of COVID-19 lung lesions by expert radiologists became expensive and prone to error, raising an urgent need for practical autonomous solutions. In this context, first, the paper introduces an open-access COVID-19 CT segmentation dataset containing 433 CT images from 82 patients, annotated by an expert radiologist. Second, a Deep Neural Network (DNN)-based framework, referred to as COVID-Rate, is proposed that autonomously segments lung abnormalities associated with COVID-19 from chest CT images. Performance of the proposed COVID-Rate framework is evaluated through several experiments based on the introduced and external datasets. Third, an unsupervised enhancement approach is introduced that can reduce the gap between the training set and test set and improve model generalization. The enhanced results show a dice score of 0.8069 and a specificity and sensitivity of 0.9969 and 0.8354, respectively. Furthermore, the results indicate that the COVID-Rate model can efficiently segment COVID-19 lesions in both 2D CT images and whole lung volumes. Results on the external dataset illustrate the generalization capability of the COVID-Rate model to CT images obtained from a different scanner.
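The dice score, specificity, and sensitivity quoted above are standard pixel-wise overlap metrics between a predicted segmentation mask and the expert annotation. A minimal sketch with toy masks (not the paper's evaluation code):

```python
import numpy as np

def dice_score(pred, truth):
    """Dice coefficient: 2|P ∩ T| / (|P| + |T|) for binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum())

def sensitivity_specificity(pred, truth):
    """Pixel-wise sensitivity (recall on lesion pixels) and specificity."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()
    tn = np.logical_and(~pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    return tp / (tp + fn), tn / (tn + fp)

truth = np.zeros((4, 4), dtype=int); truth[1:3, 1:3] = 1  # 4 lesion pixels
pred  = np.zeros((4, 4), dtype=int); pred[1:3, 1:4] = 1   # 6 predicted pixels
print(dice_score(pred, truth))  # 2*4 / (6+4) = 0.8
```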
Journal Article
3D-MCN: A 3D Multi-scale Capsule Network for Lung Nodule Malignancy Prediction
by Naderkhani, Farnoosh; Mohammadi, Arash; Tyrrell, Pascal N. (2020)
Despite the advances in automatic lung cancer malignancy prediction, achieving high accuracy remains challenging. Existing solutions are mostly based on Convolutional Neural Networks (CNNs), which require a large amount of training data. Most of the developed CNN models are based only on the main nodule region, without considering the surrounding tissues. Obtaining high sensitivity is challenging in lung nodule malignancy prediction. Moreover, the interpretability of the proposed techniques should be a consideration when the end goal is to utilize the model in a clinical setting. Capsule networks (CapsNets) are new machine learning architectures proposed to overcome shortcomings of CNNs. Capitalizing on the success of CapsNets in biomedical domains, we propose a novel model for lung tumor malignancy prediction. The proposed framework, referred to as the 3D Multi-scale Capsule Network (3D-MCN), is uniquely designed to benefit from: (i) 3D inputs, providing information about the nodule in 3D; (ii) multi-scale inputs, capturing the nodule's local features as well as the characteristics of the surrounding tissues; and (iii) a CapsNet-based design, capable of dealing with a small number of training samples. The proposed 3D-MCN architecture predicted lung nodule malignancy with a high accuracy of 93.12%, sensitivity of 94.94%, area under the curve (AUC) of 0.9641, and specificity of 90% when tested on the LIDC-IDRI dataset. When classifying patients as having a malignant condition (i.e., at least one malignant nodule is detected) or not, the proposed model achieved an accuracy of 83%, and a sensitivity and specificity of 84% and 81%, respectively.
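The multi-scale input described in (ii) amounts to extracting nodule-centered patches at several cube sizes, so the smallest patch isolates the nodule while the largest also covers surrounding tissue. A minimal sketch of that idea; the sizes and padding scheme are illustrative assumptions, not the paper's exact settings:

```python
import numpy as np

def multiscale_crops(volume, center, sizes=(16, 32, 64)):
    """Extract nodule-centered cubic crops at several scales.

    Smaller crops capture local nodule texture; larger crops include
    the surrounding tissue, mirroring the multi-scale idea above.
    """
    crops = []
    for s in sizes:
        half = s // 2
        # Pad so crops near the volume border keep the requested size.
        padded = np.pad(volume, half, mode="constant")
        z, y, x = (c + half for c in center)  # shift center into padded coords
        crops.append(padded[z - half:z + half,
                            y - half:y + half,
                            x - half:x + half])
    return crops

vol = np.random.rand(64, 64, 64)          # stand-in for a CT volume
crops = multiscale_crops(vol, center=(32, 32, 32))
print([c.shape for c in crops])  # [(16, 16, 16), (32, 32, 32), (64, 64, 64)]
```

In practice, each scale would feed a separate branch of the capsule network; the fixed crop sizes here are placeholders.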
Journal Article
Human-level COVID-19 diagnosis from low-dose CT scans using a two-stage time-distributed capsule network
by Naderkhani, Farnoosh; Heidarian, Shahin; Enshaei, Nastaran (2022)
Reverse transcription-polymerase chain reaction is currently the gold standard in COVID-19 diagnosis. It can, however, take days to provide the diagnosis, and its false negative rate is relatively high. Imaging, in particular chest computed tomography (CT), can assist with diagnosis and assessment of this disease. Nevertheless, standard-dose CT scans impose a significant radiation burden on patients, especially those in need of multiple scans. In this study, we consider low-dose and ultra-low-dose (LDCT and ULDCT) scan protocols that reduce the radiation exposure to close to that of a single X-ray, while maintaining an acceptable resolution for diagnostic purposes. Since thoracic radiology expertise may not be widely available during the pandemic, we develop an Artificial Intelligence (AI)-based framework using a collected dataset of LDCT/ULDCT scans to study the hypothesis that the AI model can provide human-level performance. The AI model uses a two-stage capsule network architecture and can rapidly classify COVID-19, community-acquired pneumonia (CAP), and normal cases using LDCT/ULDCT scans. Based on cross-validation, the AI model achieves a COVID-19 sensitivity of 89.5% ± 0.11, CAP sensitivity of 95% ± 0.11, normal-case sensitivity (specificity) of 85.7% ± 0.16, and accuracy of 90% ± 0.06. By incorporating clinical data (demographics and symptoms), the performance further improves to a COVID-19 sensitivity of 94.3% ± 0.05, CAP sensitivity of 96.7% ± 0.07, normal-case sensitivity (specificity) of 91% ± 0.09, and accuracy of 94.1% ± 0.03. The proposed AI model achieves human-level diagnosis based on LDCT/ULDCT scans with reduced radiation exposure. We believe that the proposed AI model has the potential to assist radiologists in accurately and promptly diagnosing COVID-19 infection and to help control the transmission chain during the pandemic.
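Per-class sensitivities like those above are read directly off a confusion matrix over the three classes (COVID-19, CAP, normal); the sensitivity on normal cases doubles as the model's specificity for disease. A minimal sketch with hypothetical counts, not the study's actual results:

```python
import numpy as np

def per_class_sensitivity(cm):
    """Row-normalized diagonal of a confusion matrix.

    cm[i, j] = number of cases with true class i predicted as class j,
    so the sensitivity (recall) of class i is cm[i, i] / cm[i].sum().
    """
    cm = np.asarray(cm, dtype=float)
    return np.diag(cm) / cm.sum(axis=1)

# Hypothetical counts; classes ordered COVID-19, CAP, normal.
cm = [[90,  5,  5],
      [ 2, 95,  3],
      [ 6,  8, 86]]
print(per_class_sensitivity(cm))  # sensitivities 0.90, 0.95, 0.86
```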
Journal Article
Survival analysis in lung cancer patients with interstitial lung disease
2021
Lung cancer patients with interstitial lung disease (ILD) are prone to higher morbidity and mortality, and their treatment is challenging. The purpose of this study is to investigate whether the survival of lung cancer patients is affected by the presence of ILD documented on CT.
146 patients with ILD at initial chest CT were retrospectively included in the study, along with 146 lung cancer controls without ILD. Chest CTs were evaluated for the presence of pulmonary fibrosis, which was classified into four categories. The presence and type of emphysema, extent of ILD and emphysema, location and histologic type of cancer, clinical staging, and treatment were evaluated. Kaplan-Meier estimates and Cox regression models were used to assess the survival probability and hazard of death of the different groups. A p value < 0.05 was considered significant.
5-year survival for the study group was 41% versus 48% for the control group (log-rank test, p = 0.0092). No significant difference in survival rate was found between the four categories of ILD (log-rank test, p = 0.195) or the different histologic types (log-rank test, p = 0.4005). A Cox proportional hazards model was used including the presence of ILD, clinical stage, and age. The hazard of death among patients with ILD was 1.522 times that among patients without ILD (95% CI, p = 0.029).
Patients with lung cancer and CT evidence of ILD have a significantly shorter survival compared to patients with lung cancer only. Documenting the type and grading the severity of ILD in lung cancer patients will significantly contribute to their challenging management.
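The 5-year survival figures above come from Kaplan-Meier estimation. A minimal pure-Python sketch of the product-limit estimator on toy data (not the study's cohort):

```python
def kaplan_meier(times, events):
    """Kaplan-Meier product-limit estimator.

    times: follow-up times; events: 1 = death observed, 0 = censored.
    Returns (event times, survival probability after each event time).
    """
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk = len(times)
    surv, out_t, out_s = 1.0, [], []
    i = 0
    while i < len(order):
        t = times[order[i]]
        deaths = removed = 0
        # Group all subjects sharing the same follow-up time.
        while i < len(order) and times[order[i]] == t:
            deaths += events[order[i]]
            removed += 1
            i += 1
        if deaths:
            surv *= 1.0 - deaths / at_risk  # product-limit update
            out_t.append(t)
            out_s.append(surv)
        at_risk -= removed  # censored subjects leave the risk set too
    return out_t, out_s

# Deaths at t=1 and t=3; censoring at t=2 and t=4.
print(kaplan_meier([1, 2, 3, 4], [1, 0, 1, 0]))  # ([1, 3], [0.75, 0.375])
```

The log-rank test and Cox model mentioned above then compare such curves between the ILD and control groups.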
Journal Article
DRTOP: deep learning-based radiomics for the time-to-event outcome prediction in lung cancer
2020
Hand-crafted radiomics has been used for developing models to predict time-to-event clinical outcomes in patients with lung cancer. Hand-crafted features, however, are pre-defined and extracted without taking the desired target into account. Furthermore, accurate segmentation of the tumor is required for the development of a reliable predictive model, which may be a subjective and time-consuming task. To address these drawbacks, we propose a deep learning-based radiomics model for time-to-event outcome prediction, referred to as DRTOP, that takes raw images as inputs and calculates an image-based risk of death or recurrence for each patient. Our experiments on an in-house dataset of 132 lung cancer patients show that the obtained image-based risks are significant predictors of the time-to-event outcomes. Computed Tomography (CT)-based features are predictors of overall survival (OS), with a hazard ratio (HR) of 1.35; distant control (DC), with an HR of 1.06; and local control (LC), with an HR of 2.66. The Positron Emission Tomography (PET)-based features are predictors of OS and recurrence-free survival (RFS), with hazard ratios of 1.67 and 1.18, respectively. Concordance indices of 68%, 63%, and 64% for predicting OS, DC, and RFS show that the deep learning-based radiomics model is as accurate as or better than hand-crafted radiomics, whose concordance indices are 51%, 64%, and 47% for predicting OS, DC, and RFS, respectively. Deep learning-based radiomics has the potential to offer complementary predictive information in the personalized management of lung cancer patients.
Journal Article
Differentiation of COVID-19 from other types of viral pneumonia and severity scoring on baseline chest radiographs: Comparison of deep learning with multi-reader evaluation
2025
Chest X-ray (CXR) imaging plays a pivotal role in the diagnosis and prognosis of viral pneumonia. However, distinguishing COVID-19 CXRs from other viral infections remains challenging due to highly similar radiographic features. Most existing deep learning (DL) models focus on differentiating COVID-19 from community-acquired pneumonia (CAP) rather than other viral pneumonias and often overlook baseline CXRs, missing the critical window for early detection and intervention. Moreover, manual severity scoring of COVID-19 CXRs by radiologists is subjective and time-intensive, highlighting the need for automated systems. This study introduces a DL system for distinguishing COVID-19 from other viral pneumonias on baseline CXRs acquired within three days of PCR testing, and for automated severity scoring of COVID-19 CXRs. The system was developed using a dataset of 2,547 patients (808 COVID-19, 936 non-COVID viral pneumonia, and 803 normal cases) and validated externally on several publicly accessible datasets. Compared to four experienced radiologists, the model achieved higher diagnostic accuracy (76.4% vs. 71.8%) and enhanced COVID-19 identification (F1-score: 74.1% vs. 61.3%), with an AUC of 93% for distinguishing between viral pneumonia and normal cases, and 89.8% for differentiating COVID-19 from other viral pneumonias. The severity-scoring module exhibited a high Pearson correlation of 93% and a low mean absolute error (MAE) of 2.35 compared to the radiologists’ consensus. External validation on independent public datasets confirmed the model’s generalizability. Subgroup analyses stratified by patient age, sex, and severity levels further demonstrated consistent performance, supporting the system’s robustness across diverse clinical populations. These findings suggest that the proposed DL system could assist radiologists in the early diagnosis and severity assessment of COVID-19 from baseline CXRs, particularly in resource-limited settings.
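The severity-scoring agreement above is quantified by Pearson correlation and mean absolute error (MAE) against the radiologists' consensus. A minimal sketch with hypothetical severity scores (the values below are illustrative, not the study's data):

```python
import numpy as np

def pearson_and_mae(pred, truth):
    """Pearson correlation and mean absolute error between two score lists."""
    pred, truth = np.asarray(pred, float), np.asarray(truth, float)
    r = np.corrcoef(pred, truth)[0, 1]   # 2x2 correlation matrix, off-diagonal
    mae = np.abs(pred - truth).mean()
    return r, mae

# Hypothetical model scores vs. radiologist consensus scores.
model     = [4, 8, 12, 16, 20]
consensus = [5, 7, 13, 15, 21]
r, mae = pearson_and_mae(model, consensus)
print(r, mae)  # strong correlation (r > 0.98) with an MAE of 1.0
```

Note that the two metrics are complementary: correlation rewards ranking agreement, while MAE penalizes absolute score offsets.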
Journal Article
Histogram-based models on non-thin section chest CT predict invasiveness of primary lung adenocarcinoma subsolid nodules
by Salazar, Pascal; Petersen, Alexander; Hwang, David M. (2019)
109 pathologically proven subsolid nodules (SSN) were segmented by 2 readers on non-thin-section chest CT with lung nodule analysis software, followed by extraction of CT attenuation histogram and geometric features. Functional data analysis of the histograms provided data-driven features (FPC1, 2, 3) used in further model building. Nodules were classified as pre-invasive (P1, atypical adenomatous hyperplasia and adenocarcinoma in situ), minimally invasive (P2), and invasive adenocarcinomas (P3). P1 and P2 were grouped together (T1) versus P3 (T2). Various combinations of features were compared in predictive models for binary nodule classification (T1/T2), using multiple logistic regression and non-linear classifiers. Area under the ROC curve (AUC) was used as the diagnostic performance criterion. Inter-reader variability was assessed using Cohen's Kappa and the intra-class coefficient (ICC). Three models predicting the invasiveness of SSN were selected based on AUC. The first model included the 87.5th percentile of CT lesion attenuation (Q.875), interquartile range (IQR), volume, and maximum/minimum diameter ratio (AUC: 0.89, 95% CI: [0.75, 1]). The second model included FPC1, volume, and diameter ratio (AUC: 0.91, 95% CI: [0.77, 1]). The third model included FPC1, FPC2, and volume (AUC: 0.89, 95% CI: [0.73, 1]). Inter-reader variability was excellent (Kappa: 0.95, ICC: 0.98). Parsimonious models using histogram and geometric features differentiated invasive from minimally invasive/pre-invasive SSN with good predictive performance on non-thin-section CT.
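The first model above is built from simple histogram features of lesion attenuation: the 87.5th percentile (Q.875) and the interquartile range (IQR). A minimal numpy sketch on synthetic Hounsfield-unit values (illustrative only, not the study's data):

```python
import numpy as np

def histogram_features(hu_values):
    """Q.875 and IQR of a lesion's CT attenuation values (in HU)."""
    hu = np.asarray(hu_values, dtype=float)
    q875 = np.percentile(hu, 87.5)
    iqr = np.percentile(hu, 75) - np.percentile(hu, 25)
    return q875, iqr

# Synthetic subsolid-nodule attenuations: mostly ground-glass (~-600 HU)
# with a smaller, denser component (~-100 HU). Values are illustrative.
rng = np.random.default_rng(0)
hu = np.concatenate([rng.normal(-600, 50, 800), rng.normal(-100, 40, 200)])
q875, iqr = histogram_features(hu)
print(q875, iqr)  # the denser component pulls Q.875 well above the median
```

Intuitively, a denser (more invasive) component shifts the upper tail of the attenuation histogram, which is exactly what Q.875 captures.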
Journal Article
Lung Nodule Malignancy Classification Integrating Deep and Radiomic Features in a Three-Way Attention-Based Fusion Module
by Heidarian, Shahin; Mohammadi, Arash; Ganeshan, Balaji (2025)
In this study, we propose a novel hybrid framework for assessing the invasiveness of an in-house dataset of 114 pathologically proven lung adenocarcinomas presenting as subsolid nodules on Computed Tomography (CT). Nodules were classified into group 1 (G1), which included atypical adenomatous hyperplasia, adenocarcinoma in situ, and minimally invasive adenocarcinomas, and group 2 (G2), which included invasive adenocarcinomas. Our approach includes a three-way Integration of Visual, Spatial, and Temporal features with Attention, referred to as I-VISTA, obtained from three processing algorithms designed based on Deep Learning (DL) and radiomic models, leading to a more comprehensive analysis of nodule variations. These processing algorithms are arranged in three parallel paths: (i) the Shifted Window (SWin) Transformer path, a hierarchical vision Transformer that extracts nodule-related spatial features; (ii) the Convolutional Auto-Encoder (CAE) Transformer path, which captures informative features related to inter-slice relations via a modified Transformer encoder architecture; and (iii) a 3D radiomic-based path that collects quantitative features based on texture analysis of each nodule. The extracted feature sets are then passed through a Criss-Cross attention fusion module to discover the most informative feature patterns and classify nodule type. The experiments were evaluated using a ten-fold cross-validation scheme. The I-VISTA framework achieved the best overall accuracy, sensitivity, and specificity (mean ± std) of 93.93 ± 6.80%, 92.66 ± 9.04%, and 94.99 ± 7.63%, with an area under the ROC curve (AUC) of 0.93 ± 0.08, for lung nodule classification across the ten folds. The hybrid framework integrating DL and hand-crafted 3D radiomic models outperformed the standalone DL and hand-crafted 3D radiomic models in differentiating G1 from G2 subsolid nodules identified on CT.
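The ten-fold cross-validation scheme above partitions the 114 cases into ten folds, evaluates on each held-out fold in turn, and reports mean ± std across folds. A minimal sketch; the shuffling seed and the per-fold accuracies are hypothetical:

```python
import numpy as np

def kfold_indices(n, k=10, seed=0):
    """Split n case indices into k shuffled, near-equal test folds."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n)
    return [fold.tolist() for fold in np.array_split(idx, k)]

folds = kfold_indices(114, k=10)  # 114 nodule cases, as in the study
# Each case appears in exactly one test fold.
assert sorted(i for f in folds for i in f) == list(range(114))
print([len(f) for f in folds])  # four folds of 12 cases, six of 11

# Per-fold accuracies (hypothetical) reported as mean ± std:
acc = np.array([0.95, 0.92, 0.97, 0.93, 0.90, 0.96, 0.94, 0.91, 0.98, 0.93])
print(f"{acc.mean() * 100:.2f} ± {acc.std() * 100:.2f}%")
```

In practice a stratified split (preserving the G1/G2 ratio per fold) would be preferable; the plain split here is a simplification.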
Journal Article