Catalogue Search | MBRL
Search Results
Explore the vast range of titles available.
361 result(s) for "692/700/1421/1846/2771"
Vision transformer and explainable transfer learning models for auto detection of kidney cyst, stone and tumor from CT-radiography
by Islam, Md Nazmul; Soylu, Ahmet; Uddin, Md Zia
in 639/166/985; 692/700; 692/700/1421/1846/2771
2022
Renal failure, a public health concern, and the global scarcity of nephrologists have necessitated the development of an AI-based system to auto-diagnose kidney diseases. This research addresses the three major categories of renal disease: kidney stones, cysts, and tumors. A total of 12,446 whole-abdomen and urogram CT images were gathered and annotated in order to construct an AI-based kidney disease diagnostic system and to contribute to the AI community's research scope, e.g., modeling digital twins of renal function. The collected images were subjected to exploratory data analysis, which revealed that images from all of the classes had the same type of mean color distribution. Furthermore, six machine learning models were built: three based on state-of-the-art Vision Transformer variants (EANet, CCT, and Swin Transformer), and three based on well-known deep learning models (ResNet, VGG16, and Inception v3) with adjusted final layers. While the VGG16 and CCT models performed admirably, the Swin Transformer outperformed all of them with an accuracy of 99.30 percent. Comparison of F1 score, precision, and recall confirms that the Swin Transformer outperforms all other models, and it is also the quickest to train. The study also opened the black box of the VGG16, ResNet50, and Inception models, demonstrating that VGG16 is superior to ResNet50 and Inception v3 at attending to the relevant anatomical abnormalities. We believe that the superior accuracy of both our Swin Transformer-based and VGG16-based models can be useful in diagnosing kidney tumors, cysts, and stones.
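The abstract compares models by F1 score, precision, and recall. As a reminder of how those metrics relate, here is a minimal sketch computing all three from raw confusion counts; the counts below are hypothetical and not taken from the paper.

```python
def precision_recall_f1(tp, fp, fn):
    """Compute precision, recall, and F1 from confusion-matrix counts
    (true positives, false positives, false negatives) for one class."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return precision, recall, f1

# Hypothetical counts for one class (e.g. "tumor"):
p, r, f = precision_recall_f1(tp=95, fp=5, fn=5)
print(round(p, 3), round(r, 3), round(f, 3))  # 0.95 0.95 0.95
```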
Journal Article
Denoising diffusion probabilistic models for 3D medical image generation
by Kuhl, Christiane; Engelhardt, Sandy; Khader, Firas
in 639/705/117; 692/700/1421/1628; 692/700/1421/1846/2771
2023
Recent advances in computer vision have shown promising results in image generation. Diffusion probabilistic models have generated realistic images from textual input, as demonstrated by DALL-E 2, Imagen, and Stable Diffusion. However, their use in medicine, where imaging data typically comprises three-dimensional volumes, has not been systematically evaluated. Synthetic images may play a crucial role in privacy-preserving artificial intelligence and can also be used to augment small datasets. We show that diffusion probabilistic models can synthesize high-quality medical data for magnetic resonance imaging (MRI) and computed tomography (CT). For quantitative evaluation, two radiologists rated the quality of the synthesized images regarding "realistic image appearance", "anatomical correctness", and "consistency between slices". Furthermore, we demonstrate that synthetic images can be used in self-supervised pre-training and improve the performance of breast segmentation models when data is scarce (Dice scores, 0.91 [without synthetic data], 0.95 [with synthetic data]).
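The segmentation gain above is reported as Dice scores. For readers unfamiliar with the metric, this is a minimal sketch of the Dice similarity coefficient for binary masks; the masks are illustrative, not from the study.

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks,
    given as flat sequences of 0/1 values of equal length."""
    inter = sum(a * b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    # Convention: two empty masks are a perfect match.
    return 2 * inter / total if total else 1.0

pred  = [1, 1, 0, 1, 0, 0]   # hypothetical predicted mask
truth = [1, 1, 1, 0, 0, 0]   # hypothetical ground-truth mask
print(dice(pred, truth))      # 2 * 2 / (3 + 3) = 0.666...
```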
Journal Article
Large-scale pancreatic cancer detection via non-contrast CT and deep learning
2023
Pancreatic ductal adenocarcinoma (PDAC), the most deadly solid malignancy, is typically detected late and at an inoperable stage. Early or incidental detection is associated with prolonged survival, but screening asymptomatic individuals for PDAC using a single test remains unfeasible due to the low prevalence and potential harms of false positives. Non-contrast computed tomography (CT), routinely performed for clinical indications, offers the potential for large-scale screening; however, identification of PDAC using non-contrast CT has long been considered impossible. Here, we develop a deep learning approach, pancreatic cancer detection with artificial intelligence (PANDA), that can detect and classify pancreatic lesions with high accuracy via non-contrast CT. PANDA is trained on a dataset of 3,208 patients from a single center. PANDA achieves an area under the receiver operating characteristic curve (AUC) of 0.986–0.996 for lesion detection in a multicenter validation involving 6,239 patients across 10 centers, outperforms the mean radiologist performance by 34.1% in sensitivity and 6.3% in specificity for PDAC identification, and achieves a sensitivity of 92.9% and specificity of 99.9% for lesion detection in a real-world multi-scenario validation consisting of 20,530 consecutive patients. Notably, PANDA utilized with non-contrast CT shows non-inferiority to radiology reports (using contrast-enhanced CT) in the differentiation of common pancreatic lesion subtypes. PANDA could potentially serve as a new tool for large-scale pancreatic cancer screening.
A deep learning model provides high accuracy in detecting pancreatic lesions in multicenter data, outperforming radiology specialists.
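The PANDA results are reported as sensitivity and specificity. A minimal sketch of how those two quantities come out of a screening confusion matrix, with hypothetical counts that are not from the study:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = recall on true positives (diseased cases found);
    specificity = recall on true negatives (healthy cases cleared)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical screening counts (100 diseased, 1000 healthy):
sens, spec = sensitivity_specificity(tp=93, fn=7, tn=999, fp=1)
print(sens, spec)  # 0.93 0.999
```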
Journal Article
Towards automatic pulmonary nodule management in lung cancer screening with deep learning
by van Riel, Sarah J.; Marchianò, Alfonso; Schaefer-Prokop, Cornelia
in 639/705/117; 692/700/1421/1846/2771; Cancer screening
2017
The introduction of lung cancer screening programs will produce an unprecedented amount of chest CT scans in the near future, which radiologists will have to read in order to decide on a patient follow-up strategy. According to the current guidelines, the workup of screen-detected nodules strongly relies on nodule size and nodule type. In this paper, we present a deep learning system based on multi-stream multi-scale convolutional networks, which automatically classifies all nodule types relevant for nodule workup. The system processes raw CT data containing a nodule without the need for any additional information such as nodule segmentation or nodule size and learns a representation of 3D data by analyzing an arbitrary number of 2D views of a given nodule. The deep learning system was trained with data from the Italian MILD screening trial and validated on an independent set of data from the Danish DLCST screening trial. We analyze the advantage of processing nodules at multiple scales with a multi-stream convolutional network architecture, and we show that the proposed deep learning system achieves performance at classifying nodule type that surpasses that of classical machine learning approaches and is within the inter-observer variability among four experienced human observers.
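The network above learns from multiple 2D views of a 3D nodule. As a minimal, hypothetical sketch of that idea (not the authors' code), here is how the three orthogonal slices through a voxel of a 3D volume could be extracted; the function name and nested-list volume layout are assumptions for illustration.

```python
def orthogonal_views(volume, center):
    """Extract the three orthogonal 2D slices (axial, coronal, sagittal)
    through voxel center = (z, y, x) of a 3D volume stored as nested
    lists indexed volume[z][y][x]."""
    z, y, x = center
    axial = volume[z]                                   # fix z
    coronal = [plane[y] for plane in volume]            # fix y
    sagittal = [[row[x] for row in plane] for plane in volume]  # fix x
    return axial, coronal, sagittal

# Tiny 3x3x3 volume where voxel (i, j, k) holds i*100 + j*10 + k:
vol = [[[i * 100 + j * 10 + k for k in range(3)]
        for j in range(3)] for i in range(3)]
ax, co, sa = orthogonal_views(vol, (1, 1, 1))
```

A multi-scale variant would repeat this at several crop sizes around the nodule and feed each view to its own network stream.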
Journal Article
A multicenter clinical AI system study for detection and diagnosis of focal liver lesions
by Ying, Hanning; Xu, Xingxin; Ren, Yiyue
in 631/114/1305; 692/4020/4021/1607; 692/699/1503/1607/1610
2024
Early and accurate diagnosis of focal liver lesions is crucial for effective treatment and prognosis. We developed and validated a fully automated diagnostic system named Liver Artificial Intelligence Diagnosis System (LiAIDS) based on a diverse sample of 12,610 patients from 18 hospitals, both retrospectively and prospectively. In this study, LiAIDS achieved an F1-score of 0.940 for benign and 0.692 for malignant lesions, outperforming junior radiologists (benign: 0.830-0.890, malignant: 0.230-0.360) and being on par with senior radiologists (benign: 0.920-0.950, malignant: 0.550-0.650). Furthermore, with the assistance of LiAIDS, the diagnostic accuracy of all radiologists improved. For benign and malignant lesions, junior radiologists' F1-scores improved to 0.936-0.946 and 0.667-0.680 respectively, while seniors improved to 0.950-0.961 and 0.679-0.753. Additionally, in a triage study of 13,192 consecutive patients, LiAIDS automatically classified 76.46% of patients as low risk with a high negative predictive value (NPV) of 99.0%. The evidence suggests that LiAIDS can serve as a routine diagnostic tool and enhance the diagnostic capabilities of radiologists for liver lesions.
Early detection and accurate diagnosis of focal liver lesions are crucial for effective treatment and prognosis. Here, the authors present a fully automated diagnostic system that leverages multi-phase CT scans and clinical features, for diagnosing liver lesions.
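The triage result hinges on negative predictive value, the fraction of cases called "low risk" that truly are. A one-line sketch with hypothetical counts (not the LiAIDS cohort):

```python
def npv(tn, fn):
    """Negative predictive value: of all negative (low-risk) calls,
    the fraction that are actually negative."""
    return tn / (tn + fn)

# Hypothetical triage: 990 true negatives, 10 missed positives:
print(round(npv(tn=990, fn=10), 2))  # 0.99
```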
Journal Article
Generalized ComBat harmonization methods for radiomic features with multi-modal distributions and multiple batch effects
by Haghighi, Babak; Noël, Peter B.; Shinohara, Russell T.
in 639/705/531; 692/308/53/2422; 692/700/1421/1846/2771
2022
Radiomic features have a wide range of clinical applications, but variability due to image acquisition factors can affect their performance. The harmonization tool ComBat is a promising solution but is limited by its inability to handle multimodal distributions, unknown imaging parameters, and multiple imaging parameters. In this study, we propose two methods for addressing these limitations. We propose a sequential method that allows for harmonization of radiomic features by multiple imaging parameters (Nested ComBat). We also employ a Gaussian Mixture Model (GMM)-based method (GMM ComBat), in which scans are split into groupings based on the shape of the feature distribution, which is used as a batch effect for harmonization, followed by harmonization by a known imaging parameter. These two methods were evaluated on features extracted with CapTK and PyRadiomics from two public lung computed tomography datasets. We found that Nested ComBat exhibited similar performance to standard ComBat in reducing the percentage of features with statistically significant differences in distribution attributable to imaging parameters. GMM ComBat improved harmonization performance over standard ComBat (−11%, −10% for Lung3/CAPTK, Lung3/PyRadiomics harmonizing by kernel resolution). Features harmonized with a variant of the Nested method and the GMM split method demonstrated similar c-statistics and Kaplan–Meier curves when used in survival analyses.
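ComBat itself fits an empirical-Bayes location-scale model per batch; as a much-simplified, hypothetical sketch of the underlying location-scale idea (not the authors' code and without the empirical-Bayes shrinkage), each batch can be shifted and scaled to the pooled mean and standard deviation:

```python
def harmonize(values, batches):
    """Toy location-scale harmonization of one feature: rescale each
    batch to the pooled mean/std, removing batch-level location and
    scale differences. A sketch of the ComBat idea, not ComBat itself."""
    mean = lambda xs: sum(xs) / len(xs)
    std = lambda xs: (sum((x - mean(xs)) ** 2 for x in xs) / len(xs)) ** 0.5
    g_mu, g_sd = mean(values), std(values)
    out = list(values)
    for b in set(batches):
        idx = [i for i, bb in enumerate(batches) if bb == b]
        xs = [values[i] for i in idx]
        mu, sd = mean(xs), std(xs)
        for i in idx:
            out[i] = g_mu + (values[i] - mu) * (g_sd / sd if sd else 1.0)
    return out

feat = harmonize([1, 2, 3, 11, 12, 13], [0, 0, 0, 1, 1, 1])
# both batch means are now equal to the pooled mean of 7.0
```

The Nested variant in the paper applies this kind of correction sequentially over multiple imaging parameters; the GMM variant first derives the batch labels from the shape of the feature distribution.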
Journal Article
Interstitial lung disease diagnosis and prognosis using an AI system integrating longitudinal data
2023
For accurate diagnosis of interstitial lung disease (ILD), a consensus of radiologic, pathological, and clinical findings is vital. Management of ILD also requires thorough follow-up with computed tomography (CT) studies and lung function tests to assess disease progression, severity, and response to treatment. However, accurate classification of ILD subtypes can be challenging, especially for those not accustomed to reading chest CTs regularly. Dynamic models to predict patient survival rates based on longitudinal data are challenging to create due to disease complexity, variation, and irregular visit intervals. Here, we utilize RadImageNet pretrained models to diagnose five types of ILD with multimodal data and a transformer model to determine a patient’s 3-year survival rate. When clinical history and associated CT scans are available, the proposed deep learning system can help clinicians diagnose and classify ILD patients and, importantly, dynamically predict disease progression and prognosis.
Accurate diagnosis of interstitial lung disease subtypes and prediction of patient survival rates remain challenging. Here, the authors develop AI algorithms to combine a patient's clinical history and longitudinal CT images to help clinicians diagnose and classify subtypes and dynamically predict disease progression and prognosis.
Journal Article
AppendiXNet: Deep Learning for Diagnosis of Appendicitis from A Small Dataset of CT Exams Using Video Pretraining
by Irvin, Jeremy; Mastrodicasa, Domenico; Bereket, Michael
in 692/700/1421; 692/700/1421/1846/2771; Adult
2020
The development of deep learning algorithms for complex tasks in digital medicine has relied on the availability of large labeled training datasets, usually containing hundreds of thousands of examples. The purpose of this study was to develop a 3D deep learning model, AppendiXNet, to detect appendicitis, one of the most common life-threatening abdominal emergencies, using a small training dataset of fewer than 500 CT exams. We explored whether pretraining the model on a large collection of natural videos would improve the performance of the model over training the model from scratch. AppendiXNet was pretrained on a large collection of YouTube videos called Kinetics, consisting of approximately 500,000 video clips annotated with one of 600 human action classes, and then fine-tuned on a small dataset of 438 CT scans annotated for appendicitis. We found that pretraining the 3D model on natural videos significantly improved the performance of the model from an AUC of 0.724 (95% CI 0.625, 0.823) to 0.810 (95% CI 0.725, 0.895). The application of deep learning to detect abnormalities on CT examinations using video pretraining could generalize effectively to other challenging cross-sectional medical imaging tasks when training data is limited.
Journal Article
Federated deep learning for detecting COVID-19 lung abnormalities in CT: a privacy-preserving multinational validation study
by Dou, Qi; Yu, Kevin; Heng, Pheng Ann
in 692/53/2421; 692/700/1421/1846/2771; Artificial intelligence
2021
Data privacy mechanisms are essential for rapidly scaling medical training databases to capture the heterogeneity of patient data distributions toward robust and generalizable machine learning systems. In the current COVID-19 pandemic, a major focus of artificial intelligence (AI) is interpreting chest CT, which can be readily used in the assessment and management of the disease. This paper demonstrates the feasibility of a federated learning method for detecting COVID-19 related CT abnormalities with external validation on patients from a multinational study. We recruited 132 patients from seven centers in different countries: three internal hospitals in Hong Kong for training and testing, and four external, independent datasets from Mainland China and Germany for validating model generalizability. We also conducted case studies on longitudinal scans for automated estimation of lesion burden for hospitalized COVID-19 patients. We explored federated learning algorithms to develop a privacy-preserving AI model for COVID-19 medical image diagnosis with good generalization capability on unseen multinational datasets. Federated learning could provide an effective mechanism during pandemics to rapidly develop clinically useful AI across institutions and countries, overcoming the burden of centrally aggregating large amounts of sensitive data.
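The core mechanism in such studies is that sites exchange model parameters rather than patient data. A minimal sketch of FedAvg-style weighted parameter averaging, a common aggregation rule in federated learning (the paper does not publish this exact code; the flat parameter vectors and site sizes below are hypothetical):

```python
def fed_avg(client_weights, client_sizes):
    """Federated averaging: the server combines each site's model
    parameters, weighted by local dataset size, so raw patient data
    never leaves the site. Parameters are flat lists of floats."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [
        sum(w[j] * n for w, n in zip(client_weights, client_sizes)) / total
        for j in range(n_params)
    ]

# Two hypothetical sites holding 100 and 300 cases:
print(fed_avg([[1.0, 2.0], [3.0, 4.0]], [100, 300]))  # [2.5, 3.5]
```

In a real round, each site would first run local gradient steps on its own scans, then send only the updated parameters to the aggregator.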
Journal Article
Competitive performance of a modularized deep neural network compared to commercial algorithms for low-dose CT image reconstruction
by Padole, Atul; Wang, Ge; Khera, Ruhani Doda
in 631/114/1305; 692/700/1421; 692/700/1421/1846/2771
2019
Commercial iterative reconstruction techniques help to reduce the radiation dose of computed tomography (CT), but altered image appearance and artefacts can limit their adoption and potential use. Deep learning has been investigated for low-dose CT (LDCT). Here, we design a modularized neural network for LDCT and compare it with commercial iterative reconstruction methods from three leading CT vendors. Although popular networks are trained for an end-to-end mapping, our network performs an end-to-process mapping, so that intermediate denoised images are obtained along with associated noise-reduction directions towards a final denoised image. The learned workflow allows radiologists in the loop to optimize the denoising depth in a task-specific fashion. Our network was trained with the Mayo LDCT Dataset and tested on separate chest and abdominal CT exams from Massachusetts General Hospital. The best deep learning reconstructions were systematically compared to the best iterative reconstructions in a double-blinded reader study. This study confirms that our deep learning approach performs either favourably or comparably in terms of noise suppression and structural fidelity, and is much faster than commercial iterative reconstruction algorithms.
Reducing the radiation dose for medical CT scans can provide a less invasive imaging method, but requires a method for reconstructing an image up to the image quality of a full-dose scan. In this article, Wang and colleagues show that their deep learning approach, combined with feedback from radiologists, produces reconstructions of higher or similar quality compared with current commercial methods.
Journal Article