Catalogue Search | MBRL
Explore the vast range of titles available.
3,589 results for "Computer aided diagnosis"
Evaluating the performance of a deep learning‐based computer‐aided diagnosis (DL‐CAD) system for detecting and characterizing lung nodules: Comparison with the performance of double reading by radiologists
2019
Background
The study was conducted to evaluate the performance of a state‐of‐the‐art commercial deep learning‐based computer‐aided diagnosis (DL‐CAD) system for detecting and characterizing pulmonary nodules.
Methods
Pulmonary nodules in 346 healthy subjects (male:female = 221:125; mean age, 51 years) from a lung cancer screening program conducted from March to November 2017 were screened independently using a DL‐CAD system and double reading, and their performance in nodule detection and characterization was evaluated. An expert panel combined the results of the DL‐CAD system and double reading as the reference standard.
Results
The DL‐CAD system showed a higher detection rate than double reading, regardless of nodule size (86.2% vs. 79.2%; P < 0.001): nodules ≥ 5 mm (96.5% vs. 88.0%; P = 0.008); nodules < 5 mm (84.3% vs. 77.5%; P < 0.001). However, the false positive rate (per computed tomography scan) of the DL‐CAD system (1.53, 529/346) was considerably higher than that of double reading (0.13, 44/346; P < 0.001). Regarding nodule characterization, the sensitivity and specificity of the DL‐CAD system for distinguishing solid nodules > 5 mm (90.3% and 100.0%, respectively) and ground‐glass nodules (100.0% and 96.1%, respectively) were close to those of double reading, but dropped to 55.5% and 93%, respectively, when discriminating part‐solid nodules.
Conclusion
Our DL‐CAD system detected significantly more nodules than double reading. In the future, false positive findings should be further reduced and characterization accuracy improved.
Journal Article
Deep-Learning-Based Computer-Aided Systems for Breast Cancer Imaging: A Critical Review
by
Rodríguez-Álvarez, María José
,
Lakshminarayanan, Vasudevan
,
Jiménez-Gaona, Yuliana
in
Breast cancer
,
computer-aided diagnosis
,
Computer-aided medical diagnosis
2020
This paper provides a critical review of the literature on deep learning applications in breast tumor diagnosis using ultrasound and mammography images. It also summarizes recent advances in computer-aided diagnosis/detection (CAD) systems, which use new deep learning methods to automatically recognize breast images and improve the diagnostic accuracy of radiologists. The review covers literature published over the past decade (January 2010–January 2020); around 250 research articles were obtained, and after an eligibility process, 59 articles are presented in more detail. The main findings of the classification process reveal that new DL-CAD methods are useful and effective screening tools for breast cancer, reducing the need for manual feature extraction. The breast tumor research community can use this survey as a basis for current and future studies.
Journal Article
Prediction of Nodal Metastasis in Lung Cancer Using Deep Learning of Endobronchial Ultrasound Images
by
Ito, Yuki
,
Nakajima, Takahiro
,
Otsuka, Takeshi
in
Anesthesia
,
Artificial intelligence
,
Bronchoscopy
2022
Endobronchial ultrasound-guided transbronchial needle aspiration (EBUS-TBNA) is a valid modality for nodal lung cancer staging. The sonographic features of EBUS help identify suspicious lymph nodes (LNs). To facilitate the use of this method, machine-learning-based computer-aided diagnosis (CAD) of medical imaging has been introduced into clinical practice. This study investigated the feasibility of CAD for predicting nodal metastasis in lung cancer using endobronchial ultrasound images. Image data of patients who underwent EBUS-TBNA were collected from video clips. Xception was used as the convolutional neural network to predict nodal metastasis of lung cancer. The prediction accuracy of nodal metastasis through deep learning (DL) was evaluated using both the five-fold cross-validation and hold-out methods. Eighty percent of the collected images were used for five-fold cross-validation, and all of the images were used for the hold-out method. Ninety-one patients (166 LNs) were enrolled in this study. A total of 5255 and 6444 images extracted from the video clips were analyzed using the five-fold cross-validation and hold-out methods, respectively. The prediction of LN metastasis by CAD using EBUS images showed high diagnostic accuracy with high specificity. CAD during EBUS-TBNA may help improve diagnostic efficiency and reduce the invasiveness of the procedure.
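As a rough illustration of the evaluation scheme this abstract describes (a pretrained Xception backbone scored by five-fold cross-validation), here is a minimal Python sketch; the image arrays, labels, and classification head are placeholder assumptions, not the authors' implementation:

```python
import numpy as np
import tensorflow as tf
from sklearn.model_selection import StratifiedKFold

def build_model() -> tf.keras.Model:
    # Pretrained Xception backbone with a simple binary head (metastasis vs. benign)
    base = tf.keras.applications.Xception(
        include_top=False, weights="imagenet",
        input_shape=(299, 299, 3), pooling="avg")
    out = tf.keras.layers.Dense(1, activation="sigmoid")(base.output)
    model = tf.keras.Model(base.input, out)
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

# Placeholder EBUS frames and labels; real data would come from the extracted video frames.
X = np.zeros((20, 299, 299, 3), dtype="float32")
y = np.array([0, 1] * 10)

scores = []
for train_idx, val_idx in StratifiedKFold(n_splits=5, shuffle=True, random_state=0).split(X, y):
    model = build_model()
    model.fit(X[train_idx], y[train_idx], epochs=1, batch_size=4, verbose=0)
    scores.append(model.evaluate(X[val_idx], y[val_idx], verbose=0)[1])
print("five-fold mean accuracy:", float(np.mean(scores)))
```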
Journal Article
Novel Hypertrophic Cardiomyopathy Diagnosis Index Using Deep Features and Local Directional Pattern Techniques
by
Chinmay Dharmik
,
U. Rajendra Acharya
,
Edward J. Ciaccio
in
Accuracy
,
Automation
,
Cardiomyopathy
2022
Hypertrophic cardiomyopathy (HCM) is a genetic disorder that exhibits a wide spectrum of clinical presentations, including sudden death. Early diagnosis and intervention may avert the latter. Left ventricular hypertrophy on heart imaging is an important diagnostic criterion for HCM, and the most common imaging modality is heart ultrasound (US). The US is operator-dependent, and its interpretation is subject to human error and variability. We proposed an automated computer-aided diagnostic tool to discriminate HCM from healthy subjects on US images. We used a local directional pattern and the ResNet-50 pretrained network to classify heart US images acquired from 62 known HCM patients and 101 healthy subjects. Deep features were ranked using Student’s t-test, and the most significant feature (SigFea) was identified. An integrated index derived from the simulation was defined as 100·log10(SigFea/2) in each subject, and a diagnostic threshold value was empirically calculated as the mean of the minimum and maximum integrated indices among HCM and healthy subjects, respectively. An integrated index above a threshold of 0.5 separated HCM from healthy subjects with 100% accuracy in our test dataset.
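The index definition in this abstract is concrete enough to express directly. Below is a minimal Python sketch using made-up SigFea values and one possible reading of the threshold rule (the midpoint between the lowest HCM index and the highest healthy index); it is an illustration, not the authors' implementation:

```python
import numpy as np

def integrated_index(sig_fea: np.ndarray) -> np.ndarray:
    """Integrated index as defined in the abstract: 100 * log10(SigFea / 2)."""
    return 100.0 * np.log10(sig_fea / 2.0)

def diagnostic_threshold(idx_hcm: np.ndarray, idx_healthy: np.ndarray) -> float:
    # One reading of "the mean of the minimum and maximum integrated indices
    # among HCM and healthy subjects, respectively": the midpoint between the
    # lowest HCM index and the highest healthy index (assumption).
    return float((idx_hcm.min() + idx_healthy.max()) / 2.0)

# Hypothetical SigFea values for illustration only (not study data).
idx_hcm = integrated_index(np.array([2.4, 2.9, 3.1]))
idx_healthy = integrated_index(np.array([1.8, 1.9, 2.0]))
thr = diagnostic_threshold(idx_hcm, idx_healthy)
pred_hcm = idx_hcm > thr  # index above the threshold -> classified as HCM
print(thr, pred_hcm)
```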
Journal Article
DAD-Net: Classification of Alzheimer’s Disease Using ADASYN Oversampling Technique and Optimized Neural Network
by
Saqib Mahmood
,
Mian Muhammad Sadiq Fareed
,
Meng Joo Er
in
Accuracy
,
Advertising executives
,
Algorithms
2022
Alzheimer’s Disease (AD) is a neurological brain disorder that causes dementia and neurological dysfunction, affecting memory, behavior, and cognition. Deep Learning (DL), a kind of Artificial Intelligence (AI), has paved the way for new AD detection and automation methods. A DL model’s prediction accuracy depends on the size of the dataset, and DL models lose accuracy when the dataset has a class-imbalance problem. This study aims to use a deep Convolutional Neural Network (CNN) to develop a reliable and efficient method for identifying Alzheimer’s disease using MRI. We offer a new CNN architecture for diagnosing Alzheimer’s disease with a modest number of parameters, making it well suited to training on a smaller dataset. The proposed model correctly separates the early stages of Alzheimer’s disease and displays class activation patterns on the brain as a heat map. The proposed Detection of Alzheimer’s Disease Network (DAD-Net) is developed from scratch to correctly classify the phases of Alzheimer’s disease while reducing parameters and computation costs. Because the Kaggle MRI image dataset has a severe class-imbalance problem, we used a synthetic oversampling technique (ADASYN) to distribute the images evenly across the classes. Precision, recall, F1-score, Area Under the Curve (AUC), and loss are all used to compare the proposed DAD-Net against DEMENET and a CNN model. DAD-Net achieved 99.22% accuracy, 99.91% AUC, 99.19% F1-score, 99.30% precision, and 99.14% recall, outperforming the other state-of-the-art models in all evaluation metrics according to the simulation results.
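The class-imbalance remedy named in the title (ADASYN) can be sketched with the imbalanced-learn package. The array shapes, class counts, and random features below are purely illustrative assumptions; the balanced output would then feed a CNN such as the proposed DAD-Net:

```python
import numpy as np
from imblearn.over_sampling import ADASYN  # pip install imbalanced-learn

# Hypothetical imbalanced dataset standing in for the Kaggle MRI classes:
# images flattened to feature vectors, since ADASYN operates on tabular features.
rng = np.random.default_rng(0)
X = rng.random((500, 64 * 64))                   # 500 images, 64x64, flattened
y = np.array([0] * 400 + [1] * 80 + [2] * 20)    # severe class imbalance

ada = ADASYN(random_state=0)
X_res, y_res = ada.fit_resample(X, y)            # minority classes synthetically oversampled

print(np.bincount(y), "->", np.bincount(y_res))
```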
Journal Article
Application of external torque enhances the detection of subtle syndesmotic ankle instability in a weight-bearing CT
2023
Purpose
Acute syndesmotic ankle injuries continue to pose a diagnostic dilemma, and it remains unclear whether weightbearing and/or external rotation should be added during the imaging process. Therefore, the aim of this study was to assess whether combined weightbearing and external rotation increases the diagnostic sensitivity for syndesmotic ankle instability using weightbearing CT (WBCT) imaging, compared with isolated weightbearing.
Methods
In this retrospective study, patients with an acute syndesmotic ankle injury were analysed using WBCT (N = 21; age = 31.6 ± 14.1 years). Inclusion criteria were an MRI-confirmed syndesmotic ligament injury imaged by WBCT of the ankle during weightbearing and during combined weightbearing-external rotation. Exclusion criteria consisted of fracture-associated syndesmotic injuries. Three-dimensional (3D) models were generated from the CT slices. Tibiofibular displacement and talar rotation were quantified using automated 3D measurements (anterior tibiofibular distance (ATFD), Alpha angle, posterior tibiofibular distance (PTFD), and talar rotation (TR) angle) in comparison to the contralateral non-injured ankle.
Results
The change from the neutral to the stressed position in Alpha angle and ATFD differed significantly between ankles with a syndesmotic lesion and the contralateral controls (P = 0.046 and P = 0.039, respectively). The change in PTFD and TR angle did not differ significantly between injured and healthy ankles (n.s.).
Conclusion
Application of combined weightbearing-external rotation reveals an increased ATFD in patients with syndesmotic ligament injuries. This study provides the first insights based on 3D measurements to support the potential relevance of applying external rotation during WBCT imaging. In clinical practice, this could enhance the current diagnostic accuracy of subtle syndesmotic instability in a non-invasive manner. However, to what extent certain displacement patterns require operative treatment strategies has yet to be determined in future studies.
Level of evidence
Level III.
Journal Article
Generalizable, Reproducible, and Neuroscientifically Interpretable Imaging Biomarkers for Alzheimer's Disease
by
Yu, Chunshui
,
Wang, Dawei
,
Han, Tong
in
Alzheimer's disease
,
Biomarkers
,
computer‐aided diagnosis
2020
Precision medicine for Alzheimer's disease (AD) necessitates the development of personalized, reproducible, and neuroscientifically interpretable biomarkers, yet despite remarkable advances, few such biomarkers are available. A comprehensive evaluation of the neurobiological basis and generalizability of the end‐to‐end machine learning system should also be given the highest priority. For this reason, a deep learning model (3D attention network, 3DAN) is proposed that can simultaneously capture candidate imaging biomarkers with an attention mechanism module and advance the diagnosis of AD based on structural magnetic resonance imaging. The generalizability and reproducibility are evaluated using cross‐validation on in‐house, multicenter (n = 716), and public (n = 1116) databases, with an accuracy of up to 92%. Significant associations between the classification output and the clinical characteristics of the AD and mild cognitive impairment (MCI, a middle stage of dementia) groups provide solid neurobiological support for the 3DAN model. The effectiveness of the 3DAN model is further validated by its good performance in predicting MCI subjects who progress to AD, with an accuracy of 72%. Collectively, the findings highlight the potential for structural brain imaging to provide a generalizable and neuroscientifically interpretable imaging biomarker that can support clinicians in the early diagnosis of AD.
This study proposes a deep learning model (3D attention network) to simultaneously capture imaging biomarkers with an attention mechanism module and advance the diagnosis of Alzheimer's disease. The generalizability and reproducibility are cross‐validated on independent databases with an accuracy up to 92%. Significant associations between the classification output and clinical characteristics of patients provide solid neurobiological support for the model.
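The abstract describes the 3DAN only at a high level: a 3D network whose attention module both re-weights volumetric features and yields a map of candidate biomarkers. The sketch below shows one generic way such a volumetric attention block can be wired in Keras; the layer sizes and overall architecture are assumptions for illustration, not the published 3DAN:

```python
import tensorflow as tf
from tensorflow.keras import layers

def attention_block_3d(x):
    # Voxel-wise sigmoid attention map; it re-weights the features and can be
    # inspected afterwards as a saliency / candidate-biomarker map.
    att = layers.Conv3D(1, kernel_size=1, activation="sigmoid")(x)
    return layers.Multiply()([x, att]), att

inp = layers.Input(shape=(96, 112, 96, 1))            # structural MRI volume (assumed size)
x = layers.Conv3D(16, 3, padding="same", activation="relu")(inp)
x = layers.MaxPooling3D(2)(x)
x, att_map = attention_block_3d(x)                     # att_map highlights salient regions
x = layers.GlobalAveragePooling3D()(x)
out = layers.Dense(1, activation="sigmoid")(x)         # AD vs. control
model = tf.keras.Model(inp, [out, att_map])
model.summary()
```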
Journal Article
IoMT-Enabled Computer-Aided Diagnosis of Pulmonary Embolism from Computed Tomography Scans Using Deep Learning
by
Shah, Pir Masoom
,
Ahmad, Zahoor
,
Islam, Saif ul
in
Accuracy
,
Classification
,
computed tomography scans
2023
The Internet of Medical Things (IoMT) has revolutionized Ambient Assisted Living (AAL) by interconnecting smart medical devices. These devices generate a large amount of data without human intervention, and sophisticated learning-based models are required to extract meaningful information from this massive surge of data. In this context, the Deep Neural Network (DNN) has proven to be a powerful tool for disease detection. Pulmonary Embolism (PE) is considered a leading cause of death, with a death toll of 180,000 per year in the US alone. It arises when a blood clot in the pulmonary arteries blocks the blood supply to the lungs or to part of a lung. Early diagnosis and treatment of PE could reduce the mortality rate. Doctors and radiologists prefer Computed Tomography (CT) scans as a first-line tool, and a single study contains 200 to 300 images. It is often difficult for a doctor or radiologist to maintain concentration while going through all of these scans, which can result in misdiagnosis. Given this, there is a need for an automatic Computer-Aided Diagnosis (CAD) system to assist doctors and radiologists in decision-making. To develop such a system, this paper proposes a deep learning framework based on DenseNet201 to classify PE into nine classes in CT scans. We utilized DenseNet201 as a feature extractor and customized the fully connected decision-making layers. The model was trained on the Radiological Society of North America (RSNA) Pulmonary Embolism Detection Challenge (2020) Kaggle dataset and achieved promising results of 88%, 88%, 89%, and 90% in terms of accuracy, sensitivity, specificity, and Area Under the Curve (AUC), respectively.
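The feature-extractor pattern described here (a frozen DenseNet201 backbone plus customized fully connected layers ending in a nine-class softmax) can be sketched briefly in Keras. The head sizes and training settings below are assumptions, not the authors' exact configuration:

```python
import tensorflow as tf

# Minimal sketch: DenseNet201 used purely as a feature extractor, with a
# custom fully connected decision head for nine pulmonary-embolism classes.
base = tf.keras.applications.DenseNet201(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3))
base.trainable = False  # keep the pretrained convolutional features fixed

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(256, activation="relu"),     # head sizes are assumptions
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(9, activation="softmax"),    # nine classes per the abstract
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy", tf.keras.metrics.AUC(name="auc")])
model.summary()
```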
Journal Article
Automatic Skin Cancer Detection Using Clinical Images: A Comprehensive Review
by
Garcia, Rafael
,
Nazari, Sana
in
Artificial intelligence
,
automated diagnosis of pigmented skin lesions (PSLs), computer-aided diagnosis
,
Automation
2023
Skin cancer has become increasingly common over the past decade, with melanoma being the most aggressive type. Hence, early detection of skin cancer and melanoma is essential in dermatology. Computational methods can be a valuable tool for assisting dermatologists in identifying skin cancer. Most research in machine learning for skin cancer detection has focused on dermoscopy images due to the existence of larger image datasets. However, general practitioners typically do not have access to a dermoscope and must rely on naked-eye examinations or standard clinical images. Machine learning has also proven to be an effective tool for detecting high-risk moles in images taken with standard, off-the-shelf cameras. The objective of this paper is to provide a comprehensive review of image-processing techniques for skin cancer detection using clinical images. In this study, we evaluate 51 state-of-the-art articles that have used machine learning methods to detect skin cancer over the past decade, focusing on clinical datasets. Even though several studies have been conducted in this field, there are still few publicly available clinical datasets with sufficient data that can be used as a benchmark, especially when compared to the existing dermoscopy databases. In addition, we observed that the available artifact removal approaches are not quite adequate in some cases and may also have a negative impact on the models. Moreover, the majority of the reviewed articles work with single-lesion images and do not consider typical mole patterns or temporal changes in the lesions of each patient.
Journal Article
Classification of Alzheimer's disease based on hippocampal multivariate morphometry statistics
by
Li, Zhigang
,
Dong, Qunxi
,
Liu, Honghong
in
AD patient stratification
,
Algorithms
,
Alzheimer's disease
2023
Background
Alzheimer's disease (AD) is a neurodegenerative disease characterized by progressive cognitive decline, and mild cognitive impairment (MCI) is associated with a high risk of developing AD. Hippocampal morphometry analysis is believed to provide the most robust magnetic resonance imaging (MRI) markers for AD and MCI. Multivariate morphometry statistics (MMS), a quantitative method for surface deformation analysis, has been confirmed to have strong statistical power for evaluating the hippocampus.
Aims
We aimed to test whether surface deformation features in hippocampus can be employed for early classification of AD, MCI, and healthy controls (HC).
Methods
We first explored the differences in hippocampal surface deformation among these three groups using MMS analysis. Additionally, hippocampal MMS features of selected patches and a support vector machine (SVM) were used for binary and triple classification.
Results
We identified significant hippocampal deformation among the three groups, especially in hippocampal CA1. In addition, the binary classifications of AD/HC, MCI/HC, and AD/MCI showed good performance, and the area under the curve (AUC) of the triple‐classification model reached 0.85. Finally, positive correlations were found between the hippocampal MMS features and cognitive performance.
Conclusions
The study revealed significant hippocampal deformation among AD, MCI, and HC. Additionally, we confirmed that hippocampal MMS can be used as a sensitive imaging biomarker for the early diagnosis of AD at the individual level.
Hippocampal multivariate morphometry statistics can be used as a sensitive imaging biomarker to predict the development of Alzheimer's disease.
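As a rough sketch of the classification step reported above (hippocampal MMS features fed to a support vector machine for binary and three-way classification), the following Python snippet uses scikit-learn; the feature matrix, group sizes, and kernel choice are placeholder assumptions:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder stand-ins for hippocampal MMS (surface-deformation) features.
rng = np.random.default_rng(0)
X = rng.normal(size=(150, 300))        # 150 subjects x 300 MMS features (assumed)
y = np.repeat([0, 1, 2], 50)           # 0 = HC, 1 = MCI, 2 = AD

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
print("triple-class CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())

# Binary AD vs. HC classification on the same features
mask = y != 1
print("AD/HC CV accuracy:", cross_val_score(clf, X[mask], y[mask], cv=5).mean())
```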
Journal Article