Catalogue Search | MBRL
Explore the vast range of titles available.
990 result(s) for "Image Interpretation, Computer-Assisted - statistics "
Deep learning shows the capability of high-level computer-aided diagnosis in malignant lymphoma
by
Yonezawa, Sho
,
Matsuda, Kotaro
,
Yoshimura, Takuro
in
13/56
,
631/1647/245/2226
,
631/67/1990/291/1621/1915
2020
A pathological evaluation is one of the most important methods for the diagnosis of malignant lymphoma. A standardized diagnosis is occasionally difficult to achieve even for experienced hematopathologists. Therefore, established procedures including a computer-aided diagnosis are desired. This study aims to classify histopathological images of malignant lymphomas through deep learning, which is a computer algorithm and type of artificial intelligence (AI) technology. We prepared hematoxylin and eosin (H&E) slides of a lesion area from 388 sections, namely, 259 with diffuse large B-cell lymphoma, 89 with follicular lymphoma, and 40 with reactive lymphoid hyperplasia, and created whole slide images (WSIs) using a whole slide system. Each WSI was annotated in the lesion area by experienced hematopathologists. Image patches were cropped from the WSIs to train and evaluate the classifiers. Image patches at magnifications of ×5, ×20, and ×40 were randomly divided into a test set and a training and evaluation set. After training with cross-validation, the classifier was assessed on the test set. The classifier achieved accuracies of 94.0%, 93.0%, and 92.0% for image patches at magnifications of ×5, ×20, and ×40, respectively, in classifying diffuse large B-cell lymphoma, follicular lymphoma, and reactive lymphoid hyperplasia. When the diagnostic accuracies of the proposed classifier and of seven pathologists, including experienced hematopathologists, were compared on the test set of image patches at magnifications of ×5, ×20, and ×40, the best accuracy achieved by the classifier was 97.0%, whereas the average accuracy achieved by the pathologists using WSIs was 76.0%, with the highest individual accuracy reaching 83.3%. In conclusion, the neural classifier can outperform pathologists in a morphological evaluation. These results suggest that the AI system can potentially support the diagnosis of malignant lymphoma.
This study aims to classify histopathological images of malignant lymphoma through deep learning. The classifier achieved high levels of accuracy in classifying diffuse large B-cell lymphoma, follicular lymphoma, and reactive lymphoid hyperplasia, exceeding those of pathologists. Artificial intelligence can potentially support the diagnosis of malignant lymphoma.
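The patch-based pipeline the abstract describes (cropping fixed-size tiles from a whole slide image before training and evaluating a classifier) can be sketched as follows. This is a minimal illustration on a toy array; the function name `crop_patches` and all sizes are hypothetical, not the study's code:

```python
import numpy as np

def crop_patches(wsi, patch_size):
    """Crop non-overlapping square patches from a whole-slide image array."""
    h, w = wsi.shape[:2]
    patches = []
    for y in range(0, h - patch_size + 1, patch_size):
        for x in range(0, w - patch_size + 1, patch_size):
            patches.append(wsi[y:y + patch_size, x:x + patch_size])
    return np.stack(patches)

# Toy "slide": an 8x8 grayscale image, cropped into four 4x4 patches.
wsi = np.arange(64, dtype=float).reshape(8, 8)
patches = crop_patches(wsi, 4)
print(patches.shape)  # (4, 4, 4)
```

In practice the patches would be extracted only inside annotated lesion regions and fed to a CNN; the split into training, evaluation, and test sets would happen at the patch level, as in the abstract.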
Journal Article
Core Imaging Library - Part I: a versatile Python framework for tomographic imaging
by
Jørgensen, J. S.
,
Fardell, G.
,
Pasca, E.
in
Algorithms
,
Data Interpretation, Statistical
,
Databases, Factual - statistics & numerical data
2021
We present the Core Imaging Library (CIL), an open-source Python framework for tomographic imaging with particular emphasis on reconstruction of challenging datasets. Conventional filtered back-projection reconstruction tends to be insufficient for highly noisy, incomplete, non-standard or multi-channel data arising for example in dynamic, spectral and in situ tomography. CIL provides an extensive modular optimization framework for prototyping reconstruction methods including sparsity and total variation regularization, as well as tools for loading, preprocessing and visualizing tomographic data. The capabilities of CIL are demonstrated on a synchrotron example dataset and three challenging cases spanning golden-ratio neutron tomography, cone-beam X-ray laminography and positron emission tomography. This article is part of the theme issue ‘Synergistic tomographic image reconstruction: part 2’.
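CIL's actual API is not reproduced here; as a generic sketch of the modular optimization idea the abstract describes (a data-fidelity term plus a regularizer, minimized iteratively instead of filtered back-projection), consider a toy regularized least-squares reconstruction. All names and the tiny system matrix are made up for illustration:

```python
import numpy as np

def reconstruct(A, b, alpha=0.01, iters=500, lr=0.05):
    """Minimize ||A x - b||^2 + alpha * ||x||^2 by gradient descent."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = 2 * A.T @ (A @ x - b) + 2 * alpha * x
        x -= lr * grad
    return x

# Toy underdetermined "tomography" system: 2 measurements, 3 unknowns,
# where plain back-substitution has no unique answer and regularization helps.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
x_true = np.array([1.0, 2.0, 3.0])
b = A @ x_true
x_hat = reconstruct(A, b)
print(np.round(A @ x_hat, 2))  # close to b = [3, 5]
```

Sparsity or total-variation regularizers, as mentioned in the abstract, replace the `alpha * ||x||^2` term with a non-smooth penalty and a proximal solver; the modular structure is the same.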
Journal Article
Deep-learning-based accurate hepatic steatosis quantification for histological assessment of liver biopsies
2020
Hepatic steatosis droplet quantification with histology biopsies has high clinical significance for risk stratification and management of patients with fatty liver diseases and in the decision to use donor livers for transplantation. However, pathology reviewing processes, when conducted manually, are subject to high inter- and intra-reader variability, due to the overwhelmingly large number and significantly varying appearance of steatosis instances. This process is challenging, as there is a large number of overlapped steatosis droplets with either missing or weak boundaries. In this study, we propose a deep-learning-based region-boundary integrated network for precise steatosis quantification with whole slide liver histopathology images. The proposed model consists of two sequential steps: a region extraction and a boundary prediction module for foreground regions and steatosis boundary prediction, followed by an integrated prediction map generation. Missing steatosis boundaries are next recovered from the predicted map and assembled from adjacent image patches to generate results for the whole slide histopathology image. The resulting steatosis measures, at both the pixel level and the steatosis object level, present strong correlations with pathologist annotations, radiology readouts and clinical data. In addition, the segregated steatosis object count is shown to be a promising alternative measure to the traditional metrics at the pixel level. These results suggest a high potential of artificial intelligence-assisted technology to enhance liver disease decision support using whole slide images.
Accurate quantification of steatosis in liver biopsies is a key step in the treatment of patients with fatty liver diseases. To assist pathologists for such analysis tasks, we develop a novel deep learning-based framework to segment overlapped steatosis droplets in whole slide liver biopsy images. Quantitative measurements of steatosis at both pixel and object-level present strong correlation with clinical data, suggesting its potential for clinical decision support.
Journal Article
Modified kernel MLAA using autoencoder for PET-enabled dual-energy CT
2021
Combined use of PET and dual-energy CT provides complementary information for multi-parametric imaging. PET-enabled dual-energy CT combines a low-energy X-ray CT image with a high-energy γ -ray CT (GCT) image reconstructed from time-of-flight PET emission data to enable dual-energy CT material decomposition on a PET/CT scanner. The maximum-likelihood attenuation and activity (MLAA) algorithm has been used for GCT reconstruction but suffers from noise. Kernel MLAA exploits an X-ray CT image prior through the kernel framework to guide GCT reconstruction and has demonstrated substantial improvements in noise suppression. However, similar to other kernel methods for image reconstruction, the existing kernel MLAA uses image intensity-based features to construct the kernel representation, which is not always robust and may lead to suboptimal reconstruction with artefacts. In this paper, we propose a modified kernel method by using an autoencoder convolutional neural network (CNN) to extract an intrinsic feature set from the X-ray CT image prior. A computer simulation study was conducted to compare the autoencoder CNN-derived feature representation with raw image patches for evaluation of kernel MLAA for GCT image reconstruction and dual-energy multi-material decomposition. The results show that the autoencoder kernel MLAA method can achieve a significant image quality improvement for GCT and material decomposition as compared to the existing kernel MLAA algorithm. A weakness of the proposed method is its potential over-smoothness in a bone region, indicating the importance of further optimization in future work. This article is part of the theme issue ‘Synergistic tomographic image reconstruction: part 2’.
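The kernel representation underlying kernel MLAA can be sketched generically: feature vectors derived from the prior image (here, stand-in values; in the paper they would come from the autoencoder CNN applied to the X-ray CT prior) define a pairwise kernel matrix, and the image is represented as kernel coefficients. This is a toy illustration, not the paper's implementation:

```python
import numpy as np

def gaussian_kernel_matrix(features, sigma=1.0):
    """Pairwise Gaussian kernel: K_ij = exp(-||f_i - f_j||^2 / (2 sigma^2))."""
    d2 = np.sum((features[:, None, :] - features[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / (2 * sigma ** 2))

# Three "pixels" with 2-D prior features; the first two are similar,
# the third is far away, so the kernel couples only the first two.
features = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0]])
K = gaussian_kernel_matrix(features)

# Kernelized image representation: the image is x = K @ alpha, and the
# reconstruction algorithm estimates the coefficient vector alpha.
alpha = np.array([1.0, 1.0, 1.0])
x = K @ alpha
```

Replacing raw intensity patches with learned autoencoder features, as the paper proposes, changes only how `features` is built; the kernel machinery is unchanged.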
Journal Article
Synergistic tomographic image reconstruction
by
Jørgensen, Jakob Sauer
,
Kolbitsch, Christoph
,
Tsoumpas, Charalampos
in
Algorithms
,
Humans
,
Image Interpretation, Computer-Assisted - statistics & numerical data
2021
This special issue is the second part of a themed issue that focuses on synergistic tomographic image reconstruction and includes a range of contributions in multiple disciplines and application areas. The primary subject of study lies within inverse problems, which are tackled with various methods including statistical and computational approaches. This volume covers algorithms and methods for a wide range of imaging techniques such as spectral X-ray computed tomography (CT), positron emission tomography combined with CT or magnetic resonance imaging, bioluminescence imaging and fluorescence-mediated imaging, as well as diffuse optical tomography combined with ultrasound. Some of the articles demonstrate their utility on real-world challenges, either medical applications (e.g. motion compensation for imaging patients) or applications in material sciences (e.g. material decomposition and characterization). One of the desired outcomes of this special issue is to bring together different scientific communities that do not usually interact because they do not share the same platforms, such as journals and conferences.
This article is part of the theme issue ‘Synergistic tomographic image reconstruction: part 2’.
Journal Article
Deep neural network improves fracture detection by clinicians
by
Gardner, Michael
,
Gupta, Anurag
,
Lindsey, Robert
in
Artificial neural networks
,
Biological Sciences
,
Computer Sciences
2018
Suspected fractures are among the most common reasons for patients to visit emergency departments (EDs), and X-ray imaging is the primary diagnostic tool used by clinicians to assess patients for fractures. Missing a fracture in a radiograph often has severe consequences for patients, resulting in delayed treatment and poor recovery of function. Nevertheless, radiographs in emergency settings are often read out of necessity by emergency medicine clinicians who lack subspecialized expertise in orthopedics, and misdiagnosed fractures account for upward of four of every five reported diagnostic errors in certain EDs. In this work, we developed a deep neural network to detect and localize fractures in radiographs. We trained it to accurately emulate the expertise of 18 senior subspecialized orthopedic surgeons by having them annotate 135,409 radiographs. We then ran a controlled experiment with emergency medicine clinicians to evaluate their ability to detect fractures in wrist radiographs with and without the assistance of the deep learning model. The average clinician’s sensitivity was 80.8% (95% CI, 76.7–84.1%) unaided and 91.5% (95% CI, 89.3–92.9%) aided, and specificity was 87.5% (95% CI, 85.3–89.5%) unaided and 93.9% (95% CI, 92.9–94.9%) aided. The average clinician experienced a relative reduction in misinterpretation rate of 47.0% (95% CI, 37.4–53.9%). The significant improvements in diagnostic accuracy that we observed in this study show that deep learning methods are a mechanism by which senior medical specialists can deliver their expertise to generalists on the front lines of medicine, thereby providing substantial improvements to patient care.
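The sensitivity and specificity figures in the abstract come from standard confusion-matrix definitions, which can be computed as below. The counts here are toy numbers for illustration, not the study's data:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Toy confusion-matrix counts: 100 fracture cases, 100 normal cases.
sens, spec = sensitivity_specificity(tp=80, fn=20, tn=90, fp=10)
print(sens, spec)  # 0.8 0.9
```

The reported relative reduction in misinterpretation rate (47.0%) is computed over the clinicians' full error rates, combining missed fractures and false positives, rather than from sensitivity alone.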
Journal Article
Computer aided detection of tuberculosis on chest radiographs: An evaluation of the CAD4TB v6 system
by
Scholten, Ernst T.
,
Amad, Farhan
,
Verhagen, Maurits
in
692/699/255/1856
,
692/700/1421/1770
,
Adult
2020
There is a growing interest in the automated analysis of chest X-Ray (CXR) as a sensitive and inexpensive means of screening susceptible populations for pulmonary tuberculosis. In this work we evaluate the latest version of CAD4TB, a commercial software platform designed for this purpose. Version 6 of CAD4TB was released in 2018 and is here tested on a fully independent dataset of 5565 CXR images with GeneXpert (Xpert) sputum test results available (854 Xpert positive subjects). A subset of 500 subjects (50% Xpert positive) was reviewed and annotated by 5 expert observers independently to obtain a radiological reference standard. The latest version of CAD4TB is found to outperform all previous versions in terms of area under the receiver operating characteristic (ROC) curve with respect to both Xpert and radiological reference standards. Improvements with respect to Xpert are most apparent at high sensitivity levels, with a specificity of 76% obtained at a fixed 90% sensitivity. When compared with the radiological reference standard, CAD4TB v6 also outperformed previous versions by a considerable margin and achieved 98% specificity at the 90% sensitivity setting. No substantial difference was found between the performance of CAD4TB v6 and any of the various expert observers against the Xpert reference standard. A cost and efficiency analysis on this dataset demonstrates that in a standard clinical situation, operating at 90% sensitivity, users of CAD4TB v6 can process 132 subjects per day at an average cost of $5.95 per subject, while users of version 3 process only 85 subjects per day at a cost of $8.38 per subject. At all tested operating points, version 6 is shown to be more efficient and cost effective than any other version.
Journal Article
Data-efficient and weakly supervised computational pathology on whole-slide images
by
Williamson, Drew F. K.
,
Barbieri, Matteo
,
Lu, Ming Y.
in
631/114/1305
,
631/114/1564
,
692/700/139/422
2021
Deep-learning methods for computational pathology require either manual annotation of gigapixel whole-slide images (WSIs) or large datasets of WSIs with slide-level labels, and typically suffer from poor domain adaptation and interpretability. Here we report an interpretable weakly supervised deep-learning method for data-efficient WSI processing and learning that only requires slide-level labels. The method, which we named clustering-constrained-attention multiple-instance learning (CLAM), uses attention-based learning to identify subregions of high diagnostic value to accurately classify whole slides, and instance-level clustering over the identified representative regions to constrain and refine the feature space. By applying CLAM to the subtyping of renal cell carcinoma and non-small-cell lung cancer as well as the detection of lymph node metastasis, we show that it can be used to localize well-known morphological features on WSIs without the need for spatial labels, that it outperforms standard weakly supervised classification algorithms, and that it is adaptable to independent test cohorts, smartphone microscopy and varying tissue content.
A data-efficient and interpretable deep-learning method for the multi-class classification of whole-slide images that relies only on slide-level labels is applied to the detection of lymph node metastasis and to cancer subtyping.
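The core mechanism the abstract describes, attention-based multiple-instance learning pooling, scores each patch embedding, turns the scores into softmax attention weights, and aggregates the patches into a single slide-level representation. A minimal sketch (random toy embeddings; the parameter names `v` and `w` are hypothetical and CLAM's actual architecture adds clustering constraints not shown here):

```python
import numpy as np

def attention_mil_pool(instances, w, v):
    """Attention-based MIL pooling: score each instance embedding, softmax
    the scores into attention weights, return the weighted sum as the
    slide-level representation."""
    scores = np.tanh(instances @ v) @ w          # one score per instance
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                     # softmax attention
    return weights, weights @ instances          # (weights, slide embedding)

rng = np.random.default_rng(0)
instances = rng.normal(size=(5, 8))   # 5 patch embeddings of dimension 8
v = rng.normal(size=(8, 4))           # toy attention parameters
w = rng.normal(size=4)
weights, slide_emb = attention_mil_pool(instances, w, v)
print(weights.sum())  # attention weights sum to 1
```

The attention weights are also what makes the method interpretable: high-weight patches mark the subregions the model found diagnostically relevant.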
Journal Article
An objective comparison of detection and segmentation algorithms for artefacts in clinical endoscopy
by
Ning, Qingtian
,
Schnabel, Julia A.
,
Rittscher, Jens
in
692/308/575
,
692/700/1421/164/2223
,
Algorithms
2020
We present a comprehensive analysis of the submissions to the first edition of the Endoscopy Artefact Detection challenge (EAD). Using crowd-sourcing, this initiative is a step towards understanding the limitations of existing state-of-the-art computer vision methods applied to endoscopy and promoting the development of new approaches suitable for clinical translation. Endoscopy is a routine imaging technique for the detection, diagnosis and treatment of diseases in hollow organs: the esophagus, stomach, colon, uterus and the bladder. However, the nature of these organs prevents imaged tissues from being free of imaging artefacts such as bubbles, pixel saturation, organ specularity and debris, all of which pose substantial challenges for any quantitative analysis. Consequently, the potential for improved clinical outcomes through quantitative assessment of abnormal mucosal surfaces observed in endoscopy videos is presently not fully realized. The EAD challenge promotes awareness of and addresses this key bottleneck problem by investigating methods that can accurately classify, localize and segment artefacts in endoscopy frames as critical prerequisite tasks. Using a diverse curated multi-institutional, multi-modality, multi-organ dataset of video frames, the accuracy and performance of 23 algorithms were objectively ranked for artefact detection and segmentation. The ability of methods to generalize to unseen datasets was also evaluated. The best performing methods (top 15%) propose deep learning strategies to reconcile variabilities in artefact appearance with respect to size, modality, occurrence and organ type. However, no single method outperformed the others across all tasks. Detailed analyses reveal the shortcomings of current training strategies and highlight the need for developing new optimal metrics to accurately quantify the clinical applicability of methods.
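Segmentation challenges like EAD typically rank submissions with overlap metrics such as intersection over union (IoU, also called the Jaccard index). A minimal sketch on toy binary masks (the exact metric suite the challenge used is not reproduced here):

```python
import numpy as np

def iou(pred, target):
    """Intersection over union for binary segmentation masks."""
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return inter / union if union else 1.0

# Toy 2x3 masks: 2 pixels agree, 4 pixels are covered by either mask.
pred = np.array([[1, 1, 0], [0, 1, 0]], dtype=bool)
target = np.array([[1, 0, 0], [0, 1, 1]], dtype=bool)
print(iou(pred, target))  # 0.5
```

For the detection task, the analogous per-box IoU is thresholded to decide whether a predicted bounding box counts as a true positive.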
Journal Article
Synthetic data in machine learning for medicine and healthcare
by
Williamson, Drew F. K.
,
Lu, Ming Y.
,
Chen, Richard J.
in
631/114/1305
,
631/114/1564
,
692/308/575
2021
The proliferation of synthetic data in artificial intelligence for medicine and healthcare raises concerns about the vulnerabilities of the software and the challenges of current policy.
Journal Article