Catalogue Search | MBRL
Explore the vast range of titles available.
14,202 result(s) for "Image Interpretation, Computer-Assisted"
Convolutional neural networks for medical image processing applications
"With the development of technology, living standards rise and people's expectations increase. This situation makes itself felt strikingly, especially in the medical field. The use of medical devices is rapidly increasing to protect human health. It is very important to quickly evaluate the images obtained from these medical imaging devices. For this purpose, artificial intelligence (AI) methods are used. While hand-crafted methods were preferred in the past, more advanced methods are preferred today. CNN architectures are among the most effective AI methods today. This book contains applications of CNN methods to medical problems. The content of the book, in which different CNN methods are applied to various medical image processing problems, is quite extensive. Readers will be able to comprehensively analyze the effects of the CNN methods presented in the book on medical applications" -- Provided by publisher.
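As a rough illustration of the kind of CNN the blurb refers to, the sketch below defines a small convolutional classifier for single-channel medical images in PyTorch; the architecture, input size, and class count are illustrative assumptions, not taken from the book.

```python
# A minimal, illustrative CNN for 2-class medical image classification (PyTorch).
# Architecture, input size, and class count are assumptions for demonstration only.
import torch
import torch.nn as nn

class SmallMedicalCNN(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),   # grayscale input, e.g. an X-ray or CT slice
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                               # 256 -> 128
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                               # 128 -> 64
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),                       # global average pooling
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(1))

# Example: a batch of four 256x256 grayscale images.
model = SmallMedicalCNN()
logits = model(torch.randn(4, 1, 256, 256))
print(logits.shape)  # torch.Size([4, 2])
```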
Deep-learning-based accurate hepatic steatosis quantification for histological assessment of liver biopsies
2020
Hepatic steatosis droplet quantification with histology biopsies has high clinical significance for risk stratification and management of patients with fatty liver diseases and in the decision to use donor livers for transplantation. However, pathology reviewing processes, when conducted manually, are subject to high inter- and intra-reader variability, due to the overwhelmingly large number and significantly varying appearance of steatosis instances. This process is challenging as there is a large number of overlapped steatosis droplets with either missing or weak boundaries. In this study, we propose a deep-learning-based region-boundary integrated network for precise steatosis quantification with whole slide liver histopathology images. The proposed model consists of two sequential steps: a region extraction and a boundary prediction module for foreground regions and steatosis boundary prediction, followed by an integrated prediction map generation. Missing steatosis boundaries are next recovered from the predicted map and assembled from adjacent image patches to generate results for the whole slide histopathology image. The resulting steatosis measures, at both the pixel and the object level, present strong correlation with pathologist annotations, radiology readouts, and clinical data. In addition, the segregated steatosis object count is shown to be a promising alternative to the traditional pixel-level metrics. These results suggest a high potential of artificial intelligence-assisted technology to enhance liver disease decision support using whole slide images.
Accurate quantification of steatosis in liver biopsies is a key step in the treatment of patients with fatty liver diseases. To assist pathologists with such analysis tasks, we develop a novel deep learning-based framework to segment overlapped steatosis droplets in whole slide liver biopsy images. Quantitative measurements of steatosis at both the pixel and object level present strong correlation with clinical data, suggesting their potential for clinical decision support.
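The abstract does not give implementation details, but one plausible way to combine a region map and a boundary map into separated droplet instances is a marker-based watershed, sketched below; the thresholds and the watershed step are assumptions for illustration, not the authors' exact integration module.

```python
# Illustrative sketch: combine a region probability map and a boundary probability map
# to separate touching steatosis droplets. Thresholds and the watershed step are
# assumptions, not the published integration scheme.
import numpy as np
from scipy import ndimage
from skimage.segmentation import watershed

def integrate_region_and_boundary(region_prob, boundary_prob,
                                  region_thr=0.5, boundary_thr=0.5):
    """region_prob, boundary_prob: HxW maps in [0, 1] from the two network branches."""
    region_mask = region_prob > region_thr
    # Suppress predicted boundaries to obtain one seed per droplet interior.
    interiors = region_mask & (boundary_prob < boundary_thr)
    markers, num_droplets = ndimage.label(interiors)
    # Watershed grows the seeds back out to the region mask, splitting touching droplets.
    labels = watershed(boundary_prob, markers=markers, mask=region_mask)
    return labels, num_droplets

# Example with random "probability maps" standing in for network outputs.
rng = np.random.default_rng(0)
labels, n = integrate_region_and_boundary(rng.random((128, 128)), rng.random((128, 128)))
print(labels.shape, n)   # pixel-level mask plus an object-level droplet count
```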
Journal Article
Deep learning shows the capability of high-level computer-aided diagnosis in malignant lymphoma
by Yonezawa, Sho; Matsuda, Kotaro; Yoshimura, Takuro
2020
A pathological evaluation is one of the most important methods for the diagnosis of malignant lymphoma. A standardized diagnosis is occasionally difficult to achieve even by experienced hematopathologists. Therefore, established procedures including a computer-aided diagnosis are desired. This study aims to classify histopathological images of malignant lymphomas through deep learning, which is a computer algorithm and type of artificial intelligence (AI) technology. We prepared hematoxylin and eosin (H&E) slides of a lesion area from 388 sections, namely, 259 with diffuse large B-cell lymphoma, 89 with follicular lymphoma, and 40 with reactive lymphoid hyperplasia, and created whole slide images (WSIs) using a whole slide system. Each WSI was annotated in the lesion area by experienced hematopathologists. Image patches were cropped from the WSIs to train and evaluate the classifiers. Image patches at magnifications of ×5, ×20, and ×40 were randomly divided into a test set and a training and evaluation set. The classifier was assessed using the test set through a cross-validation after training. The classifier achieved the highest levels of accuracy of 94.0%, 93.0%, and 92.0% for image patches with magnifications of ×5, ×20, and ×40, respectively, in classifying diffuse large B-cell lymphoma, follicular lymphoma, and reactive lymphoid hyperplasia. Comparing the diagnostic accuracies between the proposed classifier and seven pathologists, including experienced hematopathologists, using the test set made up of image patches with magnifications of ×5, ×20, and ×40, the best accuracy demonstrated by the classifier was 97.0%, whereas the average accuracy achieved by the pathologists using WSIs was 76.0%, with the highest accuracy reaching 83.3%. In conclusion, the neural classifier can outperform pathologists in a morphological evaluation. These results suggest that the AI system can potentially support the diagnosis of malignant lymphoma.
This study aims to classify histopathological images of malignant lymphoma through deep learning. The classifier achieved high levels of accuracy in classifying diffuse large B-cell lymphoma, follicular lymphoma, and reactive lymphoid hyperplasia, higher than those of pathologists. Artificial intelligence can potentially support the diagnosis of malignant lymphoma.
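A minimal sketch of the per-magnification, cross-validated evaluation protocol described above; the placeholder features and logistic-regression classifier stand in for the deep network and are assumptions for illustration only.

```python
# Illustrative per-magnification evaluation: patches from each magnification are split
# into folds and a classifier's accuracy is averaged across folds. The feature vectors
# and the model below are placeholders, not the study's deep network.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
classes = ["DLBCL", "FL", "reactive lymphoid hyperplasia"]

for magnification in ("x5", "x20", "x40"):
    # Placeholder "patch features" and labels; in the study these would come from
    # image patches cropped out of the annotated whole slide images.
    X = rng.normal(size=(300, 64))
    y = rng.integers(0, len(classes), size=300)
    scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
    print(f"{magnification}: mean accuracy {scores.mean():.3f} over 5 folds")
```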
Journal Article
Data mining in biomedical imaging, signaling, and systems
"Data mining has rapidly emerged as an enabling, robust, and scalable technique to analyze data for novel patterns, trends, anomalies, structures, and features that can be employed for a variety of biomedical and clinical domains. Approaching the techniques and challenges of image mining from a multidisciplinary perspective, this book presents data mining techniques, methodologies, algorithms, and strategies to analyze biomedical signals and images. Written by experts, the text addresses data mining paradigms for the development of biomedical systems. It also includes special coverage of knowledge discovery in mammograms and emphasizes both the diagnostic and therapeutic fields of eye imaging" -- Provided by publisher.
Human brain anatomy in computerized images
by Damasio, Hanna, in Brain; Brain -- anatomy & histology -- Atlases; Brain -- Magnetic resonance imaging -- Atlases
2005
This book provides an atlas of the normal human brain based on three dimensional reconstructions of magnetic resonance scans obtained in normal living adults as well as neurological patients with focal brain lesions. It provides detailed descriptions of sulci and gyri and illustrates how they appear in different brains. The book shows how different slice orientations obtained in the same brain produce different images that can be anatomically misinterpreted, in normal brains as well as brains with lesions. The book also addresses quantitative differences between the human brain and the brains of apes; gray and white matter differences between the hemispheres; and differences related to gender, age, and congenital deafness.
CT iterative reconstruction algorithms: a task-based image quality assessment
2020
Purpose: To assess the dose performance, in terms of image quality, of filtered back projection (FBP) and two generations of iterative reconstruction (IR) algorithms developed by the most common CT vendors.
Materials and methods: We used four CT systems, each equipped with a hybrid/statistical IR (H/SIR) algorithm and a full/partial/advanced model-based IR (MBIR) algorithm. Acquisitions were performed on an ACR phantom at five dose levels. Raw data were reconstructed using a standard soft-tissue kernel for FBP and one iterative level of each of the two IR algorithm generations. The noise power spectrum (NPS) and the task-based transfer function (TTF) were computed. A detectability index (d′) was computed to model the detection task of a large mass in the liver (large feature; 120 HU and 25-mm diameter) and a small calcification (small feature; 500 HU and 1.5-mm diameter).
Results: With H/SIR, the highest values of d′ for both features were found for Siemens, followed by Canon, with the lowest values for Philips and GE. For the large feature, potential dose reductions with MBIR compared with H/SIR were −35% for GE, −62% for Philips, and −13% for Siemens; for the small feature, the corresponding reductions were −45%, −78%, and −14%, respectively. With the Canon system, a potential dose reduction of −32% was observed only for the small feature with MBIR compared with the H/SIR algorithm; for the large feature, the dose increased by 100%.
Conclusion: This multivendor comparison of several versions of IR algorithms allowed the evolution within each vendor to be compared. The detectability index d′ is well adapted and robust for such an optimization process.
Key Points:
• The performance of four CT systems was evaluated using imQuest software to assess noise characteristics, spatial resolution, and lesion detection.
• Two task functions were defined to model the detection task of a large mass in the liver and a small calcification.
• The advantage of task-based image quality assessment for radiologists is that it provides not only complex metrics but also clinically meaningful image quality.
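For readers unfamiliar with task-based image quality metrics, the sketch below computes a non-prewhitening (NPW) detectability index d′ from a disk task function, a TTF, and an NPS on a discrete frequency grid; the NPW observer choice and the synthetic TTF/NPS shapes are assumptions and may differ from the imQuest implementation.

```python
# A minimal numerical sketch of a non-prewhitening (NPW) detectability index d',
# computed from a task function W, a task-based transfer function (TTF), and a
# noise power spectrum (NPS). The NPW observer and the synthetic TTF/NPS shapes
# below are assumptions for illustration; imQuest's implementation may differ.
import numpy as np
from scipy.special import j1

def npw_detectability(W, TTF, NPS, df):
    """All arrays share the same 2-D spatial-frequency grid with spacing df (mm^-1)."""
    signal = np.sum(np.abs(W) ** 2 * TTF ** 2) * df ** 2
    noise = np.sum(np.abs(W) ** 2 * TTF ** 2 * NPS) * df ** 2
    return np.sqrt(signal ** 2 / noise)

# Synthetic example: 25-mm, 120-HU disk task (large liver mass), Gaussian TTF, CT-like NPS.
n, df = 256, 0.01                                  # frequency samples and spacing (mm^-1)
fx = (np.arange(n) - n // 2) * df
FX, FY = np.meshgrid(fx, fx)
f = np.hypot(FX, FY)
contrast, diameter = 120.0, 25.0                   # HU, mm
arg = np.pi * diameter * f
jinc = np.ones_like(arg)                           # Fourier transform of a uniform disk
jinc[arg > 0] = 2 * j1(arg[arg > 0]) / arg[arg > 0]
W = contrast * (np.pi * diameter ** 2 / 4) * jinc
TTF = np.exp(-(f / 0.6) ** 2)                      # assumed resolution roll-off
NPS = 50.0 * f * np.exp(-(f / 0.4) ** 2) + 1e-3    # assumed noise texture
print(f"d' = {npw_detectability(W, TTF, NPS, df):.1f}")
```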
Journal Article
Which GAN? A comparative study of generative adversarial network-based fast MRI reconstruction
2021
Fast magnetic resonance imaging (MRI) is crucial for clinical applications, as it can alleviate motion artefacts and increase patient throughput. K-space undersampling is an obvious approach to accelerate MR acquisition. However, undersampling of k-space data can result in blurring and aliasing artefacts in the reconstructed images. Recently, several studies have proposed deep learning-based data-driven models for MRI reconstruction and have obtained promising results. However, the comparison of these methods remains limited because the models have not been trained on the same datasets and the validation strategies may differ. The purpose of this work is to conduct a comparative study of generative adversarial network (GAN)-based models for MRI reconstruction. We reimplemented and benchmarked four widely used GAN-based architectures: DAGAN, ReconGAN, RefineGAN and KIGAN. These four frameworks were trained and tested on brain, knee and liver MRI images using twofold, fourfold and sixfold accelerations, respectively, with a random undersampling mask. Both quantitative evaluation and qualitative visualization show that RefineGAN achieves superior reconstruction, with better accuracy and perceptual quality than the other GAN-based methods.
This article is part of the theme issue ‘Synergistic tomographic image reconstruction: part 1’.
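The following sketch illustrates the random Cartesian undersampling and zero-filled reconstruction that such GAN models are trained to correct; the sampling density, centre fraction, and phantom are assumptions for illustration.

```python
# A minimal sketch of Cartesian k-space undersampling with a random mask and a naive
# zero-filled reconstruction; mask density, centre fraction, and the phantom are
# assumptions, not the benchmarked pipelines' exact settings.
import numpy as np

def undersample_kspace(image, acceleration=4, center_fraction=0.08, seed=0):
    """Keep roughly 1/acceleration of the phase-encode lines, always retaining the k-space centre."""
    rng = np.random.default_rng(seed)
    kspace = np.fft.fftshift(np.fft.fft2(image))
    n_lines = image.shape[0]
    mask = rng.random(n_lines) < (1.0 / acceleration)
    centre = n_lines // 2
    half = int(center_fraction * n_lines / 2)
    mask[centre - half:centre + half] = True                      # fully sample low frequencies
    undersampled = kspace * mask[:, None]
    return np.abs(np.fft.ifft2(np.fft.ifftshift(undersampled)))  # aliased zero-filled image

# Example with a synthetic phantom standing in for a brain/knee/liver slice.
phantom = np.zeros((256, 256)); phantom[96:160, 96:160] = 1.0
zero_filled = undersample_kspace(phantom, acceleration=4)
print(zero_filled.shape)
```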
Journal Article
Comparison of Chest Radiograph Interpretations by Artificial Intelligence Algorithm vs Radiology Residents
by Moradi, Mehdi; Morris, Michael; Ahmad, Hassan, in Algorithms; Area Under Curve; Artificial intelligence
2020
Chest radiography is the most common diagnostic imaging examination performed in emergency departments (EDs). Augmenting clinicians with automated preliminary read assistants could help expedite their workflows, improve accuracy, and reduce the cost of care.
To assess the performance of artificial intelligence (AI) algorithms in realistic radiology workflows by performing an objective comparative evaluation of the preliminary reads of anteroposterior (AP) frontal chest radiographs performed by an AI algorithm and radiology residents.
This diagnostic study included a set of 72 findings assembled by clinical experts to constitute a full-fledged preliminary read of AP frontal chest radiographs. A novel deep learning architecture was designed for an AI algorithm to estimate the findings per image. The AI algorithm was trained using a multihospital training data set of 342,126 frontal chest radiographs captured in ED and urgent care settings. The training data were labeled from their associated reports. Image-based F1 score was chosen to optimize the operating point on the receiver operating characteristic (ROC) curve so as to minimize the number of missed findings and overcalls per image read. The performance of the model was compared with that of 5 radiology residents recruited from multiple institutions in the US in an objective study in which a separate data set of 1998 AP frontal chest radiographs was drawn from a hospital source representative of realistic preliminary reads in inpatient and ED settings. A triple consensus with adjudication process was used to derive the ground truth labels for the study data set. The performance of the AI algorithm and radiology residents was assessed by comparing their reads with ground truth findings. All studies were conducted through a web-based clinical study application system. The triple consensus data set was collected between February and October 2018. The comparison study was performed between January and October 2019. Data were analyzed from October to February 2020. After the first round of reviews, further analysis of the data was performed from March to July 2020.
The learning performance of the AI algorithm was judged using the conventional ROC curve and the area under the curve (AUC) during training and field testing on the study data set. For the AI algorithm and radiology residents, the individual finding label performance was measured using the conventional measures of label-based sensitivity, specificity, and positive predictive value (PPV). In addition, the agreement with the ground truth on the assignment of findings to images was measured using the pooled κ statistic. The preliminary read performance was recorded for AI algorithm and radiology residents using new measures of mean image-based sensitivity, specificity, and PPV designed for recording the fraction of misses and overcalls on a per image basis. The 1-sided analysis of variance test was used to compare the means of each group (AI algorithm vs radiology residents) using the F distribution, and the null hypothesis was that the groups would have similar means.
The trained AI algorithm achieved a mean AUC across labels of 0.807 (weighted mean AUC, 0.841) after training. On the study data set, which had a different prevalence distribution, the mean AUC achieved was 0.772 (weighted mean AUC, 0.865). The interrater agreement with ground truth finding labels for AI algorithm predictions had a pooled κ value of 0.544, and the pooled κ for radiology residents was 0.585. For the preliminary read performance, the analysis of variance test was used to compare the distributions of the AI algorithm's and radiology residents' mean image-based sensitivity, PPV, and specificity. The mean image-based sensitivity for the AI algorithm was 0.716 (95% CI, 0.704-0.729) and for radiology residents was 0.720 (95% CI, 0.709-0.732) (P = .66), while the PPV was 0.730 (95% CI, 0.718-0.742) for the AI algorithm and 0.682 (95% CI, 0.670-0.694) for the radiology residents (P < .001), and specificity was 0.980 (95% CI, 0.980-0.981) for the AI algorithm and 0.973 (95% CI, 0.971-0.974) for the radiology residents (P < .001).
These findings suggest that it is possible to build AI algorithms that reach and exceed the mean level of performance of third-year radiology residents for full-fledged preliminary read of AP frontal chest radiographs. This diagnostic study also found that while the more complex findings would still benefit from expert overreads, the performance of AI algorithms was associated with the amount of data available for training rather than the level of difficulty of interpretation of the finding. Integrating such AI systems in radiology workflows for preliminary interpretations has the potential to expedite existing radiology workflows and address resource scarcity while improving overall accuracy and reducing the cost of care.
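As a rough sketch of the per-image ("image-based") sensitivity, PPV, and specificity used in the study above, the code below scores binary finding predictions against ground-truth labels image by image and averages the per-image fractions; the handling of images with no positive findings is an assumption.

```python
# Illustrative image-based metrics: each read is scored against the ground-truth
# finding labels for that image, and per-image fractions are then averaged.
# How images with no positive findings (or no calls) are handled is an assumption.
import numpy as np

def image_based_metrics(y_true, y_pred):
    """y_true, y_pred: (n_images, n_findings) binary arrays."""
    tp = ((y_true == 1) & (y_pred == 1)).sum(axis=1)
    fp = ((y_true == 0) & (y_pred == 1)).sum(axis=1)
    fn = ((y_true == 1) & (y_pred == 0)).sum(axis=1)
    tn = ((y_true == 0) & (y_pred == 0)).sum(axis=1)
    with np.errstate(invalid="ignore", divide="ignore"):
        sens = tp / (tp + fn)          # fraction of true findings caught per image
        ppv = tp / (tp + fp)           # fraction of called findings that are correct
        spec = tn / (tn + fp)
    # Images that yield an undefined fraction contribute NaN; drop them from the mean.
    return tuple(np.nanmean(m) for m in (sens, ppv, spec))

rng = np.random.default_rng(0)
truth = rng.integers(0, 2, size=(1998, 72))                        # study-sized example: 1998 images, 72 findings
reads = np.where(rng.random((1998, 72)) < 0.9, truth, 1 - truth)   # reads agreeing 90% of the time
print(image_based_metrics(truth, reads))
```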
Journal Article
Magnetic Resonance of Rectal Cancer Response to Therapy: An Image Quality Comparison between 3.0 and 1.5 Tesla
2020
Purpose: To evaluate signal intensity (SI) differences between 3.0 T and 1.5 T on T2-weighted (T2w), diffusion-weighted imaging (DWI), and apparent diffusion coefficient (ADC) maps in rectal cancer before, during, and after neoadjuvant chemoradiotherapy (CRT).
Materials and Methods: 22 patients with locally advanced rectal cancer were prospectively enrolled. All patients underwent T2w, DWI, and ADC imaging before, during, and after CRT on both 3.0 T and 1.5 T MRI. A radiologist drew regions of interest (ROIs) on the tumor and the obturator internus muscle on the selected slice to evaluate SI and relative SI (rSI). Additionally, a subanalysis evaluating the SI change before and after CRT (∆SI pre-post) in complete responder (CR) and nonresponder (NR) patients on T2w, DWI, and ADC was performed.
Results: Significant differences were observed for T2w and DWI on 3.0 T MRI compared to 1.5 T MRI pre-, during, and post-CRT (all P<0.001), whereas no significant differences were reported for ADC among all controls (all P>0.05). rSI showed no significant differences in all the examinations for all sequences (all P>0.05). ∆SI showed significant differences between 3.0 T and 1.5 T MRI for DWI-∆SI in CR and NR (188.39±166.90 vs. 30.45±21.73 and 169.70±121.87 vs. 22.00±31.29, respectively; all P values 0.02) and for ADC-∆SI in CR (−0.58±0.27 vs. −0.21±0.24; P value 0.02), while no significant differences were observed for ADC-∆SI in NR or for T2w-∆SI in either CR or NR.
Conclusion: T2w-SI and DWI-SI showed significant differences between 3.0 T and 1.5 T in all three controls, while ADC-SI showed no significant differences in all three controls on both field strengths. rSI was comparable between 3.0 T and 1.5 T MRI in rectal cancer patients; therefore, rectal cancer patients can be assessed at both 3.0 T and 1.5 T MRI. However, the significant DWI-∆SI and ADC-∆SI on 3.0 T in CR might be interpreted as a better visual assessment for discriminating response to therapy compared to 1.5 T. Further investigations should be performed to confirm possible future clinical applications.
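A small sketch of how the SI and rSI measurements described above could be computed from ROI masks, assuming rSI is the ratio of mean tumour SI to mean obturator internus muscle SI and ∆SI is the pre-minus-post difference; both definitions are assumptions, since the abstract does not state them exactly.

```python
# Illustrative ROI-based signal intensity measurements. Treating rSI as the
# tumour/muscle ratio and dSI as (pre - post) are assumptions for illustration.
import numpy as np

def relative_signal_intensity(image, tumour_mask, muscle_mask):
    """Mean SI in the tumour ROI normalised by mean SI in the muscle ROI."""
    return image[tumour_mask].mean() / image[muscle_mask].mean()

def delta_si(si_pre, si_post):
    """Signal change between the pre- and post-CRT examinations (dSI pre-post)."""
    return si_pre - si_post

# Example with synthetic data standing in for a T2w/DWI slice and hand-drawn ROIs.
rng = np.random.default_rng(0)
slice_img = rng.normal(300, 20, size=(256, 256))
tumour = np.zeros((256, 256), bool); tumour[100:120, 100:120] = True
muscle = np.zeros((256, 256), bool); muscle[200:220, 60:80] = True
print(relative_signal_intensity(slice_img, tumour, muscle))
```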
Journal Article