Catalogue Search | MBRL
82 result(s) for "Michael Ingrisch"
ChatGPT makes medicine easy to swallow: an exploratory case study on simplified radiology reports
by Weber, Tobias; Ingrisch, Michael; Stüber, Anna Theresa
in Case studies; Chatbots; Diagnostic Radiology
2024
Objectives
To assess the quality of simplified radiology reports generated with the large language model (LLM) ChatGPT and to discuss challenges and opportunities of ChatGPT-like LLMs for medical text simplification.
Methods
In this exploratory case study, a radiologist created three fictitious radiology reports which we simplified by prompting ChatGPT with “Explain this medical report to a child using simple language.” In a questionnaire, we tasked 15 radiologists to rate the quality of the simplified radiology reports with respect to their factual correctness, completeness, and potential harm for patients. We used Likert scale analysis and inductive free-text categorization to assess the quality of the simplified reports.
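The abstract does not specify how the Likert ratings were summarized; a minimal sketch of one common Likert-scale summary (median rating plus share of raters who agreed), using entirely made-up ratings rather than the study's data, might look like:

```python
from statistics import median

def summarize_likert(ratings, agree_threshold=4):
    """Summarize 5-point Likert ratings: median rating and the share of
    raters who agreed (rating >= agree_threshold)."""
    med = median(ratings)
    agreement = sum(r >= agree_threshold for r in ratings) / len(ratings)
    return med, agreement

# Illustrative ratings from 15 hypothetical radiologists (1 = strongly
# disagree ... 5 = strongly agree) for one quality criterion.
ratings = [5, 4, 4, 5, 3, 4, 5, 4, 2, 4, 5, 4, 3, 4, 5]
med, agreement = summarize_likert(ratings)
print(med, round(agreement, 2))  # 4 0.8
```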
Results
Most radiologists agreed that the simplified reports were factually correct, complete, and not potentially harmful to the patient. Nevertheless, instances of incorrect statements, missed relevant medical information, and potentially harmful passages were reported.
Conclusion
While we see a need for further adaptation to the medical field, the initial insights of this study indicate tremendous potential in using LLMs like ChatGPT to improve patient-centered care in radiology and other medical domains.
Clinical relevance statement
Patients have started to use ChatGPT to simplify and explain their medical reports, which is expected to affect patient-doctor interaction. This phenomenon presents both opportunities and challenges for clinical routine.
Key Points
• Patients have started to use ChatGPT to simplify their medical reports, but the quality of these simplifications was unknown.
• In a questionnaire, most participating radiologists rated the overall quality of radiology reports simplified with ChatGPT as good. However, they also highlighted a notable number of errors, potentially leading patients to draw harmful conclusions.
• Large language models such as ChatGPT have vast potential to enhance patient-centered care in radiology and other medical domains. To realize this potential while minimizing harm, they require supervision by medical experts and adaptation to the medical field.
Journal Article
Fast machine learning image reconstruction of radially undersampled k-space data for low-latency real-time MRI
by Ingrisch, Michael; Stüber, Anna Theresa; Dexl, Jakob
in Algorithms; Data acquisition; Data acquisition systems
2025
Fast data acquisition and fast image reconstruction are essential to enable low-latency real-time magnetic resonance (MR) imaging applications with high temporal resolution such as interstitial percutaneous needle interventions or MR-guided radiotherapy.
To accelerate the image reconstruction of radially undersampled 2D k-space data, we propose a machine learning (ML) model that consists of a single fully connected linear layer to interpolate radial k-space data to a Cartesian grid, followed by a conventional 2D inverse fast Fourier transform. This k-space-to-image ML model was trained on synthetic data from natural images. It was evaluated with respect to image quality (mean squared error (MSE) compared to ground truth where available) and reconstruction time both on synthetic data with undersampling factors R between 2 and 10 as well as on radial k-space data from MR measurements on two different MRI systems. For comparison, conventional non-iterative zero-filling non-uniform fast Fourier transform (NUFFT) reconstruction and compressed sensing (CS) reconstruction were used.
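The described k-space-to-image model, a single fully connected linear layer that interpolates radial k-space samples onto a Cartesian grid followed by a conventional 2D inverse FFT, can be sketched roughly as follows. Random weights stand in for the trained layer, and all shapes and names are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

N = 64                     # Cartesian grid size (illustrative)
n_spokes, n_read = 16, 64  # undersampled radial trajectory (illustrative)

# Complex radial k-space samples, flattened to one input vector.
radial = (rng.standard_normal(n_spokes * n_read)
          + 1j * rng.standard_normal(n_spokes * n_read))

# Single fully connected linear layer: in the paper this weight matrix is
# learned from synthetic data; here it is random, for shape illustration only.
W = rng.standard_normal((N * N, n_spokes * n_read)) * 0.01

# Interpolate the radial samples onto the Cartesian grid ...
cartesian = (W @ radial).reshape(N, N)

# ... then reconstruct with a conventional 2D inverse FFT.
image = np.fft.ifft2(np.fft.ifftshift(cartesian))
print(image.shape)  # (64, 64)
```

Because the whole pipeline is one matrix multiply plus an FFT, reconstruction is non-iterative, which is what makes the very short reconstruction times reported below plausible.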
On synthetic data, the ML model achieved better median MSE values than the non-iterative NUFFT reconstruction. The interquartile ranges of the MSE distributions overlapped for the ML and CS reconstructions for all R. Reconstruction times of the ML approach were shorter than for NUFFT and substantially shorter than for CS reconstructions. The generalizability (for real MRI data) of the ML model was demonstrated by reconstructing 0.35-tesla MR-Linac dynamic measurements of three volunteers and phantom data from a diagnostic 1.5-tesla MRI system; the median reconstruction time for the coil-combined images was much shorter than for the conventional approach (ML: < 4 ms; NUFFT: ≈ 60–90 ms).
The proposed ML model reconstructs MR data with reduced streaking artifacts compared to non-iterative NUFFT techniques and with extremely short reconstruction times; thus, it is ideally suited for rapid low-latency real-time MR applications.
Journal Article
Deep learning in CT colonography: differentiating premalignant from benign colorectal polyps
2022
Objectives
To investigate the differentiation of premalignant from benign colorectal polyps detected by CT colonography using deep learning.
Methods
In this retrospective analysis of an average risk colorectal cancer screening sample, polyps of all size categories and morphologies were manually segmented on supine and prone CT colonography images and classified as premalignant (adenoma) or benign (hyperplastic polyp or regular mucosa) according to histopathology. Two deep learning models SEG and noSEG were trained on 3D CT colonography image subvolumes to predict polyp class, and model SEG was additionally trained with polyp segmentation masks. Diagnostic performance was validated in an independent external multicentre test sample. Predictions were analysed with the visualisation technique Grad-CAM++.
Results
The training set consisted of 107 colorectal polyps in 63 patients (mean age: 63 ± 8 years, 40 men) comprising 169 polyp segmentations. The external test set included 77 polyps in 59 patients comprising 118 polyp segmentations. Model SEG achieved a ROC-AUC of 0.83 and 80% sensitivity at 69% specificity for differentiating premalignant from benign polyps. Model noSEG yielded a ROC-AUC of 0.75, 80% sensitivity at 44% specificity, and an average Grad-CAM++ heatmap score of ≥ 0.25 in 90% of polyp tissue.
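The ROC-AUC values reported above can be computed directly from classifier scores; a minimal numpy sketch using the rank-statistic (Mann–Whitney) formulation, with made-up scores rather than the study's data:

```python
import numpy as np

def roc_auc(pos_scores, neg_scores):
    """ROC-AUC as the probability that a randomly chosen positive case
    (e.g., a premalignant polyp) scores higher than a randomly chosen
    negative one; ties count half."""
    pos = np.asarray(pos_scores)[:, None]
    neg = np.asarray(neg_scores)[None, :]
    wins = (pos > neg).sum() + 0.5 * (pos == neg).sum()
    return wins / (pos.size * neg.size)

# Illustrative scores only: perfectly separated classes give AUC = 1.0.
print(roc_auc([0.9, 0.8, 0.7], [0.2, 0.4, 0.6]))  # 1.0
```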
Conclusions
In this proof-of-concept study, deep learning enabled the differentiation of premalignant from benign colorectal polyps detected with CT colonography and the visualisation of image regions important for predictions. The approach did not require polyp segmentation and thus has the potential to facilitate the identification of high-risk polyps as an automated second reader.
Key Points
• Non-invasive deep learning image analysis may differentiate premalignant from benign colorectal polyps found in CT colonography scans.
• Deep learning autonomously learned to focus on polyp tissue for predictions without the need for prior polyp segmentation by experts.
• Deep learning potentially improves the diagnostic accuracy of CT colonography in colorectal cancer screening by allowing for a more precise selection of patients who would benefit from endoscopic polypectomy, especially for patients with polyps of 6–9 mm size.
Journal Article
Pneumothorax detection in chest radiographs: optimizing artificial intelligence system for accuracy and confounding bias reduction using in-image annotations in algorithm training
by Ingrisch, Michael; Sabel, Bastian O.; Fieselmann, Andreas
in Algorithms; Annotations; Artificial Intelligence
2021
Objectives
Diagnostic accuracy of artificial intelligence (AI) pneumothorax (PTX) detection in chest radiographs (CXR) is limited by the noisy annotation quality of public training data and by confounding thoracic tubes (TT). We hypothesize that in-image annotations of the dehiscent visceral pleura during algorithm training boost the algorithm's performance and suppress confounders.
Methods
Our single-center evaluation cohort of 3062 supine CXRs includes 760 PTX-positive cases with radiological annotations of PTX size and inserted TTs. Three step-by-step improved algorithms (differing in algorithm architecture, training data from public datasets/clinical sites, and in-image annotations included in algorithm training) were characterized by area under the receiver operating characteristics (AUROC) in detailed subgroup analyses and referenced to the well-established “CheXNet” algorithm.
Results
Performances of established algorithms trained exclusively on publicly available data without in-image annotations are limited to AUROCs of 0.778 and strongly biased towards TTs, which can completely eliminate the algorithm's discriminative power in individual subgroups. In contrast, our final "algorithm 2", which was trained on a smaller number of images but additionally with in-image annotations of the dehiscent pleura, achieved an overall AUROC of 0.877 for unilateral PTX detection with a significantly reduced TT-related confounding bias.
Conclusions
We demonstrated strong limitations of an established PTX-detecting AI algorithm that can be significantly reduced by designing an AI system capable of learning to both classify and localize PTX. Our results are aimed at drawing attention to the necessity of high-quality in-image localization in training data to reduce the risks of unintentionally biasing the training process of pathology-detecting AI algorithms.
Key Points
• Established pneumothorax-detecting artificial intelligence algorithms trained on public training data are strongly limited and biased by confounding thoracic tubes.
• We used high-quality in-image annotated training data to effectively boost algorithm performance and suppress the impact of confounding thoracic tubes.
• Based on our results, we hypothesize that even hidden confounders might be effectively addressed by in-image annotations of pathology-related image features.
Journal Article
MITK-ModelFit: A generic open-source framework for model fits and their exploration in medical imaging – design, implementation and application on the example of DCE-MRI
by Ingrisch, Michael; Maier-Hein, Klaus; Nolden, Marco
in Algorithms; Batch processing; Bioinformatics
2019
Background
Many medical imaging techniques utilize fitting approaches for quantitative parameter estimation and analysis. Common examples are pharmacokinetic modeling in dynamic contrast-enhanced (DCE) magnetic resonance imaging (MRI)/computed tomography (CT), apparent diffusion coefficient calculations and intravoxel incoherent motion modeling in diffusion-weighted MRI, and Z-spectra analysis in chemical exchange saturation transfer MRI. Most available software tools are limited to a special purpose and do not allow for custom development and extension. Furthermore, they are mostly designed as stand-alone solutions using external frameworks and thus cannot easily be incorporated natively into the analysis workflow.
Results
We present a framework for medical image fitting tasks that is included in the Medical Imaging Interaction Toolkit MITK, following a rigorous open-source, well-integrated and operating system independent policy. Software engineering-wise, the local models, the fitting infrastructure and the results representation are abstracted and thus can be easily adapted to any model fitting task on image data, independent of image modality or model. Several ready-to-use libraries for model fitting and use-cases, including fit evaluation and visualization, were implemented. Their embedding into MITK allows for easy data loading, pre- and post-processing and thus a natural inclusion of model fitting into an overarching workflow. As an example, we present a comprehensive set of plug-ins for the analysis of DCE MRI data, which we validated on existing and novel digital phantoms, yielding competitive deviations between fit and ground truth.
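MITK-ModelFit itself is a C++ framework, but the core idea of abstracting the signal model away from a generic least-squares fitting infrastructure can be illustrated in a much-reduced Python sketch. The mono-exponential model, parameter names, and data below are hypothetical stand-ins, not MITK's API:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical signal model; the framework abstracts models like this
# behind a common interface so any model can be fitted to image data.
def monoexp(t, amplitude, rate):
    return amplitude * np.exp(-rate * t)

# Synthetic "voxel" time course with known ground truth plus mild noise.
t = np.linspace(0, 5, 50)
rng = np.random.default_rng(1)
signal = monoexp(t, 2.0, 0.7) + 0.01 * rng.standard_normal(t.size)

# Generic least-squares fit: model in, parameter estimates out.
params, _ = curve_fit(monoexp, t, signal, p0=[1.0, 1.0])
print(np.round(params, 2))
```

Repeating this fit voxel-wise over an image volume, and swapping `monoexp` for a pharmacokinetic model, gives the kind of parameter maps the DCE-MRI plug-ins described above produce.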
Conclusions
Providing a very flexible environment, our software mainly addresses developers of medical imaging software that includes model fitting algorithms and tools. Additionally, the framework is of high interest to users in the domain of perfusion MRI, as it offers feature-rich, freely available, validated tools to perform pharmacokinetic analysis on DCE MRI data, with both interactive and automated batch processing workflows.
Journal Article
Radiation dose and image quality of high-pitch emergency abdominal CT in obese patients using third-generation dual-source CT (DSCT)
by Ingrisch, Michael; Stahl, Robert; Forbrig, Robert
in 692/1807/410; 692/698/2741; 692/699/1702/393
2019
In this third-generation dual-source CT (DSCT) study, we retrospectively investigated radiation dose and image quality of portal-venous high-pitch emergency CT in 60 patients (28 female, mean age 56 years) with a body mass index (BMI) ≥ 30 kg/m². Patients were dichotomized into groups A (median BMI 31.5 kg/m²; n = 33) and B (36.8 kg/m²; n = 27). Volumetric CT dose index (CTDIvol), size-specific dose estimate (SSDE), dose-length product (DLP) and effective dose (ED) were assessed. Contrast-to-noise ratio (CNR) and the dose-independent figure of merit (FOM) CNR were calculated. Subjective image quality was assessed using a five-point scale. Mean values of CTDIvol, SSDE as well as normalized DLP and ED were 7.6 ± 1.8 mGy, 8.0 ± 1.8 mGy, 304 ± 74 mGy·cm and 5.2 ± 1.3 mSv for group A, and 12.6 ± 3.7 mGy, 11.0 ± 2.6 mGy, 521 ± 157 mGy·cm and 8.9 ± 2.7 mSv for group B (p < 0.001). CNR of the liver and spleen as well as each calculated FOM CNR were significantly higher in group A (p < 0.001). Subjective image quality was good in both groups. In conclusion, third-generation abdominal high-pitch emergency DSCT yields good image quality in obese patients. Radiation dose increases in patients with a BMI > 36.8 kg/m².
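The dose and image-quality metrics above combine into the contrast-to-noise ratio and a dose-independent figure of merit; a minimal sketch of these two formulas, using one common definition (FOM = CNR²/ED) and illustrative numbers rather than the study's data:

```python
def cnr(mean_tissue, mean_background, sd_noise):
    """Contrast-to-noise ratio between a tissue ROI and background."""
    return abs(mean_tissue - mean_background) / sd_noise

def fom_cnr(cnr_value, effective_dose_msv):
    """Dose-independent figure of merit: CNR squared per unit effective
    dose, so that protocols with different doses can be compared."""
    return cnr_value ** 2 / effective_dose_msv

# Illustrative HU values, noise, and effective dose.
c = cnr(mean_tissue=120.0, mean_background=60.0, sd_noise=12.0)  # 5.0
print(c, fom_cnr(c, effective_dose_msv=5.2))
```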
Journal Article
WindowNet: Learnable Windows for Chest X-ray Classification
by Ingrisch, Michael; Hyska, Sardi; Wollek, Alessandro
in bit depth; chest radiograph; chest X-ray
2023
Public chest X-ray (CXR) data sets are commonly compressed to a lower bit depth to reduce their size, potentially hiding subtle diagnostic features. In contrast, radiologists apply a windowing operation to the uncompressed image to enhance such subtle features. While it has been shown that windowing improves classification performance on computed tomography (CT) images, the impact of such an operation on CXR classification performance remains unclear. In this study, we show that windowing strongly improves the CXR classification performance of machine learning models and propose WindowNet, a model that learns multiple optimal window settings. Our model achieved an average AUC score of 0.812 compared with the 0.759 score of a commonly used architecture without windowing capabilities on the MIMIC data set.
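The windowing operation referred to above maps an intensity window [center − width/2, center + width/2] onto the displayable range. WindowNet learns multiple such settings end-to-end; the basic fixed-window operation it builds on can be sketched as follows (window parameters and pixel values are illustrative):

```python
import numpy as np

def apply_window(image, center, width):
    """Clip intensities to the window [center - width/2, center + width/2]
    and rescale to [0, 1], as a radiologist's window/level operation does."""
    lower = center - width / 2
    upper = center + width / 2
    return (np.clip(image, lower, upper) - lower) / width

# Illustrative pixel values with an illustrative window (center=40, width=400).
px = np.array([-300.0, -100.0, 40.0, 200.0, 500.0])
print(apply_window(px, center=40, width=400))  # [0.   0.15 0.5  0.9  1.  ]
```

Learning `center` and `width` as model parameters, instead of fixing them, is what lets the network recover subtle features that a single global rescaling would flatten out.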
Journal Article
Automated localization of the medial clavicular epiphyseal cartilages using an object detection network: a step towards deep learning-based forensic age assessment
by Ingrisch, Michael; Stüber, Anna Theresa; Sabel, Bastian Oliver
in Annotations; Artificial neural networks; Automation
2023
Background
Deep learning is a promising technique to improve radiological age assessment. However, expensive manual annotation by experts poses a bottleneck for creating large datasets to appropriately train deep neural networks. We propose an object detection approach to automatically annotate the medial clavicular epiphyseal cartilages in computed tomography (CT) scans.
Methods
The sternoclavicular joints were selected as structure-of-interest (SOI) in chest CT scans and served as an easy-to-identify proxy for the actual medial clavicular epiphyseal cartilages. CT slices containing the SOI were manually annotated with bounding boxes around the SOI. All slices in the training set were used to train the object detection network RetinaNet. Afterwards, the network was applied individually to all slices of the test scans for SOI detection. Bounding box and slice position of the detection with the highest classification score were used as the location estimate for the medial clavicular epiphyseal cartilages inside the CT scan.
Results
From 100 CT scans of 82 patients, 29,656 slices were used for training and 30,846 slices from 110 CT scans of 110 different patients for testing the object detection network. The location estimate from the deep learning approach for the SOI was in a correct slice in 97/110 (88%), misplaced by one slice in 5/110 (5%), and missing in 8/110 (7%) test scans. No estimate was misplaced by more than one slice.
Conclusions
We demonstrated a robust automated approach for annotating the medial clavicular epiphyseal cartilages. This enables training and testing of deep neural networks for age assessment.
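The final localization step described in the methods, taking the bounding box and slice of the detection with the highest classification score, reduces to a maximum over all per-slice detections; a minimal sketch with hypothetical detections (field names and values are illustrative, not RetinaNet's output format):

```python
def locate_soi(detections):
    """Pick the detection with the highest classification score as the
    location estimate (slice index + bounding box) for the SOI; return
    None when the network produced no detections at all."""
    if not detections:
        return None  # "missing" estimate, as in 8/110 test scans
    return max(detections, key=lambda d: d["score"])

# Hypothetical per-slice detections: slice index, box [x1, y1, x2, y2], score.
detections = [
    {"slice": 41, "box": [120, 80, 180, 130], "score": 0.62},
    {"slice": 42, "box": [118, 82, 182, 128], "score": 0.91},
    {"slice": 43, "box": [121, 79, 179, 131], "score": 0.77},
]
print(locate_soi(detections)["slice"])  # 42
```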
Journal Article
Clinically focused multi-cohort benchmarking as a tool for external validation of artificial intelligence algorithm performance in basic chest radiography analysis
by Ingrisch, Michael; Fink, Nicola; Ben Khaled, Najib
in 639/705/1042; 639/705/1046; 692/308/2779
2022
Artificial intelligence (AI) algorithms evaluating [supine] chest radiographs ([S]CXRs) have remarkably increased in number recently. Since training and validation are often performed on subsets of the same overall dataset, external validation is mandatory to reproduce results and reveal potential training errors. We applied a multi-cohort benchmarking to the publicly accessible (S)CXR analyzing AI algorithm CheXNet, comprising three clinically relevant study cohorts which differ in patient positioning ([S]CXRs), the applied reference standards (CT-/[S]CXR-based) and the possibility to also compare algorithm classification with different medical experts’ reading performance. The study cohorts include [1] a cohort of 563 CXRs acquired in the emergency unit that were evaluated by 9 readers (radiologists and non-radiologists) in terms of 4 common pathologies, [2] a collection of 6,248 SCXRs annotated by radiologists in terms of pneumothorax presence, its size and presence of inserted thoracic tube material, which allowed for subgroup and confounding bias analysis, and [3] a cohort consisting of 166 patients with SCXRs that were evaluated by radiologists for underlying causes of basal lung opacities, all of those cases having been correlated to a timely acquired computed tomography scan (SCXR and CT within < 90 min). CheXNet non-significantly exceeded the radiology resident (RR) consensus in the detection of suspicious lung nodules (cohort [1], AUC AI/RR: 0.851/0.839, p = 0.793) and the radiological readers in the detection of basal pneumonia (cohort [3], AUC AI/reader consensus: 0.825/0.782, p = 0.390) and basal pleural effusion (cohort [3], AUC AI/reader consensus: 0.762/0.710, p = 0.336) in SCXR, partly with AUC values higher than originally published (“Nodule”: 0.780, “Infiltration”: 0.735, “Effusion”: 0.864). The classifier “Infiltration” turned out to be very dependent on patient positioning (best in CXR, worst in SCXR). The pneumothorax SCXR cohort [2] revealed poor algorithm performance in CXRs without inserted thoracic material and in the detection of small pneumothoraces, which can be explained by a known systematic confounding error in the algorithm training process. The benefit of clinically relevant external validation is demonstrated by the differences in algorithm performance as compared to the original publication. Our multi-cohort benchmarking finally enables the consideration of confounders, different reference standards and patient positioning as well as the AI performance comparison with differentially qualified medical readers.
Journal Article