44 results for "de Rooij, Maarten"
Cost-effectiveness of artificial intelligence aided vessel occlusion detection in acute stroke: an early health technology assessment
Background: Limited evidence is available on the clinical impact of artificial intelligence (AI) in radiology. Early health technology assessment (HTA) is a methodology to assess the potential value of an innovation at an early stage. We use early HTA to evaluate the potential value of AI software in radiology. As a use case, we evaluate the cost-effectiveness of AI software that aids the detection of intracranial large vessel occlusions (LVO) in stroke, in comparison to standard care. Methods: We used a Markov-based model from a societal perspective of the United Kingdom, predominantly using stroke registry data complemented with pooled outcome data from large randomized trials. Different scenarios were explored by varying the rate of missed LVO diagnoses, AI costs, and AI performance. Other input parameters were varied to demonstrate model robustness. Results were reported as expected incremental costs (IC) and incremental effects (IE) expressed in quality-adjusted life years (QALYs). Results: Applying the base-case assumptions (6% of LVOs missed by clinicians, $40 per AI analysis, 50% reduction of missed LVOs by AI) resulted in cost savings and incremental QALYs over the projected lifetime (IC: −$156, −0.23%; IE: +0.01 QALYs, +0.07%) per suspected ischemic stroke patient. For each yearly cohort of patients in the UK, this translates to a total cost saving of $11 million. Conclusions: AI tools for LVO detection in emergency care have the potential to improve healthcare outcomes and save costs. We demonstrate how early HTA may be applied to the evaluation of clinically applied AI software for radiology.
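The headline Results figures can be sanity-checked with a few lines of arithmetic. Note that the yearly UK cohort size below is not stated in the abstract; it is back-derived from the reported per-patient saving and total saving, so it is only an estimate:

```python
# Back-of-the-envelope check of the reported cost-effectiveness figures.
# Per-patient values are from the abstract; the cohort size is inferred.
ic_per_patient = -156.0           # incremental cost in USD (negative = saving)
ie_per_patient = 0.01             # incremental QALYs per patient
total_yearly_saving = 11_000_000  # reported yearly UK cost saving in USD

# Implied size of the yearly UK cohort of suspected ischemic stroke patients
implied_cohort = total_yearly_saving / -ic_per_patient
print(f"implied cohort: ~{implied_cohort:,.0f} patients")  # ~70,513

# QALYs gained across that cohort under the same assumptions
print(f"QALYs gained: ~{implied_cohort * ie_per_patient:,.0f}")  # ~705
```

A per-patient saving of $156 scaling to $11 million per year thus implies a cohort of roughly 70,000 suspected stroke patients.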
Artificial intelligence in radiology: 100 commercially available products and their scientific evidence
Objectives: To map the current landscape of commercially available artificial intelligence (AI) software for radiology and review the availability of its scientific evidence. Methods: We created an online overview of CE-marked AI software products for clinical radiology based on vendor-supplied product specifications (www.aiforradiology.com). Characteristics such as modality, subspeciality, main task, regulatory information, deployment, and pricing model were retrieved. We conducted an extensive literature search on the available scientific evidence for these products. Articles were classified according to a hierarchical model of efficacy. Results: The overview included 100 CE-marked AI products from 54 different vendors. For 64/100 products, there was no peer-reviewed evidence of their efficacy. We observed large heterogeneity in deployment methods, pricing models, and regulatory classes. The evidence for the remaining 36/100 products comprised 237 papers that predominantly (65%) focused on diagnostic accuracy (efficacy level 2). Of the 100 products, 18 had evidence at level 3 or higher, validating the (potential) impact on diagnostic thinking, patient outcome, or costs. Half of the available evidence (116/237) was independent, i.e., not (co-)funded or (co-)authored by the vendor. Conclusions: Even though the commercial supply of AI software in radiology already holds 100 CE-marked products, we conclude that the sector is still in its infancy. For 64/100 products, peer-reviewed evidence of their efficacy is lacking. Only 18/100 AI products have demonstrated (potential) clinical impact. Key Points: • Artificial intelligence in radiology is still in its infancy, even though 100 CE-marked AI products are already commercially available. • Only 36 out of 100 products have peer-reviewed evidence, and most studies demonstrate lower levels of efficacy.
• There is a wide variety in deployment strategies, pricing models, and CE marking class of AI products for radiology.
Deep learning–assisted prostate cancer detection on bi-parametric MRI: minimum training data size requirements and effect of prior knowledge
Objectives: To assess the performance of a Prostate Imaging Reporting and Data System (PI-RADS)–trained deep learning (DL) algorithm and to investigate the effect of data size and prior knowledge on the detection of clinically significant prostate cancer (csPCa) in biopsy-naïve men with a suspicion of PCa. Methods: Multi-institution data included 2734 consecutive biopsy-naïve men with elevated PSA levels (≥ 3 ng/mL) who underwent multi-parametric MRI (mpMRI). mpMRI exams were prospectively reported using PI-RADS v2 by expert radiologists. A DL framework was designed and trained on center 1 data (n = 1952) to predict PI-RADS ≥ 4 (n = 1092) lesions from bi-parametric MRI (bpMRI). Experiments included varying the number of training cases and the use of automatic zonal segmentation as a DL prior. Independent center 2 cases (n = 296) that included pathology outcomes (systematic and MRI-targeted biopsy) were used to compute performance for radiologists and DL. The performance of detecting PI-RADS 4–5 and Gleason > 6 lesions was assessed on 782 unseen cases (486 center 1, 296 center 2) using free-response ROC (FROC) and ROC analysis. Results: The DL sensitivity for detecting PI-RADS ≥ 4 lesions was 87% (193/223, 95% CI: 82–91) at an average of 1 false positive (FP) per patient, with an AUC of 0.88 (95% CI: 0.84–0.91). The DL sensitivity for the detection of Gleason > 6 lesions was 85% (79/93, 95% CI: 77–83) at 1 FP, compared to 91% (85/93, 95% CI: 84–96) at 0.3 FP for a consensus panel of expert radiologists. Data size and prior zonal knowledge significantly affected performance (4%). Conclusion: PI-RADS-trained DL can accurately detect and localize Gleason > 6 lesions. DL could reach expert performance with substantially more than 2000 training cases and DL zonal segmentation. Key Points: • AI for prostate MRI analysis depends strongly on data size and prior zonal knowledge. • AI needs substantially more than 2000 training cases to achieve expert performance.
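The sensitivities reported above are simple lesion-level ratios read off a FROC curve at a fixed false-positive operating point; the counts below are taken directly from the abstract:

```python
# Lesion-level sensitivity at a fixed FROC operating point.
# Counts (detected / total lesions) are from the abstract.
def sensitivity(detected, total):
    return detected / total

dl_pirads4 = sensitivity(193, 223)    # DL, PI-RADS >= 4, at 1 FP/patient
dl_gleason = sensitivity(79, 93)      # DL, Gleason > 6, at 1 FP/patient
panel_gleason = sensitivity(85, 93)   # expert panel, at 0.3 FP/patient

print(f"{dl_pirads4:.0%}, {dl_gleason:.0%}, {panel_gleason:.0%}")  # 87%, 85%, 91%
```

Because the DL and panel sensitivities sit at different false-positive rates (1 vs 0.3 FP/patient), the two percentages are not directly comparable without the full FROC curves.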
ESUR/ESUI consensus statements on multi-parametric MRI for the detection of clinically significant prostate cancer: quality requirements for image acquisition, interpretation and radiologists’ training
Objectives: This study aims to define consensus-based criteria for acquiring and reporting prostate MRI and to establish prerequisites for image quality. Methods: A total of 44 leading urologists and urogenital radiologists, all experts in prostate cancer imaging from the European Society of Urogenital Radiology (ESUR) and the EAU Section of Urologic Imaging (ESUI), participated in a Delphi consensus process. Panellists completed two rounds of questionnaires with 55 items under three headings: image quality assessment, interpretation and reporting, and radiologists' experience plus training centres. Of the 55 questions, 31 were rated for agreement on a 9-point scale, and 24 were multiple-choice or open. For agreement items, consensus required agreement by ≥ 70% of the panellists (score 7–9) and disagreement by ≤ 15%. For the other questions, consensus required ≥ 50% of the votes. Results: Twenty-four of the 31 agreement items and 11/16 of the other questions reached consensus. Agreement statements were: (1) reporting of image quality should be performed and implemented into clinical practice; (2) for interpretation performance, radiologists should use self-performance tests with histopathology feedback, compare their interpretations with expert readings, and use external performance assessments; and (3) radiologists must attend theoretical and hands-on courses before interpreting prostate MRI. A limitation is that the results are expert opinions, not based on systematic reviews or meta-analyses. There was no consensus on the outcome statements of prostate MRI assessment as a quality marker. Conclusions: An ESUR and ESUI expert panel showed high agreement (74%) on issues improving prostate MRI quality. Checking and reporting of image quality are mandatory. Prostate radiologists should attend theoretical and hands-on courses, followed by supervised education, and must perform regular performance assessments. Key Points: • Multi-parametric MRI in the diagnostic pathway of prostate cancer has a well-established upfront role in the recently updated European Association of Urology guideline and American Urological Association recommendations. • Suboptimal image acquisition and reporting at an individual level will result in clinicians losing confidence in the technique and returning to the (non-MRI) systematic biopsy pathway. Therefore, it is crucial to establish quality criteria for the acquisition and reporting of mpMRI. • To ensure high-quality prostate MRI, experts consider checking and reporting of image quality mandatory. Prostate radiologists must attend theoretical and hands-on courses, followed by supervised education, and must perform regular self- and external performance assessments.
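The Delphi decision rule described in the Methods can be expressed directly in code. The ≥ 70% / ≤ 15% thresholds are from the abstract; treating scores 1–3 as the "disagreement" range is a conventional Delphi assumption not spelled out there:

```python
# Consensus rule for 9-point Delphi agreement items, per the abstract:
# >= 70% of panellists scoring 7-9 (agreement) and <= 15% in the
# disagreement range (assumed here to be scores 1-3).
def has_consensus(scores):
    n = len(scores)
    agree = sum(1 for s in scores if 7 <= s <= 9) / n
    disagree = sum(1 for s in scores if 1 <= s <= 3) / n
    return agree >= 0.70 and disagree <= 0.15

# Hypothetical 44-member panel: 35 high scores, 9 neutral scores
print(has_consensus([8] * 35 + [5] * 9))  # True (79.5% agree, 0% disagree)
```

Requiring both a high agreement share and a low disagreement share prevents a polarized panel (many 8s but also many 2s) from being counted as consensus.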
PI-QUAL v.1: the first step towards good-quality prostate MRI
Key Points • It is mandatory to evaluate the image quality of a prostate MRI scan and to mention this quality in the report. • PI-QUAL v1 is an essential starting tool for standardizing the evaluation of the quality of prostate MR images as objectively as possible. • PI-QUAL will develop step by step into a reliable quality-assessment tool to ensure that the first step of the MRI pathway is as accurate as possible.
Artificial Intelligence Based Algorithms for Prostate Cancer Classification and Detection on Magnetic Resonance Imaging: A Narrative Review
Due to the upfront role of magnetic resonance imaging (MRI) in prostate cancer (PCa) diagnosis, a multitude of artificial intelligence (AI) applications have been suggested to aid the diagnosis and detection of PCa. In this review, we provide an overview of the current field, covering studies between 2018 and February 2021 that describe AI algorithms for (1) lesion classification and (2) lesion detection of PCa. Our evaluation of the 59 included studies showed that most research has been conducted on PCa lesion classification (66%), followed by PCa lesion detection (34%). Studies showed large heterogeneity in cohort sizes, ranging from 18 to 499 patients (median = 162), combined with different approaches to performance validation. Furthermore, 85% of the studies reported on stand-alone diagnostic accuracy, whereas only 15% demonstrated the impact of AI on diagnostic thinking efficacy, indicating limited proof of the clinical utility of PCa AI applications. To introduce AI into the clinical workflow of PCa assessment, the robustness and generalizability of AI applications need to be further validated using external validation and clinical workflow experiments.
Quality of prostate MRI in early diagnosis—a national survey and reading evaluation
Objectives: The reliability of image-based recommendations in the prostate cancer pathway depends partly on prostate MRI image quality. We evaluated the current compliance with the PI-RADSv2.1 technical recommendations and the prostate MRI image quality in the Netherlands. To aid image quality improvement, we identified factors that possibly influence image quality. Materials and methods: A survey was sent to 68 Dutch medical centres to acquire information on prostate MRI acquisition. The responding medical centres were requested to provide anonymised prostate MRI examinations of biopsy-naïve men suspected of prostate cancer. The images were evaluated for quality by three expert prostate radiologists. Compliance with the PI-RADSv2.1 technical recommendations and the PI-QUALv2 score were calculated. Relationships between hardware, education of personnel, technical parameters, and/or patient preparation and both compliance and image quality were analysed using Pearson correlation, the Mann–Whitney U-test, or Student's t-test, where appropriate. Results: Forty-four medical centres submitted their compliance with the PI-RADSv2.1 technical recommendations, and 26 medical centres completed the full survey. Thirteen hospitals provided 252 usable images. The mean compliance with the technical recommendations was 79%. Inadequate PI-QUALv2 scores were given to 30.9% and 50.6% of the mpMRI and bpMRI examinations, respectively. Multiple factors with a possible relationship to image quality were identified. Conclusion: In the Netherlands, the average compliance with the PI-RADSv2.1 technical recommendations is high. Prostate MRI image quality was inadequate in 30–50% of the provided examinations. Many factors not covered by the PI-RADSv2.1 technical recommendations can influence image quality. Improvement of prostate MRI image quality is needed.
Critical relevance statement: It is essential to improve the image quality of prostate MRIs, which can be achieved by addressing factors not covered by the PI-RADSv2.1 technical recommendations. Key Points: • Prostate MRI image quality influences the diagnostic accuracy of image-based decisions. • Thirty to fifty percent of Dutch prostate MRI examinations were of inadequate image quality. • We identified multiple factors with a possible influence on image quality.
Artificial intelligence and radiologists in prostate cancer detection on MRI (PI-CAI): an international, paired, non-inferiority, confirmatory study
Artificial intelligence (AI) systems can potentially aid the diagnostic pathway of prostate cancer by alleviating the increasing workload, preventing overdiagnosis, and reducing the dependence on experienced radiologists. We aimed to investigate the performance of AI systems at detecting clinically significant prostate cancer on MRI in comparison with radiologists using the Prostate Imaging Reporting and Data System version 2.1 (PI-RADS 2.1) and with the standard of care in multidisciplinary routine practice at scale. In this international, paired, non-inferiority, confirmatory study, we trained and externally validated an AI system (developed within an international consortium) for detecting Gleason grade group 2 or greater cancers using a retrospective cohort of 10,207 MRI examinations from 9129 patients. Of these examinations, 9207 cases from three centres (11 sites) in the Netherlands were used for training and tuning, and 1000 cases from four centres (12 sites) in the Netherlands and Norway were used for testing. In parallel, we facilitated a multireader, multicase observer study with 62 radiologists (45 centres in 20 countries; median 7 [IQR 5–10] years of experience in reading prostate MRI) using PI-RADS 2.1 on 400 paired MRI examinations from the testing cohort. Primary endpoints were the sensitivity, specificity, and area under the receiver operating characteristic curve (AUROC) of the AI system in comparison with those of all readers using PI-RADS 2.1 and with those of the historical radiology readings made during multidisciplinary routine practice (i.e., the standard of care, with the aid of patient history and peer consultation). Histopathology and at least 3 years (median 5 [IQR 4–6] years) of follow-up were used to establish the reference standard.
The statistical analysis plan was prespecified with a primary hypothesis of non-inferiority (with a margin of 0.05) and a secondary hypothesis of superiority of the AI system, to be tested if non-inferiority was confirmed. This study was registered at ClinicalTrials.gov, NCT05489341. Of the 10,207 examinations included from Jan 1, 2012, through Dec 31, 2021, 2440 cases had histologically confirmed Gleason grade group 2 or greater prostate cancer. In the subset of 400 testing cases in which the AI system was compared with the radiologists participating in the reader study, the AI system showed a non-inferior and statistically superior AUROC of 0.91 (95% CI 0.87–0.94; p<0.0001), compared with the pool of 62 radiologists with an AUROC of 0.86 (0.83–0.89); the lower boundary of the two-sided 95% Wald CI for the difference in AUROC was 0.02. At the mean PI-RADS 3 or greater operating point of all readers, the AI system detected 6.8% more cases with Gleason grade group 2 or greater cancers at the same specificity (57.7%, 95% CI 51.6–63.3), or produced 50.4% fewer false-positive results and 20.0% fewer cases with Gleason grade group 1 cancers at the same sensitivity (89.4%, 95% CI 85.3–92.9). In all 1000 testing cases in which the AI system was compared with the radiology readings made during multidisciplinary practice, non-inferiority was confirmed: the AI system showed marginally lower specificity (68.9% [95% CI 65.3–72.4] vs 69.0% [65.5–72.5]) at the same sensitivity (96.1%, 94.0–98.2) as the PI-RADS 3 or greater operating point. The lower boundary of the two-sided 95% Wald CI for the difference in specificity (−0.04) was greater than the non-inferiority margin (−0.05), and the p value was below the significance threshold (p<0.001). An AI system was superior to radiologists using PI-RADS 2.1, on average, at detecting clinically significant prostate cancer, and comparable to the standard of care.
Such a system shows the potential to be a supportive tool within a primary diagnostic setting, with several associated benefits for patients and radiologists. Prospective validation is needed to test clinical applicability of this system. Health~Holland and EU Horizon 2020.
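The non-inferiority criterion used in this study reduces to comparing the lower bound of a two-sided 95% Wald confidence interval against the prespecified margin. The sketch below illustrates that check with an unpaired Wald interval for a difference in proportions; the study itself used a paired design, so the variance formula and any denominators here are simplifying assumptions:

```python
import math

def wald_ci_diff(p1, n1, p2, n2, z=1.96):
    """Two-sided 95% Wald CI for a difference in proportions p1 - p2
    (unpaired form; the study's paired analysis would differ slightly)."""
    diff = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return diff - z * se, diff + z * se

# Non-inferiority is met when the CI lower bound exceeds the margin.
margin = -0.05
reported_lower_bound = -0.04  # reported for the specificity difference
print(reported_lower_bound > margin)  # True -> non-inferiority met
```

With the reported lower bound of −0.04 against a margin of −0.05, the worst plausible specificity loss is smaller than the largest loss the protocol deemed acceptable, which is exactly what "comparable to the standard of care" means here.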
How does artificial intelligence in radiology improve efficiency and health outcomes?
Since the introduction of artificial intelligence (AI) in radiology, the promise has been that it will improve health care and reduce costs. Has AI been able to fulfill that promise? We describe six clinical objectives that can be supported by AI: a more efficient workflow, shortened reading time, a reduction of dose and contrast agents, earlier detection of disease, improved diagnostic accuracy and more personalized diagnostics. We provide examples of use cases including the available scientific evidence for its impact based on a hierarchical model of efficacy. We conclude that the market is still maturing and little is known about the contribution of AI to clinical practice. More real-world monitoring of AI in clinical practice is expected to aid in determining the value of AI and making informed decisions on development, procurement and reimbursement.