52 results for "Codella, Noel"
A reinforcement learning model for AI-based decision support in skin cancer
We investigated whether human preferences hold the potential to improve diagnostic artificial intelligence (AI)-based decision support, using skin cancer diagnosis as a use case. Using reinforcement learning, we applied nonuniform rewards and penalties derived from expert-generated tables that balance the benefits and harms of various diagnostic errors. Compared with supervised learning, the reinforcement learning model improved the sensitivity for melanoma from 61.4% to 79.5% (95% confidence interval (CI): 73.5–85.6%) and for basal cell carcinoma from 79.4% to 87.1% (95% CI: 80.3–93.9%). AI overconfidence was also reduced while accuracy was maintained. Reinforcement learning increased the rate of correct diagnoses made by dermatologists by 12.0% (95% CI: 8.8–15.1%) and improved the rate of optimal management decisions from 57.4% to 65.3% (95% CI: 61.7–68.9%). We further demonstrated that the reward-adjusted reinforcement learning model and a threshold-based model outperformed naïve supervised learning in various clinical scenarios. Our findings suggest the potential for incorporating human preferences into image-based diagnostic algorithms. A reinforcement learning model developed to adapt artificial intelligence (AI) predictions to human preferences showed better sensitivity for skin cancer diagnoses and improved management decisions compared with a supervised learning model.
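As a loose illustration of how an expert reward/penalty table can reshape a classifier's decisions, the sketch below applies a simple expected-utility rule to a model's class probabilities. It is not the study's reinforcement learning procedure, and the class names and reward values are invented for the example.

```python
import numpy as np

# Hypothetical diagnostic classes (not the study's exact label set).
CLASSES = ["melanoma", "basal_cell_carcinoma", "nevus"]

# Hypothetical expert reward/penalty table R[true, predicted]:
# missing a melanoma is penalized far more heavily than a false alarm.
REWARD = np.array([
    [10.0, -2.0, -20.0],   # true melanoma
    [-1.0,  8.0,  -8.0],   # true basal cell carcinoma
    [-3.0, -2.0,   5.0],   # true nevus
])

def reward_adjusted_decision(class_probs: np.ndarray) -> str:
    """Choose the label that maximizes expected reward.

    class_probs: model posterior over CLASSES, shape (3,).
    Expected reward of predicting class j is sum_i p(i) * REWARD[i, j].
    """
    expected = class_probs @ REWARD
    return CLASSES[int(np.argmax(expected))]

# Example: a lesion the classifier thinks is probably benign,
# but with a non-trivial melanoma probability.
probs = np.array([0.30, 0.10, 0.60])
print(reward_adjusted_decision(probs))  # the asymmetric penalties favor "melanoma" here
```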
BCN20000: Dermoscopic Lesions in the Wild
Advancements in dermatological artificial intelligence research require high-quality, comprehensive datasets that mirror real-world clinical scenarios. We introduce a collection of 18,946 dermoscopic images spanning 2010 to 2016, collated at the Hospital Clínic in Barcelona, Spain. The BCN20000 dataset aims to address the problem of unconstrained classification of dermoscopic images of skin cancer, including lesions in hard-to-diagnose locations such as the nails and mucosa, large lesions that do not fit in the aperture of the dermoscopy device, and hypo-pigmented lesions. Our dataset covers eight key diagnostic categories in dermoscopy, providing a diverse range of lesions for artificial intelligence model training. Furthermore, a ninth out-of-distribution (OOD) class is present in the test set, comprising lesions that could not be distinctly classified as any of the others. By providing a comprehensive collection of varied images, BCN20000 helps bridge the gap between the training data for machine learning models and the day-to-day practice of medical practitioners. Additionally, we present a set of baseline classifiers based on state-of-the-art neural networks, which can be extended by other researchers for further experimentation.
Human–computer collaboration for skin cancer recognition
The rapid increase in telemedicine coupled with recent advances in diagnostic artificial intelligence (AI) create the imperative to consider the opportunities and risks of inserting AI-based support into new paradigms of care. Here we build on recent achievements in the accuracy of image-based AI for skin cancer diagnosis to address the effects of varied representations of AI-based support across different levels of clinical expertise and multiple clinical workflows. We find that good quality AI-based support of clinical decision-making improves diagnostic accuracy over that of either AI or physicians alone, and that the least experienced clinicians gain the most from AI-based support. We further find that AI-based multiclass probabilities outperformed content-based image retrieval (CBIR) representations of AI in the mobile technology environment, and AI-based support had utility in simulations of second opinions and of telemedicine triage. In addition to demonstrating the potential benefits associated with good quality AI in the hands of non-expert clinicians, we find that faulty AI can mislead the entire spectrum of clinicians, including experts. Lastly, we show that insights derived from AI class-activation maps can inform improvements in human diagnosis. Together, our approach and findings offer a framework for future studies across the spectrum of image-based diagnostics to improve human–computer collaboration in clinical practice. A systematic evaluation of the value of AI-based decision support in skin tumor diagnosis demonstrates the superiority of human–computer collaboration over each individual approach and supports the potential of automated approaches in diagnostic medicine.
A patient-centric dataset of images and metadata for identifying melanomas using clinical context
Prior skin image datasets have not addressed patient-level information obtained from multiple skin lesions from the same patient. Though artificial intelligence classification algorithms have achieved expert-level performance in controlled studies examining single images, in practice dermatologists base their judgment holistically on multiple lesions from the same patient. The 2020 SIIM-ISIC Melanoma Classification challenge dataset described herein was constructed to address this discrepancy between prior challenges and clinical practice, providing for each image in the dataset an identifier that allows lesions from the same patient to be mapped to one another. This patient-level contextual information is frequently used by clinicians to diagnose melanoma and is especially useful in ruling out false positives in patients with many atypical nevi. The dataset represents 2,056 patients (20.8% with at least one melanoma, 79.2% with zero melanomas) from three continents, with an average of 16 lesions per patient, consisting of 33,126 dermoscopic images and 584 (1.8%) histopathologically confirmed melanomas compared with benign melanoma mimickers. Measurement(s): melanoma, skin lesion. Technology Type(s): dermoscopy, digital curation. Factor Type(s): approximate age, sex, anatomic site. Sample Characteristic (Organism): Homo sapiens. Machine-accessible metadata file describing the reported data: https://doi.org/10.6084/m9.figshare.13070345
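To show how the patient identifier described above can be used, here is a minimal pandas sketch that groups lesion images by patient. The column names (image_name, patient_id, target) follow the 2020 SIIM-ISIC challenge metadata CSV but should be treated as assumptions and adjusted to the actual file.

```python
import pandas as pd

# Assumed metadata columns: image_name, patient_id, target (1 = melanoma).
meta = pd.read_csv("train.csv")

# Group every lesion image by its patient identifier so that
# patient-level context (e.g. "ugly duckling" comparisons) is available.
per_patient = (
    meta.groupby("patient_id")
        .agg(n_lesions=("image_name", "count"),
             n_melanomas=("target", "sum"))
        .reset_index()
)

print(per_patient["n_lesions"].mean())          # average lesions per patient (~16 reported)
print((per_patient["n_melanomas"] > 0).mean())  # fraction of patients with >=1 melanoma (~20.8% reported)
```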
Left Ventricle: Fully Automated Segmentation Based on Spatiotemporal Continuity and Myocardium Information in Cine Cardiac Magnetic Resonance Imaging (LV-FAST)
Cardiovascular magnetic resonance (CMR) quantification of left ventricular (LV) chamber volumes typically requires manual definition of the basal-most LV slice, which adds processing time and user dependence. This study developed a fully automated LV segmentation method based on the spatiotemporal continuity of the LV (LV-FAST). An iteratively decreasing-threshold region-growing approach was applied first from the midventricle to the apex, until the LV area and shape became discontinuous, and then from the midventricle to the base, until less than 50% of the myocardium circumference was observable. Region growth was constrained by LV spatiotemporal continuity to improve the robustness of apical and basal segmentations. LV-FAST was compared with manual tracing on cardiac cine MRI data from 45 consecutive patients. Of the 45 patients, LV-FAST and manual selection identified the same apical slices at end-diastole (ED) and end-systole (ES) and the same basal slices at ED and ES in 38, 38, 38, and 41 cases, respectively, and their measurements agreed within -1.6±8.7 mL, -1.4±7.8 mL, and 1.0±5.8% for end-diastolic volume (EDV), end-systolic volume (ESV), and ejection fraction (EF), respectively. LV-FAST quantified the LV volume-time course within 3 seconds on a standard desktop computer, providing fast and accurate processing of cine volumetric cardiac MRI data and enabling quantification of LV filling over the cardiac cycle.
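A highly simplified sketch of the slice-by-slice idea follows; it is not the published LV-FAST implementation. It grows a blood-pool region from a seed on the midventricular slice and propagates toward the apex, stopping when the segmented area changes too abruptly between adjacent slices. The threshold, the continuity criterion, and the seed choice are placeholders, and the iterative threshold decrease and basal propagation described in the abstract are omitted.

```python
import numpy as np
from scipy import ndimage

def grow_region(slice_img: np.ndarray, seed: tuple[int, int], threshold: float) -> np.ndarray:
    """Return the connected component of pixels above `threshold` that contains `seed`."""
    mask = slice_img >= threshold
    if not mask[seed]:
        return np.zeros_like(mask)
    labels, _ = ndimage.label(mask)
    return labels == labels[seed]

def segment_toward_apex(volume: np.ndarray, mid_index: int, seed: tuple[int, int],
                        threshold: float, max_area_jump: float = 0.5) -> dict[int, np.ndarray]:
    """Propagate region growing from the midventricular slice toward the apex.

    Stops when the LV area changes by more than `max_area_jump` (fractional)
    between adjacent slices - a crude stand-in for the spatiotemporal
    continuity constraint described in the abstract.
    """
    masks = {}
    prev_area = None
    for z in range(mid_index, -1, -1):            # midventricle -> apex
        mask = grow_region(volume[z], seed, threshold)
        area = int(mask.sum())
        if area == 0:
            break
        if prev_area is not None and abs(area - prev_area) / prev_area > max_area_jump:
            break                                 # discontinuity: assume we have left the LV
        masks[z] = mask
        prev_area = area
        # re-seed at the centroid of the current mask for the next slice
        cy, cx = ndimage.center_of_mass(mask)
        seed = (int(round(cy)), int(round(cx)))
    return masks
```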
Automated triage of cancer-suspicious skin lesions with 3D total-body photography
Careful selection of skin lesions that require expert evaluation is important for early skin cancer detection. Yet challenges include the lack of cost-effective asymptomatic screening, geographical inequality in access to specialty dermatology, and long wait times due to exam inefficiencies and staff shortages. Machine learning models trained on high-quality dermoscopy photos have been shown to aid clinicians in diagnosing individual, hand-selected skin lesions. In contrast, models designed for triage have been less explored due to limited datasets representing a broader net of skin lesions. 3D total-body photography is an emerging technology used in dermatology to document all apparent skin lesions on a patient for skin cancer monitoring. A multi-institutional, global project collected over 900,000 lesion crops from 3D total-body photos for an online grand challenge in machine learning. Here we summarize the results of the competition, ‘ISIC 2024 – Skin Cancer Detection with 3D-TBP’, demonstrate the superiority of a model that utilized intra-patient context over a previously published approach, and explore the clinical plausibility of automated atypical skin lesion triage through an ablation study.
Machine learning derived segmentation of phase velocity encoded cardiovascular magnetic resonance for fully automated aortic flow quantification
Background: Phase contrast (PC) cardiovascular magnetic resonance (CMR) is widely employed for flow quantification, but analysis typically requires time-consuming manual segmentation that can require human correction. Advances in machine learning have markedly improved automated processing but have yet to be applied to PC-CMR. This study tested a novel machine learning model for fully automated analysis of PC-CMR aortic flow. Methods: A machine learning model was designed to track aortic valve borders based on neural network approaches. The model was trained in a derivation cohort encompassing 150 patients who underwent clinical PC-CMR and then compared with manual and commercially available automated segmentation in a prospective validation cohort. Further validation testing was performed in an external cohort acquired from a different site/CMR vendor. Results: Among 190 coronary artery disease patients prospectively undergoing CMR on commercial scanners (84% 1.5T, 16% 3T), machine learning segmentation was uniformly successful and required no human intervention: segmentation time was <0.01 min/case (1.2 min for the entire dataset), whereas manual segmentation required 3.96 ± 0.36 min/case (12.5 h for the entire dataset). Correlations between machine learning and manual segmentation-derived flow approached unity (r = 0.99, p < 0.001). Machine learning yielded smaller absolute differences with manual segmentation than did commercial automation (1.85 ± 1.80 vs. 3.33 ± 3.18 mL, p < 0.01): nearly all (98%) cases differed by ≤5 mL between machine learning and manual methods. Among patients without advanced mitral regurgitation, machine learning correlated well (r = 0.63, p < 0.001) and yielded small differences with cine-CMR stroke volume (∆ 1.3 ± 17.7 mL, p = 0.36). Among advanced mitral regurgitation patients, machine learning yielded lower stroke volume than did volumetric cine-CMR (∆ 12.6 ± 20.9 mL, p = 0.005), further supporting the validity of this method. In the external validation cohort (n = 80) acquired using a different CMR vendor, the algorithm yielded equivalently small differences (∆ 1.39 ± 1.77 mL, p = 0.4) and high correlations (r = 0.99, p < 0.001) with manual segmentation, including similar results in 20 patients with bicuspid or stenotic aortic valve pathology (∆ 1.71 ± 2.25 mL, p = 0.25). Conclusion: Fully automated machine learning PC-CMR segmentation performs robustly for aortic flow quantification, yielding rapid segmentation, small differences with manual segmentation, and identification of differential forward/left ventricular volumetric stroke volume in the context of concomitant mitral regurgitation. The findings support the use of machine learning for analysis of large-scale CMR datasets.
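Once a valve contour is available per phase (whether from manual or automated segmentation), aortic flow follows from integrating the through-plane velocity map over the region of interest. The sketch below shows that standard calculation under assumed units (velocity in cm/s, pixel spacing in mm, temporal resolution in ms); it is an illustration, not the study's code.

```python
import numpy as np

def aortic_flow_curve(velocity_maps: np.ndarray, masks: np.ndarray,
                      pixel_spacing_mm: tuple[float, float]) -> np.ndarray:
    """Per-phase flow in mL/s from phase-contrast velocity maps.

    velocity_maps: (phases, H, W) through-plane velocity in cm/s.
    masks:         (phases, H, W) boolean aortic valve ROI per phase.
    """
    pixel_area_cm2 = (pixel_spacing_mm[0] / 10.0) * (pixel_spacing_mm[1] / 10.0)
    # flow per phase = sum(velocity * pixel area) -> cm^3/s == mL/s
    return (velocity_maps * masks).sum(axis=(1, 2)) * pixel_area_cm2

def forward_stroke_volume(flow_ml_per_s: np.ndarray, temporal_resolution_ms: float) -> float:
    """Integrate positive (forward) flow over the cardiac cycle, in mL."""
    dt_s = temporal_resolution_ms / 1000.0
    return float(np.clip(flow_ml_per_s, 0.0, None).sum() * dt_s)
```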
Impact of diastolic dysfunction severity on global left ventricular volumetric filling - assessment by automated segmentation of routine cine cardiovascular magnetic resonance
Objectives: To examine relationships between the severity of echocardiography (echo)-evidenced diastolic dysfunction (DD) and volumetric filling by automated processing of routine cine cardiovascular magnetic resonance (CMR). Background: Cine-CMR provides high-resolution assessment of left ventricular (LV) chamber volumes. Automated segmentation (LV-METRIC) yields LV filling curves by segmenting all short-axis images across all temporal phases. This study used cine-CMR to assess filling changes that occur with progressive DD. Methods: 115 post-MI patients underwent CMR and echo within 1 day. LV-METRIC yielded multiple diastolic indices: E:A ratio, peak filling rate (PFR), time to peak filling rate (TPFR), and diastolic volume recovery (DVR80, the proportion of diastole required to recover 80% of stroke volume). Echo was the reference for DD. Results: LV-METRIC successfully generated LV filling curves in all patients. CMR indices were reproducible (≤1% inter-reader differences) and required minimal processing time (175 ± 34 images/exam, 2:09 ± 0:51 minutes). The CMR E:A ratio decreased with grade 1 and increased with grades 2-3 DD. Diastolic filling intervals, measured by DVR80 or TPFR, were prolonged with grade 1 and shortened with grade 3 DD, paralleling echo deceleration time (p < 0.001). PFR by CMR increased with DD grade, similar to E/e' (p < 0.001). Prolonged DVR80 identified 71% of patients with echo-evidenced grade 1 but no patients with grade 3 DD, and stroke-volume-adjusted PFR identified 67% with grade 3 but none with grade 1 DD (matched specificity = 83%). The combination of DVR80 and PFR identified 53% of patients with grade 2 DD. Prolonged DVR80 was associated with grade 1 DD (OR 2.79, CI 1.65-4.05, p = 0.001), with a similar trend for grade 2 (OR 1.35, CI 0.98-1.74, p = 0.06), whereas high PFR was associated with grade 3 DD (OR 1.14, CI 1.02-1.25, p = 0.02). Conclusions: Automated cine-CMR segmentation can discern LV filling changes that occur with increasing severity of echo-evidenced DD. Impaired relaxation is associated with prolonged filling intervals, whereas restrictive filling is characterized by increased filling rates.
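The filling indices named above can be read off an LV volume-time curve. The sketch below computes PFR, TPFR, and DVR80 from a sampled volume curve under simple assumptions (end-systole at the minimum-volume frame, uniform frame spacing, diastole running from end-systole to the end of the cycle); it illustrates the definitions and is not the LV-METRIC software.

```python
import numpy as np

def diastolic_indices(volumes_ml: np.ndarray, frame_interval_s: float) -> dict:
    """Compute PFR, TPFR, and DVR80 from an LV volume-time curve.

    volumes_ml: LV cavity volume per cardiac phase (one full cycle), in mL.
    Assumes end-systole is the minimum-volume frame and diastole runs from
    there to the end of the cycle.
    """
    es = int(np.argmin(volumes_ml))               # end-systolic frame
    diastole = volumes_ml[es:]                    # filling portion of the curve
    dvdt = np.diff(diastole) / frame_interval_s   # filling rate, mL/s

    pfr = float(dvdt.max())                                      # peak filling rate
    tpfr = float((int(np.argmax(dvdt)) + 1) * frame_interval_s)  # time from ES to PFR

    stroke_volume = float(volumes_ml.max() - volumes_ml.min())
    recovered = diastole - diastole[0]
    # first frame at which 80% of stroke volume has been recovered
    idx80 = int(np.argmax(recovered >= 0.8 * stroke_volume))
    dvr80 = idx80 / max(len(diastole) - 1, 1)     # fraction of diastole elapsed
    return {"PFR_ml_per_s": pfr, "TPFR_s": tpfr, "DVR80": dvr80}
```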