32 results for "Halabi, Safwan"
Deep learning for chest radiograph diagnosis: A retrospective comparison of the CheXNeXt algorithm to practicing radiologists
Chest radiograph interpretation is critical for the detection of thoracic diseases, including tuberculosis and lung cancer, which affect millions of people worldwide each year. This time-consuming task typically requires expert radiologists to read the images, leading to fatigue-based diagnostic error and lack of diagnostic expertise in areas of the world where radiologists are not available. Recently, deep learning approaches have been able to achieve expert-level performance in medical image interpretation tasks, powered by large network architectures and fueled by the emergence of large labeled datasets. The purpose of this study is to investigate the performance of a deep learning algorithm on the detection of pathologies in chest radiographs compared with practicing radiologists. We developed CheXNeXt, a convolutional neural network to concurrently detect the presence of 14 different pathologies, including pneumonia, pleural effusion, pulmonary masses, and nodules in frontal-view chest radiographs. CheXNeXt was trained and internally validated on the ChestX-ray8 dataset, with a held-out validation set consisting of 420 images, sampled to contain at least 50 cases of each of the original pathology labels. On this validation set, the majority vote of a panel of 3 board-certified cardiothoracic specialist radiologists served as reference standard. We compared CheXNeXt's discriminative performance on the validation set to the performance of 9 radiologists using the area under the receiver operating characteristic curve (AUC). The radiologists included 6 board-certified radiologists (average experience 12 years, range 4-28 years) and 3 senior radiology residents, from 3 academic institutions. We found that CheXNeXt achieved radiologist-level performance on 11 pathologies and did not achieve radiologist-level performance on 3 pathologies. The radiologists achieved statistically significantly higher AUC performance on cardiomegaly, emphysema, and hiatal hernia, with AUCs of 0.888 (95% confidence interval [CI] 0.863-0.910), 0.911 (95% CI 0.866-0.947), and 0.985 (95% CI 0.974-0.991), respectively, whereas CheXNeXt's AUCs were 0.831 (95% CI 0.790-0.870), 0.704 (95% CI 0.567-0.833), and 0.851 (95% CI 0.785-0.909), respectively. CheXNeXt performed better than radiologists in detecting atelectasis, with an AUC of 0.862 (95% CI 0.825-0.895), statistically significantly higher than radiologists' AUC of 0.808 (95% CI 0.777-0.838); there were no statistically significant differences in AUCs for the other 10 pathologies. The average time to interpret the 420 images in the validation set was substantially longer for the radiologists (240 minutes) than for CheXNeXt (1.5 minutes). The main limitations of our study are that neither CheXNeXt nor the radiologists were permitted to use patient history or review prior examinations and that evaluation was limited to a dataset from a single institution. In this study, we developed and validated a deep learning algorithm that classified clinically important abnormalities in chest radiographs at a performance level comparable to practicing radiologists. Once tested prospectively in clinical settings, the algorithm could have the potential to expand patient access to chest radiograph diagnostics.
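As a rough illustration of the evaluation design described above, the Python sketch below compares two sets of scores against majority-vote reference labels using AUC with percentile-bootstrap 95% confidence intervals. All data here are simulated placeholders and the bootstrap_auc_ci helper is a hypothetical name; this is not the authors' evaluation code.

```python
# Minimal sketch: AUC comparison with bootstrap CIs, assuming simulated data.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 420                                    # size of the held-out validation set
y_true = rng.integers(0, 2, n)             # majority-vote reference labels (placeholder)
model_scores = np.clip(y_true * 0.6 + rng.normal(0.4, 0.25, n), 0, 1)
rad_scores = np.clip(y_true * 0.5 + rng.normal(0.4, 0.30, n), 0, 1)

def bootstrap_auc_ci(y, scores, n_boot=2000, alpha=0.05):
    """Percentile-bootstrap confidence interval for the AUC (hypothetical helper)."""
    aucs = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y), len(y))
        if len(np.unique(y[idx])) < 2:     # resample must contain both classes
            continue
        aucs.append(roc_auc_score(y[idx], scores[idx]))
    lo, hi = np.percentile(aucs, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return roc_auc_score(y, scores), lo, hi

for name, s in [("model", model_scores), ("radiologists", rad_scores)]:
    auc, lo, hi = bootstrap_auc_ci(y_true, s)
    print(f"{name}: AUC {auc:.3f} (95% CI {lo:.3f}-{hi:.3f})")
```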
Decoding COVID-19 pneumonia: comparison of deep learning and radiomics CT image signatures
Purpose: High-dimensional image features that underlie COVID-19 pneumonia remain opaque. We aim to compare feature engineering and deep learning methods to gain insight into the image features that drive CT-based prediction of COVID-19 pneumonia, and to uncover CT image features significant for COVID-19 pneumonia from deep learning and radiomics frameworks.
Methods: A total of 266 patients with COVID-19, or with other viral pneumonia presenting clinical symptoms and CT signs similar to those of COVID-19 during the outbreak, were retrospectively collected from three hospitals in China and the USA. All pneumonia lesions on CT images were manually delineated by four radiologists. One hundred eighty-four patients (n = 93 COVID-19 positive; n = 91 COVID-19 negative; 24,216 pneumonia lesions from 12,001 CT image slices) from two hospitals in China served as the discovery cohort for model development. Thirty-two patients (17 COVID-19 positive, 15 COVID-19 negative; 7,883 pneumonia lesions from 3,799 CT image slices) from a US hospital served as the external validation cohort. A bi-directional adversarial network-based framework and the PyRadiomics package were used to extract deep learning and radiomics features, respectively. Linear and Lasso classifiers were used to develop models predictive of COVID-19 versus non-COVID-19 viral pneumonia.
Results: 120-dimensional deep learning image features and 120-dimensional radiomics features were extracted. Linear and Lasso classifiers identified 32 high-dimensional deep learning image features and 4 radiomics features associated with COVID-19 pneumonia diagnosis (P < 0.0001). Both models achieved sensitivity > 73% and specificity > 75% on the external validation cohort, with slightly superior performance for the radiomics Lasso classifier. Human expert diagnostic performance improved (by 16.5% in sensitivity and 11.6% in specificity) when using a combined deep learning-radiomics model.
Conclusions: We uncover specific deep learning and radiomics features that add insight into the interpretability of machine learning algorithms, and we compare deep learning and radiomics models for COVID-19 pneumonia that might serve to augment human diagnostic performance.
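The radiomics arm described above pairs PyRadiomics feature extraction with a Lasso-style classifier. The sketch below shows one plausible way to wire that up; the file paths, the lesion_features helper, and the choice of an L1-penalized logistic regression to play the role of the "Lasso classifier" are assumptions, not the authors' pipeline (the deep learning arm, a bi-directional adversarial network, is not sketched here).

```python
# Hypothetical sketch of the radiomics arm: extract handcrafted features
# with PyRadiomics from a CT image and its lesion mask, then fit an
# L1-regularized classifier. Paths and cohort are placeholders.
import numpy as np
from radiomics import featureextractor     # pip install pyradiomics
from sklearn.linear_model import LogisticRegression

extractor = featureextractor.RadiomicsFeatureExtractor()

def lesion_features(image_path, mask_path):
    """Return a numeric feature vector for one delineated lesion (assumed helper)."""
    result = extractor.execute(image_path, mask_path)
    # Keep only the numeric radiomics values, skipping diagnostic metadata keys.
    return np.array([v for k, v in result.items()
                     if k.startswith("original_")], dtype=float)

# X: one feature row per lesion; y: 1 = COVID-19, 0 = other viral pneumonia.
# X = np.stack([lesion_features(img, msk) for img, msk in cohort])
# lasso_clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
# lasso_clf.fit(X, y)
# Nonzero coefficients then indicate the selected radiomics features.
```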
Deep-learning-assisted diagnosis for knee magnetic resonance imaging: Development and retrospective validation of MRNet
Magnetic resonance imaging (MRI) of the knee is the preferred method for diagnosing knee injuries. However, interpretation of knee MRI is time-intensive and subject to diagnostic error and variability. An automated system for interpreting knee MRI could prioritize high-risk patients and assist clinicians in making diagnoses. Deep learning methods, in being able to automatically learn layers of features, are well suited for modeling the complex relationships between medical images and their interpretations. In this study we developed a deep learning model for detecting general abnormalities and specific diagnoses (anterior cruciate ligament [ACL] tears and meniscal tears) on knee MRI exams. We then measured the effect of providing the model's predictions to clinical experts during interpretation. Our dataset consisted of 1,370 knee MRI exams performed at Stanford University Medical Center between January 1, 2001, and December 31, 2012 (mean age 38.0 years; 569 [41.5%] female patients). The majority vote of 3 musculoskeletal radiologists established reference standard labels on an internal validation set of 120 exams. We developed MRNet, a convolutional neural network for classifying MRI series and combined predictions from 3 series per exam using logistic regression. In detecting abnormalities, ACL tears, and meniscal tears, this model achieved area under the receiver operating characteristic curve (AUC) values of 0.937 (95% CI 0.895, 0.980), 0.965 (95% CI 0.938, 0.993), and 0.847 (95% CI 0.780, 0.914), respectively, on the internal validation set. We also obtained a public dataset of 917 exams with sagittal T1-weighted series and labels for ACL injury from Clinical Hospital Centre Rijeka, Croatia. On the external validation set of 183 exams, the MRNet trained on Stanford sagittal T2-weighted series achieved an AUC of 0.824 (95% CI 0.757, 0.892) in the detection of ACL injuries with no additional training, while an MRNet trained on the rest of the external data achieved an AUC of 0.911 (95% CI 0.864, 0.958). We additionally measured the specificity, sensitivity, and accuracy of 9 clinical experts (7 board-certified general radiologists and 2 orthopedic surgeons) on the internal validation set both with and without model assistance. Using a 2-sided Pearson's chi-squared test with adjustment for multiple comparisons, we found no significant differences between the performance of the model and that of unassisted general radiologists in detecting abnormalities. General radiologists achieved significantly higher sensitivity in detecting ACL tears (p-value = 0.002; q-value = 0.019) and significantly higher specificity in detecting meniscal tears (p-value = 0.003; q-value = 0.019). Using a 1-tailed t test on the change in performance metrics, we found that providing model predictions significantly increased clinical experts' specificity in identifying ACL tears (p-value < 0.001; q-value = 0.006). The primary limitations of our study include lack of surgical ground truth and the small size of the panel of clinical experts. Our deep learning model can rapidly generate accurate clinical pathology classifications of knee MRI exams from both internal and external datasets. Moreover, our results support the assertion that deep learning models can improve the performance of clinical experts during medical imaging interpretation. Further research is needed to validate the model prospectively and to determine its utility in the clinical setting.
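The exam-level combination step, in which logistic regression stacks the predictions of three per-series networks, can be illustrated with a small Python sketch. The per-series probabilities below are simulated placeholders rather than MRNet outputs.

```python
# Minimal sketch: combine three per-series abnormality probabilities into
# one exam-level prediction with logistic regression, on simulated data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_exams = 120
y = rng.integers(0, 2, n_exams)            # exam-level labels (placeholder)

# Simulated probabilities from three series-level CNNs (e.g. sagittal,
# coronal, axial); each column stands in for one network's output.
series_probs = np.clip(y[:, None] * 0.5 + rng.normal(0.35, 0.2, (n_exams, 3)), 0, 1)

combiner = LogisticRegression()
combiner.fit(series_probs, y)              # learn per-series weights
exam_probs = combiner.predict_proba(series_probs)[:, 1]
print("per-series weights:", combiner.coef_.round(2))
```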
Attention-guided deep learning for gestational age prediction using fetal brain MRI
Magnetic resonance imaging offers unrivaled visualization of the fetal brain, forming the basis for establishing age-specific morphologic milestones. However, gauging age-appropriate neural development remains a difficult task due to the constantly changing appearance of the fetal brain, variable image quality, and frequent motion artifacts. Here we present an end-to-end, attention-guided deep learning model that predicts gestational age with an R² score of 0.945, a mean absolute error of 6.7 days, and a concordance correlation coefficient of 0.970. The convolutional neural network was trained on a heterogeneous dataset of 741 developmentally normal fetal brain images ranging from 19 to 39 weeks in gestational age. We also demonstrate model performance and generalizability using independent datasets from four academic institutions across the U.S. and Turkey, with R² scores of 0.81–0.90 after minimal fine-tuning. The proposed regression algorithm provides an automated machine-enabled tool with the potential to better characterize in utero neurodevelopment and guide real-time gestational age estimation after the first trimester.
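The three regression metrics quoted above can be computed as follows. This is a minimal sketch on simulated data; concordance_cc is a hypothetical helper implementing Lin's concordance correlation coefficient.

```python
# Minimal sketch: R² score, mean absolute error, and Lin's concordance
# correlation coefficient (CCC) on simulated gestational-age predictions.
import numpy as np
from sklearn.metrics import r2_score, mean_absolute_error

rng = np.random.default_rng(0)
true_ga = rng.uniform(19 * 7, 39 * 7, 200)     # gestational age in days (placeholder)
pred_ga = true_ga + rng.normal(0, 7, 200)      # simulated model predictions

def concordance_cc(x, y):
    """Lin's CCC: agreement between two continuous measurements."""
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()
    return 2 * cov / (x.var() + y.var() + (mx - my) ** 2)

print(f"R2  = {r2_score(true_ga, pred_ga):.3f}")
print(f"MAE = {mean_absolute_error(true_ga, pred_ga):.1f} days")
print(f"CCC = {concordance_cc(true_ga, pred_ga):.3f}")
```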
Pediatric imaging: a core review
Prepare for success on the pediatric imaging component of the radiology Core Exam! Pediatric Imaging: A Core Review, 2nd Edition, by Drs. Steven L. Blumer, David M. Biko, and Safwan S. Halabi, is an up-to-date, practical review tool written specifically for the Core Exam. This helpful resource contains over 300 image-rich, multiple-choice questions with detailed explanations of right and wrong answers, revised content, and additional eBook questions to ensure you're ready for the Core Exam or recertification exam.
  • Features questions in all areas of pediatric radiology, with dozens of new questions
  • Features over 500 high-resolution images
  • Provides concise answers with explanations of each choice followed by relevant, up-to-date references
  • Follows the structure and content of what you'll encounter on the test, conveniently organized by topic
Human–machine partnership with artificial intelligence for chest radiograph diagnosis
Human-in-the-loop (HITL) AI may enable an ideal symbiosis of human experts and AI models, harnessing the advantages of both while at the same time overcoming their respective limitations. The purpose of this study was to investigate a novel collective intelligence technology designed to amplify the diagnostic accuracy of networked human groups by forming real-time systems modeled on biological swarms. Using small groups of radiologists, the swarm-based technology was applied to the diagnosis of pneumonia on chest radiographs and compared against human experts alone, as well as two state-of-the-art deep learning AI models. Our work demonstrates that both the swarm-based technology and the deep-learning technology achieved higher diagnostic accuracy than the human experts alone. Our work further demonstrates that, when used in combination, the swarm-based technology and deep-learning technology outperformed either method alone. The superior diagnostic accuracy of the combined HITL AI solution compared to radiologists and AI alone has broad implications for the surge in clinical AI deployment and for implementation strategies in future practice.
The requirements for performing artificial-intelligence-related research and model development
Artificial intelligence research in health care has undergone tremendous growth in the last several years thanks to the explosion of digital health care data and systems that can leverage large amounts of data to learn patterns that can be applied to clinical tasks. In addition, given broad acceleration in machine learning across industries like transportation, media and commerce, there has been a significant growth in demand for machine-learning practitioners such as engineers and data scientists, who have skill sets that can be applied to health care use cases but who simultaneously lack important health care domain expertise. The purpose of this paper is to discuss the requirements of building an artificial-intelligence research enterprise including the research team, technical software/hardware, and procurement and curation of health care data.
A Review of Core Concepts of Imaging Informatics
There are myriad systems and standards used in imaging informatics. Digital Imaging and Communications in Medicine (DICOM) is the standard for displaying, transferring, and storing medical images. Health Level Seven International (HL7) develops and maintains standards for exchanging, integrating, and sharing medical data. The picture archiving and communication system (PACS) serves as the health provider's primary tool for viewing and interpreting medical images. Medical imaging depends on the interoperability of several of these systems. Once an order is entered into the electronic medical record (EMR), several systems receive and share medical data, including the radiology information system (RIS) and the hospital information system (HIS). After an image is acquired, transformations may be applied to better focus on a specific area. The workflow from entering the order to receiving the report depends on many systems, so having disaster recovery and business continuity procedures in place is important should any issues arise. This article reviews these essential concepts of imaging informatics.
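As a minimal illustration of the DICOM standard discussed above, the sketch below reads a DICOM file with the pydicom library (pydicom is not mentioned in the article, and the file name is a placeholder).

```python
# Minimal sketch: parsing a DICOM file with pydicom, assuming a local file.
import pydicom

ds = pydicom.dcmread("example.dcm")         # parse a DICOM file (placeholder path)

# Standard DICOM attributes carry both metadata and pixel data.
print(ds.Modality)                          # e.g. "CR", "CT", "MR"
print(ds.StudyInstanceUID)                  # identifier shared across RIS/PACS systems
print(ds.PatientID)

pixels = ds.pixel_array                     # image matrix as a NumPy array
print(pixels.shape, pixels.dtype)
```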
Perceptions of US Medical Students on Artificial Intelligence in Medicine: Mixed Methods Survey Study
Given the rapidity with which artificial intelligence is gaining momentum in clinical medicine, current physician leaders have called for more incorporation of artificial intelligence topics into undergraduate medical education. The aim is to prepare future physicians to work effectively alongside artificial intelligence technology. However, the first step in curriculum development is to survey the needs of end users. There has not been a study to determine which media and which topics US medical students most prefer for learning about artificial intelligence in medicine. We aimed to survey US medical students on the need to incorporate artificial intelligence in undergraduate medical education and their preferred means to do so, to assist with future education initiatives. A mixed methods survey comprising both specific questions and a write-in response section was sent through Qualtrics to US medical students in May 2021. Likert scale questions were used to first assess various perceptions of artificial intelligence in medicine. Specific questions were posed regarding learning format and topics in artificial intelligence. We surveyed 390 US medical students with an average age of 26 (SD 3) years from 17 different medical programs (the estimated response rate was 3.5%). A majority (355/388, 91.5%) of respondents agreed that training in artificial intelligence concepts during medical school would be useful for their future. While 79.4% (308/388) were excited to use artificial intelligence technologies, 91.2% (353/387) either reported that their medical schools did not offer resources or were unsure whether they did. Short lectures (264/378, 69.8%), formal electives (180/378, 47.6%), and Q and A panels (167/378, 44.2%) were identified as preferred formats, while fundamental concepts of artificial intelligence (247/379, 65.2%), when to use artificial intelligence in medicine (227/379, 59.9%), and pros and cons of using artificial intelligence (224/379, 59.1%) were the most preferred topics for enhancing their training. The results of this study indicate that current US medical students recognize the importance of artificial intelligence in medicine and acknowledge that current formal education and resources to study artificial intelligence-related topics are limited in most US medical schools. Respondents also indicated that a hybrid formal/flexible format would be most appropriate for incorporating artificial intelligence as a topic in US medical schools. Based on these data, we conclude that there is a definitive knowledge gap in artificial intelligence education within current medical education in the US. Further, the results suggest there is a disparity in opinions on the specific format and topics to be introduced.