33 results for "Ball, Robyn L."
Impact of a deep learning assistant on the histopathologic classification of liver cancer
Artificial intelligence (AI) algorithms continue to rival human performance on a variety of clinical tasks, while their actual impact on human diagnosticians, when incorporated into clinical workflows, remains relatively unexplored. In this study, we developed a deep learning-based assistant to help pathologists differentiate between two subtypes of primary liver cancer, hepatocellular carcinoma and cholangiocarcinoma, on hematoxylin and eosin-stained whole-slide images (WSI), and evaluated its effect on the diagnostic performance of 11 pathologists with varying levels of expertise. Our model achieved accuracies of 0.885 on a validation set of 26 WSI, and 0.842 on an independent test set of 80 WSI. Although use of the assistant did not change the mean accuracy of the 11 pathologists (p = 0.184, OR = 1.281), it significantly improved the accuracy (p = 0.045, OR = 1.499) of a subset of nine pathologists who fell within well-defined experience levels (GI subspecialists, non-GI subspecialists, and trainees). In the assisted state, model accuracy significantly impacted the diagnostic decisions of all 11 pathologists. As expected, when the model’s prediction was correct, assistance significantly improved accuracy (p = 0.000, OR = 4.289), whereas when the model’s prediction was incorrect, assistance significantly decreased accuracy (p = 0.000, OR = 0.253), with both effects holding across all pathologist experience levels and case difficulty levels. Our results highlight the challenges of translating AI models into the clinical setting, and emphasize the importance of taking into account potential unintended negative consequences of model assistance when designing and testing medical AI-assistance tools.
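As an illustration of the assisted-versus-unassisted comparison reported above, the sketch below estimates an odds ratio for diagnostic accuracy from a 2x2 table of correct and incorrect reads. The counts are hypothetical, and Fisher's exact test is an assumption standing in for the study's actual (unspecified here) statistical model.

```python
# Illustrative sketch (not the study's analysis): odds ratio for accuracy
# with vs. without model assistance, from a 2x2 contingency table.
from scipy.stats import fisher_exact

# Hypothetical counts: rows = [assisted, unassisted], cols = [correct, incorrect]
table = [[620, 260],   # assisted reads
         [570, 310]]   # unassisted reads

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"OR = {odds_ratio:.3f}, p = {p_value:.3f}")
```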
PENet—a scalable deep-learning model for automated diagnosis of pulmonary embolism using volumetric CT imaging
Pulmonary embolism (PE) is a life-threatening clinical problem and computed tomography pulmonary angiography (CTPA) is the gold standard for diagnosis. Prompt diagnosis and immediate treatment are critical to avoid high morbidity and mortality rates, yet PE remains among the diagnoses most frequently missed or delayed. In this study, we developed a deep learning model, PENet, to automatically detect PE on volumetric CTPA scans as an end-to-end solution for this purpose. PENet is a 77-layer 3D convolutional neural network (CNN) pretrained on the Kinetics-600 dataset and fine-tuned on a retrospective CTPA dataset collected from a single academic institution. PENet's performance in detecting PE was evaluated on data from two different institutions: one a hold-out dataset from the same institution as the training data, and a second collected from an external institution to evaluate model generalizability to an unrelated population. PENet achieved an AUROC of 0.84 [0.82–0.87] for detecting PE on the held-out internal test set and 0.85 [0.81–0.88] on the external dataset. PENet also outperformed current state-of-the-art 3D CNN models. These results represent a successful application of an end-to-end 3D CNN model to the complex task of PE diagnosis without requiring computationally intensive and time-consuming preprocessing, and demonstrate sustained performance on data from an external institution. Our model could be applied as a triage tool to automatically identify clinically important PEs, allowing prioritization for diagnostic radiology interpretation and improved care pathways via more efficient diagnosis.
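The transfer-learning recipe described here (a 3D CNN pretrained on an action-recognition dataset, then fine-tuned for PE detection) can be sketched in PyTorch. The snippet below is a minimal illustration, not PENet itself: torchvision's 18-layer r3d_18 (pretrained on Kinetics-400) stands in for the paper's 77-layer Kinetics-600 network, and the input volumes are random placeholders.

```python
# Minimal fine-tuning sketch: pretrained 3D CNN adapted to binary PE detection.
import torch
import torch.nn as nn
from torchvision.models.video import r3d_18

model = r3d_18(weights="DEFAULT")               # 3D ResNet pretrained on Kinetics
model.fc = nn.Linear(model.fc.in_features, 1)   # single logit: PE vs. no PE

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.BCEWithLogitsLoss()

# One fine-tuning step on a dummy batch of CTPA volumes, shaped
# (batch, channels, depth, height, width); real inputs would be
# preprocessed HU windows replicated across 3 channels.
volumes = torch.randn(2, 3, 24, 112, 112)
labels = torch.tensor([[1.0], [0.0]])

optimizer.zero_grad()
loss = criterion(model(volumes), labels)
loss.backward()
optimizer.step()
```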
CheXaid: deep learning assistance for physician diagnosis of tuberculosis using chest x-rays in patients with HIV
Tuberculosis (TB) is the leading cause of preventable death in HIV-positive patients, and yet often remains undiagnosed and untreated. Chest x-ray is often used to assist in diagnosis, yet this presents additional challenges due to atypical radiographic presentation and radiologist shortages in regions where co-infection is most common. We developed a deep learning algorithm to diagnose TB using clinical information and chest x-ray images from 677 HIV-positive patients with suspected TB from two hospitals in South Africa. We then sought to determine whether the algorithm could assist clinicians in the diagnosis of TB in HIV-positive patients as a web-based diagnostic assistant. Use of the algorithm resulted in a modest but statistically significant improvement in clinician accuracy (p = 0.002), increasing the mean clinician accuracy from 0.60 (95% CI 0.57, 0.63) without assistance to 0.65 (95% CI 0.60, 0.70) with assistance. However, the accuracy of assisted clinicians was significantly lower (p < 0.001) than that of the stand-alone algorithm, which had an accuracy of 0.79 (95% CI 0.77, 0.82) on the same unseen test cases. These results suggest that deep learning assistance may improve clinician accuracy in TB diagnosis using chest x-rays, which would be valuable in settings with a high burden of HIV/TB co-infection. Moreover, the high accuracy of the stand-alone algorithm suggests potential value, particularly in settings with a scarcity of radiological expertise.
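A model that combines clinical information with chest x-ray images, as described above, is typically built by fusing CNN image features with tabular covariates. The sketch below is a hypothetical fusion architecture, not CheXaid's published design: the DenseNet-121 backbone, the layer sizes, and the count of five clinical covariates are all assumptions made for illustration.

```python
# Hypothetical image+clinical fusion classifier for TB (not CheXaid's
# published architecture): CNN image features are concatenated with
# tabular clinical covariates before a final classification head.
import torch
import torch.nn as nn
from torchvision.models import densenet121

class FusionTBClassifier(nn.Module):
    def __init__(self, n_clinical: int):
        super().__init__()
        backbone = densenet121(weights="DEFAULT")
        backbone.classifier = nn.Identity()     # expose 1024-d image features
        self.backbone = backbone
        self.head = nn.Sequential(
            nn.Linear(1024 + n_clinical, 128),
            nn.ReLU(),
            nn.Linear(128, 1),                  # TB vs. no TB logit
        )

    def forward(self, image, clinical):
        feats = self.backbone(image)            # (N, 1024)
        return self.head(torch.cat([feats, clinical], dim=1))

model = FusionTBClassifier(n_clinical=5)        # hypothetical covariate count
logit = model(torch.randn(2, 3, 224, 224), torch.randn(2, 5))
```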
Evaluation of a Smartphone Decision-Support Tool for Diarrheal Disease Management in a Resource-Limited Setting
The emergence of mobile technology offers new opportunities to improve clinical guideline adherence in resource-limited settings. We conducted a clinical pilot study in rural Bangladesh to evaluate the impact of a smartphone adaptation of the World Health Organization (WHO) diarrheal disease management guidelines, including a modality for age-based weight estimation. Software development was guided by end-user input, and the tool was evaluated in a resource-limited district hospital and sub-district hospital during the fall 2015 cholera season; both hospitals lacked scales, which necessitated weight estimation. The study consisted of a 6-week pre-intervention period and a 6-week intervention period, with a 10-day post-discharge follow-up. Standard of care was maintained throughout the study, with the exception that admitting clinicians used the tool during the intervention. Inclusion criteria were patients two months of age and older with uncomplicated diarrheal disease. The primary outcome was adherence to guidelines for prescriptions of intravenous (IV) fluids, antibiotics, and zinc. A total of 841 patients were enrolled (325 pre-intervention; 516 intervention). During the intervention, the proportion of prescriptions for IV fluids decreased at the district and sub-district hospitals (both p < 0.001), with risk ratios (RRs) of 0.5 and 0.2, respectively. However, when IV fluids were prescribed, the volume better adhered to recommendations. The proportion of prescriptions for the recommended antibiotic azithromycin increased (p < 0.001 district; p = 0.035 sub-district), with RRs of 6.9 (district) and 1.6 (sub-district), while prescriptions for other antibiotics decreased; zinc adherence also increased. Limitations included the absence of a concurrent control group and the lack of an independent dehydration assessment during the pre-intervention period. Despite these limitations, opportunities were identified to improve clinical care, including better assessment, weight estimation, and fluid/antibiotic selection. These findings demonstrate that a smartphone-based tool can improve guideline adherence. This study should serve as a catalyst for a randomized controlled trial to expand on the findings and address limitations.
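The risk ratios reported above compare prescription proportions between the intervention and pre-intervention periods. For reference, here is a minimal sketch of that computation with a Wald-style 95% confidence interval; the event counts are invented for illustration.

```python
# Risk ratio of group A vs. group B with a Wald 95% CI on the log scale.
import math

def risk_ratio(events_a, n_a, events_b, n_b, z=1.96):
    """Return (RR, CI lower, CI upper) for group A relative to group B."""
    rr = (events_a / n_a) / (events_b / n_b)
    se = math.sqrt(1/events_a - 1/n_a + 1/events_b - 1/n_b)
    lo, hi = (math.exp(math.log(rr) + s * z * se) for s in (-1, 1))
    return rr, lo, hi

# Hypothetical counts: 90/516 IV-fluid prescriptions during the intervention
# vs. 180/325 during the pre-intervention period.
print(risk_ratio(90, 516, 180, 325))
```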
Deep-learning-assisted diagnosis for knee magnetic resonance imaging: Development and retrospective validation of MRNet
Magnetic resonance imaging (MRI) of the knee is the preferred method for diagnosing knee injuries. However, interpretation of knee MRI is time-intensive and subject to diagnostic error and variability. An automated system for interpreting knee MRI could prioritize high-risk patients and assist clinicians in making diagnoses. Deep learning methods, in being able to automatically learn layers of features, are well suited for modeling the complex relationships between medical images and their interpretations. In this study we developed a deep learning model for detecting general abnormalities and specific diagnoses (anterior cruciate ligament [ACL] tears and meniscal tears) on knee MRI exams. We then measured the effect of providing the model's predictions to clinical experts during interpretation. Our dataset consisted of 1,370 knee MRI exams performed at Stanford University Medical Center between January 1, 2001, and December 31, 2012 (mean age 38.0 years; 569 [41.5%] female patients). The majority vote of 3 musculoskeletal radiologists established reference standard labels on an internal validation set of 120 exams. We developed MRNet, a convolutional neural network for classifying MRI series and combined predictions from 3 series per exam using logistic regression. In detecting abnormalities, ACL tears, and meniscal tears, this model achieved area under the receiver operating characteristic curve (AUC) values of 0.937 (95% CI 0.895, 0.980), 0.965 (95% CI 0.938, 0.993), and 0.847 (95% CI 0.780, 0.914), respectively, on the internal validation set. We also obtained a public dataset of 917 exams with sagittal T1-weighted series and labels for ACL injury from Clinical Hospital Centre Rijeka, Croatia. On the external validation set of 183 exams, the MRNet trained on Stanford sagittal T2-weighted series achieved an AUC of 0.824 (95% CI 0.757, 0.892) in the detection of ACL injuries with no additional training, while an MRNet trained on the rest of the external data achieved an AUC of 0.911 (95% CI 0.864, 0.958). We additionally measured the specificity, sensitivity, and accuracy of 9 clinical experts (7 board-certified general radiologists and 2 orthopedic surgeons) on the internal validation set both with and without model assistance. Using a 2-sided Pearson's chi-squared test with adjustment for multiple comparisons, we found no significant differences between the performance of the model and that of unassisted general radiologists in detecting abnormalities. General radiologists achieved significantly higher sensitivity in detecting ACL tears (p-value = 0.002; q-value = 0.019) and significantly higher specificity in detecting meniscal tears (p-value = 0.003; q-value = 0.019). Using a 1-tailed t test on the change in performance metrics, we found that providing model predictions significantly increased clinical experts' specificity in identifying ACL tears (p-value < 0.001; q-value = 0.006). The primary limitations of our study include lack of surgical ground truth and the small size of the panel of clinical experts. Our deep learning model can rapidly generate accurate clinical pathology classifications of knee MRI exams from both internal and external datasets. Moreover, our results support the assertion that deep learning models can improve the performance of clinical experts during medical imaging interpretation. Further research is needed to validate the model prospectively and to determine its utility in the clinical setting.
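The exam-level combination step described above (per-series CNN probabilities fused by logistic regression) can be sketched with scikit-learn. The per-series probabilities and labels below are random placeholders standing in for actual MRNet outputs and reference-standard labels.

```python
# Sketch of MRNet's exam-level combination: logistic regression over
# the three per-series CNN probabilities (sagittal, coronal, axial).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_exams = 120
series_probs = rng.uniform(size=(n_exams, 3))   # P(abnormal) per series CNN
labels = rng.integers(0, 2, size=n_exams)       # reference-standard labels

combiner = LogisticRegression()
combiner.fit(series_probs, labels)
exam_prob = combiner.predict_proba(series_probs)[:, 1]  # per-exam P(abnormal)
```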
Deep learning for chest radiograph diagnosis: A retrospective comparison of the CheXNeXt algorithm to practicing radiologists
Chest radiograph interpretation is critical for the detection of thoracic diseases, including tuberculosis and lung cancer, which affect millions of people worldwide each year. This time-consuming task typically requires expert radiologists to read the images, leading to fatigue-based diagnostic error and lack of diagnostic expertise in areas of the world where radiologists are not available. Recently, deep learning approaches have been able to achieve expert-level performance in medical image interpretation tasks, powered by large network architectures and fueled by the emergence of large labeled datasets. The purpose of this study is to investigate the performance of a deep learning algorithm on the detection of pathologies in chest radiographs compared with practicing radiologists. We developed CheXNeXt, a convolutional neural network to concurrently detect the presence of 14 different pathologies, including pneumonia, pleural effusion, pulmonary masses, and nodules in frontal-view chest radiographs. CheXNeXt was trained and internally validated on the ChestX-ray8 dataset, with a held-out validation set consisting of 420 images, sampled to contain at least 50 cases of each of the original pathology labels. On this validation set, the majority vote of a panel of 3 board-certified cardiothoracic specialist radiologists served as reference standard. We compared CheXNeXt's discriminative performance on the validation set to the performance of 9 radiologists using the area under the receiver operating characteristic curve (AUC). The radiologists included 6 board-certified radiologists (average experience 12 years, range 4-28 years) and 3 senior radiology residents, from 3 academic institutions. We found that CheXNeXt achieved radiologist-level performance on 11 pathologies and did not achieve radiologist-level performance on 3 pathologies. The radiologists achieved statistically significantly higher AUC performance on cardiomegaly, emphysema, and hiatal hernia, with AUCs of 0.888 (95% confidence interval [CI] 0.863-0.910), 0.911 (95% CI 0.866-0.947), and 0.985 (95% CI 0.974-0.991), respectively, whereas CheXNeXt's AUCs were 0.831 (95% CI 0.790-0.870), 0.704 (95% CI 0.567-0.833), and 0.851 (95% CI 0.785-0.909), respectively. CheXNeXt performed better than radiologists in detecting atelectasis, with an AUC of 0.862 (95% CI 0.825-0.895), statistically significantly higher than radiologists' AUC of 0.808 (95% CI 0.777-0.838); there were no statistically significant differences in AUCs for the other 10 pathologies. The average time to interpret the 420 images in the validation set was substantially longer for the radiologists (240 minutes) than for CheXNeXt (1.5 minutes). The main limitations of our study are that neither CheXNeXt nor the radiologists were permitted to use patient history or review prior examinations and that evaluation was limited to a dataset from a single institution. In this study, we developed and validated a deep learning algorithm that classified clinically important abnormalities in chest radiographs at a performance level comparable to practicing radiologists. Once tested prospectively in clinical settings, the algorithm could have the potential to expand patient access to chest radiograph diagnostics.
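The per-pathology comparison above rests on AUC estimates with confidence intervals. Below is a minimal sketch of that evaluation for a single pathology, using a simple bootstrap for the 95% CI; the labels and scores are synthetic, and the bootstrap is an assumption rather than the study's exact CI procedure.

```python
# AUC with a bootstrap 95% CI on a synthetic 420-image validation set.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=420)
y_score = np.clip(y_true * 0.4 + rng.uniform(size=420) * 0.6, 0, 1)

auc = roc_auc_score(y_true, y_score)
boot = []
for _ in range(2000):
    idx = rng.integers(0, len(y_true), size=len(y_true))
    if len(np.unique(y_true[idx])) == 2:   # resample must contain both classes
        boot.append(roc_auc_score(y_true[idx], y_score[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"AUC = {auc:.3f} (95% CI {lo:.3f}-{hi:.3f})")
```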
A genetic locus complements resistance to Bordetella pertussis-induced histamine sensitization
Histamine plays a pivotal role in normal physiology, and dysregulated production of histamine or signaling through histamine receptors (HRH) can promote pathology. Previously, we showed that Bordetella pertussis or pertussis toxin can induce histamine sensitization in laboratory inbred mice and that this is genetically controlled by Hrh1/HRH1. HRH1 allotypes differ at three amino acid residues, with P263-V313-L331 and L263-M313-S331 imparting sensitization and resistance, respectively. Unexpectedly, we found several wild-derived inbred strains that carry the resistant HRH1 allotype (L263-M313-S331) but exhibit histamine sensitization. This suggests the existence of a locus modifying pertussis-dependent histamine sensitization. Congenic mapping identified the location of this modifier locus on mouse chromosome 6, within a functional linkage disequilibrium domain encoding multiple loci controlling sensitization to histamine. We utilized interval-specific single-nucleotide polymorphism (SNP) based association testing across laboratory and wild-derived inbred mouse strains, together with functional prioritization analyses, to identify candidate genes for this modifier locus. Atg7, Plxnd1, Tmcc1, Mkrn2, Il17re, Pparg, Lhfpl4, Vgll4, Rho and Syn2 are candidate genes within this modifier locus, which we named Bphse, enhancer of Bordetella pertussis-induced histamine sensitization. Taken together, these results identify, using the evolutionarily significant diversity of wild-derived inbred mice, additional genetic mechanisms controlling histamine sensitization: mice carrying the sensitization-resistant HRH1 allotype can still be susceptible to histamine shock through the chromosome 6 locus reported here.
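The interval-specific SNP association testing mentioned above amounts to testing, SNP by SNP, whether allele distribution differs between sensitized and resistant strains. The sketch below shows one hypothetical form of such a test (Fisher's exact on per-SNP allele-by-phenotype tables); the genotypes and phenotypes are invented, and the study's actual testing procedure may differ.

```python
# Hypothetical per-SNP association test across inbred strains.
import numpy as np
from scipy.stats import fisher_exact

# rows: strains; columns: SNPs (0 = reference allele, 1 = alternate)
genotypes = np.array([[0, 1, 1], [0, 1, 0], [1, 0, 1],
                      [1, 0, 0], [0, 1, 1], [1, 0, 0]])
sensitized = np.array([1, 1, 0, 0, 1, 0])   # phenotype per strain

for snp in range(genotypes.shape[1]):
    g = genotypes[:, snp]
    # 2x2 table: allele (0/1) x phenotype (sensitized/resistant)
    table = [[np.sum((g == a) & (sensitized == p)) for p in (1, 0)]
             for a in (0, 1)]
    _, p_val = fisher_exact(table)
    print(f"SNP {snp}: p = {p_val:.3f}")
```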
Neuroimaging Radiological Interpretation System for Acute Traumatic Brain Injury
The purpose of the study was to develop an outcome-based NeuroImaging Radiological Interpretation System (NIRIS) for patients with acute traumatic brain injury (TBI) that would standardize the interpretation of noncontrast head computed tomography (CT) scans and consolidate imaging findings into ordinal severity categories that would inform specific patient management actions and that could be used as a clinical decision support tool. We retrospectively identified all patients transported to our emergency department by ambulance or helicopter for whom a trauma alert was triggered per established criteria and who underwent a noncontrast head CT because of suspicion of TBI, between November 2015 and April 2016. Two neuroradiologists reviewed the noncontrast head CTs and assessed the TBI imaging common data elements (CDEs), as defined by the National Institutes of Health (NIH). Using descriptive statistics and receiver operating characteristic curve analyses to identify imaging characteristics and associated thresholds that best distinguished among outcomes, we classified patients into five mutually exclusive categories: 0, discharge from the emergency department; 1, follow-up brain imaging and/or admission; 2, admission to an advanced care unit; 3, neurosurgical procedure; 4, death up to 6 months after TBI. Sensitivity of NIRIS with respect to each patient's true outcome was then evaluated and compared with that of the Marshall and Rotterdam scoring systems for TBI. In our cohort of 542 patients with TBI, NIRIS was developed to predict discharge (182 patients), follow-up brain imaging/admission (187 patients), need for an advanced care unit (151 patients), neurosurgical procedures (10 patients), and death (12 patients). NIRIS performed similarly to the Marshall and Rotterdam scoring systems in terms of predicting death. We developed an interpretation system for neuroimaging using the CDEs that informs specific patient management actions and could be used as a clinical decision support tool for patients with TBI. Our NIRIS classification, with evidence-based grouping of the CDEs into actionable categories, will need to be validated in different TBI populations.
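The ROC-based threshold selection described above can be illustrated with Youden's J criterion: pick the cutoff on an imaging feature that maximizes sensitivity plus specificity. The feature, outcome, and data below are synthetic placeholders, and Youden's J is an assumption about the selection rule rather than the study's stated method.

```python
# Sketch of ROC-based cutoff selection on a synthetic imaging feature.
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
outcome = rng.integers(0, 2, size=200)               # e.g., needs advanced care
feature = outcome * 3 + rng.normal(0, 2, size=200)   # e.g., midline shift (mm)

fpr, tpr, thresholds = roc_curve(outcome, feature)
best = np.argmax(tpr - fpr)                          # Youden's J = TPR - FPR
print(f"optimal cutoff = {thresholds[best]:.2f}")
```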
Deep Learning–Assisted Diagnosis of Cerebral Aneurysms Using the HeadXNet Model
Deep learning has the potential to augment clinician performance in medical imaging interpretation and reduce time to diagnosis through automated segmentation. Few studies to date have explored this topic. The objective was to develop and apply a neural network segmentation model (the HeadXNet model) capable of generating precise voxel-by-voxel predictions of intracranial aneurysms on head computed tomographic angiography (CTA) imaging to augment clinicians' intracranial aneurysm diagnostic performance. In this diagnostic study, a 3-dimensional convolutional neural network architecture was developed using a training set of 611 head CTA examinations to generate aneurysm segmentations. Segmentation outputs from this support model on a test set of 115 examinations were provided to clinicians. Between August 13, 2018, and October 4, 2018, 8 clinicians diagnosed the presence of aneurysm on the test set, both with and without model augmentation, in a crossover design using randomized order and a 14-day washout period. Head and neck examinations performed between January 3, 2003, and May 31, 2017, at a single academic medical center were used to train, validate, and test the model. Examinations positive for aneurysm had at least 1 clinically significant, nonruptured intracranial aneurysm. Examinations with hemorrhage, ruptured aneurysm, posttraumatic or infectious pseudoaneurysm, arteriovenous malformation, surgical clips, coils, catheters, or other surgical hardware were excluded. All other CTA examinations were considered controls. Sensitivity, specificity, accuracy, time, and interrater agreement were measured. Metrics for clinician performance with and without model augmentation were compared. The data set contained 818 examinations from 662 unique patients, with 328 CTA examinations (40.1%) containing at least 1 intracranial aneurysm and 490 examinations (59.9%) without intracranial aneurysms. The 8 clinicians reading the test set ranged in experience from 2 to 12 years. Augmenting clinicians with artificial intelligence-produced segmentation predictions resulted in clinicians achieving statistically significant improvements in sensitivity, accuracy, and interrater agreement when compared with no augmentation. The clinicians' mean sensitivity increased by 0.059 (95% CI, 0.028-0.091; adjusted P = .01), mean accuracy increased by 0.038 (95% CI, 0.014-0.062; adjusted P = .02), and mean interrater agreement (Fleiss κ) increased by 0.060, from 0.799 to 0.859 (adjusted P = .05). There was no statistically significant change in mean specificity (0.016; 95% CI, -0.010 to 0.041; adjusted P = .16) or in time to diagnosis (5.71 seconds; 95% CI, -7.22 to 18.63 seconds; adjusted P = .19). The deep learning model developed successfully detected clinically significant intracranial aneurysms on CTA. This suggests that integration of an artificial intelligence-assisted diagnostic model may augment clinician performance with dependable and accurate predictions and thereby optimize patient care.
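Interrater agreement above is reported as Fleiss κ across the 8 readers. For reference, here is a minimal sketch of that computation with statsmodels; the reader calls are random placeholders rather than study data.

```python
# Fleiss' kappa for 8 readers on a 115-exam test set (random placeholder calls).
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

rng = np.random.default_rng(0)
# rows: exams; columns: readers; values: 0 = no aneurysm, 1 = aneurysm
calls = rng.integers(0, 2, size=(115, 8))

table, _ = aggregate_raters(calls)   # exams x categories count matrix
print(f"Fleiss kappa = {fleiss_kappa(table):.3f}")
```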
Regulatory complexity revealed by integrated cytological and RNA-seq analyses of meiotic substages in mouse spermatocytes
Background: The continuous and non-synchronous nature of postnatal male germ-cell development has impeded stage-specific resolution of molecular events of mammalian meiotic prophase in the testis. Here the juvenile onset of spermatogenesis in mice is analyzed by combining cytological and transcriptomic data in a novel computational analysis that allows decomposition of the transcriptional programs of spermatogonia and meiotic prophase substages.
Results: Germ cells from testes of individual mice were obtained at two-day intervals from 8 to 18 days post-partum (dpp), prepared as surface-spread chromatin, and immunolabeled for meiotic stage-specific protein markers (STRA8, SYCP3, phosphorylated H2AFX, and HISTH1T). Eight stages were discriminated cytologically by combinatorial antibody labeling, and RNA-seq was performed on the same samples. Independent principal component analyses of cytological and transcriptomic data yielded similar patterns for both data types, providing strong evidence for substage-specific gene expression signatures. A novel permutation-based maximum covariance analysis (PMCA) was developed to map co-expressed transcripts to one or more of the eight meiotic prophase substages, thereby linking distinct molecular programs to cytologically defined cell states. Expression of meiosis-specific genes is not substage-limited, suggesting regulation of substage transitions at other levels.
Conclusions: This integrated analysis provides a general method for resolving complex cell populations. Here it revealed not only features of meiotic substage-specific gene expression, but also a network of substage-specific transcription factors and relationships to potential target genes.
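The parallel ordinations described in the Results (independent PCAs of the cytological and transcriptomic data) can be sketched as follows; the matrices are random placeholders, with dimensions chosen only for illustration.

```python
# Independent PCAs of cytological substage composition and RNA-seq
# expression (samples x features), to compare sample layouts.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
cytology = rng.random((6, 8))        # 6 timepoints x 8 substage fractions
expression = rng.random((6, 2000))   # 6 timepoints x 2000 expression values

cyto_pcs = PCA(n_components=2).fit_transform(cytology)
expr_pcs = PCA(n_components=2).fit_transform(expression)
# Similar timepoint orderings along PC1 in both spaces would support
# substage-specific expression signatures, as the study reports.
```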