62 results for "Mei, Xueyan"
Artificial intelligence–enabled rapid diagnosis of patients with COVID-19
For diagnosis of coronavirus disease 2019 (COVID-19), a SARS-CoV-2 virus-specific reverse transcriptase polymerase chain reaction (RT–PCR) test is routinely used. However, this test can take up to two days to complete, serial testing may be required to rule out false negative results, and there is currently a shortage of RT–PCR test kits, underscoring the urgent need for alternative methods for rapid and accurate diagnosis of patients with COVID-19. Chest computed tomography (CT) is a valuable component in the evaluation of patients with suspected SARS-CoV-2 infection. Nevertheless, CT alone may have limited negative predictive value for ruling out SARS-CoV-2 infection, as some patients may have normal radiological findings at early stages of the disease. In this study, we used artificial intelligence (AI) algorithms to integrate chest CT findings with clinical symptoms, exposure history, and laboratory testing to rapidly diagnose patients who are positive for COVID-19. Among a total of 905 patients tested by real-time RT–PCR assay and next-generation sequencing RT–PCR, 419 (46.3%) tested positive for SARS-CoV-2. In a test set of 279 patients, the AI system achieved an area under the curve of 0.92 and sensitivity equal to that of a senior thoracic radiologist. The AI system also improved the detection of RT–PCR-positive COVID-19 patients who presented with normal CT scans, correctly identifying 17 of 25 (68%), whereas radiologists classified all of these patients as COVID-19 negative. When CT scans and associated clinical history are available, the proposed AI system can help to rapidly diagnose COVID-19 patients. Artificial intelligence algorithms integrating chest CT scans and clinical information can diagnose COVID-19 with accuracy similar to that of a senior radiologist.
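The abstract reports the AI system's discrimination as an area under the ROC curve (AUC) of 0.92. As background only, AUC can be computed directly from predicted scores via the rank-sum (Mann-Whitney) identity; the sketch below uses illustrative labels and scores, not data from the study:

```python
import numpy as np

def roc_auc(labels, scores):
    """Area under the ROC curve via the rank-sum identity:
    the fraction of (positive, negative) pairs in which the
    positive case receives the higher score (ties count half)."""
    labels = np.asarray(labels)
    scores = np.asarray(scores)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).sum() \
        + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

# Illustrative labels and predicted probabilities, not from the study.
y = [0, 0, 1, 1]
p = [0.1, 0.4, 0.35, 0.8]
print(roc_auc(y, p))  # 0.75
```

An AUC of 0.5 corresponds to chance-level ranking; 1.0 means every positive case is scored above every negative case.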
Interstitial lung disease diagnosis and prognosis using an AI system integrating longitudinal data
For accurate diagnosis of interstitial lung disease (ILD), a consensus of radiologic, pathological, and clinical findings is vital. Management of ILD also requires thorough follow-up with computed tomography (CT) studies and lung function tests to assess disease progression, severity, and response to treatment. However, accurate classification of ILD subtypes can be challenging, especially for those not accustomed to reading chest CTs regularly. Dynamic models to predict patient survival rates based on longitudinal data are challenging to create due to disease complexity, variation, and irregular visit intervals. Here, we utilize RadImageNet pretrained models to diagnose five types of ILD with multimodal data and a transformer model to determine a patient's 3-year survival rate. When clinical history and associated CT scans are available, the proposed deep learning system can help clinicians diagnose and classify ILD patients and, importantly, dynamically predict disease progression and prognosis. Accurate diagnosis of interstitial lung disease subtypes and prediction of patient survival rates remain challenging. Here, the authors develop AI algorithms that combine a patient's clinical history and longitudinal CT images to help clinicians diagnose and classify subtypes and dynamically predict disease progression and prognosis.
U-Net Based Segmentation and Characterization of Gliomas
(1) Background: Gliomas are the most common primary brain neoplasms, accounting for roughly 40–50% of all malignant primary central nervous system tumors. We aim to develop a deep learning-based framework for automated segmentation and prediction of biomarkers and prognosis in patients with gliomas. (2) Methods: In this retrospective two-center study, patients were included if they (1) had a diagnosis of glioma with known surgical histopathology and (2) had preoperative MRI with a FLAIR sequence. The entire tumor volume, including the FLAIR hyperintense infiltrative component and the necrotic and cystic components, was segmented. A deep learning-based U-Net framework with a symmetric architecture was developed using the 512 × 512 segmented maps from FLAIR as the ground truth mask. (3) Results: The final cohort consisted of 208 patients with a mean ± standard deviation age of 56 ± 15 years and a male/female ratio of 130/78. The Dice similarity coefficient (DSC) of the generated mask was 0.93. Prediction of IDH-1 and MGMT status achieved AUCs of 0.88 and 0.62, respectively. Prediction of survival of <18 months demonstrated an AUC of 0.75. (4) Conclusions: Our deep learning-based framework can detect and segment gliomas with excellent performance for the prediction of IDH-1 biomarker status and survival.
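The segmentation quality above is quantified by a Dice similarity coefficient (DSC) of 0.93. For readers unfamiliar with the metric, a minimal Dice implementation for binary masks might look like the sketch below; the example masks are made up for illustration, not taken from the study:

```python
import numpy as np

def dice_score(pred, truth):
    """Dice similarity coefficient between two binary masks:
    2 * |intersection| / (|pred| + |truth|)."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    inter = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    # Two empty masks agree perfectly by convention.
    return 2.0 * inter / total if total else 1.0

# Tiny illustrative masks (1 = tumor voxel, 0 = background).
a = np.array([[1, 1, 0], [0, 1, 0]])
b = np.array([[1, 0, 0], [0, 1, 1]])
print(dice_score(a, b))  # 2*2/(3+3) ≈ 0.667
```

A Dice of 1.0 means the predicted and ground-truth masks overlap exactly; values above roughly 0.9 are generally considered strong agreement for tumor segmentation.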
Large Language Models in Cancer Imaging: Applications and Future Perspectives
Recently, there has been tremendous interest in the use of large language models (LLMs) in radiology. LLMs have been employed for various applications in cancer imaging, including improving reporting speed and accuracy via generation of standardized reports, automating the classification and staging of abnormal findings in reports, incorporating appropriate guidelines, and calculating individualized risk scores. Another use of LLMs is their ability to improve patient comprehension of imaging reports through simplification of medical terms and possible translation into multiple languages. Additional future applications of LLMs include multidisciplinary tumor board standardization, aiding patient management, preventing and predicting adverse events (contrast allergies, MRI contraindications), and cancer imaging research. However, limitations such as hallucinations and variable performance could present obstacles to widespread clinical implementation. Herein, we present a review of the current and future applications of LLMs in cancer imaging, as well as pitfalls and limitations.
Deep Learning for Automated Measurement of Patellofemoral Anatomic Landmarks
Background: Patellofemoral anatomy has not been well characterized. Applying deep learning to automatically measure knee anatomy can provide a better understanding of anatomy, which can be a key factor in improving outcomes. Methods: A total of 483 patients with knee CT imaging (April 2017–May 2022) from six centers were selected from a cohort scheduled for knee arthroplasty and a cohort with healthy knee anatomy. Seven patellofemoral landmarks were annotated on 14,652 images and approved by a senior musculoskeletal radiologist. A two-stage deep learning model was trained to predict landmark coordinates using a modified ResNet50 architecture initialized with self-supervised learning pretrained weights from RadImageNet. Landmark predictions were evaluated with mean absolute error, and derived patellofemoral measurements were analyzed with Bland–Altman plots. Statistical significance of measurements was assessed by paired t-tests. Results: The mean absolute error between predicted and ground truth landmark coordinates was 0.20/0.26 cm in the healthy/arthroplasty cohorts. Four knee parameters were calculated: transepicondylar axis length, transepicondylar-posterior femur axis angle, trochlear medial asymmetry, and sulcus angle. There were no statistically significant differences (p > 0.05) between predicted and ground truth measurements in either cohort, except for the sulcus angle in the healthy cohort. Conclusion: Our model accurately identifies key trochlear landmarks with ~0.20–0.26 cm accuracy and produces human-comparable measurements on both healthy and pathological knees. This work represents the first deep learning regression model for automated patellofemoral annotation trained on both physiologic and pathologic CT imaging at this scale. This novel model can enhance our ability to analyze the anatomy of the patellofemoral compartment at scale.
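The landmark error above is a mean absolute distance, in centimeters, between predicted and annotated coordinates. A minimal sketch of that evaluation step, assuming landmarks are given as (x, y) pixel coordinates and using a hypothetical pixel spacing (all values below are illustrative, not from the paper):

```python
import numpy as np

def landmark_mae(pred, truth, pixel_spacing_cm):
    """Mean absolute Euclidean error (cm) between predicted and
    ground-truth landmarks, given in pixel coordinates.

    pred, truth: arrays of shape (n_landmarks, 2).
    pixel_spacing_cm: physical size of one pixel, in cm.
    """
    pred = np.asarray(pred, dtype=float)
    truth = np.asarray(truth, dtype=float)
    dists = np.linalg.norm(pred - truth, axis=1) * pixel_spacing_cm
    return dists.mean()

# Hypothetical landmark coordinates and spacing, for illustration only.
pred = [[100, 200], [150, 250]]
truth = [[103, 204], [150, 246]]
print(landmark_mae(pred, truth, pixel_spacing_cm=0.05))
```

Errors are 5 and 4 pixels here, so at 0.05 cm per pixel the mean error is 0.225 cm, in the same units as the 0.20–0.26 cm figures quoted in the abstract.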
Discovery Viewer (DV): Web-Based Medical AI Model Development Platform and Deployment Hub
The rapid rise of artificial intelligence (AI) in medicine in the last few years highlights the importance of developing bigger and better systems for data and model sharing. However, the presence of Protected Health Information (PHI) in medical data poses a challenge when it comes to sharing. One potential solution to mitigate the risk of PHI breaches is to exclusively share pre-trained models developed using private datasets. Despite the availability of these pre-trained networks, there remains a need for an adaptable environment to test and fine-tune specific models tailored for clinical tasks. This environment should be open for peer testing, feedback, and continuous model refinement, allowing dynamic model updates that are especially important in the medical field, where diseases and scanning techniques evolve rapidly. In this context, the Discovery Viewer (DV) platform was developed in-house at the Biomedical Engineering and Imaging Institute at Mount Sinai (BMEII) to facilitate the creation and distribution of cutting-edge medical AI models that remain accessible after their development. The all-in-one platform offers a unique environment for non-AI experts to learn, develop, and share their own deep learning (DL) concepts. This paper presents various use cases of the platform, with its primary goal being to demonstrate how DV holds the potential to empower individuals without expertise in AI to create high-performing DL models. We tasked three non-AI experts with developing different musculoskeletal AI projects that encompassed segmentation, regression, and classification tasks. In each project, 80% of the samples were provided, with a subset of these samples annotated to aid the volunteers in understanding the expected annotation task. Subsequently, they were responsible for annotating the remaining samples and training their models through the platform's "Training Module". The resulting models were then tested on the separate 20% held-out dataset to assess their performance. The classification model achieved an accuracy of 0.94, a sensitivity of 0.92, and a specificity of 1.00. The regression model yielded a mean absolute error of 14.27 pixels, and the segmentation model attained a Dice score of 0.93, with a sensitivity of 0.90 and a specificity of 0.99. This initiative seeks to broaden the community of medical AI model developers and democratize access to this technology for all stakeholders. The ultimate goal is to facilitate the transition of medical AI models from research to clinical settings.
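The classification metrics quoted above (accuracy, sensitivity, specificity) all follow from the binary confusion matrix. A small helper illustrating the definitions, shown here with made-up labels rather than the paper's held-out test data:

```python
def classification_metrics(y_true, y_pred):
    """Accuracy, sensitivity (true positive rate), and specificity
    (true negative rate) for binary labels (1 = positive)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return {
        "accuracy": (tp + tn) / len(y_true),
        "sensitivity": tp / (tp + fn) if tp + fn else 0.0,
        "specificity": tn / (tn + fp) if tn + fp else 0.0,
    }

# Illustrative labels, not results from the paper.
y_true = [1, 1, 1, 0, 0]
y_pred = [1, 1, 0, 0, 0]
print(classification_metrics(y_true, y_pred))
```

For the example above there are 2 true positives, 1 false negative, and 2 true negatives, giving accuracy 0.8, sensitivity 2/3, and specificity 1.0.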
Prediction of arrhythmia susceptibility through mathematical modeling and machine learning
At present, the QT interval on the electrocardiographic (ECG) waveform is the most common metric for assessing an individual's susceptibility to ventricular arrhythmias, with a long QT, or, at the cellular level, a long action potential duration (APD), considered high risk. However, the limitations of this simple approach have long been recognized. Here, we sought to improve prediction of arrhythmia susceptibility by combining mechanistic mathematical modeling with machine learning (ML). Simulations with a model of the ventricular myocyte were performed to develop a large, heterogeneous population of cardiomyocytes (n = 10,586), and we tested each variant's ability to withstand three arrhythmogenic triggers: (1) block of the rapid delayed rectifier potassium current (IKr Block), (2) augmentation of the L-type calcium current (ICaL Increase), and (3) injection of inward current (Current Injection). Eight ML algorithms were trained to predict, based on simulated AP features in preperturbed cells, whether each cell would develop arrhythmic dynamics in response to each trigger. We found that APD can accurately predict how cells respond to the simple Current Injection trigger but cannot effectively predict the response to IKr Block or ICaL Increase. ML predictive performance could be improved by incorporating additional AP features and simulations of additional experimental protocols. Importantly, we discovered that the most relevant features and experimental protocols were trigger specific, which shed light on the mechanisms that promoted arrhythmia formation in response to the triggers. Overall, our quantitative approach provides a means to understand and predict differences between individuals in arrhythmia susceptibility.
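The study's pipeline, training classifiers on simulated action-potential features to predict each cell's arrhythmic response, can be caricatured with synthetic data. The feature values and the simple nearest-centroid classifier below are illustrative stand-ins, not the paper's eight ML algorithms or its simulated myocyte population:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for simulated AP features (e.g. APD and AP amplitude);
# the distributions are invented for illustration, not from the study.
n = 200
apd = np.concatenate([rng.normal(250, 20, n), rng.normal(320, 20, n)])
amp = np.concatenate([rng.normal(110, 5, n), rng.normal(100, 5, n)])
X = np.column_stack([apd, amp])
y = np.concatenate([np.zeros(n), np.ones(n)])  # 1 = arrhythmic response

# Nearest-centroid classifier on standardized features: assign each cell
# to whichever class centroid is closer in feature space.
mu, sd = X.mean(axis=0), X.std(axis=0)
Z = (X - mu) / sd
c0, c1 = Z[y == 0].mean(axis=0), Z[y == 1].mean(axis=0)
pred = (np.linalg.norm(Z - c1, axis=1)
        < np.linalg.norm(Z - c0, axis=1)).astype(float)
print("training accuracy:", (pred == y).mean())
```

Because the two synthetic classes are well separated in APD, even this trivial classifier scores highly; the paper's point is precisely that for triggers like IKr Block such simple APD-based separation breaks down and richer features are needed.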
Clonally expanded CD8 T cells characterize amyotrophic lateral sclerosis-4
Amyotrophic lateral sclerosis (ALS) is a heterogeneous neurodegenerative disorder that affects motor neurons and voluntary muscle control [1]. ALS heterogeneity includes the age of manifestation, the rate of progression, and the anatomical sites of symptom onset. Disease-causing mutations in specific genes have been identified and define different subtypes of ALS [1]. Although several ALS-associated genes have been shown to affect immune functions [2], whether specific immune features account for ALS heterogeneity is poorly understood. Amyotrophic lateral sclerosis-4 (ALS4) is characterized by juvenile onset and slow progression [3]. Patients with ALS4 show motor difficulties by the time that they are in their thirties, and most of them require devices to assist with walking by their fifties. ALS4 is caused by mutations in the senataxin gene (SETX). Here, using Setx knock-in mice that carry the ALS4-causative L389S mutation, we describe an immunological signature that consists of clonally expanded, terminally differentiated effector memory (TEMRA) CD8 T cells in the central nervous system and the blood of knock-in mice. Increased frequencies of antigen-specific CD8 T cells in knock-in mice mirror the progression of motor neuron disease and correlate with anti-glioma immunity. Furthermore, bone marrow transplantation experiments indicate that the immune system has a key role in ALS4 neurodegeneration. In patients with ALS4, clonally expanded TEMRA CD8 T cells circulate in the peripheral blood. Our results provide evidence of an antigen-specific CD8 T cell response in ALS4, which could be used to unravel disease mechanisms and as a potential biomarker of disease state. An immune signature characterized by activated antigen-specific CD8 T cells is identified in the brain and blood of mice with amyotrophic lateral sclerosis-4 (ALS4), suggesting that the immune system is involved in ALS4 neurodegeneration.
Cats to CATs with RadImageNet: A Transformative Platform for Medical Imaging AI Research
Most current medical imaging artificial intelligence (AI) relies on transfer learning using convolutional neural networks (CNNs) pretrained on ImageNet, a large database of natural-world images, including cats, dogs, and vehicles. The size, diversity, and similarity of the source data determine the success of transfer learning on the target data. ImageNet is large and diverse, but there is significant dissimilarity between its natural-world images and medical images. Despite this low similarity, ImageNet-pretrained models are widely used in medical image classification. In Chapter 2, an ImageNet-based ResNet18 model was used to diagnose COVID-19 patients from chest CT scans and associated clinical information, and this model achieved senior-radiologist-level performance in recognizing COVID-19 patients. However, a medical-imaging-only dataset similar to ImageNet, and pretrained models containing only radiologic features, could improve model performance. In this thesis, we curated a standardized database, RadImageNet, comprising 1.35 million annotated CT, MRI, and ultrasound images of musculoskeletal, neurologic, oncologic, gastrointestinal, endocrine, abdominal, and pulmonary pathologies from over 130,000 patients. This database is unprecedented in scale and breadth in the medical imaging field: it allows convolutional neural networks to be trained from scratch without importing weights from existing pretrained models, and models derived from the RadImageNet database could be a better starting point for medical imaging applications that require transfer learning. The establishment of the RadImageNet database and associated pretrained models, as well as comparisons to ImageNet models on eight independent applications, including the datasets used in Chapter 2, are presented in Chapter 3. To further study the inner connections between different modalities in the RadImageNet database, we developed RadImageNet modality and anatomy models and created machine learning classifiers to select the best parameter settings. Overall, this thesis evaluated ImageNet and RadImageNet models on multiple medical imaging datasets; RadImageNet-derived models and principled selection of CNN parameters could facilitate medical imaging AI research by improving recognition rates and reducing turnaround time.