6,688 result(s) for "692/308/575"
Large language models in medicine
Large language models (LLMs) can respond to free-text queries without being specifically trained in the task in question, causing excitement and concern about their use in healthcare settings. ChatGPT is a generative artificial intelligence (AI) chatbot produced through sophisticated fine-tuning of an LLM, and other tools are emerging through similar developmental processes. Here we outline how LLM applications such as ChatGPT are developed, and we discuss how they are being leveraged in clinical settings. We consider the strengths and limitations of LLMs and their potential to improve the efficiency and effectiveness of clinical, educational and research work in medicine. LLM chatbots have already been deployed in a range of biomedical contexts, with impressive but mixed results. This review acts as a primer for interested clinicians, who will determine if and how LLM technology is used in healthcare for the benefit of patients and practitioners. This review explains how large language models (LLMs), such as ChatGPT, are developed and discusses their strengths and limitations in the context of potential clinical applications.
Health and disease markers correlate with gut microbiome composition across thousands of people
Variation in the human gut microbiome can reflect host lifestyle and behaviors and influence disease biomarker levels in the blood. Understanding the relationships between gut microbes and host phenotypes is critical for understanding wellness and disease. Here, we examine associations between the gut microbiota and ~150 host phenotypic features across ~3,400 individuals. We identify major axes of taxonomic variance in the gut and a putative diversity maximum along the Firmicutes-to-Bacteroidetes axis. Our analyses reveal both known and unknown associations between microbiome composition and host clinical markers and lifestyle factors, including host-microbe associations that are composition-specific. These results suggest potential opportunities for targeted interventions that alter the composition of the microbiome to improve host health. By uncovering the interrelationships between host diet and lifestyle factors, clinical blood markers, and the human gut microbiome at the population scale, our results serve as a roadmap for future studies on host-microbe interactions and interventions. Variation in the gut microbiome can reflect host lifestyle and behaviour and influence blood-based biomarkers. Here the authors examine associations between the microbiota and ~150 host phenotypic features in a large cohort of >3,000 individuals.
The Medical Segmentation Decathlon
International challenges have become the de facto standard for comparative assessment of image analysis algorithms. Although segmentation is the most widely investigated medical image processing task, the various challenges have been organized to focus only on specific clinical tasks. We organized the Medical Segmentation Decathlon (MSD), a biomedical image analysis challenge in which algorithms compete in a multitude of both tasks and modalities, to investigate the hypothesis that a method capable of performing well on multiple tasks will generalize well to a previously unseen task and potentially outperform a custom-designed solution. MSD results confirmed this hypothesis; moreover, the MSD winner continued to generalize well to a wide range of other clinical problems over the following two years. Three main conclusions can be drawn from this study: (1) state-of-the-art image segmentation algorithms generalize well when retrained on unseen tasks; (2) consistent algorithmic performance across multiple tasks is a strong surrogate for algorithmic generalizability; (3) the training of accurate AI segmentation models is now accessible to scientists who are not versed in AI model training. International challenges have become the de facto standard for comparative assessment of image analysis algorithms. Here, the authors present the results of a biomedical image segmentation challenge, showing that a method capable of performing well on multiple tasks will generalize well to a previously unseen task.
A multimodal comparison of latent denoising diffusion probabilistic models and generative adversarial networks for medical image synthesis
Although generative adversarial networks (GANs) can produce large datasets, their limited diversity and fidelity have been recently addressed by denoising diffusion probabilistic models (DDPMs), which have demonstrated superiority in natural image synthesis. In this study, we introduce Medfusion, a conditional latent DDPM designed for medical image generation, and evaluate its performance against GANs, which currently represent the state-of-the-art. Medfusion was trained and compared with StyleGAN-3 using fundoscopy images from the AIROGS dataset, radiographs from the CheXpert dataset, and histopathology images from the CRCDX dataset. Based on previous studies, Progressively Growing GAN (ProGAN) and Conditional GAN (cGAN) were used as additional baselines on the CheXpert and CRCDX datasets, respectively. Medfusion exceeded GANs in terms of diversity (recall), achieving better scores of 0.40 compared to 0.19 in the AIROGS dataset, 0.41 compared to 0.02 (cGAN) and 0.24 (StyleGAN-3) in the CRCDX dataset, and 0.32 compared to 0.17 (ProGAN) and 0.08 (StyleGAN-3) in the CheXpert dataset. Furthermore, Medfusion exhibited equal or higher fidelity (precision) across all three datasets. Our study shows that Medfusion constitutes a promising alternative to GAN-based models for generating high-quality medical images, leading to improved diversity and fewer artifacts in the generated images.
Adapted large language models can outperform medical experts in clinical text summarization
Analyzing vast textual data and summarizing key information from electronic health records imposes a substantial burden on how clinicians allocate their time. Although large language models (LLMs) have shown promise in natural language processing (NLP) tasks, their effectiveness on a diverse range of clinical summarization tasks remains unproven. Here we applied adaptation methods to eight LLMs, spanning four distinct clinical summarization tasks: radiology reports, patient questions, progress notes and doctor–patient dialogue. Quantitative assessments with syntactic, semantic and conceptual NLP metrics reveal trade-offs between models and adaptation methods. A clinical reader study with 10 physicians evaluated summary completeness, correctness and conciseness; in most cases, summaries from our best-adapted LLMs were deemed either equivalent (45%) or superior (36%) to summaries from medical experts. The ensuing safety analysis highlights challenges faced by both LLMs and medical experts, as we connect errors to potential medical harm and categorize types of fabricated information. Our research provides evidence of LLMs outperforming medical experts in clinical text summarization across multiple tasks. This suggests that integrating LLMs into clinical workflows could alleviate documentation burden, allowing clinicians to focus more on patient care. Comparative performance assessment of large language models identified ChatGPT-4 as the best-adapted model across a diverse set of clinical text summarization tasks, and it outperformed 10 medical experts in a reader study.
nnU-Net: a self-configuring method for deep learning-based biomedical image segmentation
Biomedical imaging is a driver of scientific discovery and a core component of medical care and is being stimulated by the field of deep learning. While semantic segmentation algorithms enable image analysis and quantification in many applications, the design of respective specialized solutions is non-trivial and highly dependent on dataset properties and hardware conditions. We developed nnU-Net, a deep learning-based segmentation method that automatically configures itself, including preprocessing, network architecture, training and post-processing for any new task. The key design choices in this process are modeled as a set of fixed parameters, interdependent rules and empirical decisions. Without manual intervention, nnU-Net surpasses most existing approaches, including highly specialized solutions on 23 public datasets used in international biomedical segmentation competitions. We make nnU-Net publicly available as an out-of-the-box tool, rendering state-of-the-art segmentation accessible to a broad audience by requiring neither expert knowledge nor computing resources beyond standard network training. nnU-Net is a deep learning-based image segmentation method that automatically configures itself for diverse biological and medical image segmentation tasks. nnU-Net offers state-of-the-art performance as an out-of-the-box tool.
Allergen immunotherapy: past, present and future
Allergen immunotherapy is a form of therapeutic vaccination for established IgE-mediated hypersensitivity to common allergen sources such as pollens, house dust mites and the venom of stinging insects. The classical protocol, introduced in 1911, involves repeated subcutaneous injection of increasing amounts of allergen extract, followed by maintenance injections over a period of 3 years, achieving a form of allergen-specific tolerance that provides clinical benefit for years after its discontinuation. More recently, administration through the sublingual route has emerged as an effective, safe alternative. Oral immunotherapy for peanut allergy induces effective ‘desensitization’ but not long-term tolerance. Research and clinical trials over the past few decades have elucidated the mechanisms underlying immunotherapy-induced tolerance, involving a reduction of allergen-specific T helper 2 (TH2) cells, an induction of regulatory T and B cells, and production of IgG and IgA ‘blocking’ antibodies. To better harness these mechanisms, novel strategies are being explored to achieve safer, effective, more convenient regimens and more durable long-term tolerance; these include alternative routes for current immunotherapy approaches, novel adjuvants, use of recombinant allergens (including hypoallergenic variants) and combination of allergens with immune modifiers or monoclonal antibodies targeting the TH2 cell pathway. Durham and Shamji review the history and future of allergen immunotherapy for established IgE-mediated hypersensitivity to common allergens. They describe the mechanisms of immunotherapy-induced tolerance and the new strategies being explored to achieve safer, more effective, long-term tolerance.
Cellular senescence and senolytics: the path to the clinic
Interlinked and fundamental aging processes appear to be a root-cause contributor to many disorders and diseases. One such process is cellular senescence, which entails a state of cell cycle arrest in response to damaging stimuli. Senescent cells can arise throughout the lifespan and, if persistent, can have deleterious effects on tissue function due to the many proteins they secrete. In preclinical models, interventions targeting those senescent cells that are persistent and cause tissue damage have been shown to delay, prevent or alleviate multiple disorders. In line with this, the discovery of small-molecule senolytic drugs that selectively clear senescent cells has led to promising strategies for preventing or treating multiple diseases and age-related conditions in humans. In this Review, we outline the rationale for senescent cells as a therapeutic target for disorders across the lifespan and discuss the most promising strategies—including recent and ongoing clinical trials—for translating small-molecule senolytics and other senescence-targeting interventions into clinical use. Cellular senescence has emerged as a promising therapeutic target for disorders across the lifespan; this Review highlights the most promising strategies for translating senescence-targeting interventions into clinical use in the near future.
Evaluation and mitigation of the limitations of large language models in clinical decision-making
Clinical decision-making is one of the most impactful parts of a physician’s responsibilities and stands to benefit greatly from artificial intelligence solutions and large language models (LLMs) in particular. However, while LLMs have achieved excellent performance on medical licensing exams, these tests fail to assess many skills necessary for deployment in a realistic clinical decision-making environment, including gathering information, adhering to guidelines, and integrating into clinical workflows. Here we have created a curated dataset based on the Medical Information Mart for Intensive Care database spanning 2,400 real patient cases and four common abdominal pathologies as well as a framework to simulate a realistic clinical setting. We show that current state-of-the-art LLMs do not accurately diagnose patients across all pathologies (performing significantly worse than physicians), follow neither diagnostic nor treatment guidelines, and cannot interpret laboratory results, thus posing a serious risk to the health of patients. Furthermore, we move beyond diagnostic accuracy and demonstrate that they cannot be easily integrated into existing workflows because they often fail to follow instructions and are sensitive to both the quantity and order of information. Overall, our analysis reveals that LLMs are currently not ready for autonomous clinical decision-making, and we provide a dataset and framework to guide future studies. Using a curated dataset of 2,400 cases and a framework to simulate a realistic clinical setting, current large language models are shown to exhibit substantial pitfalls when used for autonomous clinical decision-making.
Strategies for HIV-1 vaccines that induce broadly neutralizing antibodies
After nearly four decades of research, a safe and effective HIV-1 vaccine remains elusive. There are many reasons why the development of a potent and durable HIV-1 vaccine is challenging, including the extraordinary genetic diversity of HIV-1 and its complex mechanisms of immune evasion. HIV-1 envelope glycoproteins are poorly recognized by the immune system, which means that potent broadly neutralizing antibodies (bnAbs) are only infrequently induced in the setting of HIV-1 infection or through vaccination. Thus, the biology of HIV-1–host interactions necessitates novel strategies for vaccine development to be designed to activate and expand rare bnAb-producing B cell lineages and to select for the acquisition of critical improbable bnAb mutations. Here we discuss strategies for the induction of potent and broad HIV-1 bnAbs and outline the steps that may be necessary for ultimate success. There are many reasons why the development of a potent and durable vaccine to HIV-1 is exceptionally challenging, including the large genetic diversity of the virus and its complex mechanisms of immune evasion. In this Review, Haynes et al. discuss strategies for the induction of potent broadly neutralizing antibodies for HIV-1 and the steps that may be necessary for ultimate success.