1,491 results for "Plain language"
Using ChatGPT-4 for Lay Summarization in Prostate Cancer Research to Advance Patient-Centered Communication: Large-Scale Generative AI Performance Evaluation
The increasing volume and complexity of biomedical literature pose challenges for making scientific knowledge accessible to lay audiences. Lay summaries, now widely encouraged or required by journals, aim to bridge this gap by promoting health literacy, patient engagement, and public trust. However, many are written by scientists without formal training in plain-language communication, often resulting in limited clarity, readability, and consistency. Generative large language models such as ChatGPT-4 offer a scalable opportunity to support lay summary creation, though their effectiveness within specific clinical domains has not been systematically evaluated at scale. This study aimed to assess ChatGPT-4's performance in generating lay summaries for prostate cancer studies. A secondary objective was to evaluate how prompt design influences summary quality, aiming to provide practical guidance for the use of generative artificial intelligence (AI) in scientific publishing. A total of 204 consecutive articles on prostate cancer were extracted from a high-ranking oncology journal mandating lay summaries. Each abstract was processed with ChatGPT-4 using 2 prompts: a simple prompt based on the journal's guidelines and an extended prompt refined to improve readability. AI-generated and original summaries were evaluated using 3 criteria: readability (Flesch-Kincaid Reading Ease [FKRE]), factual accuracy (5-point Likert scale, blinded rating by 2 clinical experts), and compliance with word count instructions (120-150 words). As a composite outcome, summaries were classified as high-quality if they met all 3 benchmarks: FKRE >30, accuracy ≥4 from both raters, and word count within range. Statistical comparisons used Wilcoxon signed-rank and paired 2-tailed t tests (P<.05). ChatGPT-4-generated lay summaries showed an improvement in readability compared to human-written versions, with the extended prompt achieving higher scores than the simple prompt (median FKRE: extended prompt 47, IQR 42-56; simple prompt 36, IQR 29-43; original 20, IQR 9.5-29; P<.001). Factual accuracy was higher for the AI-generated lay summaries compared to originals (median factual accuracy score: extended prompt 5, IQR 5-5; simple prompt 5, IQR 5-5; original 5, IQR 4-5; P<.001) in this dataset. Compliance with word count instructions was greater for both AI-generated summaries than for originals (noncompliant word counts: extended prompt 39 (19%), simple prompt 40 (20%), original 140 (69%); P<.001). Between simple and extended prompts, there were no significant differences in accuracy (P=.53) and word count compliance (P=.87). The proportion rated as high-quality was 79.4% for the extended prompt, 54.9% for the simple prompt, and 5.4% for original summaries (P<.001). With optimized prompting, ChatGPT-4 produced lay summaries that, on average, scored higher than author-written versions in readability, factual accuracy, and structural compliance within our dataset. These results support integrating generative AI into editorial workflows to improve science communication for nonexpert audiences. Limitations include the focus on a single clinical domain and journal, and the absence of layperson evaluation.
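To make the composite benchmark concrete, here is a minimal sketch of how the three criteria could be checked programmatically. It assumes the open-source textstat package as the FKRE implementation; the function name, rater-score handling, and example inputs are illustrative, not the study's actual code.

```python
# A minimal sketch of the composite quality check described above, using the
# open-source `textstat` package for the Flesch-Kincaid Reading Ease score.
# Names and example data are illustrative, not taken from the study.
import textstat

def is_high_quality(summary: str, rater_scores: tuple[int, int]) -> bool:
    """Apply the paper's three benchmarks: FKRE > 30, accuracy >= 4 from
    both expert raters (5-point Likert scale), and a word count within
    the journal's 120-150 word instruction."""
    fkre = textstat.flesch_reading_ease(summary)   # readability benchmark
    accurate = all(score >= 4 for score in rater_scores)
    word_count = len(summary.split())              # crude whitespace tokenization
    return fkre > 30 and accurate and 120 <= word_count <= 150

# Illustrative usage with a made-up summary and ratings:
summary = "We studied a new treatment for prostate cancer. " * 20
print(is_high_quality(summary, rater_scores=(5, 4)))
```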
Jargon and Readability in Plain Language Summaries of Health Research: Cross-Sectional Observational Study
The idea of making science more accessible to nonscientists has prompted health researchers to involve patients and the public more actively in their research. This sometimes involves writing a plain language summary (PLS), a short summary intended to make research findings accessible to nonspecialists. However, whether PLSs satisfy the basic requirements of accessible language is unclear. We aimed to assess the readability and level of jargon in the PLSs of research funded by the largest national clinical research funder in Europe, the United Kingdom's National Institute for Health and Care Research (NIHR). We also aimed to assess whether readability and jargon were influenced by internal and external characteristics of research projects. We downloaded the PLSs of all NIHR National Journals Library reports from mid-2014 to mid-2022 (N=1241) and analyzed them using the Flesch Reading Ease (FRE) formula and a jargon calculator (the De-Jargonizer). In our analysis, we included the following study characteristics of each PLS: research topic, funding program, project size, length, publication year, and readability and jargon scores of the original funding proposal. Readability scores ranged from 1.1 to 70.8, with an average FRE score of 39.0 (95% CI 38.4-39.7). Moreover, only 2.8% (35/1241) of the PLSs had an FRE score classified as "plain English" or better; none had readability scores in line with the average reading age of the UK population. Jargon scores ranged from 76.4 to 99.3, with an average score of 91.7 (95% CI 91.5-91.9); 21.7% (269/1241) of the PLSs had a jargon score suitable for general comprehension. Variables such as research topic, funding program, and project size significantly influenced readability and jargon scores. The biggest differences related to the original proposals: proposals whose application PLS was among the 20% most readable were almost 3 times more likely to yield a more readable final PLS (incidence rate ratio 2.88, 95% CI 1.86-4.45). Those with the 20% least jargon in the original application were more than 10 times as likely to have low levels of jargon in the final PLS (incidence rate ratio 13.87, 95% CI 5.17-37.2). There was no observable trend over time. Most of the PLSs published in the NIHR's National Journals Library have poor readability due to their complexity and use of jargon. None were readable at a level in keeping with the average reading age of the UK population. There were significant variations in readability and jargon scores depending on the research topic, funding program, and other factors. Notably, the readability of the original funding proposal seemed to significantly impact the final report's readability. Ways of improving the accessibility of PLSs are needed, as is greater clarity over who and what they are for.
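For readers unfamiliar with the two instruments, the sketch below spells out the standard published Flesch Reading Ease formula and a deliberately simplified stand-in for the De-Jargonizer. The real De-Jargonizer grades words by their frequency in a large corpus; the tiny common-word list here is illustrative only.

```python
# The FRE formula below is the standard published one; the jargon score is a
# simplified analogue of the De-Jargonizer, approximated here with a small
# illustrative set of "common" words rather than corpus frequencies.
import re

def flesch_reading_ease(words: int, sentences: int, syllables: int) -> float:
    # FRE = 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words)
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

COMMON_WORDS = {"the", "a", "and", "of", "to", "in", "we", "study", "people"}  # illustrative only

def simple_jargon_score(text: str) -> float:
    """Percentage of tokens found in the common-word list; higher means
    more accessible, loosely mirroring the De-Jargonizer's 0-100 scale."""
    tokens = re.findall(r"[a-z']+", text.lower())
    common = sum(token in COMMON_WORDS for token in tokens)
    return 100 * common / max(len(tokens), 1)

print(flesch_reading_ease(words=100, sentences=5, syllables=180))  # ≈ 34.3
```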
Communicating Health Research With Plain Language
Although critical to enacting change, effectively communicating clinical and public health research results remains a challenge. In a webinar that occurred on December 7, 2023, a group of clinical and public health researchers and communications specialists convened to share their experiences using plain language materials to communicate research results. Herein, they provide practical guidance and case examples of lay summaries, infographics, data dashboards, and zines, along with challenges and potential solutions. Discussion illuminated the critical importance of partnering with communities who represent the intended beneficiaries of the research to plan, create, and disseminate materials. Accordingly, researchers should plan early, prepare, and dedicate resources for results dissemination. Researchers can use this guidance to develop plain language research dissemination materials, help connect with their audiences to inform and influence their understanding, and empower action to ultimately improve health and well-being.
Plain Language Summary of Publication: What is the effect of the medicine vibegron in the treatment of overactive bladder in patients with and without bladder leakage?
What is this summary about?
People with overactive bladder need to use the bathroom many times a day to urinate (pee). This need may often be sudden and may cause some people with overactive bladder to have accidental bladder leakage. The EMPOWUR trial looked at how well a medicine called vibegron worked to help people with overactive bladder. The study also included another drug that was already available for treating overactive bladder called tolterodine and a pill with no medicine called a placebo. Both vibegron and tolterodine were compared with placebo. Participants had improvements in their overactive bladder symptoms after taking either vibegron or tolterodine compared to placebo. The medicine vibegron was approved in 2020 by the US Food and Drug Administration (also called the FDA) to treat overactive bladder. Researchers next wanted to see how well vibegron worked in people from the EMPOWUR trial split into 2 groups. One group was made of participants with overactive bladder who have accidental leakage. The second group was made of participants with overactive bladder who do not have accidental leakage. This is a plain language summary of the study of how well vibegron works for those 2 groups from the EMPOWUR study that was published in the International Journal of Clinical Practice.

What were the results?
Study participants who took vibegron needed to pee fewer times per day. The number of times they had little warning before the need to pee was also lower. The results were the same for study participants who did and did not have accidental leakage related to overactive bladder.

What do the results mean?
This study suggests that vibegron can improve symptoms in people with overactive bladder whether or not they have accidental bladder leakage.
Where are biomedical research plain‐language summaries?
Background and Aims
Plain‐language summaries (PLS) are being heralded as a tool to improve communication of scientific research to lay audiences and time‐poor or nonspecialist healthcare professionals. However, this relies on PLS being intuitively located and accessible. This research investigated the "discoverability" of PLS in biomedical journals.

Methods
The eLIFE list of journals/organizations that produce PLS was consulted on July 12, 2018, for biomedical journals (based on title). Internet research, primarily focusing on information provided by the journal websites, explored PLS terminology (what do the journals call PLS), requirements (what articles are PLS generated for, who writes/reviews them, and at what stage), and location and sharing mechanisms (where/how the PLS are made available, are they free to access, and are they visible on PubMed).

Results
The methodology identified 10 journals from distinct publishers, plus eLIFE itself (N = 11). Impact factors ranged from 3.768 to 17.581. Nine different terms were used to describe PLS. Most of the journals (8/11) required PLS for at least all research articles. Authors were responsible for writing PLS in 9/11 cases. Seven journals required PLS on article submission; of the other four, one required PLS at revision and three on acceptance. The location/sharing mechanism for PLS varied: within articles, alongside articles (separate tab/link), and/or on separate platforms (eg, social media, dedicated website). PLS were freely available when they were published with articles; however, PLS were only included within conventional abstracts on PubMed for 2/11 journals.

Conclusion
Across the few biomedical journals producing PLS, our research suggests there is wide variation in terminology, location, sharing mechanisms, and PubMed visibility. We advocate a more consistent approach to ensure that PLS have appropriate prominence and can be easily found by their intended audiences.
Using ChatGPT to Improve the Presentation of Plain Language Summaries of Cochrane Systematic Reviews About Oncology Interventions: Cross-Sectional Study
Plain language summaries (PLSs) of Cochrane systematic reviews are a simple format for presenting medical information to the lay public. This is particularly important in oncology, where patients have a more active role in decision-making. However, current PLS formats often exceed the readability requirements for the general population. There is still a lack of cost-effective and more automated solutions to this problem. This study assessed whether a large language model (eg, ChatGPT) can improve the readability and linguistic characteristics of Cochrane PLSs about oncology interventions, without changing evidence synthesis conclusions. The dataset included 275 scientific abstracts and corresponding PLSs of Cochrane systematic reviews about oncology interventions. ChatGPT-4 was tasked with turning each scientific abstract into a PLS using 3 prompts as follows: (1) rewrite this scientific abstract into a PLS to achieve a Simple Measure of Gobbledygook (SMOG) index of 6, (2) rewrite the PLS from prompt 1 so it is more emotional, and (3) rewrite this scientific abstract so it is easier to read and more appropriate for the lay audience. ChatGPT-generated PLSs were analyzed for word count, level of readability (SMOG index), and linguistic characteristics using Linguistic Inquiry and Word Count (LIWC) software and compared with the original PLSs. Two independent assessors reviewed the conclusiveness categories of ChatGPT-generated PLSs and compared them with original abstracts to evaluate consistency. The conclusion of each abstract about the efficacy and safety of the intervention was categorized as conclusive (positive/negative/equal), inconclusive, or unclear. Group comparisons were conducted using the Friedman nonparametric test. ChatGPT-generated PLSs using the first prompt (SMOG index 6) were the shortest and easiest to read, with a median SMOG score of 8.2 (95% CI 8-8.4), compared with the original PLSs (median SMOG score 13.1, 95% CI 12.9-13.4). These PLSs had a median word count of 240 (95% CI 232-248) compared with the original PLSs' median word count of 364 (95% CI 339-388). The second prompt (emotional tone) generated PLSs with a median SMOG score of 11.4 (95% CI 11.1-12), again lower than the original PLSs. PLSs produced with the third prompt (write simpler and easier) had a median SMOG score of 8.7 (95% CI 8.4-8.8). ChatGPT-generated PLSs across all prompts demonstrated reduced analytical tone and increased authenticity, clout, and emotional tone compared with the original PLSs. Importantly, the conclusiveness categorization of the original abstracts was unchanged in the ChatGPT-generated PLSs. ChatGPT can be a valuable tool for simplifying medical information into PLS-style formats for lay audiences. More research is needed, including oversight mechanisms to ensure that the information is accurate, reliable, and culturally relevant for different audiences.
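The pipeline the study describes is straightforward to reproduce in outline. The sketch below paraphrases the three prompts and pairs the OpenAI Python client with textstat's SMOG implementation; the model identifier, request parameters, and the chaining of prompt 2 onto prompt 1's output are assumptions, not details confirmed by the paper.

```python
# A sketch of the three-prompt generation-and-scoring loop described above.
# The prompts paraphrase those reported in the study; the model name and
# client settings are assumptions for illustration.
import textstat
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PROMPTS = [
    "Rewrite this scientific abstract into a plain language summary that "
    "achieves a SMOG index of 6.",
    # In the study, prompt 2 rewrote the output of prompt 1, so the calls
    # would be chained rather than all applied to the raw abstract.
    "Rewrite the previous plain language summary so it is more emotional.",
    "Rewrite this scientific abstract so it is easier to read and more "
    "appropriate for the lay audience.",
]

def generate_pls(source_text: str, prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4",  # assumed; the study used ChatGPT-4
        messages=[{"role": "user", "content": f"{prompt}\n\n{source_text}"}],
    )
    return response.choices[0].message.content

# Illustrative usage:
# pls = generate_pls(abstract_text, PROMPTS[0])
# print(textstat.smog_index(pls))  # readability of the generated summary
```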
How do doctors and patients communicate about the treatment of systemic sclerosis-associated interstitial lung disease? A plain language summary of publication
Summary

What is this summary about?
Systemic sclerosis (SSc) is a condition that affects the immune system (the body’s natural defence system) and causes the skin to harden and thicken in large patches. Research shows that 30% to 90% of people with SSc also have interstitial lung disease (ILD), a condition that causes inflammation and scarring of the lungs. When people have SSc and ILD, it is known as SSc-associated ILD or SSc-ILD. The authors of this plain language summary of publication (PLS-P) reviewed different articles to find out what the key issues were in the way doctors and patients with SSc-ILD communicate with each other.

What were the results?
The key messages from the studies were:
● Most patients felt uneasy when they were diagnosed with SSc-ILD
● Good communication between doctors and patients at the first visit is crucial as it sets the tone for future relationships
● Both doctors and patients avoid talking about how SSc-ILD symptoms may get worse (prognosis) or the subject of death. Patients should be encouraged to ask questions to address important and personal topics that would not be talked about otherwise
● Patients may feel intimidated by a doctor, which could interfere with communication
● Doctors must be able to listen and show empathy to build a relationship with patients and be aware that different communication styles may suit a patient during different stages in their journey
● Doctors should avoid using a lot of technical terms. Patients felt metaphors helped them understand their condition better
● Patients have different awareness, thoughts, and feelings about SSc-ILD than doctors. If doctors understand this, it may improve the communication between doctors and patients
Ways to close the gap between the way doctors and patients communicate include patients having the opportunity to access:
● Self-learning and patient organizations
● Peer-mentoring (patients mentoring other patients)
● Information technology
● Shared decision-making, where the doctor and patient work together to come to a decision about treatment and care

What do the results mean?
The best way to improve the feelings patients have when they are diagnosed with SSc, including SSc-ILD, is to improve the quality of the communication between doctors and patients. The quality of the first meeting between a doctor and patient sets the tone for future checkups, especially if the doctor can listen, show empathy, and allow the patient to ask questions. Improving the patient’s knowledge about SSc-ILD, for example by using websites, reading printed materials, or taking part in peer-mentoring schemes, may also contribute to a better conversation.
Predicting response to benralizumab in patients with COPD: a plain language summary of publication of the GALATHEA and TERRANOVA studies
Summary

What is this summary about?
● This is a plain language summary of two articles originally published in The New England Journal of Medicine and The Lancet Respiratory Medicine. These articles presented the results of GALATHEA and TERRANOVA, two clinical studies that took place across 41 countries.
  ○ GALATHEA and TERRANOVA measured how patients’ COPD changed from before their first benralizumab (10, 30, or 100 mg) injection, to after 56 weeks of treatment.
  ○ In both studies, benralizumab was compared with placebo.
  ○ To see whether benralizumab treatment would benefit any particular patients included in these studies, researchers carried out an additional analysis following the main studies of GALATHEA and TERRANOVA.
The Corpus of Contemporary English Legal Decisions, 1950–2021 (CoCELD): A new tool for analysing recent changes in English legal discourse
Legal discourse is widely assumed to be resistant to change, and indeed legislative documents are extremely conservative, with fixed and formulaic structures. However, recent research has shown that changes can be observed in the lexico-grammatical features of some legal documents when examined diachronically, particularly since the emergence in the 1970s of the Plain Language Movement, which sought to draw attention to the unnecessary complexity of official language, including legal discourse. Despite these notable changes in legal language in recent years, research in this direction is scarce to date, particularly for the British English variety, probably due, in part, to the shortage of specialised corpora that allow this kind of study. In order to bridge this gap, we have embarked on the compilation of the Corpus of Contemporary English Legal Decisions (CoCELD), a corpus of British judicial decisions produced between 1950 and 2021. In this paper we present the structure and characteristics of CoCELD, as well as the methodology used for its compilation. The new corpus, which was released in February 2022, contains sample texts of roughly 2,500 words for each year from 1950 to 2021, adding up to more than 730,000 words. The corpus contains files in raw text and with POS-annotation, and is freely available to the research community under signed consent. With CoCELD we hope to contribute a new, useful resource for linguists with an interest in legal language, from both a synchronic and a diachronic perspective.
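As an illustration of the diachronic queries such a corpus enables, the sketch below counts occurrences of a formulaic legalism per 10,000 words in per-year text files. The directory layout and file naming are hypothetical; CoCELD's actual distribution format is not described here beyond raw text and POS-annotated files.

```python
# A hypothetical sketch of a diachronic corpus query: tracking the relative
# frequency of a formulaic legalism year by year. File layout is assumed.
import re
from pathlib import Path

def yearly_frequency(corpus_dir: str, term: str) -> dict[int, float]:
    """Return occurrences of `term` per 10,000 words for each year file."""
    freqs = {}
    for path in sorted(Path(corpus_dir).glob("*.txt")):  # e.g. 1950.txt ... 2021.txt
        year = int(path.stem)
        tokens = re.findall(r"[A-Za-z']+", path.read_text(encoding="utf-8"))
        hits = sum(1 for t in tokens if t.lower() == term)
        freqs[year] = 10_000 * hits / max(len(tokens), 1)
    return freqs

# Illustrative usage: has "hereinafter" declined since the Plain Language Movement?
# print(yearly_frequency("coceld/raw", "hereinafter"))
```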
Something for everyone
Journals and other scientific organizations produce a wide variety of plain-language summaries.