Catalogue Search | MBRL
15 result(s) for "Flesch reading ease score"
Enhancing Software Comments Readability Using Flesch Reading Ease Score
2020
Comments are used to explain the meaning of code and ease communications between programmers themselves, quality assurance auditors, and code reviewers. A tool has been developed to help programmers write readable comments and measure their readability level. It is used to enhance software readability by providing alternatives to both keywords and comment statements from a local database and an online dictionary. It is also a word-finding query engine for developers. Readability level is measured using three different formulas: the fog index, the Flesch reading ease score, and Flesch–Kincaid grade levels. A questionnaire has been distributed to 42 programmers and 35 students to compare the readability aspect between both new comments written by the tool and the original comments written by previous programmers and developers. Programmers stated that the comments from the proposed tool had fewer complex words and took less time to read and understand. Nevertheless, this did not significantly affect the understandability of the text, as programmers normally have quite a high level of English. However, the results from students show that the tool affects the understandability of text and the time taken to read it, while text complexity results show that the tool makes new comment text that is more readable by changing the three studied variables.
Journal Article
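Several records in these results score text with the fog index, the Flesch Reading Ease Score (FRES), and the Flesch-Kincaid Grade Level. The formulas below are the standard published ones; the vowel-group syllable counter is a naive approximation for illustration only (real readability tools use pronunciation dictionaries or better heuristics):

```python
import re

def count_syllables(word: str) -> int:
    # Naive heuristic: count groups of consecutive vowels;
    # every word is credited with at least one syllable.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def readability_scores(text: str) -> dict:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    n_sent = max(1, len(sentences))
    syllables = sum(count_syllables(w) for w in words)
    complex_words = sum(1 for w in words if count_syllables(w) >= 3)
    wps = len(words) / n_sent       # words per sentence
    spw = syllables / len(words)    # syllables per word
    return {
        # Flesch Reading Ease: higher = easier
        "flesch_reading_ease": 206.835 - 1.015 * wps - 84.6 * spw,
        # Flesch-Kincaid Grade Level: approximate US school grade
        "flesch_kincaid_grade": 0.39 * wps + 11.8 * spw - 15.59,
        # Gunning fog index: grade level from sentence length and complex words
        "gunning_fog": 0.4 * (wps + 100 * complex_words / len(words)),
    }

scores = readability_scores("The cat sat on the mat. It was a sunny day.")
```

Simple, short-sentence text like the sample above scores very high on FRES and near zero on the grade-level indices.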
Evaluating the readability of recruitment materials in veterinary clinical research
by Quigley, Mindy; McKenna, Charly; Webb, Tracy L.
in biomedical research; client‐owned animal; Clinical trials
2023
Abstract
Background
Owner comprehension is vital to recruitment and study success, but limited information exists regarding the readability of public-facing veterinary clinical trial descriptions.
Objectives
The current study sought to evaluate the readability of public-facing online veterinary clinical trial descriptions from academic institutions and private referral practices.
Animals
None.
Methods
This prospective study assessed readability in a convenience sample of veterinary clinical trial study descriptions using 3 common methods: the Flesch-Kincaid Grade Level (F-K), Flesch Reading Ease Score (FRES), and online Automatic Readability Checker (ARC). Results were compared across specialties and between academic and private institutions.
Results
Grade level and readability consensus scores (RCSs) were obtained for 61 online clinical trial descriptions at universities (n = 49) and private practices (n = 12). Average grade-level RCS for study descriptions was 14.13 (range, 9-21). Using Microsoft Word, the FRES score was higher in descriptions from universities compared to private practices (P = .03), and F-K scores were lower in university compared to private practice descriptions (P = .03). FRES (P = .07), F-K (P = .12), and readability consensus (P = .17) scores obtained from ARC were not different between institution types. Forty-eight studies (79%) had RCSs over 12, equivalent to reading material at college or graduate school levels.
Conclusions and Clinical Importance
Similar to other areas in veterinary communication, the evaluated veterinary clinical trial descriptions used for advertising and recruitment far exceeded the recommended 6th-grade reading level for medical information. Readability assessments are straightforward to conduct, and ensuring health literacy should be a customary best practice in veterinary medicine and clinical research.
Journal Article
Evaluating ChatGPT’s ability to simplify scientific abstracts for clinicians and the public
by Phadke, Chetan; Dogru-Huzmeli, Esra; Shafiee, Erfan
in 692/308; 692/700; Abstracting and Indexing
2025
This study evaluated ChatGPT’s ability to simplify scientific abstracts for both public and clinician use. Ten questions were developed to assess ChatGPT’s ability to simplify scientific abstracts and improve their readability for both the public and clinicians. These questions were applied to 43 abstracts. The abstracts were selected through a convenience sample from Google Scholar by four interdisciplinary reviewers from physiotherapy, occupational therapy, and nursing backgrounds. Each abstract was summarized by ChatGPT on two separate occasions. These summaries were then reviewed independently by two different reviewers. Flesch Reading Ease scores were calculated for each summary and original abstract. A subgroup analysis explored differences in accuracy, clarity, and consistency across various study designs. ChatGPT’s summaries scored higher on the Flesch Reading Ease test than the original abstracts in 31 out of 43 papers, showing a significant improvement in readability (p = 0.005). Systematic reviews and meta-analyses consistently received higher scores for accuracy, clarity, and consistency, while clinical trials scored lower across these parameters. Despite its strengths, ChatGPT showed limitations in “Hallucination presence” and “Technical terms usage,” scoring below 7 out of 10. Hallucination rates varied by study type, with case reports having the lowest scores. Reviewer agreement across parameters demonstrated consistency in evaluations. ChatGPT shows promise for translating knowledge in clinical settings, helping to make scientific research more accessible to non-experts. However, its tendency toward hallucinations and technical jargon requires careful review by clinicians, patients, and caregivers. Further research is needed to assess its reliability and safety for broader use in healthcare communication.
Journal Article
Readability of consent forms in veterinary clinical research
2019
Abstract
Background
“Readability” of consent forms is vital to the informed consent process. The average human hospital consent form is written at a 10th grade reading level, whereas the average American adult reads at an 8th grade level. Limited information currently exists regarding the readability of veterinary general medical or clinical research consent forms.
Hypothesis/Objectives
The goal of this study was to assess the readability of veterinary clinical trial consent forms from a group of veterinary referral centers recently involved in a working group focused on veterinary clinical trial review and consent. We hypothesized that consent forms would not be optimized for client comprehension and would be written above the National Institutes of Health-recommended 6th grade reading level.
Animals
None.
Methods
This was a prospective study assessing a convenience sample of veterinary clinical trial consent forms. Readability was assessed using 3 methods: the Flesch-Kincaid (F-K) Grade Level, Flesch Reading Ease Score (FRES), and the Readability Test Tool (RTT). Results were reported as mean (±SD) and compared across specialties.
Results
Fifty-three consent forms were evaluated. Mean FRES was 37.5 ± 6.0 (target 60 or higher). Mean F-K Grade Level was 13.0 ± 1.2 and mean RTT grade level was 12.75 ± 1.1 (target 6.0 or lower). There was substantial agreement between F-K and RTT grade level scores (intraclass correlation coefficient 0.8).
Conclusions and Clinical Importance
No form evaluated met current health literacy recommendations for readability. A simple and readily available F-K Microsoft-based approach for evaluating grade level was in substantial agreement with other methods, suggesting that this approach might be sufficient for use by clinicians and administrators drafting forms for future studies.
Journal Article
A Readability-Driven Curriculum Learning Method for Data-Efficient Small Language Model Pretraining
by Kim, Juae; Kim, Suyun; Park, Jungwon
in Curricula; Curriculum development; curriculum learning
2025
Large language models demand substantial computational and data resources, motivating approaches that improve the training efficiency of small language models. While curriculum learning methods based on linguistic difficulty measures have been explored as a potential solution, prior approaches that rely on complex linguistic indices are often computationally expensive, difficult to interpret, or fail to yield consistent improvements. Moreover, existing methods rarely incorporate the cognitive and linguistic efficiency observed in human language acquisition. To address these gaps, we propose a readability-driven curriculum learning method based on the Flesch Reading Ease (FRE) score, which provides a simple, interpretable, and cognitively motivated measure of text difficulty. Across two dataset configurations and multiple curriculum granularities, our method yields consistent improvements over baseline models without curriculum learning, achieving substantial gains on BLiMP and MNLI. Reading behavior evaluations also reveal human-like sensitivity to textual difficulty. These findings demonstrate that a lightweight, interpretable curriculum design can enhance small language models under strict data constraints, offering a practical path toward more efficient training.
Journal Article
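The curriculum idea in the record above (ordering pretraining text from easy to hard by Flesch Reading Ease) can be sketched minimally. This is an illustrative reconstruction of the general scheme, not the authors' implementation, and the vowel-group syllable heuristic is approximate:

```python
import re

def flesch_reading_ease(text: str) -> float:
    # Standard FRE formula; syllables approximated by counting vowel groups.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syll = sum(max(1, len(re.findall(r"[aeiouy]+", w.lower()))) for w in words)
    return 206.835 - 1.015 * len(words) / max(1, len(sentences)) - 84.6 * syll / len(words)

def curriculum_order(corpus: list[str]) -> list[str]:
    # Easiest (highest FRE) first, mimicking easy-to-hard curriculum scheduling.
    return sorted(corpus, key=flesch_reading_ease, reverse=True)

docs = [
    "Photosynthesis converts electromagnetic radiation into chemical potential energy.",
    "The dog ran. The dog sat.",
]
ordered = curriculum_order(docs)
```

A real pipeline would bucket documents into difficulty tiers rather than fully sorting, so each training stage still sees a mix of topics.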
Reverse Osmosis Membrane Engineering: Multidirectional Analysis Using Bibliometric, Machine Learning, Data, and Text Mining Approaches
by Aytaç, Ersin; Khayet, Mohamed; Ibrahim, Yazan
in Artificial intelligence; Bibliometrics; Biblioshiny
2024
Membrane engineering is a complex field involving the development of the most suitable membrane process for specific purposes and dealing with the design and operation of membrane technologies. This study analyzed 1424 articles on reverse osmosis (RO) membrane engineering from the Scopus database to provide guidance for future studies. The results show that since the first article was published in 1964, the domain has gained popularity, especially since 2009. Thin-film composite (TFC) polymeric material has been the primary focus of RO membrane experts, with 550 articles published on this topic. The use of nanomaterials and polymers in membrane engineering is also high, with 821 articles. Common problems such as fouling, biofouling, and scaling have been a major focus of dedicated work, with 324 articles published on these issues. Wang J. leads in the number of published articles (73), while Gao C. leads in the other metrics. Journal of Membrane Science is the most preferred venue for publications on RO membrane engineering and related technologies. Analysis of author social networks shows five core clusters, with the dominant cluster comprising four researchers. The analysis of sentiment, subjectivity, and emotion indicates that abstracts are positively perceived, objectively written, and emotionally neutral.
Journal Article
Readability of Online Materials Related to Vocal Cord Leukoplakia
by Best, Simon; Snow, Grace E.; Shneyderman, Matthew
in Flesch Reading Ease Score; Flesch‐Kincaid Grade Level; Oral diseases
2021
Objectives
To assess readability and understandability of online materials for vocal cord leukoplakia.
Study Design
Review of online materials.
Setting
Academic medical center.
Methods
A Google search of “vocal cord leukoplakia” was performed, and the first 50 websites were considered for analysis. Readability was measured by the Flesch Reading Ease Score (FRES), Flesch-Kincaid Grade Level (FKGL), and Simple Measure of Gobbledygook (SMOG). Understandability and actionability were assessed by 2 independent reviewers with the PEMAT-P (Patient Education Materials Assessment Tool for Printable Materials). Unpaired t tests compared scores between sites aimed at physicians and those at patients, and a Cohen’s kappa was calculated to measure interrater reliability.
Results
Twenty-two websites (17 patient oriented, 5 physician oriented) met inclusion criteria. For the entire cohort, FRES, FKGL, and SMOG scores (mean ± SD) were 36.90 ± 20.65, 12.96 ± 3.28, and 15.65 ± 3.57, respectively, indicating that materials were difficult to read at a >12th-grade level. PEMAT-P understandability and actionability scores were 73.65% ± 7.05% and 13.63% ± 22.47%. Statistically, patient-oriented sites were more easily read than physician-oriented sites (P < .02 for each of the FRES, FKGL, and SMOG comparisons); there were no differences in understandability or actionability scores between these categories of sites.
Conclusion
Online materials for vocal cord leukoplakia are written at a level more advanced than what is recommended for patient education materials. Awareness of the current ways that these online materials are failing our patients may lead to improved education materials in the future.
Journal Article
Enhancing Readability in Construction Safety Reports Using a Two-Step Quantitative Analysis Approach
by Kim, Hoyoung; Mun, Hyeongjun; Kumi, Louis
in Accident prevention; Analysis; Anderson–Darling goodness-of-fit test
2025
This study addresses the limitations of South Korea’s Design for Safety (DfS) reports, which are a critical component of construction safety reports (CSRs) but rely heavily on text, limiting readability and visual comprehension. While previous studies have highlighted the readability challenges in construction safety documents, few have quantitatively combined layout and readability assessments using objective metrics. To enhance information delivery, this research proposes an improved CSR format and quantitatively evaluates its effectiveness compared to the conventional format. A two-step analysis was conducted using document layout analysis, pixel-based methods, and the Flesch Reading Ease Score (FRES) to assess layout and readability. The results showed that conventional CSRs consist of nearly 100% text, while the improved format integrates approximately 70% images and 30% text, enhancing visual clarity without altering content. The improved format achieved a higher average FRES score of 50.24 compared to 44.52 for the conventional format, indicating a 1.12-fold increase in readability. These findings suggest that the improved CSR format significantly enhances comprehension and information delivery. The proposed quantitative analysis method offers a practical approach for evaluating and improving document design in construction safety, and it can be applied to other fields to improve the effectiveness of written communication.
Journal Article
Pleasure reading and reading rate gains
2014
This study investigated the effects of (a) the amount of pleasure reading completed, (b) the type of texts read (i.e., simplified or unsimplified books), and (c) the level of simplified texts read by 14 Japanese university students who made the largest reading rate gains over one academic year. The findings indicated that the participants who made the greatest fluency gains read an average of 208,607 standard words and primarily read simplified texts up to the 1,600-headword level. This study also provides an empirically supported criterion for the minimum amount learners should read annually (i.e., 200,000 standard words), provides direct evidence that simplified texts are more effective than unsimplified texts for reading rate development, and is the first study to provide empirical evidence that reading lower-level simplified texts within learners’ linguistic competence is effective for developing the reading rates of Japanese learners at a lower-intermediate reading proficiency level.
Journal Article
Analysis of Internet-Based Patient Education Materials Related to Pituitary Tumors
2014
The Internet has become a primary and ubiquitous information source for patient education material (PEM); however, the information provided may not be appropriate for the average patient to comprehend. Various national healthcare organizations have recommended that PEM be written at or below the sixth-grade level. The purpose of this study was to assess the readability of pituitary tumor-related PEMs available on the Internet.
Fifty-one PEMs on pituitary tumors were downloaded from professional society and clinical practice websites. Analysis of readability was performed using 4 different readability indices: Flesch-Kincaid Grade Level (FKGL), Flesch Reading Ease Score (FRES), Simple Measure of Gobbledygook (SMOG), and Gunning Frequency Measure of Gobbledygook (Gunning FOG).
Scores from the FKGL, SMOG, and Gunning FOG scales correspond to reading grade levels; a higher number therefore indicates greater difficulty and lower readability. The average grade levels of the PEMs according to these indices were as follows: FKGL = 11.71 (11th to 12th grades), SMOG = 14.56 (college level), and Gunning FOG = 14.86 (college level). For the FRES, higher scores imply easier readability; the average FRES was 40.19 (fairly difficult, between the 10th- and 11th-grade levels).
These findings suggest that online pituitary tumor-related material may be too difficult for comprehension by the majority of the targeted patient population. Keeping the reading level of PEMs at or below the sixth grade may improve understanding of this disease and its management for pituitary tumor patients.
Journal Article
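As a reading aid for the FRES numbers quoted throughout these abstracts, here is a sketch of one common version of Flesch's interpretation table; band boundaries, labels, and grade equivalents vary somewhat between sources, and some of the studies above report slightly different grade mappings:

```python
def fres_band(score: float) -> str:
    # One common version of Flesch's interpretation table;
    # boundaries and labels differ slightly between sources.
    bands = [
        (90, "very easy (about 5th grade)"),
        (80, "easy (about 6th grade)"),
        (70, "fairly easy (about 7th grade)"),
        (60, "standard (8th-9th grade)"),
        (50, "fairly difficult (10th-12th grade)"),
        (30, "difficult (college)"),
    ]
    for cutoff, label in bands:
        if score >= cutoff:
            return label
    return "very difficult (college graduate)"
```

Under this table, the consent forms above (mean FRES 37.5) and the pituitary-tumor materials (mean FRES 40.19) both fall in the college-level band, well short of the 60-or-higher target several of the studies cite.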