Catalogue Search | MBRL
Explore the vast range of titles available.
103 result(s) for "understandability"
An empirical study on software understandability and its dependence on code characteristics
2023
Context: Insufficient code understandability makes software difficult to inspect and maintain and is a primary cause of software development cost. Several source code measures may be used to identify difficult-to-understand code, including well-known ones, such as Lines of Code and McCabe’s Cyclomatic Complexity, and novel ones, such as Cognitive Complexity.
Objective: We investigate whether and to what extent source code measures, individually or together, are correlated with code understandability.
Method: We carried out an empirical study with students who were asked to carry out realistic maintenance tasks on methods from real-life Open Source Software projects. We collected several data items, including the time needed to correctly complete the maintenance tasks, which we used to quantify method understandability. We investigated the presence of correlations between the collected code measures and code understandability by using several Machine Learning techniques.
Results: We obtained models of code understandability using one or two code measures. However, the obtained models are not very accurate, the average prediction error being around 30%.
Conclusions: Based on our empirical study, it does not appear possible to build an understandability model based on structural code measures alone. Specifically, even the newly introduced Cognitive Complexity measure does not seem able to fulfill the promise of providing substantial improvements over existing measures, at least as far as code understandability prediction is concerned. It seems that, to obtain models of code understandability of acceptable accuracy, process measures should be used, possibly together with new source code measures that are better related to code understandability.
Journal Article
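The structural measures named in the abstract above can be computed mechanically. As an illustration only (not the paper's tooling), here is a minimal Python sketch that approximates McCabe-style cyclomatic complexity as one plus the number of branching constructs found in a function's AST:

```python
import ast

def cyclomatic_complexity(source: str) -> int:
    """Approximate McCabe's cyclomatic complexity: 1 plus the number
    of branching nodes (if/for/while/except/ternary/boolean ops)."""
    branch_nodes = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                    ast.IfExp, ast.BoolOp)
    count = 1
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, branch_nodes):
            count += 1
    return count

snippet = """
def classify(x):
    if x < 0:
        return "negative"
    for i in range(x):
        if i % 2 == 0:
            pass
    return "done"
"""
print(cyclomatic_complexity(snippet))  # two ifs + one for -> 4
```

Production tools count more cases (e.g., match arms, comprehension conditions), but the sketch shows why such counts track structural rather than cognitive complexity.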
Comparing the understandability of iteration mechanisms over Collections in Java
2023
Source code understandability is a desirable quality factor affecting long-term code maintenance. Understandability of source code can be assessed in a variety of ways, including subjective evaluation of code fragments (perceived understandability), correctness, and response time for tasks performed. It can also be assessed using various source code metrics, such as cyclomatic complexity or cognitive complexity. Programming languages are evolving, giving programmers new ways to do the same things, e.g., iterating over collections. Functional solutions (lambda expressions and streams) are added to typical imperative constructs like iterators or loop statements. This research aims to check whether there is a correlation between perceived understandability, understandability measured by task correctness, and that predicted by source code metrics, for typical tasks that require iteration over collections implemented in Java. The answer is based on the results of an experiment involving 99 participants of varying ages, self-declared Java knowledge, and seniority measured in years. Functional code was perceived as the most understandable, but in only one case was the subjective assessment confirmed by the correctness of answers. In two examples with the highest perceived understandability, streams received the worst correctness scores. Cognitive complexity and McCabe’s complexity had the lowest values in all tasks for the functional approach, but, unfortunately, they did not correlate with answer correctness. The main finding is that the functional approach to collection manipulation is the best choice for the filter-map-reduce idiom and its alternatives (e.g., filter-only). It should not be used in more complex tasks, especially those with higher complexity metrics.
Journal Article
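The study above contrasts Java's imperative iteration with functional streams. The same filter-map-reduce idiom can be sketched in Python (an analogue for illustration, not the study's Java materials) to show the two styles side by side:

```python
from functools import reduce

orders = [{"total": 120, "paid": True},
          {"total": 80,  "paid": False},
          {"total": 200, "paid": True}]

# Imperative style: explicit loop with an accumulator variable.
total_paid = 0
for order in orders:
    if order["paid"]:
        total_paid += order["total"]

# Functional style: filter -> map -> reduce pipeline.
total_paid_fn = reduce(
    lambda acc, t: acc + t,
    map(lambda o: o["total"], filter(lambda o: o["paid"], orders)),
    0,
)

assert total_paid == total_paid_fn == 320
```

Both compute the same value; the study's point is that which form is *understood* better depends on the idiom's complexity, not just on metric scores.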
Do RESTful API design rules have an impact on the understandability of Web APIs?
by
Bogner, Justus
,
Kotstein, Sebastian
,
Pfaff, Timo
in
Application programming interface
,
Best practice
,
Design
2023
Context: Web APIs are one of the most used ways to expose application functionality on the Web, and their understandability is important for efficiently using the provided resources. While many API design rules exist, empirical evidence for the effectiveness of most rules is lacking.
Objective: We therefore wanted to study 1) the impact of RESTful API design rules on understandability, 2) whether rule violations are also perceived as more difficult to understand, and 3) whether demographic attributes like REST-related experience have an influence on this.
Method: We conducted a controlled Web-based experiment with 105 participants, from both industry and academia and with different levels of experience. Based on a hybrid between a crossover and a between-subjects design, we studied 12 design rules using API snippets in two complementary versions: one that adhered to a rule and one that violated it. Participants answered comprehension questions and rated the perceived difficulty.
Results: For 11 of the 12 rules, we found that the violation version performed significantly worse than the rule version for the comprehension tasks. Regarding the subjective ratings, we found significant differences for 9 of the 12 rules, meaning that most violations were subjectively rated as more difficult to understand. Demographics played no role in the comprehension performance for violations.
Conclusions: Our results provide first empirical evidence for the importance of following design rules to improve the understandability of Web APIs, which is important for researchers, practitioners, and educators.
Journal Article
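The abstract does not enumerate the 12 rules studied, so as a purely hypothetical illustration of a "rule vs. violation" pair, here is one commonly cited REST convention (CRUD verbs should not appear in URI paths, since the HTTP method already conveys the action) expressed as a small checker:

```python
import re

# Illustrative only: this verb set and rule are a common convention,
# not necessarily one of the study's 12 rules.
CRUD_VERBS = {"get", "create", "update", "delete", "remove", "add"}

def violates_verb_rule(path: str) -> bool:
    """Return True if a URI path segment contains a CRUD-style verb."""
    segments = re.split(r"[/_-]", path.strip("/").lower())
    return any(seg in CRUD_VERBS for seg in segments)

assert not violates_verb_rule("/users/42/orders")   # rule version
assert violates_verb_rule("/users/42/get-orders")   # violation version
```

Pairs like these (`/users/42/orders` vs. `/users/42/get-orders`) are the kind of complementary snippets a comprehension experiment can present to participants.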
Understanding experiments and research practices for reproducibility: an exploratory study
2021
Scientific experiments and research practices vary across disciplines. The research practices followed by scientists in each domain play an essential role in the understandability and reproducibility of results. The “Reproducibility Crisis”, where researchers find difficulty in reproducing published results, is currently faced by several disciplines. To understand the underlying problem in the context of the reproducibility crisis, it is important to first know the different research practices followed in each domain and the factors that hinder reproducibility. We performed an exploratory study by conducting a survey addressed to researchers representing a range of disciplines to understand scientific experiments and research practices for reproducibility. The survey findings identify a reproducibility crisis and a strong need for sharing data, code, methods, steps, and negative and positive results. Insufficient metadata, lack of publicly available data, and incomplete information in study methods are considered to be the main reasons for poor reproducibility. The survey results also address a wide range of research questions on the reproducibility of scientific results. Based on the results of our exploratory study and supported by the existing published literature, we offer general recommendations that could help the scientific community to understand, reproduce, and reuse experimental data and results in the research data lifecycle.
Journal Article
Navigating the Whipple procedure patient educational materials: A readability and understandability analysis
by
Castillo-Angeles, Manuel
,
Brandao, Gabriela Rangel
,
Ponce, Cristina
in
Actionability
,
Algorithms
,
Cancer
2025
Pancreatoduodenectomy (PD) is a complex surgical procedure that is challenging to understand. We aimed to assess the readability, understandability, and actionability of online patient-directed materials for PD.
We evaluated online patient-focused educational materials about PD. Through the Leapfrog ratings for “Pancreatic Resection for Cancer” we classified high-volume (HV) and low-volume (LV) hospitals. Readability was measured using multiple tools. Understandability and actionability were measured using the Patient Education Materials Assessment Tool for Printable materials (PEMAT-P). As an external control source of comparison, we analyzed the patient materials from three patient-focused organizations.
Out of 550 HV hospitals, 10% had any online patient educational material about PD. Readability was at a median grade level of 12 (IQR 4). All exceeded the recommended sixth-grade readability level. Websites solely focused on PD had significantly higher understandability and actionability scores compared to those where the procedure was only a section within a broader topic such as pancreatic cancer. There was no difference in readability, understandability, or actionability among the HV, LV, and control groups.
Online patient materials for PD were scarce, lengthy, and difficult to comprehend, yet their understandability is crucial for patient education. Simplifying patient materials with clear guidance, visuals, and plain language, along with research focused on such strategies, is needed.
• There are limited online resources specifically for patients undergoing PD.
• The median readability level was 12th grade.
• There was no readability difference among high-volume, low-volume, and control groups.
• Actionability scores were very low for all groups.
• Websites dedicated solely to the procedure rather than the disease had higher scores.
Journal Article
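The grade levels reported above come from standard readability formulas. As one example (the abstract says only that "multiple tools" were used), the widely used Flesch-Kincaid grade level can be computed from raw text counts:

```python
def flesch_kincaid_grade(words: int, sentences: int, syllables: int) -> float:
    """Flesch-Kincaid grade level from raw counts (standard formula):
    0.39 * (words/sentence) + 11.8 * (syllables/word) - 15.59."""
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

# Hypothetical sample: 100 words across 5 sentences with 160 syllables.
grade = flesch_kincaid_grade(100, 5, 160)
print(round(grade, 1))
```

A result above 6.0 exceeds the sixth-grade level recommended for patient materials, which is exactly the comparison the study makes.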
Accuracy, comprehensiveness and understandability of AI-generated answers to questions from people with COPD: the AIR-COPD Study
by
Powell, Pippa
,
Aliverti, Andrea
,
Pinnock, Hilary
in
Accuracy
,
Analysis
,
Artificial intelligence
2025
Background
Chronic obstructive pulmonary disease (COPD) remains an underestimated and underdiagnosed condition due to low disease awareness. Generative Artificial Intelligence (AI) chatbots are convenient and accessible sources of medical information, but the quality of their answers to patient-generated questions about COPD has not been evaluated to date.
Objective
To assess and compare accuracy, comprehensiveness, understandability and reliability of different AI chatbots in response to patient-generated questions on the clinical management of COPD.
Methods
A cross-sectional study was conducted in collaboration with the European Respiratory Society (ERS), the European Lung Foundation (ELF), and the ERS CONNECT Clinical Research Collaboration (CRC). Fifteen real questions formulated by ELF COPD patient representatives were divided into three difficulty tiers (easy, medium, difficult) and submitted to ChatGPT (version 3.5), Bard, and Copilot. Experts assessed accuracy and comprehensiveness on a 0–10 scale; patients assessed understandability using the same scale. Reliability was assessed by two investigators. Reviewers were blinded to which AI system generated the answers, and only those who completed all evaluations were included in the analysis.
Results
ChatGPT responses were the most reliable (14/15), followed by Copilot (12/15) and Bard (11/15). ChatGPT scored higher for accuracy (8.0 [7.0–9.0]) and comprehensiveness (8.0 [6.8–9.0]) than Bard (6.0 [5.0–8.0] and 6.0 [5.0–7.0]) and Copilot (6.0 [5.0–7.3] and 6.0 [5.0–8.0]) (both P < 0.001). Understandability was similar across all three chatbots (ChatGPT: 8.0 [8.0–10.0]; Bard: 9.0 [8.0–10.0]; Copilot: 9.0 [8.0–10.0]) (P = 0.53). No significant effect of question difficulty was detected.
Conclusion
Our findings suggest that AI chatbots, particularly ChatGPT, can provide accurate, comprehensive and understandable answers to patients’ questions.
Journal Article
Reshaping Smart Cities through NGSI-LD Enrichment
by
Sánchez, Luis
,
Sotres, Pablo
,
Lanza, Jorge
in
Analysis
,
Computational linguistics
,
data enrichment
2024
The vast amount of information stemming from the deployment of the Internet of Things and open data portals is poised to provide significant benefits for both the private and public sectors, such as the development of value-added services or an increase in the efficiency of public services. This is further enhanced by the potential of semantic information models such as NGSI-LD, which enable the enrichment and linkage of semantic data, strengthened by the contextual information they carry by definition. In this scenario, advanced data processing techniques need to be defined and developed for the processing of harmonised datasets and data streams. Our work is based on a structured approach that leverages the principles of linked-data modelling and semantics, as well as a data enrichment toolchain framework developed around NGSI-LD. Within this framework, we reveal the potential for enrichment and linkage techniques to reshape how data are exploited in smart cities, with a particular focus on citizen-centred initiatives. Moreover, we showcase the effectiveness of these data processing techniques through specific examples of entity transformations. The findings, which focus on improving data comprehension and bolstering smart city advancements, set the stage for the future exploration and refinement of the symbiosis between semantic data and smart city ecosystems.
Journal Article
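NGSI-LD, the model the paper builds on, represents entities as JSON-LD objects whose attributes are typed as Property or Relationship. A minimal sketch of the kind of entity enrichment described (entity URNs and attribute names here are invented for illustration):

```python
import json

# A minimal NGSI-LD-style entity (illustrative IDs and attributes).
entity = {
    "id": "urn:ngsi-ld:ParkingSpot:santander:001",
    "type": "ParkingSpot",
    "status": {"type": "Property", "value": "free"},
    "@context": "https://uri.etsi.org/ngsi-ld/v1/ngsi-ld-core-context.jsonld",
}

def enrich_with_relationship(ent: dict, name: str, target_urn: str) -> dict:
    """Return a copy of the entity with an added NGSI-LD Relationship
    attribute linking it to another entity."""
    enriched = dict(ent)
    enriched[name] = {"type": "Relationship", "object": target_urn}
    return enriched

linked = enrich_with_relationship(
    entity, "nearbyPOI", "urn:ngsi-ld:PointOfInterest:santander:042")
print(json.dumps(linked, indent=2))
```

Linking entities this way is what lets downstream consumers traverse from one harmonised dataset (parking) to another (points of interest) without bespoke joins.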
Improving the Understandability of Clinical Guidelines: Development and Evaluation of a GPT-4–Based Pipeline
by
Jones, Matthew D
,
Torgbi, Melissa
,
Tayyar Madabushi, Harish
in
AI Language Models in Health Care
,
Case studies
,
Clinical Information and Decision Making
2026
Difficulty in finding and understanding information in clinical guidelines contributes to medication errors. Large language models (LLMs) can simplify complex text to aid in understanding, but this approach to improving the quality of guidelines has not been investigated. However, LLMs are also known to hallucinate or generate outputs that may not align with reality.
This study aimed to develop and evaluate an LLM pipeline to improve the readability of clinical guidelines while ensuring the preservation of critical content.
To align LLM revisions with research evidence and to enable comparison with manual editing, the National Health Service Injectable Medicines Guide (IMG) was used as a case study: a GPT-4-based pipeline was applied to it, with prompts based on recommendations for IMG authors derived from user testing. This enabled readability comparisons between several versions of each IMG guideline: the original; a manually revised version; a version revised by GPT-4 using the user-testing-derived recommendations; and a fully user-tested version. Readability was evaluated using readability metrics and ratings from 3 expert pharmacists. Content similarity before and after LLM revision was assessed using BERT (bidirectional encoder representations from transformers) scores and expert pharmacist review.
Considering 20 IMG guidelines used in practice, BERT scores indicated high semantic similarity between the original and LLM-revised guidelines (0.88-0.96). An omission, addition, or change in meaning was identified by at least one pharmacist in 30 (20%), 7 (5%), and 18 (12%) of the 153 guideline subsections, respectively. The SMOG (Simple Measure of Gobbledygook) grade showed a small but significant improvement in readability for the LLM-revised guidelines (mean difference 0.32, 95% CI 0.10-0.55; P=.02) and the manually revised versions (mean difference 0.46, 95% CI 0.13-0.79; P=.03). There was no significant difference between the LLM and manually revised versions (P>.99). There were no significant differences between Flesch-Kincaid reading grades (P=.91). Expert ratings favored the LLM-revised versions for understandability. Considering 2 IMG guidelines from previous research, user testing produced a greater improvement in readability than LLM revision.
Authors should not use current LLMs to modify clinical guidelines without carefully checking the revised text for unintended omissions, additions, or changes in meaning. Further work should investigate the potential of LLMs to augment manual user testing and reduce the barriers to the wider use of this approach to improve the safety of clinical guidelines.
Journal Article
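The SMOG grade used in the study above is a standard readability formula based on the count of polysyllabic words (3+ syllables), normalised to a 30-sentence sample. A minimal implementation:

```python
import math

def smog_grade(polysyllable_words: int, sentences: int) -> float:
    """SMOG grade (standard formula): 1.0430 * sqrt(polysyllables
    normalised to 30 sentences) + 3.1291."""
    return 1.0430 * math.sqrt(polysyllable_words * (30 / sentences)) + 3.1291

# Hypothetical sample: 30 polysyllabic words across 30 sentences.
print(round(smog_grade(30, 30), 1))
```

The study's reported mean difference of 0.32 is in these grade-level units, which is why the improvement is statistically significant yet small in practical terms.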
Creative Accounting Determinants and Financial Reporting Quality: Systematic Literature Review
by
Hasan, Elina F.
,
Hussin, Nazimah
,
Haddad, Hossam
in
Accounting policies
,
Accounting procedures
,
Corporate growth
2022
Creative accounting is considered a 21st-century phenomenon that has received increased attention after the worldwide economic crisis and budget deficits, particularly regarding the prevention and detection of accounting manipulation. Creative accounting is a practice that influences financial indicators by using accounting knowledge and rules that do not explicitly violate accounting policies, rules, and laws. The main purpose of creative accounting is to show the financial position desired by company management; stakeholders are informed of what the management wants them to perceive. Creative accounting can be used to distort financial information from its correct and accurate form by exploiting existing rules or, in many cases, ignoring one or more of them. The present work contributes to the existing literature by systematically reviewing the impacts of creative accounting determinants on financial reporting quality, especially in the banking sector. In this review, we describe and critically analyze previous relevant works to identify and assess the relationships between the constructs addressed in the study. In conclusion, this study offers insight for academia, researchers, and practitioners on determining creative accounting practices and their influence on fraudulent financial reporting between 2015 and 2020. Lastly, the present study contributes to existing knowledge by conducting new research on creative accounting determinants to enhance the quality of financial reporting and thereby help professionals improve practices within the profession.
Journal Article