Catalogue Search | MBRL
Explore the vast range of titles available.
22,136 result(s) for "Clinical Reasoning"
The Utility of Virtual Patient Simulations for Clinical Reasoning Education
by Onigata, Kazumichi; Watari, Takashi; Tokuda, Yasuharu
in Classroom response systems; Consent; Intervention
2020
Virtual Patient Simulations (VPSs) have been cited as a novel learning strategy, but there is little evidence that VPSs yield improvements in clinical reasoning skills and medical knowledge. This study aimed to clarify the effectiveness of VPSs for improving clinical reasoning skills among medical students, and to compare improvements in knowledge or clinical reasoning skills relevant to specific clinical scenarios. We enrolled 210 fourth-year medical students in March 2017 and March 2018 to participate in a real-time pre-post experimental design conducted in a large lecture hall using a clicker. A VPS program (Body Interact®, Portugal) was implemented for one two-hour class session using the same methodology during both years. A pre–post 20-item multiple-choice questionnaire (10 knowledge and 10 clinical reasoning items) was used to evaluate learning outcomes. A total of 169 students completed the program. Participants showed significant increases in average total post-test scores, both on knowledge items (pre-test: median = 5, mean = 4.78, 95% CI (4.55–5.01); post-test: median = 5, mean = 5.12, 95% CI (4.90–5.43); p-value = 0.003) and clinical reasoning items (pre-test: median = 5, mean = 5.3, 95% CI (4.98–5.58); post-test: median = 8, mean = 7.81, 95% CI (7.57–8.05); p-value < 0.001). Thus, VPS programs could help medical students improve their clinical decision-making skills without lecturer supervision.
Journal Article
Artificial intelligence based assessment of clinical reasoning documentation: an observational study of the impact of the clinical learning environment on resident documentation quality
by Haller, Matthew; Guzman, Benedict; Reinstein, Ilan
in Algorithms; Artificial Intelligence; Artificial intelligence in clinical reasoning education
2025
Background
Objective measures and large datasets are needed to determine aspects of the Clinical Learning Environment (CLE) impacting the essential skill of clinical reasoning documentation. Artificial Intelligence (AI) offers a solution. Here, the authors sought to determine what aspects of the CLE might be impacting resident clinical reasoning documentation quality assessed by AI.
Methods
In this observational, retrospective cross-sectional analysis of hospital admission notes from the Electronic Health Record (EHR), all categorical internal medicine (IM) residents who wrote at least one admission note during the study period July 1, 2018–June 30, 2023 at two sites of NYU Grossman School of Medicine's IM residency program were included. Clinical reasoning documentation quality of admission notes was classified as low- or high-quality using a supervised machine learning model. From note-level data, the shift (day or night) and note index within shift (whether a note was first, second, etc. within a shift) were calculated. These aspects of the CLE were included as potential markers of workload, which has been shown to have a strong relationship with resident performance. Patient data were also captured, including age, sex, Charlson Comorbidity Index, and primary diagnosis. The relationship between these variables and clinical reasoning documentation quality was analyzed using generalized estimating equations accounting for resident-level clustering.
Results
Across 37,750 notes authored by 474 residents, patients who were older, had more pre-existing comorbidities, and presented with certain primary diagnoses (e.g., infectious and pulmonary conditions) were associated with higher clinical reasoning documentation quality. When controlling for these and other patient factors, variables associated with clinical reasoning documentation quality included academic year (adjusted odds ratio, aOR, for high-quality: 1.10; 95% CI 1.06–1.15; P < .001), night shift (aOR 1.21; 95% CI 1.13–1.30; P < .001), and note index (aOR 0.93; 95% CI 0.90–0.95; P < .001).
Conclusions
AI can be used to assess complex skills such as clinical reasoning in authentic clinical notes that can help elucidate the potential impact of the CLE on resident clinical reasoning documentation quality. Future work should explore residency program and systems interventions to optimize the CLE.
Journal Article
AI-powered standardised patients: evaluating ChatGPT-4o’s impact on clinical case management in intern physicians
by Ülkü, Hilal Hatice; Öncü, Selcen; Torun, Fulya
in Adult; Artificial Intelligence; Artificial intelligence in clinical reasoning education
2025
Background
Artificial Intelligence is currently being applied in healthcare for diagnosis, decision-making and education. ChatGPT-4o, with its advanced language and problem-solving capabilities, offers an innovative alternative as a virtual standardised patient in clinical training. Intern physicians are expected to develop clinical case management skills such as problem-solving, clinical reasoning and crisis management. In this study, ChatGPT-4o served as a virtual standardised patient, with medical interns acting as physicians in clinical case management. The study aimed to evaluate intern physicians' competencies in clinical case management (problem-solving, clinical reasoning and crisis management) and to explore the impact and potential of ChatGPT-4o as a viable tool for assessing these competencies.
Methods
This study used a simultaneous triangulation design, integrating quantitative and qualitative data. Conducted at Aydın Adnan Menderes University with 21 sixth-year medical students, ChatGPT-4o simulated realistic patient interactions requiring competencies in clinical case management (problem-solving, clinical reasoning and crisis management). Data were gathered through a self-assessment survey, semi-structured interviews, and observations of the students and ChatGPT-4o during the process. Analyses included Pearson correlation, Chi-square, and Kruskal-Wallis tests, with content analysis of the qualitative data conducted using MAXQDA software for coding.
Results
According to the findings, observation and self-assessment survey scores of intern physicians’ clinical case management skills were positively correlated. There was a significant gap between participants’ self-assessment and actual performance, indicating discrepancies in self-perceived versus real clinical competence. Participants reported feeling inadequate in their problem-solving and clinical reasoning competencies and experienced time pressure. They were satisfied with the Artificial Intelligence-powered standardised patient process and were willing to continue similar practices. Participants engaged with a uniform patient experience. Although participants were satisfied, the application process was sometimes negatively affected due to disconnection problems and language processing challenges.
Conclusions
ChatGPT-4o successfully simulated patient interactions, providing a controlled environment without risking harm to real patients for practicing clinical case management. Although some of the technological challenges limited effectiveness, it was useful, cost-effective and accessible. It is thought that intern physicians will be better supported in acquiring clinical management skills through varied clinical scenarios using this method.
Clinical trial number
Not applicable.
Journal Article
Usefulness of the script concordance test and influence of reference panel composition on clinical reasoning assessment in pediatric surgery intensive care nurses
by Celik, Nazmiye; Tasdelen Teker, Gulsen; Senel, Emrah
in Adult; Clinical assessment; Clinical Competence - standards
2025
This study aims to achieve two objectives. The first is to evaluate the feasibility, validity and reliability of a Script Concordance Test (SCT) designed to assess the clinical reasoning (CR) skills of nurses working in a Pediatric Surgery Intensive Care Unit (PSICU). The second is to investigate the impact of different reference panels on SCT scores.
Although the SCT is widely used to assess CR in nursing students, its application in postgraduate nursing practice remains underexplored.
A descriptive evaluation and a validation study.
A 13-case SCT was administered to 30 nurses working in the PSICU. The scoring key for the SCT was developed based on three distinct panels. The scores of experts and participants were compared using a t-test. Reliability was assessed through generalizability (G) theory. A decision (D) study conducted within the G theory framework determined the number of cases and items required to achieve optimal reliability.
A statistically significant difference in SCT scores was observed across the three panel compositions. The SCT demonstrated its ability to effectively distinguish between experts and participants. The highest reliability (0.71) was achieved with the mixed-panel scoring. According to the D study, achieving a reliability of 0.80 or higher requires 100 items.
The SCT can be recommended as a tool that provides valid and reliable measures for assessing CR in postgraduate nursing practice. When developing the scoring key for the SCT, it is advisable to use either a panel consisting solely of nurses or a mixed reference panel.
• Although the SCT has been used in nursing students, it has not yet been applied in postgraduate nursing practice.
• The SCT is a tool that provides valid and reliable measures for assessing clinical reasoning in postgraduate nursing practice.
• When creating the scoring key for the SCT, a mixed reference panel consisting of both nurses and physicians can be used.
• The number of cases or questions can be increased to enhance reliability.
Journal Article
Collaborative clinical reasoning: a scoping review
by Lee, Ching-Hsin; Yau, Sze-Yuen; Lee, Ching-Yi
in Care and treatment; Clinical Reasoning; Cognition & reasoning
2024
Collaborative clinical reasoning (CCR) among healthcare professionals is crucial for maximizing clinical outcomes and patient safety. This scoping review explores CCR to address the gap in understanding its definition, structure, and implications.
A scoping review was undertaken to examine CCR-related studies in healthcare. Medline, PsycINFO, SciVerse Scopus, and Web of Science were searched. Inclusion criteria were full-text articles published between 2011 and 2020. Search terms included cooperative, collaborative, shared, team, collective, reasoning, problem solving, and decision making, combined with clinical, medicine, or medical, but excluding shared decision making.
A total of 24 articles were identified in the review. The review reveals a growing interest in CCR, with 14 articles emphasizing the decision-making process, five using Multidisciplinary Team-Metric for the Observation of Decision Making (MDTs-MODe), three exploring CCR theory, and two focusing on the problem-solving process. Communication, trust, and team dynamics emerge as key influencers in healthcare decision-making. Notably, only two articles provide specific CCR definitions.
While decision-making processes dominate CCR studies, a notable gap exists in defining and structuring CCR. Explicit theoretical frameworks, such as those proposed by Blondon et al. and Kiesewetter et al., are crucial for advancing research and understanding CCR dynamics within collaborative teams. This scoping review provides a comprehensive overview of CCR research, revealing a growing interest and diversity in the field. The broader landscape of interprofessional collaboration and clinical reasoning requires further exploration.
Journal Article