Catalogue Search | MBRL
Explore the vast range of titles available.
295,588 result(s) for "Medical Students"
A history of present illness
A young student doctor discovers the long hours and heartbreaking work at the hospital begin to blur the lines between her new life as a physician and the traumas she's tried to flee from her past.
Do Words Matter? Stigmatizing Language and the Transmission of Bias in the Medical Record
by Goddu, Anna P; Haywood, Carlton; Beach, Mary Catherine
in Attitudes; Bias; Clinical decision making
2018
Background: Clinician bias contributes to healthcare disparities, and the language used to describe a patient may reflect that bias. Although medical records are an integral method of communicating about patients, no studies have evaluated patient records as a means of transmitting bias from one clinician to another.
Objective: To assess whether stigmatizing language written in a patient medical record is associated with a subsequent physician-in-training's attitudes towards the patient and clinical decision-making.
Design: Randomized vignette study of two chart notes employing stigmatizing versus neutral language to describe the same hypothetical patient, a 28-year-old man with sickle cell disease.
Participants: A total of 413 physicians-in-training: medical students and residents in internal and emergency medicine programs at an urban academic medical center (54% response rate).
Main Measures: Attitudes towards the hypothetical patient using the previously validated Positive Attitudes towards Sickle Cell Patients Scale (range 7–35) and pain management decisions (residents only) using two multiple-choice questions (composite range 2–7 representing intensity of pain treatment).
Key Results: Exposure to the stigmatizing language note was associated with more negative attitudes towards the patient (20.6 stigmatizing vs. 25.6 neutral, p < 0.001). Furthermore, reading the stigmatizing language note was associated with less aggressive management of the patient's pain (5.56 stigmatizing vs. 6.22 neutral, p = 0.003).
Conclusions: Stigmatizing language used in medical records to describe patients can influence subsequent physicians-in-training in terms of their attitudes towards the patient and their medication prescribing behavior. This is an important and overlooked pathway by which bias can be propagated from one clinician to another. Attention to the language used in medical records may help to promote patient-centered care and to reduce healthcare disparities for stigmatized populations.
Journal Article
Large language models improve clinical decision making of medical students through patient simulation and structured feedback: a randomized controlled trial
by Brügge, Emilia; Holling, Markus; Stummer, Walter
in Adult; Artificial Intelligence; Artificial intelligence in clinical reasoning education
2024
Background
Clinical decision-making (CDM) refers to physicians' ability to gather, evaluate, and interpret relevant diagnostic information. An integral component of CDM is the medical history conversation, traditionally practiced on real or simulated patients. In this study, we explored the potential of using Large Language Models (LLMs) to simulate patient-doctor interactions and provide structured feedback.
Methods
We developed AI prompts to simulate patients with different symptoms, engaging in realistic medical history conversations. In our double-blind randomized design, the control group participated in simulated medical history conversations with AI patients, while the intervention group, in addition to simulated conversations, also received AI-generated feedback on their performance (feedback group). We examined the influence of feedback on CDM performance, which was evaluated by two raters (ICC = 0.924) using the Clinical Reasoning Indicator – History Taking Inventory (CRI-HTI). The data were analyzed using a repeated-measures ANOVA.
Results
Our final sample included 21 medical students (age: mean = 22.10 years; semester: mean = 4; 14 females). At baseline, the feedback group (mean = 3.28 ± 0.09 [standard deviation]) and the control group (3.21 ± 0.08) achieved similar CRI-HTI scores, indicating successful randomization. After only four training sessions, the feedback group (3.60 ± 0.13) outperformed the control group (3.02 ± 0.12), F(1,18) = 4.44, p = .049, with a strong effect size, partial η² = 0.198. Specifically, the feedback group showed improvements in the CDM subdomains of creating context (p = .046) and securing information (p = .018), while their ability to focus questions did not improve significantly (p = .265).
Conclusion
The results suggest that AI-simulated medical history conversations can support CDM training, especially when combined with structured feedback. Such a training format may serve as a cost-effective supplement to existing training methods, better preparing students for real medical history conversations.
Journal Article
Emotions and reflexivity in health and social care field research
Health and social care students often undertake field research in their own area of practice using observation and interviews. This book is about emotions and reflexivity when doing field research in health and social care settings.
The Global Prevalence of Anxiety Among Medical Students: A Meta-Analysis
by Zhang, Zhisong; Tian-Ci Quek, Travis; Wai-San Tam, Wilson
in Anxiety; Anxiety - epidemiology; Anxiety - etiology
2019
Anxiety, although as common and arguably as debilitating as depression, has garnered less attention, and is often undetected and undertreated in the general population. Similarly, anxiety among medical students warrants greater attention due to its significant implications. We aimed to study the global prevalence of anxiety among medical students and the associated factors predisposing medical students to anxiety. In February 2019, we carried out a systematic search for cross-sectional studies that examined the prevalence of anxiety among medical students. We computed the aggregate prevalence and pooled odds ratio (OR) using the random-effects model and used meta-regression analyses to explore the sources of heterogeneity. We pooled and analyzed data from sixty-nine studies comprising 40,348 medical students. The global prevalence rate of anxiety among medical students was 33.8% (95% Confidence Interval: 29.2–38.7%). Anxiety was most prevalent among medical students from the Middle East and Asia. Subgroup analyses by gender and year of study found no statistically significant differences in the prevalence of anxiety. About one in three medical students globally have anxiety—a prevalence rate which is substantially higher than the general population. Administrators and leaders of medical schools should take the lead in destigmatizing mental illnesses and promoting help-seeking behaviors when students are stressed and anxious. Further research is needed to identify risk factors of anxiety unique to medical students.
Journal Article
The additional role of virtual to traditional dissection in teaching anatomy: a randomised controlled trial
2021
Introduction: Anatomy has traditionally been taught via dissection and didactic lectures. The rising prevalence of informatics plays an increasingly important role in medical education. It is hypothesized that virtual dissection can add value to traditional dissection.
Methods: Second-year medical students were randomised to study anatomical structures by virtual dissection (intervention) or textbooks (controls), according to the CONSORT guidelines. Subsequently, they attended the corresponding gross dissection, with a final test on their anatomical knowledge. Univariate analysis and multivariable binary logistic regression were performed.
Results: The rate of completed tests was 76.7%. Better overall test performance was detected for the group that underwent the virtual dissection (OR 3.75 with 95% CI 0.91–15.49; p = 0.06). Performance was comparable between groups in basic anatomical knowledge (p values 0.45 to 0.92) but not for muscles and 2D–3D reporting of anatomical structures, for which the virtual dissection showed a trend toward benefit (p values 0.08 to 0.13). Medical students who underwent the virtual dissection were over three times more likely to report a positive outcome at the post-dissection test than those who used textbooks of topographical anatomy. This would be of benefit with particular reference to the understanding of 2D–3D spatial relationships between anatomical structures.
Conclusion: The combination of virtual and traditional gross dissection resulted in a significant improvement of second-year medical students' learning outcomes. It could help maximize the impact of practical dissection, offsetting the contraction of economic resources and the shortage of available bodies.
Journal Article
Integrating ChatGPT in Orthopedic Education for Medical Undergraduates: Randomized Controlled Trial
2024
ChatGPT is a natural language processing model developed by OpenAI, which can be iteratively updated and optimized to accommodate the changing and complex requirements of human verbal communication.
The study aimed to evaluate ChatGPT's accuracy in answering orthopedics-related multiple-choice questions (MCQs) and assess its short-term effects as a learning aid through a randomized controlled trial. In addition, long-term effects on student performance in other subjects were measured using final examination results.
We first evaluated ChatGPT's accuracy in answering MCQs pertaining to orthopedics across various question formats. Then, 129 undergraduate medical students participated in a randomized controlled study in which the ChatGPT group used ChatGPT as a learning tool, while the control group was prohibited from using artificial intelligence software to support learning. Following a 2-week intervention, the 2 groups' understanding of orthopedics was assessed by an orthopedics test, and variations in the 2 groups' performance in other disciplines were noted through a follow-up at the end of the semester.
ChatGPT-4.0 answered 1051 orthopedics-related MCQs with a 70.60% (742/1051) accuracy rate, including 71.8% (237/330) accuracy for A1 MCQs, 73.7% (330/448) accuracy for A2 MCQs, 70.2% (92/131) accuracy for A3/4 MCQs, and 58.5% (83/142) accuracy for case analysis MCQs. As of April 7, 2023, a total of 129 individuals participated in the experiment. However, 19 individuals withdrew from the experiment at various phases; thus, as of July 1, 2023, a total of 110 individuals accomplished the trial and completed all follow-up work. After we intervened in the learning style of the students in the short term, the ChatGPT group answered more questions correctly than the control group (ChatGPT group: mean 141.20, SD 26.68; control group: mean 130.80, SD 25.56; P=.04) in the orthopedics test, particularly on A1 (ChatGPT group: mean 46.57, SD 8.52; control group: mean 42.18, SD 9.43; P=.01), A2 (ChatGPT group: mean 60.59, SD 10.58; control group: mean 56.66, SD 9.91; P=.047), and A3/4 MCQs (ChatGPT group: mean 19.57, SD 5.48; control group: mean 16.46, SD 4.58; P=.002). At the end of the semester, we found that the ChatGPT group performed better on final examinations in surgery (ChatGPT group: mean 76.54, SD 9.79; control group: mean 72.54, SD 8.11; P=.02) and obstetrics and gynecology (ChatGPT group: mean 75.98, SD 8.94; control group: mean 72.54, SD 8.66; P=.04) than the control group.
ChatGPT answers orthopedics-related MCQs accurately, and students using it excel in both short-term and long-term assessments. Our findings strongly support ChatGPT's integration into medical education, enhancing contemporary instructional methods.
Chinese Clinical Trial Registry ChiCTR2300071774; https://www.chictr.org.cn/hvshowproject.html?id=225740&v=1.0.
Journal Article