5,840 result(s) for "Education, Medical, Undergraduate - methods"
Evaluating podcasts as a tool for OSCE training: a randomized trial using generative AI-powered simulation
Introduction Objective Structured Clinical Examinations (OSCEs) are critical for assessing clinical competencies in medical education. While traditional teaching methods remain prevalent, this study introduces an innovative approach by evaluating the effectiveness of an OSCE preparation podcast in improving medical students' OSCE performance, using nephrology as a proof of concept. This novel method offers a flexible and accessible format for supplementary learning, potentially revolutionizing medical education. Methods A mono-centric randomized controlled trial was conducted among 50 fourth-year medical students. Participants were randomly assigned to either the podcast intervention group or a control group. Both groups completed six nephrology-specific OSCE stations on DocSimulator, a generative AI-powered virtual patient platform. Scores from three baseline and three post-intervention OSCE stations were compared. The primary outcome was the change in OSCE scores. Secondary outcomes included interest in nephrology and students' self-reported competence in nephrology-related skills. Results The baseline OSCE scores did not differ between the two groups (23.8 ± 3.9 vs. 23.3 ± 5.3; p = 0.77). After the intervention, the podcast group demonstrated a significantly higher OSCE score than the control group (27.6 ± 3.6 vs. 23.6 ± 5.0; p = 0.002), with a greater improvement in OSCE scores (+3.52 [0.7, 6.5] vs. -1.22 [-3, 5.5]; p = 0.03). While the podcast did not increase students' intention to specialize in nephrology (4.2% vs. 4.0%; p = 0.99), it significantly improved their confidence in nephrology-related clinical skills (41.7% vs. 16%; p = 0.04). Of the students in the podcast group, 68% found the OSCE training podcast useful for their OSCE preparation, and 96% reported they would use it again. Conclusions The use of an OSCE preparation podcast significantly enhanced students' performance in AI-based simulations and confidence in nephrology clinical competencies.
Podcasts represent a valuable supplementary tool for medical education, providing flexibility and supporting diverse learning styles. Trial Registration Not applicable.
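The between-group comparison above can be checked from the summary statistics alone. A minimal sketch, assuming an even 25/25 split of the 50 randomized students (the abstract does not report the exact split), computing the two-sample t statistic in the Welch form; the p-value itself needs a t-distribution CDF and is not computed here:

```python
from math import sqrt

def welch_t(mean1, sd1, n1, mean2, sd2, n2):
    """Two-sample t statistic (Welch form) from summary statistics."""
    return (mean1 - mean2) / sqrt(sd1 ** 2 / n1 + sd2 ** 2 / n2)

# Post-intervention OSCE scores: podcast 27.6 ± 3.6 vs. control 23.6 ± 5.0
t = welch_t(27.6, 3.6, 25, 23.6, 5.0, 25)
print(round(t, 2))  # ≈ 3.25, a magnitude consistent with the reported p = 0.002
```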
Large language models improve clinical decision making of medical students through patient simulation and structured feedback: a randomized controlled trial
Background Clinical decision-making (CDM) refers to physicians' ability to gather, evaluate, and interpret relevant diagnostic information. An integral component of CDM is the medical history conversation, traditionally practiced on real or simulated patients. In this study, we explored the potential of using Large Language Models (LLMs) to simulate patient-doctor interactions and provide structured feedback. Methods We developed AI prompts to simulate patients with different symptoms, engaging in realistic medical history conversations. In our double-blind randomized design, the control group participated in simulated medical history conversations with AI patients, while the intervention group, in addition to the simulated conversations, also received AI-generated feedback on their performance (feedback group). We examined the influence of this feedback on CDM performance, which was evaluated by two raters (ICC = 0.924) using the Clinical Reasoning Indicator – History Taking Inventory (CRI-HTI). The data were analyzed using an ANOVA for repeated measures. Results Our final sample included 21 medical students (mean age = 22.10 years, mean semester = 4, 14 females). At baseline, the feedback group (mean = 3.28 ± 0.09 [standard deviation]) and the control group (3.21 ± 0.08) achieved similar CRI-HTI scores, indicating successful randomization. After only four training sessions, the feedback group (3.60 ± 0.13) outperformed the control group (3.02 ± 0.12), F(1,18) = 4.44, p = .049, with a strong effect size, partial η² = 0.198. Specifically, the feedback group showed improvements in the CDM subdomains of creating context (p = .046) and securing information (p = .018), while their ability to focus questions did not improve significantly (p = .265). Conclusion The results suggest that AI-simulated medical history conversations can support CDM training, especially when combined with structured feedback.
Such a training format may serve as a cost-effective supplement to existing training methods, better preparing students for real medical history conversations.
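The reported effect size can be verified directly from the F statistic, since for a single effect partial η² = F·df_effect / (F·df_effect + df_error). A quick sketch using only the values given in the abstract:

```python
def partial_eta_squared(f_value, df_effect, df_error):
    """Recover partial eta squared from an F statistic and its degrees of freedom."""
    return (f_value * df_effect) / (f_value * df_effect + df_error)

# F(1, 18) = 4.44 as reported in the abstract
print(round(partial_eta_squared(4.44, 1, 18), 3))  # 0.198, matching the paper
```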
Integrating ChatGPT in Orthopedic Education for Medical Undergraduates: Randomized Controlled Trial
ChatGPT is a natural language processing model developed by OpenAI, which can be iteratively updated and optimized to accommodate the changing and complex requirements of human verbal communication. The study aimed to evaluate ChatGPT's accuracy in answering orthopedics-related multiple-choice questions (MCQs) and assess its short-term effects as a learning aid through a randomized controlled trial. In addition, long-term effects on student performance in other subjects were measured using final examination results. We first evaluated ChatGPT's accuracy in answering MCQs pertaining to orthopedics across various question formats. Then, 129 undergraduate medical students participated in a randomized controlled study in which the ChatGPT group used ChatGPT as a learning tool, while the control group was prohibited from using artificial intelligence software to support learning. Following a 2-week intervention, the 2 groups' understanding of orthopedics was assessed by an orthopedics test, and variations in the 2 groups' performance in other disciplines were noted through a follow-up at the end of the semester. ChatGPT-4.0 answered 1051 orthopedics-related MCQs with a 70.60% (742/1051) accuracy rate, including 71.8% (237/330) accuracy for A1 MCQs, 73.7% (330/448) accuracy for A2 MCQs, 70.2% (92/131) accuracy for A3/4 MCQs, and 58.5% (83/142) accuracy for case analysis MCQs. As of April 7, 2023, a total of 129 individuals participated in the experiment. However, 19 individuals withdrew from the experiment at various phases; thus, as of July 1, 2023, a total of 110 individuals completed the trial and all follow-up work.
After we intervened in the learning style of the students in the short term, the ChatGPT group answered more questions correctly than the control group (ChatGPT group: mean 141.20, SD 26.68; control group: mean 130.80, SD 25.56; P=.04) in the orthopedics test, particularly on A1 (ChatGPT group: mean 46.57, SD 8.52; control group: mean 42.18, SD 9.43; P=.01), A2 (ChatGPT group: mean 60.59, SD 10.58; control group: mean 56.66, SD 9.91; P=.047), and A3/4 MCQs (ChatGPT group: mean 19.57, SD 5.48; control group: mean 16.46, SD 4.58; P=.002). At the end of the semester, we found that the ChatGPT group performed better on final examinations in surgery (ChatGPT group: mean 76.54, SD 9.79; control group: mean 72.54, SD 8.11; P=.02) and obstetrics and gynecology (ChatGPT group: mean 75.98, SD 8.94; control group: mean 72.54, SD 8.66; P=.04) than the control group. ChatGPT answers orthopedics-related MCQs accurately, and students using it excel in both short-term and long-term assessments. Our findings strongly support ChatGPT's integration into medical education, enhancing contemporary instructional methods. Chinese Clinical Trial Registry ChiCTR2300071774; https://www.chictr.org.cn/hvshowproject.html?id=225740&v=1.0.
High-fidelity simulation versus case-based tutorial sessions for teaching pharmacology: Convergent mixed methods research investigating undergraduate medical students’ performance and perception
Medical educators strive to improve their curricula to enhance the student learning experience. The use of high-fidelity simulation within basic and clinical medical science subjects has been one of these initiatives. However, there is a paucity of evidence on using simulation for teaching pharmacology, especially in the Middle East and North Africa region, and the effectiveness of this teaching modality, relative to more traditional ones, has not been sufficiently investigated. Accordingly, this study compares the effects of high-fidelity simulation, which is designed in alignment with adult and experiential learning theories, and traditional case-based tutorial sessions on the performance and perception of undergraduate Year 2 medical students in pharmacology in Dubai, United Arab Emirates. This study employed a convergent mixed methods approach. Forty-nine medical students were randomly assigned to one of two groups during the 16-week pharmacology course. Each group underwent one session delivered via high-fidelity simulation and another via a case-based tutorial. A short multiple-choice question quiz was administered twice (immediately upon completion of the respective sessions and 5 weeks afterwards) to assess knowledge retention. Furthermore, to explore the students' perceptions regarding the two modes of learning delivery (independently and in relation to each other), an evaluation survey was administered following the delivery of each session. Thereafter, iterative joint display analysis was used to develop a holistic understanding of the effect of high-fidelity simulation, in comparison to traditional case-based tutorial sessions, on pharmacology learning in the context of the study. There was no statistically significant difference in students' knowledge retention between high-fidelity simulation and case-based tutorial sessions.
Yet, students expressed a greater preference for high-fidelity simulation, describing the corresponding sessions as more varied, better at reinforcing learning, and closer to reality. As such, the meta-inferences led to an expansion of the overall understanding of students' satisfaction, to both confirmation and expansion of the systemic viewpoint on students' preferences, and lastly to refinement of the perspective on retained knowledge. High-fidelity simulation was found to be as effective as case-based tutorial sessions in terms of students' retention of knowledge. Nonetheless, students demonstrated a greater preference for high-fidelity simulation. The study advocates caution in adopting high-fidelity simulation; careful appraisal can help identify the contexts where it is most effective.
Use of very short answer questions compared to multiple choice questions in undergraduate medical students: An external validation study
Multiple choice questions (MCQs) offer high reliability and easy machine-marking, but allow for cueing and stimulate recognition-based learning. Very short answer questions (VSAQs), which are open-ended questions requiring a very short answer, may circumvent these limitations. Although the use of VSAQs in medical assessment is increasing, almost all research on the reliability and validity of VSAQs in medical education has been performed by a single research group with extensive experience in the development of VSAQs. Therefore, we aimed to validate previous findings about VSAQ reliability, discrimination, and acceptability in undergraduate medical students and teachers with limited experience in VSAQ development. To validate the results presented in previous studies, we partially replicated a previous study and extended the results on student experiences. Dutch undergraduate medical students (n = 375) were randomized to VSAQs first and MCQs second, or vice versa, in a formative exam in two courses, to determine reliability, discrimination, and cueing. Acceptability for teachers (i.e., VSAQ review time) was determined in the summative exam. Reliability (Cronbach's α) was 0.74 for VSAQs and 0.57 for MCQs in one course. In the other course, Cronbach's α was 0.87 for VSAQs and 0.83 for MCQs. Discrimination (average item-rest correlation, R_ir) was 0.27 vs. 0.17 and 0.43 vs. 0.39 for VSAQs vs. MCQs, respectively. The reviewing time of one VSAQ for the entire student cohort was approximately 2 minutes on average. Positive cueing occurred more in MCQs than in VSAQs (20% vs. 4% and 20.8% vs. 8.3% of questions per person in the two courses). This study validates the positive results regarding VSAQ reliability, discrimination, and acceptability in undergraduate medical students. Furthermore, we demonstrate that VSAQ use is reliable among teachers with limited experience in writing and marking VSAQs.
The short learning curve for teachers, favourable marking time and applicability regardless of the topic suggest that VSAQs might also be valuable beyond medical assessment.
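Cronbach's α, the reliability coefficient reported above, compares the sum of the individual item variances with the variance of the total score: α = k/(k−1) · (1 − Σ var(item_i) / var(total)). A self-contained sketch with synthetic item scores (not the study's data):

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha; `items` holds one list of scores per item (equal lengths,
    one score per respondent). Uses population variance throughout."""
    k = len(items)
    sum_item_vars = sum(pvariance(item) for item in items)
    total_var = pvariance([sum(scores) for scores in zip(*items)])
    return k / (k - 1) * (1 - sum_item_vars / total_var)

# Two perfectly consistent items yield the maximum alpha of 1.0
print(cronbach_alpha([[1, 2, 3, 4], [1, 2, 3, 4]]))  # 1.0
```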
Chatbot-based serious games: A useful tool for training medical students? A randomized controlled trial
Chatbots, conversational agents that walk medical students (MS) through a clinical case, are serious games that seem to be appreciated by MS. However, their impact on MS's exam performance had not yet been evaluated. Chatprogress is a chatbot-based game developed at Paris Descartes University. It contains 8 pulmonology cases with step-by-step answers delivered with pedagogical comments. The CHATPROGRESS study aimed to evaluate the impact of Chatprogress on students' success rate in their end-of-term exams. We conducted a post-test randomized controlled trial involving all fourth-year MS at Paris Descartes University. All MS were asked to follow the University's regular lectures, and half of them were randomly given access to Chatprogress. At the end of the term, medical students were evaluated in pulmonology, cardiology and critical care medicine. The primary aim was to evaluate an increase in scores on the pulmonology sub-test for students who had access to Chatprogress, compared to those who didn't. Secondary aims were to evaluate an increase in scores on the overall test (Pulmonology, Cardiology and Critical care medicine (PCC) test) and to evaluate the correlation between access to Chatprogress and overall test score. Finally, students' satisfaction was assessed using a survey. From 10/2018 to 06/2019, 171 students had access to Chatprogress (the Gamers) and among them, 104 ended up using it (the Users). Gamers and Users were compared to 255 Controls with no access to Chatprogress. Differences in scores on the pulmonology sub-test over the academic year were significantly higher among Gamers and Users vs Controls (mean score: 12.7/20 vs 12.0/20, p = 0.0104 and mean score: 12.7/20 vs 12.0/20, p = 0.0365, respectively). This significant difference was present as well in the overall PCC test scores (mean score: 12.5/20 vs 12.1/20, p = 0.0285 and 12.6/20 vs 12.1/20, p = 0.0355, respectively).
Although no significant correlation was found between the pulmonology sub-test scores and MS's assiduity parameters (the number of finished games among the 8 proposed to Users and the number of times a User finished a game), there was a trend toward a better correlation when Users were evaluated on a subject covered by Chatprogress. MS were also found to be fans of this teaching tool, asking for more pedagogical comments even when they got the questions right. This randomised controlled trial is the first to demonstrate a significant improvement in students' results (in both the pulmonology sub-test and the overall PCC exam) when they had access to the chatbot, and even more so when they actually used it.
Shedding light on the effects of conflict management training: A multi-rater assessment shines a spotlight on medical students’ skills
Medical students are repeatedly exposed to challenging situations while working with healthcare teams, so acquiring conflict management skills is necessary. This study aimed to investigate the effect of an educational intervention on the conflict management skills of medical students using self- and observer-assessment. This educational intervention with a pre- and post-test design was conducted in 2022-2023. Second-year medical students of Tehran University of Medical Sciences volunteered to participate in a randomized study with a control group. The participants were divided into an intervention group (12 groups of 4 each, n = 48) and a control group (12 groups of 4 each, n = 48). The intervention group was educated based on the Fogg model, and the control group was trained using a conventional method. Students' conflict management skills were evaluated using a self-assessment checklist and observer-assessment. The findings of the observer-assessment revealed that the post-test rating in the intervention group was significantly higher than in the control group, while the pre-test scores in the two groups did not differ significantly (P = 0.03; ES = 0.44 and P = 0.30; ES = 0.18, respectively). Moreover, the comparison between pre-test and post-test showed that the educational intervention significantly increased the mean post-test score in both the intervention and control groups (P ≤ 0.001; ES = 0.97 and P ≤ 0.001; ES = 1.34, respectively). The comparison between pre-test and post-test via self-assessment showed that the skill score increased only in the intervention group (P = 0.02; ES = 0.48 and P = 0.98; ES = 0.004, respectively). This study found that using the Fogg model in e-learning platforms enhances medical students' conflict management skills, highlighting the effectiveness of well-designed, creative, and active model-based teaching methods.
Effects of building resilience skills among undergraduate medical students in a multi-cultural, multi-ethnic setting in the United Arab Emirates: A convergent mixed methods study
Although curricula teaching skills related to resilience are widely adopted, little is known about the needs and attitudes regarding resilience training of undergraduate medical trainees in the Middle East and North Africa region. The purpose of this study is to investigate the value of an innovative curriculum, developed through design-based research, to build resilience skills among undergraduate medical trainees in the United Arab Emirates. A convergent mixed methods study design was utilized. Quantitative data collection was through controlled random group allocation conducted in one cohort of undergraduate medical students (n = 47). Students were randomly allocated to either the resilience skills building course (study group) or an unrelated curriculum (control group). All students were tested at baseline (test 1), at the end of the 8-week course (test 2), and again 8 weeks after the end of the course (test 3). Students then crossed over to the opposite course and were tested again after 8 weeks (test 4). Testing at the four timepoints consisted of questionnaires related to burnout (Maslach Burnout Inventory), anxiety (General Anxiety Disorder-7), and resilience (Connor-Davidson Resilience Scale). Quantitative data were analysed descriptively and inferentially. Qualitative data, consisting of students' perceptions of their experience with the course, were captured using virtual focus group sessions. Qualitative analysis was inductive. Generated primary inferences were merged using joint display analysis. A significant proportion of the students at baseline seemed to be at risk for burnout and anxiety and would benefit from developing their resilience. There appeared to be no statistical differences in measures of burnout, anxiety, and resilience related to course delivery. The overall risk for anxiety among students increased following the COVID-19 lockdown.
Qualitative analysis generated the 'Resilience Skills Building around Undergraduate Medical Education Transitions' conceptual model of five themes: Transitions, Adaptation, Added Value of the course, Sustainability of the effects of the course, and Opportunities for improving the course. Merging of the findings led to a thorough understanding of how the resilience skills building course affected students' adaptability. This study indicates that a resilience skills building course may not instantly affect medical trainees' ratings of burnout, anxiety, and resilience. However, students are likely to engage with such an innovative course and its content to acquire and deploy skills to adapt to changes.
Early formative objective structured clinical examinations for students in the pre-clinical years of medical education: A non-randomized controlled prospective pilot study
The value of formative objective structured clinical examinations (OSCEs) during the pre-clinical years of medical education remains unclear. We aimed to assess the effectiveness of a formative OSCE program for medical students in their pre-clinical years on subsequent performance in summative OSCE. We conducted a non-randomized controlled prospective pilot study that included all medical students from the last year of the pre-clinical cycle of the Université Paris-Cité Medical School, France, in 2021. The intervention group received the formative OSCE program, which consisted of four OSCE sessions, followed by debriefing and feedback, whereas the control group received the standard teaching program. The main objective of this formative OSCE program was to develop skills in taking a structured medical history and communication. All participants took a final summative OSCE. The primary endpoint was the summative OSCE mark in each group. A questionnaire was also administered to the intervention-group students to collect their feedback. A qualitative analysis, using a convenience sample, was conducted by gathering data pertaining to the process through on-site participative observation of the formative OSCE program. Twenty students were included in the intervention group; 776 in the control group. We observed a significant improvement with each successive formative OSCE session in communication skills and in taking a structured medical history (p<0.0001 for both skills). Students from the intervention group performed better in a summative OSCE that assessed the structuring of a medical history (median mark 16/20, IQR [15; 17] versus 14/20, [13; 16], respectively, p = 0.012). Adjusted analyses gave similar results. The students from the intervention group reported a feeling of improved competence and a reduced level of stress at the time of the evaluation, supported by the qualitative data showing the benefits of the formative sessions. 
Our findings suggest that an early formative OSCE program is suitable for the pre-clinical years of medical education and is associated with improved student performance in domains targeted by the program.
Assessing the impact of jigsaw technique for cooperative learning in undergraduate medical education: merits, challenges, and forward prospects
Background The jigsaw method is a structured cooperative-learning technique that lays the groundwork for achieving collective competence, which forms the core of effective clinical practice. It promotes deep learning and effectively enhances teamwork among students, hence creating a more inclusive environment. Objective The present study was designed to introduce the jigsaw model of cooperative learning to early-year undergraduate medical students, measure its effectiveness on their academic performance, and evaluate the perspectives of both students and faculty members regarding it. Methods This was a mixed-methods study involving eighty second-year undergraduate medical students. The jigsaw cooperative learning approach was introduced in two themes within the neurosciences module. Students were divided into two equal groups, with one group experiencing typical small-group discussions (SGDs) in the first theme and the other group exposed to the jigsaw approach. The groups were then reversed for the second theme. Following the activity, an assessment comprising multiple-choice questions was conducted to evaluate the impact of the jigsaw technique on students' academic performance, with scores from both groups compared. Student perspectives were gathered through a self-designed and validated questionnaire, while faculty perceptions were obtained through focus group discussions. Quantitative data were analyzed using SPSS v22, while thematic analysis was performed for qualitative data. Results The students of the jigsaw group displayed a significantly higher median assessment score percentage compared to the control group (p = 0.003). Moreover, a significantly greater number of students achieved scores ≥ 60% in the jigsaw group compared to the control group (p = 0.006). The questionnaire responses indicated a favorable perception of this technique among students in terms of acceptance, positive interdependence, improvement of interpersonal skills, and comparison with typical SGDs.
This technique was also well-perceived by faculty members within the educational context. Conclusion The jigsaw method is associated with higher levels of academic performance among students when compared to typical small-group discussions. The students and faculty perceived this technique to be an effective cooperative learning strategy in terms of enhanced student engagement, active participation, and a sense of inclusivity.