148 result(s) for "New Methods and Approaches in Medical Education"
ChatGPT-4 Omni Performance in USMLE Disciplines and Clinical Skills: Comparative Analysis
Recent studies, including those by the National Board of Medical Examiners, have highlighted the remarkable capabilities of large language models (LLMs) such as ChatGPT in passing the United States Medical Licensing Examination (USMLE). However, detailed analyses of LLM performance in specific medical content areas are lacking, limiting assessment of their potential utility in medical education. This study aimed to assess and compare the accuracy of successive ChatGPT versions (ChatGPT 3.5 [GPT-3.5], ChatGPT 4 [GPT-4], and ChatGPT 4 Omni [GPT-4o]) across USMLE disciplines, clinical clerkships, and the clinical skills of diagnostics and management, using 750 clinical vignette-based multiple-choice questions. Accuracy was assessed using a standardized protocol, with statistical analyses conducted to compare the models' performances. GPT-4o achieved the highest accuracy across the 750 questions at 90.4%, outperforming GPT-4 and GPT-3.5, which scored 81.1% and 60.0%, respectively. GPT-4o's highest performances were in social sciences (95.5%), behavioral and neuroscience (94.2%), and pharmacology (93.2%). In clinical skills, GPT-4o's diagnostic accuracy was 92.7% and its management accuracy 88.8%, significantly higher than its predecessors'. Notably, both GPT-4o and GPT-4 significantly outperformed the medical student average accuracy of 59.3% (95% CI 58.3-60.3). GPT-4o's performance in USMLE disciplines, clinical clerkships, and clinical skills indicates substantial improvements over its predecessors, suggesting significant potential for the use of this technology as an educational aid for medical students.
These findings underscore the need for careful consideration when integrating LLMs into medical education, emphasizing the importance of structured curricula to guide their appropriate use and the need for ongoing critical analyses to ensure their reliability and effectiveness.
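The headline accuracies above imply raw counts (678/750 for GPT-4o and 608/750 for GPT-4 match the reported 90.4% and 81.1%). As a minimal sketch of the kind of model-to-model comparison the abstract describes — assuming those implied counts and a pooled two-proportion z-test, since the paper's exact test is not stated here:

```python
import math

def two_proportion_z(k1, n1, k2, n2):
    """Pooled two-proportion z-statistic for H0: p1 == p2."""
    p1, p2 = k1 / n1, k2 / n2
    pooled = (k1 + k2) / (n1 + n2)              # success rate pooled under H0
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Counts implied by the reported accuracies: 90.4% and 81.1% of 750 items
z = two_proportion_z(678, 750, 608, 750)
print(f"z = {z:.2f}")  # → z = 5.17, far beyond the 1.96 cutoff at alpha = .05
```

A z-statistic above 5 is consistent with the abstract's claim that GPT-4o's advantage over GPT-4 is statistically significant.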
Effectiveness of Gamified Teaching in Disaster Nursing Education for Health Care Workers: Systematic Review
With the continuous advancement of medical technology and the frequent occurrence of disaster events, the training of health care workers in disaster nursing has become increasingly significant. However, traditional training methods often struggle to engage learners' interest and enthusiasm, making it challenging to simulate real-life emergencies effectively. Gamification, an innovative pedagogical approach that enhances the enjoyment and practicality of learning by incorporating game elements, has garnered considerable attention in disaster nursing education for health care workers in recent years. This review systematically evaluates its effectiveness and explores its advantages in improving training outcomes. Specifically, it aims to evaluate the effectiveness of gamified teaching methodologies in disaster nursing education and to identify the outcome indicators used in the 16 included studies. The review was conducted following the PRISMA (Preferred Reporting Items for Systematic reviews and Meta-Analyses) guidelines, using the PICO-SD framework (Population, Intervention, Control, Outcome, Study Design) to establish the inclusion criteria. The researchers systematically searched 8 databases on February 10, 2025: the Cochrane Library, PubMed, CINAHL (EBSCO), Embase, Web of Science, CNKI, Wanfang, and SCOPUS. Ultimately, 16 studies investigating the application of gamified teaching in disaster nursing education were included in the analysis. For randomized controlled trials (RCTs), the Cochrane Risk of Bias Assessment Tool (RoB 2.0) was used for quality assessment; for quasi-experimental studies, the Joanna Briggs Institute Risk of Bias Tool for Non-Randomized Intervention Studies was used for methodological quality evaluation. Given the heterogeneity of study designs and the diversity of outcome indicators, a narrative synthesis was used to integrate the findings.
The studies included in this review comprised 1 RCT and 15 quasi-experimental designs. Six gamified formats exhibited positive outcomes. The effectiveness of these formats was assessed through various metrics, including theoretical knowledge (14 studies), practical skills (11 studies), learner satisfaction (9 studies), knowledge retention (4 studies), and self-efficacy (2 studies). All formats demonstrated improvements in knowledge and skills, with high levels of satisfaction reported. However, data on long-term retention were limited. Gamified teaching methods have shown significant potential to enhance core competencies such as emergency response, decision-making, and teamwork in disaster nursing education and have been effective in reinforcing learning engagement through elements such as cooperation, competition, scoring, and scenario simulation. However, there is a lack of standardized assessment frameworks for skill acquisition, longitudinal studies tracking behavior in real-life scenarios, and rigorous RCTs comparing gamified teaching with traditional instruction. Although the existing evidence has not systematically confirmed its full effectiveness, this paper draws on the findings to provide practical recommendations for developing and implementing gamified teaching strategies in disaster nursing education to enhance students' knowledge acquisition and practice.
Knowledge Gain and the Impact of Stress in a Fully Immersive Virtual Reality–Based Medical Emergencies Training With Automated Feedback: Randomized Controlled Trial
A significant gap exists in the knowledge and procedural skills of medical graduates when it comes to managing emergencies. In response, highly immersive virtual reality (VR)-based learning environments have been developed to train clinical competencies. However, robust evidence on how VR-based methods affect both short- and long-term learning outcomes, as well as physiological and perceived stress, remains limited. This study aimed to assess the effectiveness of VR-based simulation training, augmented with automated feedback, compared with video seminars at improving emergency medical competency among medical students. Furthermore, the study investigated the relationship between learning outcomes and physiological stress markers. Participants' perceived stress and estimated learning success were also evaluated to provide a more comprehensive insight into VR's potential role in emergency training. In total, 72 senior medical students underwent VR-based emergency training (intervention) or viewed video seminars (control) on 2 topics (acute myocardial infarction and exacerbated chronic obstructive pulmonary disease) in an intraindividual crossover design. Levels of applied knowledge were assessed objectively by open-response tests pre- and postintervention and after 30 days. In addition, 2 electrodermal activity markers representing physiological stress response were measured during VR sessions using a wearable sensor. Participants also completed self-ratings of perceived stress and estimated learning success. Short-term knowledge gains were comparable between the VR (mean 26.6%, SD 15.3%) and control (mean 27.2%, SD 16%) conditions. However, VR training produced significantly higher long-term knowledge gains (VR: mean 17.8%, SD 15.1% vs control: mean 11.9%, SD 18%; difference: -5.9, 95% CI -11.5 to -0.4).
Overall retention scores were likewise higher for VR (mean 75.4%, SD 12.5%) than for video-based learning (mean 69.0%, SD 14.5%), a difference that was more pronounced in the myocardial infarction scenario. Participants rated the VR format as significantly more effective (mean 4.83, SD 0.41, on a 5-point scale) than the video seminar (mean 3.44, SD 1.00). While physiological stress markers increased during VR sessions, their correlation with knowledge gains was weak and negative. No significant relationship was detectable between perceived stress and objective learning outcomes. VR-based simulation training with automated feedback may offer long-term learning advantages over a traditional video seminar in emergency-medicine education. Given the time constraints and resource limitations of clinical education, self-moderated VR-based learning may represent a valuable addition to conventional training methods. Future research could investigate the learning effects of VR scenarios regarding the retention of practical skills, as well as the impact of repeated or team-based scenarios.
Evidence-Based Learning Strategies in Medicine Using AI
Large language models (LLMs), like ChatGPT, are transforming the landscape of medical education. They offer a vast range of applications, such as tutoring (personalized learning), patient simulation, generation of examination questions, and streamlined access to information. The rapid advancement of medical knowledge and the need for personalized learning underscore the relevance and timeliness of exploring innovative strategies for integrating artificial intelligence (AI) into medical education. In this paper, we propose coupling evidence-based learning strategies, such as active recall and memory cues, with AI to optimize learning. These strategies include the generation of tests, mnemonics, and visual cues.
Assessing AI Awareness and Identifying Essential Competencies: Insights From Key Stakeholders in Integrating AI Into Medical Education
The increasing importance of artificial intelligence (AI) in health care has generated a growing need for health care professionals to possess a comprehensive understanding of AI technologies, requiring an adaptation in medical education. This paper explores stakeholder perceptions and expectations regarding AI in medicine and examines their potential impact on the medical curriculum. The study aims to assess the AI experiences and awareness of different stakeholders and to identify essential AI-related topics in medical education in order to define necessary competencies for students. The empirical data were collected as part of the TüKITZMed project between August 2022 and March 2023 using semistructured qualitative interviews. These interviews were administered to a diverse group of stakeholders to explore their experiences with and perspectives on AI in medicine. A qualitative content analysis of the collected data was conducted using MAXQDA software. Semistructured interviews were conducted with 38 participants (6 lecturers, 9 clinicians, 10 students, 6 AI experts, and 7 institutional stakeholders). The qualitative content analysis revealed 6 primary categories with a total of 24 subcategories to answer the research questions. The evaluation of the stakeholders' statements revealed several commonalities and differences in their understanding of AI. Crucial AI themes identified from the main categories were as follows: possible curriculum contents, skills, and competencies; programming skills; curriculum scope; and curriculum structure. The analysis emphasizes integrating AI into medical curricula to ensure students' proficiency in clinical applications. A standardized comprehension of AI is crucial for defining and teaching relevant content. Considering diverse perspectives in implementation is essential to comprehensively define AI in the medical context, addressing gaps and facilitating effective solutions for future AI use in medical studies.
The results provide insights into potential curriculum content and structure, including aspects of AI in medicine.
Enhancing Medical Student Engagement Through Cinematic Clinical Narratives: Multimodal Generative AI–Based Mixed Methods Study
Medical students often struggle to engage with and retain complex pharmacology topics during their preclinical education. Traditional teaching methods can lead to passive learning and poor long-term retention of critical concepts. This study aims to enhance the teaching of clinical pharmacology in medical school by using a multimodal generative artificial intelligence (genAI) approach to create compelling, cinematic clinical narratives (CCNs). We transformed a standard clinical case into an engaging, interactive multimedia experience called "Shattered Slippers." This CCN used various genAI tools for content creation: GPT-4 for developing the storyline, Leonardo.ai and Stable Diffusion for generating images, Eleven Labs for creating audio narrations, and Suno for composing a theme song. The CCN integrated narrative styles and pop culture references to enhance student engagement. It was applied in teaching first-year medical students about immune system pharmacology. Student responses were assessed through the Situational Interest Survey for Multimedia and examination performance. The target audience comprised first-year medical students (n=40), 18 of whom responded to the Situational Interest Survey for Multimedia. The study revealed a marked preference for the genAI-enhanced CCNs over traditional teaching methods. Key findings include the majority of surveyed students preferring the CCN over traditional clinical cases (14/18), as well as high average scores for triggered situational interest (mean 4.58, SD 0.53), maintained interest (mean 4.40, SD 0.53), maintained-feeling interest (mean 4.38, SD 0.51), and maintained-value interest (mean 4.42, SD 0.54). Students achieved an average score of 88% on examination questions related to the CCN material, indicating successful learning and retention. Qualitative feedback highlighted increased engagement, improved recall, and appreciation for the narrative style and pop culture references.
This study demonstrates the potential of using a multimodal genAI-driven approach to create CCNs in medical education. The "Shattered Slippers" case effectively enhanced student engagement and promoted knowledge retention in complex pharmacological topics. This innovative method suggests a novel direction for curriculum development that could improve learning outcomes and student satisfaction in medical education. Future research should explore the long-term retention of knowledge and the applicability of learned material in clinical settings, as well as the potential for broader implementation of this approach across various medical education contexts.
Making Medical Education Courses Visible: Theory-Based Development of a National Database
Medical education has undergone professionalization during the last decades, and internationally, educators are trained in specific medical education courses, also known as "train the trainer" courses. As these courses have developed organically based on local needs, the lack of a general structure and terminology can confuse educators and hinder their access to information and their development. The first aim of this study was to conduct a national search, analyze the findings, and provide a presentation of medical education courses based on international theoretical frameworks to support Swiss course providers and educators searching for courses. The second aim was to provide a blueprint for such a procedure for an international audience. In this study, we devised a scholarly approach to sorting and presenting medical education courses to make their content accessible to medical educators. This approach is presented in detailed steps, and our openly available exemplary database can serve as a blueprint for other settings. Following our constructivist paradigm, we examined content from medical education courses using a theory-informed inductive approach. Switzerland served as an example, covering 4 languages and different approaches to medical education. Data were gathered through an online search and a nationwide survey of course providers. The acquired data and a concurrently developed keyword system to standardize course terminology are presented using Obsidian, a software that visualizes data networks. Our iterative search included several strategies (web search, survey, provider enquiry, and snowballing) and yielded 69 courses in 4 languages, with varying terminology, target audiences, and providers. The database of courses is interactive and openly accessible. An open-access template database structure is also available.
This study proposes a novel method for sorting and visualizing medical education courses and the competencies they cover to provide an easy-to-use database, helping medical educators' practical and scholarly development. Notably, our analysis identified a specific emphasis on undergraduate teaching settings, potentially indicating a gap in postgraduate educational offerings. This aspect could be pivotal for future curriculum development and resource allocation. Our method might guide other countries and health care professions, offering a straightforward means of cataloging and making information about medical education courses widely available and promotable.
Integration of ChatGPT Into a Course for Medical Students: Explorative Study on Teaching Scenarios, Students’ Perception, and Applications
Text-generating artificial intelligence (AI) such as ChatGPT offers many opportunities and challenges in medical education. Acquiring the practical skills necessary for using AI in a clinical context is crucial, especially for medical education. This explorative study aimed to investigate the feasibility of integrating ChatGPT into teaching units and to evaluate the course and the importance of AI-related competencies for medical students. Since one possible application of ChatGPT in the medical field is the generation of information for patients, we further investigated how such information is perceived by students in terms of persuasiveness and quality. ChatGPT was integrated into 3 different teaching units of a blended learning course for medical students. Using a mixed methods approach, quantitative and qualitative data were collected. As baseline data, we assessed students' characteristics, including their openness to digital innovation. The students evaluated the integration of ChatGPT into the course and shared their thoughts regarding the future of text-generating AI in medical education. The course was evaluated based on the Kirkpatrick Model, with satisfaction, learning progress, and applicable knowledge considered as key assessment levels. In the ChatGPT-integrating teaching units, students evaluated, in a self-experience experiment, how persuasively videos featuring information for patients shaped treatment expectations, and critically reviewed information for patients written using ChatGPT 3.5 based on different prompts. A total of 52 medical students participated in the study. The comprehensive evaluation of the course revealed elevated levels of satisfaction, learning progress, and applicability specifically in relation to the ChatGPT-integrating teaching units. Furthermore, all evaluation levels demonstrated an association with each other.
Higher openness to digital innovation was associated with higher satisfaction and, to a lesser extent, with higher applicability. AI-related competencies in other courses of the medical curriculum were perceived as highly important by medical students. Qualitative analysis highlighted potential use cases of ChatGPT in teaching and learning. In ChatGPT-integrating teaching units, students rated information for patients generated using a basic ChatGPT prompt as "moderate" in terms of comprehensibility, patient safety, and the correct application of communication rules taught during the course. The students' ratings were considerably improved using an extended prompt. The same text, however, showed the smallest increase in treatment expectations when compared with information provided by humans (patient, clinician, and expert) via videos. This study offers valuable insights into integrating the development of AI competencies into a blended learning course. Integration of ChatGPT enhanced learning experiences for medical students.
Evaluating Tailored Learning Experiences in Emergency Residency Training Through a Comparative Analysis of Mobile-Based Programs Versus Paper- and Web-Based Approaches: Feasibility Cross-Sectional Questionnaire Study
In the rapidly changing realm of medical education, Competency-Based Medical Education is emerging as a crucial framework to ensure residents acquire essential competencies efficiently. The advent of mobile-based platforms is seen as a pivotal shift from traditional educational methods, offering more dynamic and accessible learning options. This research aims to evaluate the effectiveness of mobile-based apps in emergency residency programs compared with traditional paper- and web-based formats. Specifically, it focuses on analyzing their roles in facilitating immediate feedback, tracking educational progress, and personalizing the learning journey to meet the unique needs of each resident. This study compared mobile-based emergency residency training programs with paper- and web-based programs regarding Competency-Based Medical Education core elements. A cross-sectional web-based survey (November 2022 to January 2023) across 23 Taiwanese emergency residency sites used stratified random sampling, yielding 74 valid responses (49 educators, 16 residents, and 9 Residency Review Committee hosts). Data were analyzed using Mann-Whitney U tests, chi-squared tests, and t tests. Mobile-based programs (n=14) had fewer missed assessments (P=.02) and greater ease in identifying performance trends (P<.001) and required clinical scenarios (P<.001) compared with paper- and web-based programs (n=60). In addition, mobile-based programs enabled real-time visualization of performance trends and completion rates, facilitating individualized training (P<.001). In our nationwide pilot study, we observed that the mobile-based interface significantly enhances emergency residency training by providing rapid, customized updates, thereby increasing satisfaction and autonomous motivation among participants. This approach is markedly different from traditional paper- or web-based approaches, which tend to be slower and less responsive.
This difference is particularly evident in settings with limited resources. The mobile-based interface is a crucial tool in modernizing training, as it improves efficiency, boosts engagement, and facilitates collaboration. It plays an essential role in advancing Competency-Based Medical Education, especially concerning tailored learning experiences.
Performance Evaluation and Implications of Large Language Models in Radiology Board Exams: Prospective Comparative Analysis
Artificial intelligence advancements have enabled large language models to significantly impact radiology education and diagnostic accuracy. This study evaluates the performance of mainstream large language models, including GPT-4, Claude, Bard, Tongyi Qianwen, and Gemini Pro, in radiology board exams. A comparative analysis of 150 multiple-choice questions from radiology board exams without images was conducted. Models were assessed on their accuracy for text-based questions and were categorized by cognitive levels and medical specialties using χ2 tests and ANOVA. GPT-4 achieved the highest accuracy (83.3%, 125/150), significantly outperforming all other models: Claude achieved an accuracy of 62% (93/150; P<.001), Bard 54.7% (82/150; P<.001), Tongyi Qianwen 70.7% (106/150; P=.009), and Gemini Pro 55.3% (83/150; P<.001). The odds ratios compared to GPT-4 were 0.33 (95% CI 0.18-0.60) for Claude, 0.24 (95% CI 0.13-0.44) for Bard, and 0.25 (95% CI 0.14-0.45) for Gemini Pro; Tongyi Qianwen performed relatively well, with an odds ratio of 0.48 (95% CI 0.27-0.87) compared to GPT-4. Performance varied across question types and specialties, with GPT-4 excelling in both lower-order and higher-order questions, while Claude and Bard struggled with complex diagnostic questions. GPT-4 and Tongyi Qianwen show promise in medical education and training. The study emphasizes the need for domain-specific training datasets to enhance large language models' effectiveness in specialized fields like radiology.
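The odds ratios above can be reproduced from the raw counts given in the abstract. A minimal sketch, assuming a standard Wald interval on the log-odds scale (the paper's exact CI method is not stated, so the bounds land close to, but not exactly on, the reported 0.18-0.60 for Claude):

```python
import math

def odds_ratio_ci(k1, n1, k2, n2, z=1.96):
    """Odds ratio of group 1 vs group 2, with a Wald 95% CI on the log scale."""
    a, b = k1, n1 - k1                          # group 1: correct, incorrect
    c, d = k2, n2 - k2                          # group 2: correct, incorrect
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of the log odds ratio
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Claude (93/150 correct) vs GPT-4 (125/150 correct), counts from the abstract
or_, lo, hi = odds_ratio_ci(93, 150, 125, 150)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")  # → OR = 0.33 (95% CI 0.19-0.56)
```

The point estimate matches the reported 0.33 exactly; the slightly narrower interval suggests the authors used a different (perhaps exact or adjusted) CI method.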