Search Results

115 results for "Artificial intelligence in clinical reasoning education"
The impact of surgical simulation and training technologies on general surgery education
The landscape of general surgery education has undergone a significant transformation over the past few years, driven in large part by the advent of surgical simulation and training technologies. These innovative tools have revolutionized the way surgeons are trained, allowing for a more immersive, interactive, and effective learning experience. In this review, we will explore the impact of surgical simulation and training technologies on general surgery education, highlighting their benefits, challenges, and future directions. Enhancing the technical proficiency of surgical residents is one of the main benefits of surgical simulation and training technologies. By providing a realistic and controlled environment, simulations allow residents to hone their surgical skills without compromising patient safety. Research has consistently demonstrated that simulation training improves surgical skills, reduces errors, and enhances overall performance. Furthermore, simulators can be programmed to mimic a wide range of surgical scenarios, enabling residents to cultivate the essential critical thinking and decision-making abilities required to manage intricate surgical cases. Another area of development is incorporating simulation-based training into the wider surgical curriculum. As simulation technologies become more widespread, they will need to be incorporated into the fabric of surgical education, rather than simply serving as an adjunct to traditional training methods. This will require a fundamental shift in the way surgical education is delivered, with a greater emphasis on simulation-based training and assessment. Highlights Surgical simulation and training technologies have revolutionized general surgery education, enhancing the technical skills and critical thinking abilities of surgical residents. Integration of simulation-based training into the broader surgical curriculum is necessary for its widespread adoption and effectiveness. With the support of educational agendas led by national neurosurgical committees, industry and new technology, simulators will become readily available, translatable, affordable, and effective. As specialized, well-organized curricula are developed that integrate simulations into daily resident training, these simulated procedures will enhance the surgeon’s skills, lower hospital costs, and lead to better patient outcomes.
Large language models improve clinical decision making of medical students through patient simulation and structured feedback: a randomized controlled trial
Background Clinical decision-making (CDM) refers to physicians’ ability to gather, evaluate, and interpret relevant diagnostic information. An integral component of CDM is the medical history conversation, traditionally practiced on real or simulated patients. In this study, we explored the potential of using large language models (LLMs) to simulate patient-doctor interactions and provide structured feedback. Methods We developed AI prompts to simulate patients with different symptoms, engaging in realistic medical history conversations. In our double-blind randomized design, one group participated in simulated medical history conversations with AI patients (control group), while the other, in addition to the simulated conversations, also received AI-generated feedback on its performance (feedback group). We examined the influence of feedback on CDM performance, which was evaluated by two raters (ICC = 0.924) using the Clinical Reasoning Indicator – History Taking Inventory (CRI-HTI). The data were analyzed using an ANOVA for repeated measures. Results Our final sample included 21 medical students (mean age = 22.10 years, mean semester = 4, 14 females). At baseline, the feedback group (mean = 3.28 ± 0.09 [standard deviation]) and the control group (3.21 ± 0.08) achieved similar CRI-HTI scores, indicating successful randomization. After only four training sessions, the feedback group (3.60 ± 0.13) outperformed the control group (3.02 ± 0.12), F(1,18) = 4.44, p = .049, with a strong effect size, partial η² = 0.198. Specifically, the feedback group showed improvements in the CDM subdomains of creating context (p = .046) and securing information (p = .018), while their ability to focus questions did not improve significantly (p = .265). Conclusion The results suggest that AI-simulated medical history conversations can support CDM training, especially when combined with structured feedback. Such a training format may serve as a cost-effective supplement to existing training methods, better preparing students for real medical history conversations.
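The mixed design summarized above (a between-subjects feedback factor crossed with repeated measurement sessions) is typically analysed with a mixed repeated-measures ANOVA. A minimal sketch of that analysis on invented CRI-HTI scores, assuming the pandas and pingouin packages; the column names, group sizes, and values are illustrative and not taken from the study:

    import pandas as pd
    import pingouin as pg

    # Hypothetical long-format data: one CRI-HTI score per student per time point.
    df = pd.DataFrame({
        "student": [1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6],
        "group":   ["feedback"] * 6 + ["control"] * 6,
        "time":    ["baseline", "post"] * 6,
        "cri_hti": [3.3, 3.7, 3.2, 3.6, 3.4, 3.5,
                    3.2, 3.0, 3.1, 3.1, 3.3, 3.0],
    })

    # Mixed ANOVA: within-subject factor = time, between-subject factor = group,
    # reporting F, uncorrected p, and partial eta squared.
    aov = pg.mixed_anova(data=df, dv="cri_hti", within="time",
                         subject="student", between="group")
    print(aov[["Source", "F", "p-unc", "np2"]])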
Exploring prospects, hurdles, and road ahead for generative artificial intelligence in orthopedic education and training
Generative artificial intelligence (AI), characterized by its ability to generate diverse forms of content including text, images, video and audio, has revolutionized many fields, including medical education. Generative AI leverages machine learning to create diverse content, enabling personalized learning, enhancing resource accessibility, and facilitating interactive case studies. This narrative review explores the integration of generative AI into orthopedic education and training, highlighting its potential, current challenges, and future trajectory. A review of recent literature was conducted to evaluate current applications, identify potential benefits, and outline limitations of integrating generative AI in orthopedic education. Key findings indicate that generative AI holds substantial promise for enhancing orthopedic training through applications such as real-time explanations, adaptive learning materials tailored to each student’s specific needs, and immersive virtual simulations. However, despite its potential, the integration of generative AI into orthopedic education faces significant issues, including accuracy, bias, inconsistent outputs, ethical and regulatory concerns, and the critical need for human oversight. Although generative AI models such as ChatGPT have shown impressive capabilities, their current performance on orthopedic exams remains suboptimal, highlighting the need for further development to match the complexity of clinical reasoning and knowledge application. Future work should focus on addressing these challenges by optimizing generative AI models for medical content, exploring best practices for ethical AI usage and curriculum integration, and evaluating the long-term impact of these technologies on learning outcomes. By expanding AI’s knowledge base, refining its ability to interpret clinical images, and ensuring reliable, unbiased outputs, generative AI holds the potential to revolutionize orthopedic education. This work aims to provide a framework for incorporating generative AI into orthopedic curricula to create a more effective, engaging, and adaptive learning environment for future orthopedic practitioners.
Performance of ChatGPT and Bard on the medical licensing examinations varies across different cultures: a comparison study
Background This study aimed to evaluate the performance of GPT-3.5, GPT-4, GPT-4o and Google Bard on the United States Medical Licensing Examination (USMLE), the Professional and Linguistic Assessments Board (PLAB), the Hong Kong Medical Licensing Examination (HKMLE) and the National Medical Licensing Examination (NMLE). Methods This study was conducted in June 2023. Four large language models (LLMs) (GPT-3.5, GPT-4, GPT-4o and Google Bard) were applied to four medical standardized tests (USMLE, PLAB, HKMLE and NMLE). All questions were multiple-choice questions sourced from the question banks of these examinations. Results On USMLE Step 1, Step 2 CK and Step 3, GPT-4o achieved accuracy rates of 91.5%, 94.2% and 92.7%, GPT-4 achieved 93.2%, 95.0% and 92.0%, GPT-3.5 achieved 65.6%, 71.6% and 68.5%, and Google Bard achieved 64.3%, 55.6% and 58.1%, respectively. On PLAB, HKMLE and NMLE, GPT-4o scored 93.3%, 91.7% and 84.9%, GPT-4 scored 86.7%, 89.6% and 69.8%, GPT-3.5 scored 80.0%, 68.1% and 60.4%, and Google Bard scored 54.2%, 71.7% and 61.3%. There was a significant difference in the accuracy rates of the four LLMs across the four medical licensing examinations. Conclusion GPT-4o performed better on the medical licensing examinations than the other three LLMs. The performance of all four models on the NMLE needs further improvement. Clinical trial number Not applicable.
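Benchmarking of the kind reported here reduces to sending each multiple-choice question to a model and scoring the returned letter against the answer key. A minimal sketch assuming the openai Python client; the model name, prompt wording, and the sample question are illustrative and not drawn from the examination banks used in the study:

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def ask_mcq(stem: str, options: dict[str, str], model: str = "gpt-4o") -> str:
        """Send one multiple-choice question and return the model's single-letter answer."""
        option_text = "\n".join(f"{k}. {v}" for k, v in options.items())
        resp = client.chat.completions.create(
            model=model,
            messages=[
                {"role": "system",
                 "content": "Answer the multiple-choice question with a single letter only."},
                {"role": "user", "content": f"{stem}\n{option_text}"},
            ],
        )
        return resp.choices[0].message.content.strip()[:1].upper()

    # Hypothetical mini question bank: (stem, options, correct letter).
    bank = [
        ("Deficiency of which vitamin causes scurvy?",
         {"A": "Vitamin A", "B": "Vitamin C", "C": "Vitamin D", "D": "Vitamin K"}, "B"),
    ]
    correct = sum(ask_mcq(stem, opts) == key for stem, opts, key in bank)
    print(f"accuracy: {correct / len(bank):.1%}")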
Application of ChatGPT-assisted problem-based learning teaching method in clinical medical education
Introduction Artificial intelligence technology has a wide range of application prospects in the field of medical education. The aim of the study was to measure the effectiveness of ChatGPT-assisted problem-based learning (PBL) teaching for urology medical interns in comparison with traditional teaching. Methods A cohort of urology interns was randomly assigned to two groups; one underwent ChatGPT-assisted PBL teaching, while the other received traditional teaching over a period of two weeks. Performance was assessed using theoretical knowledge exams and Mini-Clinical Evaluation Exercises. Students’ acceptance and satisfaction with the AI-assisted method were evaluated through a survey. Results The scores of the two groups of students who took exams three days after the course ended were significantly higher than their scores before the course. The scores of the PBL-ChatGPT assisted group were significantly higher than those of the traditional teaching group three days after the course ended. The PBL-ChatGPT group showed statistically significant improvements in medical interviewing skills, clinical judgment and overall clinical competence compared to the traditional teaching group. The students gave highly positive feedback on the PBL-ChatGPT teaching method. Conclusion The study suggests that ChatGPT-assisted PBL teaching method can improve the results of theoretical knowledge assessment, and play an important role in improving clinical skills. However, further research is needed to examine the validity and reliability of the information provided by different chat AI systems, and its impact on a larger sample size.
The future of AI clinicians: assessing the modern standard of chatbots and their approach to diagnostic uncertainty
Background Artificial intelligence (AI) chatbots have demonstrated proficiency in structured knowledge assessments; however, there is limited research on their performance in scenarios involving diagnostic uncertainty, which requires careful interpretation and complex decision-making. This study aims to evaluate the efficacy of AI chatbots, GPT-4o and Claude-3, in addressing medical scenarios characterized by diagnostic uncertainty relative to Family Medicine residents. Methods Questions with diagnostic uncertainty were extracted from the Progress Tests administered by the Department of Family and Community Medicine at the University of Toronto between 2022 and 2023. Diagnostic uncertainty questions were defined as those presenting clinical scenarios where symptoms, clinical findings, and patient histories do not converge on a definitive diagnosis, necessitating nuanced diagnostic reasoning and differential diagnosis. These questions were administered to a cohort of 320 Family Medicine residents in their first (PGY-1) and second (PGY-2) postgraduate years and inputted into GPT-4o and Claude-3. Errors were categorized into statistical, information, and logical errors. Statistical analyses were conducted using a binomial generalized estimating equation model, paired t-tests, and chi-squared tests. Results Compared to the residents, both chatbots scored lower on diagnostic uncertainty questions ( p  < 0.01). PGY-1 residents achieved a correctness rate of 61.1% (95% CI: 58.4–63.7), and PGY-2 residents achieved 63.3% (95% CI: 60.7–66.1). In contrast, Claude-3 correctly answered 57.7% ( n  = 52/90) of questions, and GPT-4o correctly answered 53.3% ( n  = 48/90). Claude-3 had a longer mean response time (24.0 s, 95% CI: 21.0-32.5 vs. 12.4 s, 95% CI: 9.3–15.3; p  < 0.01) and produced longer answers (2001 characters, 95% CI: 1845–2212 vs. 1596 characters, 95% CI: 1395–1705; p  < 0.01) compared to GPT-4o. Most errors by GPT-4o were logical errors (62.5%). Conclusions While AI chatbots like GPT-4o and Claude-3 demonstrate potential in handling structured medical knowledge, their performance in scenarios involving diagnostic uncertainty remains suboptimal compared to human residents.
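The gap in correctness rates reported above can be illustrated with a chi-squared test on a contingency table of correct versus incorrect counts. A minimal sketch using scipy; the table below uses only the chatbot figures quoted in the abstract (Claude-3 52/90, GPT-4o 48/90) and illustrates the test mechanics rather than reproducing the study's full analysis:

    from scipy.stats import chi2_contingency

    # Correct vs. incorrect counts on the 90 diagnostic-uncertainty questions,
    # taken from the abstract (Claude-3: 52/90, GPT-4o: 48/90).
    table = [
        [52, 90 - 52],  # Claude-3
        [48, 90 - 48],  # GPT-4o
    ]
    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")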
Shaping the future: perspectives on the Integration of Artificial Intelligence in health profession education: a multi-country survey
Background Artificial intelligence (AI) is transforming health profession education (HPE) through personalized learning technologies. HPE students must also learn about AI to understand its impact on healthcare delivery. We examined HPE students’ AI-related knowledge and attitudes, and perceived challenges in integrating AI in HPE. Methods This cross-sectional study included medical, nursing, physiotherapy, and clinical nutrition students from four public universities in Jordan, the Kingdom of Saudi Arabia (KSA), the United Arab Emirates (UAE), and Egypt. Data were collected between February and October 2023 via an online survey that covered five main domains: benefits of AI in healthcare, negative impact on patient trust, negative impact on the future of healthcare professionals, inclusion of AI in HPE curricula, and challenges hindering integration of AI in HPE. Results Of 642 participants, 66.4% reported low AI knowledge levels. The UAE had the largest proportion of students with low knowledge (72.7%). The majority (54.4%) of participants had learned about AI outside their curriculum, mainly through social media (66%). Overall, 51.2% expressed positive attitudes toward AI, with Egypt showing the largest proportion of positive attitudes (59.1%). Although most participants viewed AI in healthcare positively (91%), significant variations were observed in other domains. The majority (77.6%) supported integrating AI in HPE, especially in Egypt (82.3%). A perceived negative impact of AI on patient trust was expressed by 43.5% of participants, particularly in Egypt (54.7%). Only 18.1% of participants were concerned about the impact of AI on future healthcare professionals, with the largest proportion from Egypt (33.0%). Some participants (34.4%) perceived AI integration as challenging, notably in the UAE (47.6%). Common barriers included lack of expert training (53%), awareness (50%), and interest in AI (41%). Conclusion This study clarified key considerations when integrating AI in HPE. Enhancing students’ awareness and fostering innovation in an AI-driven medical landscape are crucial for effectively incorporating AI in HPE curricula.
AI-powered standardised patients: evaluating ChatGPT-4o’s impact on clinical case management in intern physicians
Background Artificial intelligence is currently being applied in healthcare for diagnosis, decision-making and education. ChatGPT-4o, with its advanced language and problem-solving capabilities, offers an innovative alternative as a virtual standardised patient in clinical training. Intern physicians are expected to develop clinical case management skills such as problem-solving, clinical reasoning and crisis management. In this study, ChatGPT-4o served as a virtual standardised patient while medical interns acted as physicians managing clinical cases. The study aimed to evaluate intern physicians’ competencies in clinical case management (problem-solving, clinical reasoning and crisis management) and to explore the impact and potential of ChatGPT-4o as a viable tool for assessing these competencies. Methods This study used a simultaneous triangulation design, integrating quantitative and qualitative data. Conducted at Aydın Adnan Menderes University with 21 sixth-year medical students, ChatGPT-4o simulated realistic patient interactions requiring the clinical case management competencies of problem-solving, clinical reasoning and crisis management. Data were gathered through a self-assessment survey, semi-structured interviews, and observations of the students and ChatGPT-4o during the process. Analyses included Pearson correlation, Chi-square, and Kruskal-Wallis tests, with content analysis conducted on qualitative data using MAXQDA software for coding. Results According to the findings, observation and self-assessment survey scores of intern physicians’ clinical case management skills were positively correlated. There was a significant gap between participants’ self-assessment and actual performance, indicating discrepancies in self-perceived versus real clinical competence. Participants reported feeling inadequate in their problem-solving and clinical reasoning competencies and experienced time pressure. They were satisfied with the artificial intelligence-powered standardised patient process and were willing to continue similar practices. Participants engaged with a uniform patient experience. Although participants were satisfied, the application process was sometimes negatively affected by disconnection problems and language processing challenges. Conclusions ChatGPT-4o successfully simulated patient interactions, providing a controlled environment for practicing clinical case management without risking harm to real patients. Although some technological challenges limited its effectiveness, the approach was useful, cost-effective and accessible. It is thought that intern physicians will be better supported in acquiring clinical management skills through varied clinical scenarios using this method. Clinical trial number Not applicable.
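A virtual standardised patient of the kind described here is, at its core, a persona-locked system prompt plus a running conversation history. A minimal sketch assuming the openai Python client; the persona text, case details, and model name are invented for illustration and are not the study's materials:

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # Hypothetical patient persona; the study's actual case scripts are not reproduced here.
    PATIENT_PERSONA = (
        "You are a 58-year-old patient with two hours of crushing chest pain. "
        "Answer only as the patient, reveal details only when asked, "
        "and never volunteer a diagnosis."
    )

    history = [{"role": "system", "content": PATIENT_PERSONA}]

    def ask_patient(question: str) -> str:
        """Send one intern question to the simulated patient and keep the dialogue state."""
        history.append({"role": "user", "content": question})
        resp = client.chat.completions.create(model="gpt-4o", messages=history)
        answer = resp.choices[0].message.content
        history.append({"role": "assistant", "content": answer})
        return answer

    print(ask_patient("What brings you in today?"))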
Artificial intelligence based assessment of clinical reasoning documentation: an observational study of the impact of the clinical learning environment on resident documentation quality
Background Objective measures and large datasets are needed to determine aspects of the Clinical Learning Environment (CLE) impacting the essential skill of clinical reasoning documentation. Artificial Intelligence (AI) offers a solution. Here, the authors sought to determine what aspects of the CLE might be impacting resident clinical reasoning documentation quality assessed by AI. Methods In this observational, retrospective cross-sectional analysis of hospital admission notes from the Electronic Health Record (EHR), all categorical internal medicine (IM) residents who wrote at least one admission note during the study period July 1, 2018– June 30, 2023 at two sites of NYU Grossman School of Medicine’s IM residency program were included. Clinical reasoning documentation quality of admission notes was determined to be low or high-quality using a supervised machine learning model. From note-level data, the shift (day or night) and note index within shift (if a note was first, second, etc. within shift) were calculated. These aspects of the CLE were included as potential markers of workload, which have been shown to have a strong relationship with resident performance. Patient data was also captured, including age, sex, Charlson Comorbidity Index, and primary diagnosis. The relationship between these variables and clinical reasoning documentation quality was analyzed using generalized estimating equations accounting for resident-level clustering. Results Across 37,750 notes authored by 474 residents, patients who were older, had more pre-existing comorbidities, and presented with certain primary diagnoses (e.g., infectious and pulmonary conditions) were associated with higher clinical reasoning documentation quality. When controlling for these and other patient factors, variables associated with clinical reasoning documentation quality included academic year (adjusted odds ratio, aOR, for high-quality: 1.10; 95% CI 1.06–1.15; P  <.001), night shift (aOR 1.21; 95% CI 1.13–1.30; P  <.001), and note index (aOR 0.93; 95% CI 0.90–0.95; P  <.001). Conclusions AI can be used to assess complex skills such as clinical reasoning in authentic clinical notes that can help elucidate the potential impact of the CLE on resident clinical reasoning documentation quality. Future work should explore residency program and systems interventions to optimize the CLE.
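The clustered, note-level analysis described above maps onto a binomial generalized estimating equation with a working correlation within residents. A minimal sketch assuming statsmodels; the variable names and the toy data are illustrative, not the study's dataset:

    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    # Hypothetical note-level data: binary quality label, shift indicator, and
    # note index within shift, clustered by resident.
    notes = pd.DataFrame({
        "high_quality": [1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1],
        "night_shift":  [1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0],
        "note_index":   [1, 2, 1, 3, 2, 1, 2, 3, 1, 2, 2, 1],
        "resident":     [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4],
    })

    # Binomial GEE with resident-level clustering, mirroring the kind of
    # adjusted odds ratios reported in the abstract.
    model = smf.gee("high_quality ~ night_shift + note_index",
                    groups="resident", data=notes,
                    family=sm.families.Binomial(),
                    cov_struct=sm.cov_struct.Exchangeable())
    result = model.fit()
    print(result.summary())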
Using ChatGPT for medical education: the technical perspective
Background The chatbot application Bennie and the Chats was introduced during the COVID-19 outbreak as a substitute for conventional teaching of clinical history-taking skills. It was implemented with DialogFlow using preset responses, which severely constrained its ability to respond to varied conversations. The rapid advancement of artificial intelligence, such as the recent introduction of ChatGPT, offers innovative conversational experiences with computer-generated responses and motivated the development of the second generation of Bennie and the Chats. As the epidemic slows, it can serve as a supplementary exercise tool for students. In this work, we present the second generation of Bennie and the Chats built on ChatGPT, which allows flexible and expandable improvement. Methods The objective of this research is to examine the influence of the newly proposed chatbot on learning efficacy and experiences in bedside teaching, and its potential contributions to international teaching collaboration. This study employs a mixed-method design that incorporates both quantitative and qualitative approaches. For the quantitative component, we launched the world’s first cross-territory virtual bedside teaching session with our proposed application and conducted a survey between the University of Hong Kong (HKU) and the National University of Singapore (NUS). Descriptive statistics and Spearman’s correlation were applied for data analysis. For the qualitative component, a comparative analysis was conducted between the two versions of the chatbot, and we discuss the interrelationship between the quantitative and qualitative results. Results For the quantitative results, we collected questionnaires from 45 students evaluating cross-territory virtual bedside teaching. Over 75% of the students agreed that this form of teaching can enhance learning effectiveness and experience. Moreover, 82.2% of students agreed that exchanging patient cases helps them gain experience with diseases that may not be prevalent in their own locality. For the qualitative results, the new chatbot provides better usability and flexibility. Conclusion Virtual bedside teaching with chatbots has transformed conventional bedside teaching and enables international collaboration. We believe that chatbot-based training of history-taking skills will be a feasible supplement to conventional bedside teaching.
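The architectural shift this abstract describes is from a fixed intent-to-response table (the DialogFlow-style first generation) to free-form generated replies (the ChatGPT-based second generation). A minimal sketch of that contrast, assuming the openai Python client; the intents, persona, and model name are illustrative only:

    from openai import OpenAI

    # First generation (DialogFlow-style): each recognised intent maps to a preset
    # reply, so unanticipated questions fall through to a fallback.
    PRESET_REPLIES = {
        "chief_complaint": "I've had a cough for three weeks.",
        "fallback": "Sorry, I don't understand the question.",
    }

    def preset_reply(intent: str) -> str:
        return PRESET_REPLIES.get(intent, PRESET_REPLIES["fallback"])

    # Second generation: replies are generated, so any phrasing of a question works.
    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def generated_reply(question: str) -> str:
        resp = client.chat.completions.create(
            model="gpt-4o",  # illustrative model choice
            messages=[
                {"role": "system",
                 "content": "Role-play a patient with a three-week cough during history taking."},
                {"role": "user", "content": question},
            ],
        )
        return resp.choices[0].message.content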