29 results for "Yasutaka Yanagita"
Relationship between diagnostic accuracy and self-confidence among medical students when using Google search: A mixed-method study
With the growing volume of medical information, proficiency in using clinical decision support systems (CDSSs) is increasingly important for physicians. Research has primarily focused on CDSSs' accuracy for specific symptoms, diseases, and treatments, but the extent to which CDSSs contribute to the clinical reasoning process, and how their output should be evaluated, remains unclear. While Google is not a traditional CDSS, previous studies have evaluated its role as a diagnostic support tool, demonstrating its ability to help physicians retrieve relevant medical information and to influence diagnostic decision-making. This study aimed to assess whether using Google search can enhance diagnostic accuracy and confidence among medical students, and to evaluate how the interpretation of search results influences their diagnostic confidence. Forty-eight fifth-year medical students in clinical clerkship at Chiba University Hospital were presented with ten clinical scenarios in text format. Initially, they provided the most likely diagnosis without assistance and recorded their confidence levels on a 7-point Likert scale. Subsequently, they used Google search to revisit their diagnoses and re-rated their confidence. Focus group interviews were conducted to discuss changes in confidence, and the interviews were analyzed qualitatively using content analysis. A mixed-methods analysis compared the average number of correct diagnoses and confidence levels before and after using Google search. In total, 470 responses from the 48 fifth-year medical students were evaluated after excluding 10 inappropriate responses. Correct diagnoses increased from an average of 63.6% without assistance to 76.2% with Google search (P < .001), and confidence levels rose from 4.9 to 5.9 (P < .001). Qualitative analysis of higher-confidence responses identified 108 codes within 17 subcategories related to diagnostic processes.
This study underscores the value of using Google search in medical education to enhance diagnostic skills and confidence. The improvement in accuracy and confidence among students demonstrates the supportive role of Google search in clinical reasoning and education. This highlights the need for educators to teach discernment in information analysis to ensure optimal use of CDSS in medical training. Proper integration of these tools is crucial for developing future physicians capable of effectively navigating vast amounts of medical data.
Accuracy of ChatGPT on Medical Questions in the National Medical Licensing Examination in Japan: Evaluation Study
Background: ChatGPT (OpenAI) has gained considerable attention because of its natural and intuitive responses. ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers, a limitation acknowledged by OpenAI. However, considering that ChatGPT is an interactive AI that has been trained to reduce the output of unethical sentences, the reliability of the training data is high and the usefulness of the output content is promising. In March 2023, a new version of ChatGPT, GPT-4, was released; according to internal evaluations, it was expected to be 40% more likely to produce factual responses than its predecessor, GPT-3.5. The usefulness of this version of ChatGPT in English is widely appreciated, and it is increasingly being evaluated as a system for obtaining medical information in languages other than English. Although it does not reach a passing score on the national medical examination in Chinese, its accuracy is expected to gradually improve. Evaluation of ChatGPT with Japanese input is limited, although there have been reports on the accuracy of ChatGPT's answers to clinical questions regarding the Japanese Society of Hypertension guidelines and on its performance on the National Nursing Examination.
Objective: The objective of this study is to evaluate whether ChatGPT can provide accurate diagnoses and medical knowledge for Japanese input.
Methods: Questions from the National Medical Licensing Examination (NMLE) in Japan, administered by the Japanese Ministry of Health, Labour and Welfare in 2022, were used. All 400 questions were included; questions with figures and tables, which ChatGPT could not recognize, were excluded, and only text questions were extracted. We input the Japanese questions into GPT-3.5 and GPT-4 as written and had each model output the answer to every question. The output of ChatGPT was verified by 2 general practice physicians; in case of discrepancies, a third physician made the final decision. Overall performance was evaluated by calculating the percentage of correct answers output by GPT-3.5 and GPT-4.
Results: Of the 400 questions, 292 were analyzed; questions containing charts, which are not supported by ChatGPT, were excluded. The correct response rate for GPT-4 was 81.5% (237/292), significantly higher than the 42.8% (125/292) for GPT-3.5. Moreover, GPT-4 surpassed the passing standard (>72%) for the NMLE, indicating its potential as a diagnostic and therapeutic decision aid for physicians.
Conclusions: GPT-4 reached the passing standard for the NMLE in Japan with questions entered in Japanese, although the evaluation was limited to text-only questions. As the accelerated progress of the past few months has shown, the performance of the AI will improve as the large language model continues to learn, and it may well become a decision support system for medical professionals by providing more accurate information.
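The GPT-4 versus GPT-3.5 comparison above can be sanity-checked from the reported counts alone. The sketch below (plain Python, no external dependencies; the 237/292 and 125/292 counts come from the abstract, while the helper function is ours, not the authors' analysis code) computes a Pearson chi-square statistic for the 2x2 correct/incorrect table:

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic (df = 1) for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Correct / incorrect counts from the abstract (292 analyzable questions each)
gpt4_correct, gpt4_wrong = 237, 292 - 237
gpt35_correct, gpt35_wrong = 125, 292 - 125

chi2 = chi_square_2x2(gpt4_correct, gpt4_wrong, gpt35_correct, gpt35_wrong)
print(f"chi-square(1) = {chi2:.1f}")
# The statistic lands far above 10.83, the df=1 critical value for p = .001,
# consistent with the abstract's "significantly higher" claim.
```

The critical-value comparison is a shortcut: for a 2x2 table, any statistic above 10.83 implies p < .001 without needing a p-value routine.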
Oropharyngeal Carcinoma Presenting as a Large Cervical Mass With Impending Airway Obstruction
A middle-aged woman presented with progressive pharyngeal symptoms and an enlarging neck mass, eventually diagnosed as human papillomavirus (HPV)-negative oropharyngeal squamous cell carcinoma (OPSCC). Due to impending airway obstruction, an emergency tracheostomy was performed. Histopathological examination confirmed squamous cell carcinoma, leading to a final diagnosis of non-HPV-associated OPSCC, a rarer and more aggressive subtype. Despite receiving palliative radiotherapy for symptom control, the disease progressed aggressively, and the patient was ultimately transitioned to best supportive care. OPSCC arises from the mucosal epithelium of the oropharynx. Early symptoms such as persistent throat discomfort, dysphagia, and cervical lymphadenopathy are relatively characteristic of oropharyngeal tumors. Tumor progression can cause critical airway obstruction depending on the lesion's size and location. In this report, we present a case of OPSCC that required emergency airway management due to the risk of airway obstruction. This case underscores the importance of early diagnosis and airway management in patients with progressive pharyngeal symptoms and high-risk features.
Expert assessment of ChatGPT’s ability to generate illness scripts: an evaluative study
Background An illness script is a specific script format geared to represent patient-oriented clinical knowledge organized around enabling conditions, faults (i.e., pathophysiological process), and consequences. Generative artificial intelligence (AI) stands out as an educational aid in continuing medical education. The effortless creation of a typical illness script by generative AI could help the comprehension of key features of diseases and increase diagnostic accuracy. No systematic summary of specific examples of illness scripts has been reported since illness scripts are unique to each physician. Objective This study investigated whether generative AI can generate illness scripts. Methods We utilized ChatGPT-4, a generative AI, to create illness scripts for 184 diseases based on the diseases and conditions integral to the National Model Core Curriculum in Japan for undergraduate medical education (2022 revised edition) and primary care specialist training in Japan. Three physicians applied a three-tier grading scale: “A” denotes that the content of each disease’s illness script proves sufficient for training medical students, “B” denotes that it is partially lacking but acceptable, and “C” denotes that it is deficient in multiple respects. Results By leveraging ChatGPT-4, we successfully generated each component of the illness script for 184 diseases without any omission. The illness scripts received “A,” “B,” and “C” ratings of 56.0% (103/184), 28.3% (52/184), and 15.8% (29/184), respectively. Conclusion Useful illness scripts were seamlessly and instantaneously created using ChatGPT-4 by employing prompts appropriate for medical students. The technology-driven illness script is a valuable tool for introducing medical students to key features of diseases.
The flipped classroom is effective for medical students to improve deep tendon reflex examination skills: A mixed-method study
Deep tendon reflexes (DTR) are a prerequisite skill in clinical clerkships. However, many medical students are not confident in their technique and need to be effectively trained. We evaluated the effectiveness of a flipped classroom for teaching DTR skills. We recruited 83 fifth-year medical students who participated in a clinical clerkship at the Department of General Medicine, Chiba University Hospital, from November 2018 to July 2019. They were allocated to the flipped classroom technique (intervention group, n = 39) or the traditional technique instruction group (control group, n = 44). Before procedural teaching, while the intervention group learned about DTR by e-learning, the control group did so face-to-face. A 5-point Likert scale was used to evaluate self-confidence in DTR examination before and after the procedural teaching (1 = no confidence, 5 = confidence). We evaluated the mastery of techniques after procedural teaching using the Direct Observation of Procedural Skills (DOPS). Unpaired t-test was used to analyze the difference between the two groups on the 5-point Likert scale and DOPS. We assessed self-confidence in DTR examination before and after procedural teaching using a free description questionnaire in the two groups. Additionally, in the intervention group, focus group interviews (FGI) (7 groups, n = 39) were conducted to assess the effectiveness of the flipped classroom after procedural teaching. Pre-test self-confidence in the DTR examination was significantly higher in the intervention group than in the control group (2.8 vs. 2.3, P = 0.005). Post-test self-confidence in the DTR examination was not significantly different between the two groups (3.9 vs. 4.1, P = 0.31), and so was mastery (4.3 vs. 4.1, P = 0.68). 
The questionnaires before the procedural teaching revealed themes common to the two groups, including “lack of knowledge” and “lack of self-confidence.” Themes about prior learning, including “acquisition of knowledge” and “promoting understanding,” were specific in the intervention group. The FGI revealed themes including “application of knowledge,” “improvement in DTR technique,” and “increased self-confidence.” Based on these results, teaching DTR skills to medical students in flipped classrooms improves readiness for learning and increases self-confidence in performing the procedure at a point before procedural teaching.
Association Between Physician Empathy and Difficult Patient Encounters: a Cross-Sectional Study
Background Physicians frequently experience patients as difficult. Our study explores whether more empathetic physicians experience fewer patient encounters as difficult. Objective To investigate the association between physician empathy and difficult patient encounters (DPEs). Design Cross-sectional study. Participants Participants were 18 generalist physicians with 3–8 years of experience. The investigation was conducted from August–September 2018 and April–May 2019 at six healthcare facilities. Main Measures Based on the Jefferson Scale of Empathy (JSE) scores, we classified physicians into low and high empathy groups. The physicians completed the Difficult Doctor-Patient Relationship Questionnaire-10 (DDPRQ-10) after each patient visit. Scores ≥ 31 on the DDPRQ-10 indicated DPEs. We implemented multilevel mixed-effects logistic regression models to examine the association between physicians' empathy and DPEs, adjusting for patient-level covariates (age, sex, history of mental disorders) and clustering at the physician level. Key Results The median JSE score was 114 (range: 96–126); physicians with JSE scores of 96–113 and 114–126 were assigned to the low and high empathy groups, respectively (n = 8 and n = 10); 240 and 344 patients were examined by physicians in the low and high empathy groups, respectively. Among low empathy physicians, 23% of encounters were considered difficult, compared to 11% among high empathy physicians (OR: 0.37; 95% CI = 0.19–0.72, p = 0.004). JSE scores and DDPRQ-10 scores were negatively correlated (r = −0.22, p < 0.01). Conclusion Empathetic physicians were less likely to experience encounters as difficult. Empathy appears to be an important component of physician perception of encounter difficulty.
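To see how the reported odds ratio relates to the raw percentages, the sketch below computes a crude (unadjusted) odds ratio from event counts reconstructed by rounding the abstract's figures (23% of 240 encounters ≈ 55 difficult; 11% of 344 ≈ 38 difficult). These counts are our approximation, and the paper's OR of 0.37 comes from an adjusted multilevel model, so the crude value is expected to differ:

```python
def crude_odds_ratio(events_a, n_a, events_b, n_b):
    """Unadjusted odds ratio of an event in group A relative to group B."""
    odds_a = events_a / (n_a - events_a)
    odds_b = events_b / (n_b - events_b)
    return odds_a / odds_b

# Approximate counts reconstructed from the abstract's percentages (our assumption):
# high-empathy group: ~38 of 344 encounters difficult; low-empathy: ~55 of 240
or_high_vs_low = crude_odds_ratio(38, 344, 55, 240)
print(f"crude OR (high vs. low empathy): {or_high_vs_low:.2f}")
```

The crude estimate comes out around 0.4, in the same direction as, but not identical to, the adjusted OR of 0.37, which additionally accounts for patient age, sex, mental-disorder history, and physician-level clustering.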
Hybrid PBL and Pure PBL: Which one is more effective in developing clinical reasoning skills for general medicine clerkship?—A mixed-method study
This study aims to compare the effectiveness of Hybrid and Pure problem-based learning (PBL) in teaching clinical reasoning skills to medical students. The study sample consisted of 99 medical students participating in a clerkship rotation at the Department of General Medicine, Chiba University Hospital. They were randomly assigned to a Hybrid PBL (intervention group, n = 52) or Pure PBL group (control group, n = 47). The quantitative outcomes were measured with the students' perceived competence in PBL, satisfaction with sessions, and self-evaluation of competency in clinical reasoning. The qualitative component consisted of a content analysis on the benefits of learning clinical reasoning using Hybrid PBL. There was no significant difference between intervention and control groups in the five students' perceived competence and satisfaction with sessions. In a two-way repeated-measures analysis of variance, self-evaluation of competency in clinical reasoning was significantly improved in the intervention group for "recalling appropriate differential diagnoses from the patient's chief complaint" (F(1,97) = 5.295, p = 0.024) and "practicing the appropriate clinical reasoning process" (F(1,97) = 4.016, p = 0.038). According to multiple comparisons, the scores for "recalling appropriate history, physical examination, and tests on clinical hypothesis generation" (F(1,97) = 6.796, p = 0.011), "verbalizing and reflecting appropriately on own mistakes" (F(1,97) = 4.352, p = 0.040), "selecting keywords from the whole aspect of the patient" (F(1,97) = 5.607, p = 0.020), and "examining the patient while visualizing his/her daily life" (F(1,97) = 7.120, p = 0.009) were significantly higher in the control group. In the content analysis, 13 advantage categories of Hybrid PBL were extracted. Among the subcategories, "acquisition of knowledge" was the most frequent, followed by "leading the discussion," "smooth discussion," "getting feedback," "timely feedback," and "supporting the clinical reasoning process." Hybrid PBL can help acquire practical knowledge and deepen understanding of clinical reasoning, whereas Pure PBL can improve several important skills such as verbalizing and reflecting on one's own errors and selecting appropriate keywords from the whole aspect of the patient.
Does a learner-centered approach using teleconference improve medical students’ psychological safety and self-explanation in clinical reasoning conferences? a crossover study
During clinical reasoning case conferences, a learner-centered approach using teleconferencing can create a psychologically safe environment and help learners speak up. This study aims to measure the psychological safety of students who are expected to self-explain their clinical reasoning to conference participants. This crossover study compared the effects of two clinical reasoning case conference methods on medical students' psychological safety. The study population comprised 4th- and 5th-year medical students participating in a two-week general medicine clinical clerkship rotation from September 2019 to February 2020. They participated in both a learner-centered teleconference and a traditional, live-style conference. Teleconferences were conducted in a separate room, with only a group of students and one facilitator. Participants in group 1 received a learner-centered teleconference in the first week and a traditional, live-style conference in the second week. Participants assigned to group 2 received a traditional, live-style conference in the first week and a learner-centered teleconference in the second week. After each conference, Edmondson's Psychological Safety Scale was used to assess the students' psychological safety. We also counted the number of students who self-explained their clinical reasoning processes during each conference. Of the 38 students, 34 completed the study. Six of the seven psychological safety items were significantly higher in the learner-centered teleconferences (p < 0.01). Twenty-nine (85.3%) students performed self-explanation in the teleconference compared to ten (29.4%) in the live conference (p < 0.01). A learner-centered teleconference could improve psychological safety in novice learners and increase the frequency of their self-explanation, helping educators better assess their understanding.
Based on these results, a learner-centered teleconference approach has the potential to be a method for teaching clinical reasoning to medical students.
Appropriate semantic qualifiers increase diagnostic accuracy when using a clinical decision support system: a randomized controlled trial
Background The role of appropriate semantic qualifiers (SQs) in the effective use of a clinical decision support system (CDSS) is not yet fully understood, and previous studies have not investigated the input provided to such systems. This study aimed to investigate whether the appropriateness of SQs modified the impact of a CDSS on diagnostic accuracy among medical students. Methods For this randomized controlled trial, a total of forty-two fifth-year medical students in a clinical clerkship at Chiba University Hospital were enrolled from May to December 2020. They were divided into the CDSS group (CDSS use; 22 participants) and the control group (no CDSS use; 20 participants). Students were presented with ten expert-developed case vignettes asking for SQs and a diagnosis. Three appropriate SQs were established for each case vignette. Participants were awarded one point for each SQ that was consistent with the set SQs, and those with two or more points were considered to have provided appropriate SQs. The CDSS used was Current Decision Support®. We evaluated diagnostic accuracy and the appropriateness of SQ differences between the CDSS and control groups. Results Data from all 42 participants were analyzed. The CDSS and control groups provided 133 (60.5%; 220 answers) and 115 (57.5%; 200 answers) appropriate SQs, respectively. Among CDSS users, diagnostic accuracy was significantly higher with appropriate SQs than with inappropriate SQs (χ²(1) = 4.97, p = 0.026). With appropriate SQs, diagnostic accuracy was significantly higher in the CDSS group than in the control group (χ²(1) = 11.6, p < 0.001). With inappropriate SQs, there was no significant difference in diagnostic accuracy between the two groups (χ²(1) = 8.62 × 10⁻², p = 0.769). Conclusions Medical students may make more accurate diagnoses using the CDSS if appropriate SQs are set. Improving students' ability to set appropriate SQs may improve the effectiveness of CDSS use.
Trial registration This study was registered with the University Hospital Medical Information Network Clinical Trials Registry on 24/12/2020 (Unique trial number: UMIN000042831).
Potential for reducing confirmation bias using the OMP model “6-microskills” with verbalizing discordance: a cross-sectional study
Objectives The "5-microskills" instructional method for clinical reasoning does not incorporate a step for learners' critical reflection on their predicted hypotheses. This study aimed to correct this shortcoming by inserting a third step in which learners conduct critical self-examinations and furnish evidence that contradicts their predicted hypotheses, resulting in the "6-microskills" method. Methods In this cross-sectional study, we measured changes in learners' confidence in their predicted hypotheses and examined whether verbalizing discordance modified confirmation bias and diagnoses. A total of 108 medical students were presented with one randomly assigned clinical vignette from a set of eight and were asked to: (1) describe their first impression; (2) provide evidence for it; and (3) finally identify inconsistencies and state evidence against it. Participants rated their confidence in their diagnosis at each of the three steps on a 10-point scale, and results were analyzed using a repeated-measures two-way ANOVA with one between-participant factor (correct vs. incorrect diagnosis) and one within-participant factor (the three diagnostic steps). The Bonferroni method was used for multiple comparison tests. Results Mean confidence scores were 5.01 (Step 1), 5.20 (Step 2), and 4.98 (Step 3); multiple comparisons showed a significant difference between Steps 1–2 (P = .04) and Steps 2–3 (P = .01). Verbalization of evidence in favor of the predicted hypothesis (Step 2) and against it (Step 3) prompted changes in diagnosis in four cases of misdiagnosis (three at Step 2, one at Step 3). Conclusions The 6-microskills method, which adds a step encouraging learners to verbalize why something "does not fit" their predicted diagnosis, may effectively correct the confirmation bias associated with diagnostic predictions.