1,744 results for "Medical History Taking - methods"
Usability, Engagement, and Report Usefulness of Chatbot-Based Family Health History Data Collection: Mixed Methods Analysis
Family health history (FHx) is an important predictor of a person's genetic risk but is not collected by many adults in the United States. This study aims to test and compare the usability, engagement, and report usefulness of 2 web-based methods to collect FHx. This mixed methods study compared FHx data collection using a flow-based chatbot (KIT; the curious interactive test) and a form-based method. KIT's design was optimized to reduce user burden. We recruited and randomized individuals from 2 crowdsourced platforms to 1 of the 2 FHx methods. All participants were asked to complete a questionnaire to assess the method's usability, the usefulness of a report summarizing their experience, user-desired chatbot enhancements, and general user experience. Engagement was studied using log data collected by the methods. We used qualitative findings from analyzing free-text comments to supplement the primary quantitative results. Participants randomized to KIT reported higher usability than those randomized to the form, with a mean System Usability Scale score of 80.2 versus 61.9 (P<.001), respectively. The engagement analysis reflected design differences in the onboarding process. KIT users spent less time entering FHx information and reported fewer conditions than form users (mean 5.90 vs 7.97 min; P=.04; and mean 7.8 vs 10.1 conditions; P=.04). Both KIT and form users somewhat agreed that the report was useful (Likert scale ratings of 4.08 and 4.29, respectively). Among desired enhancements, personalization was the highest-rated feature (188/205, 91.7% rated medium- to high-priority). Qualitative analyses revealed positive and negative characteristics of both KIT and the form-based method. Among respondents randomized to KIT, most indicated it was easy to use and navigate and that they could respond to and understand user prompts. Negative comments addressed KIT's personality, conversational pace, and ability to manage errors.
For KIT and form respondents, qualitative results revealed common themes, including a desire for more information about conditions and a mutual appreciation for the multiple-choice button response format. Respondents also said they wanted to report health information beyond KIT's prompts (eg, personal health history) and for KIT to provide more personalized responses. We showed that KIT provided a usable way to collect FHx. We also identified design considerations to improve chatbot-based FHx data collection: First, the final report summarizing the FHx collection experience should be enhanced to provide more value for patients. Second, the onboarding chatbot prompt may impact data quality and should be carefully considered. Finally, we highlighted several areas that could be improved by moving from a flow-based chatbot to a large language model implementation strategy.
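The usability comparison above (80.2 vs 61.9) uses the System Usability Scale, whose scoring procedure is standard and independent of this study. As a rough illustration (not the study's own instrument or code), a SUS score is derived from ten 1-5 item ratings like this:

```python
def sus_score(ratings):
    """Compute a System Usability Scale score from ten 1-5 item ratings.

    Odd-numbered items are positively worded (contribution = rating - 1);
    even-numbered items are negatively worded (contribution = 5 - rating).
    The summed contributions (0-40) are scaled by 2.5 to give a 0-100 score.
    """
    if len(ratings) != 10 or not all(1 <= r <= 5 for r in ratings):
        raise ValueError("SUS needs ten ratings on a 1-5 scale")
    total = sum(
        (r - 1) if i % 2 == 0 else (5 - r)  # index 0 is item 1 (odd-numbered)
        for i, r in enumerate(ratings)
    )
    return total * 2.5
```

A score around 68 is often quoted as the benchmark average, which would place the chatbot arm (80.2) well above average and the form arm (61.9) below it.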
Feasibility study of using GPT for history-taking training in medical education: a randomized clinical trial
Background Traditional methods of teaching history-taking in medical education are limited by scalability and resource intensity. This study aims to assess the effectiveness of simulated patient interactions based on a custom-designed Generative Pre-trained Transformer (GPT) model, developed using OpenAI’s ChatGPT GPTs platform, in enhancing medical students’ history-taking skills compared to traditional role-playing methods. Methods A total of 56 medical students were randomly assigned to two groups: a GPT group using GPT-simulated patients and a control group using traditional role-playing. Pre- and post-training assessments were conducted using a structured clinical examination to measure students’ abilities in history collection, clinical reasoning, communication skills, and professional behavior. Additionally, students’ evaluations of the educational effectiveness, satisfaction, and recommendation likelihood were assessed. Results The GPT-simulation group showed significantly higher post-training scores in the structured clinical examination compared to the control group (86.79 ± 5.46 vs 73.64 ± 4.76, respectively; P < 0.001). Students in the GPT group exhibited higher enthusiasm for learning, greater self-directed learning motivation, and better communication feedback abilities compared to the control group (P < 0.05). Additionally, the student satisfaction survey revealed that the GPT group rated higher on the diversity of diseases encountered, ease of use, and likelihood of recommending the training compared to the control group (P < 0.05). Conclusions GPT-based history-taking training effectively enhances medical students’ history-taking skills, providing a solid foundation for the application of artificial intelligence (AI) in medical education. Clinical trial number NCT06766383.
Impact of providing a customized guideline on virtual medical history taking in two serious games for medical education
Serious games are known as safe learning environments, allowing medical students to train their skills without endangering patients' safety. By integrating virtual patients via chatbots, serious games provide the opportunity to practice history taking. The study investigated the impact of self-directed learning by means of a customized guideline on history taking in two distinct chatbot systems embedded in serious games. Fourth-year medical students (n = 159) were randomized to one of two serious games, each representing an emergency department and simulating different clinical scenarios. Students played the serious games at two measurement points and received a guideline between both sessions. The chatbots differed in the manner of query entry, with one requiring students to formulate history taking questions themselves, while the other provided a long menu of selectable questions. The dependent variables analyzed included the history taking data entered into the chatbots, represented as a quantified history score, as well as students' comparative self-assessments of their learning outcomes. Comparing only the first measurement point, students achieved higher scores in the free-entry chatbot (85.2 ± 27.7) compared to the long menu chatbot (78.8 ± 35.7). Students achieved significantly higher scores in the second than in the first session in the long menu chatbot (t(315) = -2.918, p = .004, d = -0.229) but not in the free-entry chatbot after receiving the guideline. In terms of students' self-assessment, no significant difference between both serious games was found. The results suggest that history taking benefits from self-directed learning in a long menu format relying on cued recall but not in a free-entry chatbot relying on free recall. Since serious games are partially artificial learning environments for training history taking, future studies should examine the extent to which students can transfer their learning in and out of serious games.
MRI does not add value over and above patient history and clinical examination in predicting time to return to sport after acute hamstring injuries: a prospective cohort of 180 male athletes
Background MRI is frequently used in addition to clinical evaluation for predicting time to return to sport (RTS) after acute hamstring injury. However, the additional value of MRI to patient history taking and clinical examination remains unknown and is debated. Aim To prospectively investigate the predictive value of patient history and clinical examination at baseline alone and the additional predictive value of MRI findings for time to RTS using multivariate analysis while controlling for treatment confounders. Methods Male athletes (N=180) with acute onset posterior thigh pain underwent standardised patient history, clinical and MRI examinations within 5 days, and time to RTS was registered. A general linear model was constructed to assess the associations between RTS and the potential baseline predictors. A manual backward stepwise technique was used to keep treatment variables fixed. Results In the first multiple regression model, including only patient history and clinical examination, maximum pain score (visual analogue scale, VAS), forced to stop within 5 min, length of hamstring tenderness and painful resisted knee flexion (90°) showed independent associations with RTS, and the final model explained 29% of the total variance in time to RTS. By adding MRI variables in the second multiple regression model, maximum pain score (VAS), forced to stop within 5 min, length of hamstring tenderness and overall radiological grading showed independent associations, and the adjusted R² increased from 0.290 to 0.318. Thus, additional MRI explained 2.8% of the variance in RTS. Summary There was a wide variation in time to RTS, and the additional predictive value of MRI was negligible compared with baseline patient history taking and clinical examinations alone. Thus, clinicians cannot provide an accurate time to RTS just after an acute hamstring injury.
This study provides no rationale for routine MRI after acute hamstring injury. Trial registration number ClinicalTrials.gov Identifier: NCT01812564.
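The incremental value of MRI is reported as a change in adjusted R² (0.290 to 0.318). As a sketch of the standard adjusted R² formula and the incremental-variance arithmetic (the authors' actual model specification is not reproduced here):

```python
def adjusted_r2(r2, n_obs, n_predictors):
    """Adjusted R^2: penalizes raw R^2 for the number of predictors,
    so adding uninformative variables does not inflate the fit."""
    return 1 - (1 - r2) * (n_obs - 1) / (n_obs - n_predictors - 1)

# The abstract reports adjusted R^2 of 0.290 (history + clinical exam)
# and 0.318 (plus MRI variables); the extra variance attributed to MRI:
incremental = 0.318 - 0.290  # 0.028, i.e. 2.8% of the variance in RTS
```

Because the adjustment penalizes extra predictors, a 2.8% gain after adding several MRI variables supports the authors' "negligible added value" conclusion.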
Evaluation of the Use of a Virtual Patient on Student Competence and Confidence in Performing Simulated Clinic Visits
Objective. To assess the effect of incorporating virtual patient activities in a pharmacy skills lab on student competence and confidence when conducting real-time comprehensive clinic visits with mock patients. Methods. Students were randomly assigned to a control or intervention group. The control group completed the clinic visit prior to completing virtual patient activities. The intervention group completed the virtual patient activities prior to the clinic visit. Student proficiency was evaluated in the mock lab. All students completed additional exercises with the virtual patient and were subsequently assessed. Student impressions were assessed via a pre- and post-experience survey. Results. Student performance conducting clinic visits was higher in the intervention group compared to the control group. Overall student performance continued to improve in the subsequent module. There was no change in student confidence from pre- to post-experience. Student rating of the ease of use and realistic simulation of the virtual patient increased; however, student rating of the helpfulness of the virtual patient decreased. Despite this decline in perceived helpfulness, student performance improved. Conclusion. Virtual patient activities enhanced student performance during mock clinic visits. Students felt the virtual patient realistically simulated a real patient. Virtual patients may provide additional learning opportunities for students.
Reducing Patients’ Unmet Concerns in Primary Care: the Difference One Word Can Make
In primary, acute-care visits, patients frequently present with more than 1 concern. Various visit factors prevent additional concerns from being articulated and addressed. To test an intervention to reduce patients' unmet concerns. Cross-sectional comparison of 2 experimental questions, with videotaping of office visits and pre- and postvisit surveys. Twenty outpatient offices of community-based physicians equally divided between Los Angeles County and a midsized town in Pennsylvania. A volunteer sample of 20 family physicians (participation rate = 80%) and 224 patients approached consecutively within physicians (participation rate = 73%; approximately 11 participating for each enrolled physician) seeking care for an acute condition. After seeing 4 nonintervention patients, physicians were randomly assigned to solicit additional concerns by asking 1 of the following 2 questions after patients presented their chief concern: "Is there anything else you want to address in the visit today?" (ANY condition) and "Is there something else you want to address in the visit today?" (SOME condition). Main measures were patients' unmet concerns (concerns listed on previsit surveys but not addressed during visits), visit time, and unanticipated concerns (concerns addressed during the visit but not listed on previsit surveys). Relative to nonintervention cases, the implemented SOME intervention eliminated 78% of unmet concerns (odds ratio (OR) = .154, p = .001). The ANY intervention could not be significantly distinguished from the control condition (p = .122). Neither intervention affected visit length or patients' expression of unanticipated concerns not listed in previsit surveys. Patients' unmet concerns can be dramatically reduced by a simple inquiry framed in the SOME form. Both the learning and implementation of the intervention require very little time.
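The OR of .154 above comes from the authors' own model. As a generic illustration of how an odds ratio and its Wald confidence interval are computed from a 2×2 table (the counts below are hypothetical, not the study's data):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a/b = events/non-events in one group, c/d = events/non-events in the other."""
    or_ = (a / b) / (c / d)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical counts: 5 of 100 intervention visits vs 25 of 100 control
# visits leave a concern unmet.
or_, lo, hi = odds_ratio_ci(5, 95, 25, 75)
```

An OR well below 1 with an upper CI bound below 1, as in the study's .154, indicates the intervention group had substantially lower odds of an unmet concern.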
Effect of Teaching Bayesian Methods Using Learning by Concept vs Learning by Example on Medical Students’ Ability to Estimate Probability of a Diagnosis
Clinicians use probability estimates to make a diagnosis. Teaching students to make more accurate probability estimates could improve the diagnostic process and, ultimately, the quality of medical care. To test whether novice clinicians can be taught to make more accurate bayesian revisions of diagnostic probabilities using teaching methods that apply either explicit conceptual instruction or repeated examples. A randomized clinical trial of 2 methods for teaching bayesian updating and diagnostic reasoning was performed. A web-based platform was used for consent, randomization, intervention, and testing of the effect of the intervention. Participants included 61 medical students at McMaster University and Eastern Virginia Medical School recruited from May 1 to September 30, 2018. Students were randomized to (1) receive explicit conceptual instruction regarding diagnostic testing and bayesian revision (concept group), (2) exposure to repeated examples of cases with feedback regarding posttest probability (experience group), or (3) a control condition with no conceptual instruction or repeated examples. Students in all 3 groups were tested on their ability to update the probability of a diagnosis based on either negative or positive test results. Their probability revisions were compared with posttest probability revisions that were calculated using the Bayes rule and known test sensitivity and specificity. Of the 61 participants, 22 were assigned to the concept group, 20 to the experience group, and 19 to the control group. Approximate age was 25 years. Two participants were first-year; 37, second-year; 12, third-year; and 10, fourth-year students. Mean (SE) probability estimates of students in the concept group were statistically significantly closer to calculated bayesian probability than those of the other 2 groups (concept, 0.4% [0.7%]; experience, 3.5% [0.7%]; control, 4.3% [0.7%]; P < .001).
Although statistically significant, the differences between groups were relatively modest, and students in all groups performed better than expected, based on prior reports in the literature. The study showed a modest advantage for students who received theoretical instruction on bayesian concepts. All participants' probability estimates were, on average, close to the bayesian calculation. These findings have implications for how to teach diagnostic reasoning to novice clinicians. ClinicalTrials.gov identifier: NCT04130607.
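The bayesian revision tested in this trial converts a pretest probability to odds, multiplies by the likelihood ratio implied by the test's sensitivity and specificity, and converts back to a probability. A minimal sketch of that calculation (the test characteristics in the assertions are hypothetical, not the study's cases):

```python
def posttest_probability(pretest_p, sensitivity, specificity, positive=True):
    """Bayesian revision of a diagnostic probability via likelihood ratios.

    LR+ = sensitivity / (1 - specificity) for a positive result;
    LR- = (1 - sensitivity) / specificity for a negative result.
    posttest odds = pretest odds * LR; then convert odds back to probability.
    """
    if positive:
        lr = sensitivity / (1 - specificity)
    else:
        lr = (1 - sensitivity) / specificity
    pretest_odds = pretest_p / (1 - pretest_p)
    posttest_odds = pretest_odds * lr
    return posttest_odds / (1 + posttest_odds)
```

For example, with a 30% pretest probability and a test with 90% sensitivity and 80% specificity, a positive result raises the probability to about 66%, while a negative result lowers it to about 5%.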
Differential Diagnosis Assessment in Ambulatory Care With a Digital Health History Device: Pseudorandomized Study
Digital health history devices represent a promising wave of digital tools with the potential to enhance the quality and efficiency of medical consultations. They achieve this by providing physicians with standardized, high-quality patient history summaries and facilitating the development of differential diagnoses (DDs) before consultation, while also engaging patients in the diagnostic process. This study evaluates the efficacy of one such digital health history device, diagnosis and anamnesis (DIANNA), in assisting with the formulation of appropriate DDs in an outpatient setting. A pseudorandomized controlled trial was conducted with 101 patients seeking care at the University Hospital Geneva emergency outpatient department. Participants presented with various conditions affecting the limbs, back, and chest. The first 51 patients were assigned to the control group, while the subsequent 50 formed the intervention group. In the control group, physicians developed DD lists based on traditional history-taking and clinical examination. In the intervention group, physicians reviewed DIANNA-generated DD reports before interacting with the patient. In both groups, a senior physician independently formulated a DD list, serving as the gold standard for comparison. The study findings indicate that DIANNA use was associated with a notable improvement in DD accuracy (mean 79.3%, SD 24%) compared with the control group (mean 70.5%, SD 33%; P=.01). Subgroup analysis revealed variations in effectiveness based on case complexity: low-complexity cases (1-2 possible DDs) showed 8% improvement in the intervention group (P=.08), intermediate-complexity cases (3 possible DDs) showed 17% improvement (P=.03), and high-complexity cases (4-5 possible DDs) showed 15% improvement (P=.92). The intervention was not superior to the control in low-complexity cases (P=.08) or high-complexity cases (P=.92). 
Overall, DIANNA successfully determined appropriate DDs in 81.6% of cases, and physicians reported that it helped establish the correct DD in 26% of cases. The study suggests that DIANNA has the potential to support physicians in formulating more precise DDs, particularly in intermediate-complexity cases. However, its effectiveness varied by case complexity and further validation is needed to assess its full clinical impact. These findings highlight the potential role of digital health history devices such as DIANNA in improving clinical decision-making and diagnostic accuracy in medical practice. ClinicalTrials.gov NCT03901495; https://clinicaltrials.gov/study/NCT03901495.
A cluster randomized controlled trial of an online psychoeducational intervention for people with a family history of depression
Background People with a family history of major depressive disorder (MDD) or bipolar disorder (BD) report specific psychoeducational needs that are unmet by existing online interventions. This trial aimed to test whether an interactive website for people at familial risk for depression (intervention) would improve intention to adopt, or actual adoption of, depression prevention strategies (primary outcome) and a range of secondary outcome measures. Methods In this cluster randomised trial, primary care practices were randomised to provide the link to either the intervention or the control website. Primary health care attendees were invited by letter to opt into this study if they had at least one first-degree relative with MDD or BD and were asked to complete online questionnaires at baseline and 2-week follow-up. Results Twenty general practices were randomised, and 202 eligible patients completed both questionnaires. Thirty-nine (19.3%) participants were male and 163 (80.7%) were female. At follow-up, compared to controls, the intervention group: (i) were more likely to intend to undergo, or to have actually undergone, psychological therapies (OR = 5.83, 95% CI: 1.58–21.47, p = .008); (ii) had better knowledge of depression risk factors and prevention strategies (mean difference = 0.47, 95% CI: 0.05–0.88, p = .029); and (iii) were more likely to accurately estimate their lifetime risk of developing BD (mean difference = -11.2, 95% CI: -16.52 to -5.73, p < .001). There were no statistically significant between-group differences in change from baseline to follow-up for any of the remaining outcome measures (Patient Health Questionnaire, Perceived Devaluation-Discrimination Questionnaire and Perceived Risk of Developing MDD). Conclusion The opt-in nature of the study may have led to participation bias, e.g. underrepresentation of males, and hence may limit generalisability to the broader population at familial risk for depression.
This is the first website internationally focusing specifically on the informational needs of those at familial risk of depression. Our interactive website can play an important role in improving the outcomes of individuals at familial risk for depression. Testing the intervention in other settings (e.g. psychology, psychiatry, genetic counselling) appears warranted. Trial registration The study was prospectively registered with the Australian and New Zealand Clinical Trials Group (Registration no: ACTRN12613000402741).
A multicentre, double-blind, randomised, controlled, parallel-group study of the effectiveness of a pharmacist-acquired medication history in an emergency department
Background Admission to an emergency department (ED) is a key vulnerable moment when patients are at increased risk of medication discrepancies, and medication histories are an effective way of ensuring that fewer errors are made. This study measured whether a pharmacist-acquired medication history in an ED, focusing on a patient’s current home medication regimen and available to be used by a doctor when consulting in the ED, would reduce the number of patients having at least 1 medication discrepancy related to home medication. Methods This multicentre, double-blind, randomised, controlled, parallel-group study was conducted at 3 large teaching hospitals. Two hundred and seventy participants were randomly allocated to an intervention (n = 134) or a standard care (n = 136) arm. All consecutive patients >18 years old admitted through the ED were eligible. The intervention consisted of pharmacists conducting a standardised comprehensive medication history interview, focusing on a patient’s current home medication regimen, prior to the patient being seen by a doctor. Data recorded on the admission medication order form were available to be used by a doctor during consultation in the ED. In the control arm, the admission medication order form was given to doctors at a later stage for them to amend prescriptions. The effect of the intervention was assessed primarily by comparing the number of patients having at least 1 admission medication discrepancy regarding medication being taken at home. Secondary outcomes concerned the characteristics and clinical severity of such medication discrepancies. Results The intervention reduced the occurrence of discrepancies by 33% (p < 0.0001; odds ratio 0.1055, 95% confidence interval 0.05-0.24), despite recall bias. Among total discrepancies, omission of a medication occurred most frequently (55.1%), and most discrepancies (42.7%) were judged to have the potential to cause moderate discomfort or clinical deterioration.
Conclusions A pharmacist-acquired medication history in an ED, focusing on a patient’s current home medication regimen and available to be used by a doctor at the time of consulting in the ED, reduced the number of patients having at least 1 home medication-related discrepancy. Trial registration Current Controlled Trials ISRCTN63455839.