8,150 results for "Cook, David A"
Validity evidence for the Fundamentals of Laparoscopic Surgery (FLS) program as an assessment tool: a systematic review
Background The Fundamentals of Laparoscopic Surgery (FLS) program uses five simulation stations (peg transfer, precision cutting, loop ligation, and suturing with extracorporeal and intracorporeal knot tying) to teach and assess laparoscopic surgery skills. We sought to summarize evidence regarding the validity of scores from the FLS assessment. Methods We systematically searched for studies evaluating the FLS as an assessment tool (last search update February 26, 2013). We classified validity evidence using the currently standard validity framework (content, response process, internal structure, relations with other variables, and consequences). Results From a pool of 11,628 studies, we identified 23 studies reporting validity evidence for FLS scores. Studies involved residents (n = 19), practicing physicians (n = 17), and medical students (n = 8), in specialties of general (n = 17), gynecologic (n = 4), urologic (n = 1), and veterinary (n = 1) surgery. Evidence was most common in the form of relations with other variables (n = 22, most often expert–novice differences). Only three studies reported internal structure evidence (inter-rater or inter-station reliability), two studies reported content evidence (i.e., derivation of assessment elements), and three studies reported consequences evidence (definition of pass/fail thresholds). Evidence nearly always supported the validity of FLS total scores. However, the loop ligation task lacks discriminatory ability. Conclusion Validity evidence confirms expected relations with other variables and acceptable inter-rater reliability, but other validity evidence is sparse. Given the high-stakes use of this assessment (required for board eligibility), we suggest that more validity evidence is required, especially to support its content (selection of tasks and scoring rubric) and the consequences (favorable and unfavorable impact) of assessment.
Virtual Patients Using Large Language Models: Scalable, Contextualized Simulation of Clinician-Patient Dialogue With Feedback
Virtual patients (VPs) are computer screen-based simulations of patient-clinician encounters. VP use is limited by cost and low scalability. We aimed to show that VPs powered by large language models (LLMs) can generate authentic dialogues, accurately represent patient preferences, and provide personalized feedback on clinical performance. We also explored using LLMs to rate the quality of dialogues and feedback. We conducted an intrinsic evaluation study rating 60 VP-clinician conversations. We used carefully engineered prompts to direct OpenAI's generative pretrained transformer (GPT) to emulate a patient and provide feedback. Using 2 outpatient medicine topics (chronic cough diagnosis and diabetes management), each with permutations representing different patient preferences, we created 60 conversations (dialogues plus feedback): 48 with a human clinician and 12 "self-chat" dialogues with GPT role-playing both the VP and clinician. Primary outcomes were dialogue authenticity and feedback quality, rated using novel instruments for which we conducted a validation study collecting evidence of content, internal structure (reproducibility), relations with other variables, and response process. Each conversation was rated by 3 physicians and by GPT. Secondary outcomes included user experience, bias, patient preferences represented in the dialogues, and conversation features that influenced authenticity. The average cost per conversation was US $0.51 for GPT-4.0-Turbo and US $0.02 for GPT-3.5-Turbo. Mean (SD) conversation ratings, maximum 6, were overall dialogue authenticity 4.7 (0.7), overall user experience 4.9 (0.7), and average feedback quality 4.7 (0.6). For dialogues created using GPT-4.0-Turbo, physician ratings of patient preferences aligned with intended preferences in 20 to 47 of 48 dialogues (42%-98%). Subgroup comparisons revealed higher ratings for dialogues using GPT-4.0-Turbo versus GPT-3.5-Turbo and for human-generated versus self-chat dialogues. Feedback ratings were similar for human-generated versus GPT-generated ratings, whereas authenticity ratings were lower. We did not perceive bias in any conversation. Dialogue features that detracted from authenticity included that GPT was verbose or used atypical vocabulary (93/180, 51.7% of conversations), was overly agreeable (n=56, 31%), repeated the question as part of the response (n=47, 26%), was easily convinced by clinician suggestions (n=35, 19%), or was not disaffected by poor clinician performance (n=32, 18%). For feedback, detractors included excessively positive feedback (n=42, 23%), failure to mention important weaknesses or strengths (n=41, 23%), or factual inaccuracies (n=39, 22%). Regarding validation of dialogue and feedback scores, items were meticulously developed (content evidence), and we confirmed expected relations with other variables (higher ratings for advanced LLMs and human-generated dialogues). Reproducibility was suboptimal, due largely to variation in LLM performance rather than rater idiosyncrasies. LLM-powered VPs can simulate patient-clinician dialogues, demonstrably represent patient preferences, and provide personalized performance feedback. This approach is scalable, globally accessible, and inexpensive. LLM-generated ratings of feedback quality are similar to human ratings.
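
The core technique this abstract describes, directing a general-purpose LLM to role-play a patient via an engineered system prompt, is simple to sketch. The following minimal Python example uses OpenAI's chat completions client; the prompt wording, case facts, and patient preference are hypothetical illustrations, not the authors' actual prompts or cases.

    # Illustrative sketch only: a minimal LLM virtual patient in the spirit
    # of the study's design. Prompt text, case details, and the patient
    # preference below are hypothetical, not the authors' materials.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    PATIENT_PROMPT = (
        "Role-play a patient in an outpatient visit. Case: 8 weeks of dry "
        "cough, worse at night, no fever, no weight loss. Preference: you "
        "are anxious about cancer and want reassurance. Stay in character, "
        "answer only what is asked, and keep replies brief."
    )

    history = [{"role": "system", "content": PATIENT_PROMPT}]

    def patient_reply(clinician_says: str) -> str:
        """Append the clinician's turn to the transcript and return the VP's answer."""
        history.append({"role": "user", "content": clinician_says})
        response = client.chat.completions.create(
            model="gpt-4-turbo",  # the study compared GPT-4.0-Turbo and GPT-3.5-Turbo
            messages=history,
        )
        answer = response.choices[0].message.content
        history.append({"role": "assistant", "content": answer})
        return answer

    print(patient_reply("Hi, I'm Dr. Lee. What brings you in today?"))

Passing the full message history on every call is what keeps the model in character across turns; the same transcript can afterward be handed to a second, separately prompted call to generate performance feedback, analogous to the study's feedback step.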
Web-based learning: pros, cons and controversies
Advantages of web-based learning (WBL) in medical education include overcoming barriers of distance and time, economies of scale, and novel instructional methods, while disadvantages include social isolation, up-front costs, and technical problems. Web-based learning is purported to facilitate individualised instruction, but this is currently more vision than reality. More importantly, many WBL instructional designs fail to incorporate principles of effective learning, and WBL is often used for the wrong reasons (eg for the sake of technology). Rather than trying to decide whether WBL is superior to or equivalent to other instructional media (research addressing this question will always be confounded), we should accept it as a potentially powerful instructional tool, and focus on learning when and how to use it. Educators should recognise that high fidelity, multimedia, simulations, and even WBL itself will not always be necessary to effectively facilitate learning.
What Is African Literature? An Analytical Study
The book aims to present an analytical study of African literature by surveying its history, forms, and characteristics, along with the influence of cultural and social factors on its development. It begins by defining African literature as a concept, focusing on the written and oral literature produced by African writers across different regions. It discusses the linguistic and cultural diversity of the African continent and how this shaped the development of its literature, particularly in light of colonialism and its interactions with local cultures. The book offers an overview of the development of African literature from ancient times, through the period of European colonialism, to the modern era, and discusses how African literature during the colonial period served as a means of resisting colonialism and expressing cultural identity.
Constructing a validity argument for the Objective Structured Assessment of Technical Skills (OSATS): a systematic review of validity evidence
In order to construct and evaluate the validity argument for the Objective Structured Assessment of Technical Skills (OSATS), based on Kane’s framework, we conducted a systematic review. We searched MEDLINE, EMBASE, CINAHL, PsycINFO, ERIC, Web of Science, Scopus, and selected reference lists through February 2013. Working in duplicate, we selected original research articles in any language evaluating the OSATS as an assessment tool for any health professional. We iteratively and collaboratively extracted validity evidence from included articles to construct and evaluate the validity argument for varied uses of the OSATS. Twenty-nine articles met the inclusion criteria, all focussed on surgical technical skills assessment. We identified three intended uses for the OSATS, namely formative feedback, high-stakes assessment and program evaluation. Following Kane’s framework, four inferences in the validity argument were examined (scoring, generalization, extrapolation, decision). For formative feedback and high-stakes assessment, there was reasonable evidence for scoring and extrapolation. However, for high-stakes assessment there was a dearth of evidence for generalization aside from inter-rater reliability data and an absence of evidence linking multi-station OSATS scores to performance in real clinical settings. For program evaluation, the OSATS validity argument was supported by reasonable generalization and extrapolation evidence. There was a complete lack of evidence regarding implications and decisions based on OSATS scores. In general, validity evidence supported the use of the OSATS for formative feedback. Research to provide support for decisions based on OSATS scores is required if the OSATS is to be used for higher-stakes decisions and program evaluation.
Practice variation and practice guidelines: Attitudes of generalist and specialist physicians, nurse practitioners, and physician assistants
To understand clinicians' beliefs about practice variation and how variation might be reduced. We surveyed board-certified physicians (N = 178), nurse practitioners (N = 60), and physician assistants (N = 12) at an academic medical center and two community clinics, representing family medicine, general internal medicine, and cardiology, from February to April 2016. The Internet-based questionnaire ascertained clinicians' beliefs regarding practice variation, clinical practice guidelines, and costs. Respondents agreed that practice variation should be reduced (mean [SD] 4.5 [1.1]; 1 = strongly disagree, 6 = strongly agree), but agreed less strongly (4.1 [1.0]) that it can realistically be reduced. They moderately agreed that variation is justified by situational differences (3.9 [1.2]). They strongly agreed (5.2 [0.8]) that clinicians should help reduce healthcare costs, but agreed less strongly (4.4 [1.1]) that reducing practice variation would reduce costs. Nearly all respondents (234/249 [94%]) currently depend on practice guidelines. Clinicians rated differences in clinician style and experience as most influencing practice variation, and inaccessibility of guidelines as least influential. More time to apply standards and patient decision aids were rated as most likely to help standardize practice. Nurse practitioners and physician assistants (vs physicians) and less experienced (vs senior) clinicians rated more favorably several factors that might help to standardize practice. Differences by specialty and academic vs community practice were small. Clinicians believe that practice variation should be reduced, but are less certain that this can be achieved. Accessibility of guidelines is not a significant barrier to practice standardization, whereas more time to apply standards is viewed as potentially helpful.
Digital Education for Health Professionals: An Evidence Map, Conceptual Framework, and Research Agenda
Health professions education has undergone major changes with the advent and adoption of digital technologies worldwide. This study aims to map the existing evidence and identify gaps and research priorities to enable robust and relevant research in digital health professions education. We searched for systematic reviews on the digital education of practicing and student health care professionals. We searched MEDLINE, Embase, Cochrane Library, Educational Research Information Center, CINAHL, and gray literature sources from January 2014 to July 2020. Two authors independently screened the studies, extracted the data, and synthesized the findings. We outlined the key characteristics of the included reviews, the quality of the evidence they synthesized, and recommendations for future research. We mapped the empirical findings and research recommendations against the newly developed conceptual framework. We identified 77 eligible systematic reviews. All of them included experimental studies and evaluated the effectiveness of digital education interventions in different health care disciplines or different digital education modalities. Most reviews included studies on various digital education modalities (22/77, 29%), virtual reality (19/77, 25%), and online education (10/77, 13%). Most reviews focused on health professions education in general (36/77, 47%), surgery (13/77, 17%), and nursing (11/77, 14%). The reviews mainly assessed participants' skills (51/77, 66%) and knowledge (49/77, 64%) and included data from high-income countries (53/77, 69%). Our novel conceptual framework of digital health professions education comprises 6 key domains (context, infrastructure, education, learners, research, and quality improvement) and 16 subdomains. Finally, we identified 61 unique questions for future research in these reviews; these mapped to framework domains of education (29/61, 47% recommendations), context (17/61, 28% recommendations), infrastructure (9/61, 15% recommendations), learners (3/61, 5% recommendations), and research (3/61, 5% recommendations). We identified a large number of research questions regarding digital education, which collectively reflect a diverse and comprehensive research agenda. Our conceptual framework will help educators and researchers plan, develop, and study digital education. More evidence from low- and middle-income countries is needed.