Journal Article

Evaluating Expert-Layperson Agreement in Identifying Jargon Terms in Electronic Health Record Notes: Observational Study

2024
Overview
Studies have shown that patients have difficulty understanding medical jargon in electronic health record (EHR) notes, particularly patients with low health literacy. In creating the NoteAid dictionary of medical jargon for patients, a panel of medical experts selected terms they perceived as needing definitions for patients. This study aims to determine whether experts and laypeople agree on what constitutes medical jargon. Using an observational study design, we compared the ability of medical experts and laypeople to identify medical jargon in EHR notes. The laypeople were recruited from Amazon Mechanical Turk. Participants were shown 20 sentences from EHR notes, which contained 325 potential jargon terms as identified by the medical experts. We collected demographic information about the laypeople's age, sex, race or ethnicity, education, native language, and health literacy. Health literacy was measured with the Single Item Literacy Screener. Our evaluation metrics were the proportion of terms rated as jargon, sensitivity, specificity, Fleiss κ for agreement among medical experts and among laypeople, and the Kendall rank correlation statistic between the medical experts and laypeople. We performed subgroup analyses by layperson characteristics. We fit a beta regression model with a logit link to examine the association between layperson characteristics and whether a term was classified as jargon. The average proportion of terms identified as jargon by the medical experts was 59% (1150/1950, 95% CI 56.1%-61.8%), and the average proportion of terms identified as jargon by the laypeople overall was 25.6% (22,480/87,750, 95% CI 25%-26.2%). There was good agreement among medical experts (Fleiss κ=0.781, 95% CI 0.753-0.809) and fair agreement among laypeople (Fleiss κ=0.590, 95% CI 0.589-0.591). 
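The agreement figures above use Fleiss' κ, which compares observed pairwise agreement among raters against the agreement expected by chance. A minimal sketch of the computation, using hypothetical toy ratings (jargon vs. not jargon), might look like this:

```python
# Minimal sketch of Fleiss' kappa, the multi-rater agreement statistic
# reported in the study. The example ratings below are hypothetical,
# not the study's data.

def fleiss_kappa(counts):
    """counts[i][j] = number of raters assigning item i to category j;
    every item must be rated by the same number of raters."""
    n_items = len(counts)
    n_raters = sum(counts[0])
    # Per-item agreement: proportion of rater pairs that agree.
    p_i = [
        (sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
        for row in counts
    ]
    p_bar = sum(p_i) / n_items
    # Chance agreement from the marginal category proportions.
    totals = [sum(row[j] for row in counts) for j in range(len(counts[0]))]
    p_j = [t / (n_items * n_raters) for t in totals]
    p_e = sum(p * p for p in p_j)
    return (p_bar - p_e) / (1 - p_e)

# Example: 4 terms, 3 raters, two categories (jargon / not jargon).
ratings = [[3, 0], [0, 3], [2, 1], [3, 0]]
print(round(fleiss_kappa(ratings), 3))  # → 0.625
```

With hundreds of terms and many raters, as in the study, the same formula yields the reported κ values (0.781 for experts, 0.590 for laypeople).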
The beta regression model had a pseudo-R² of 0.071, indicating that demographic characteristics explained very little of the variability in the proportion of terms identified as jargon by laypeople. Using laypeople's identification of jargon as the gold standard, the medical experts had high sensitivity (91.7%, 95% CI 90.1%-93.3%) and specificity (88.2%, 95% CI 86%-90.5%) in identifying jargon terms. To ensure coverage of possible jargon terms, the medical experts were deliberately inclusive when selecting terms. The fair agreement among laypeople supports this approach, as laypeople hold a wide range of opinions about what counts as jargon. We showed that medical experts could accurately identify jargon terms for annotation that would be useful for laypeople.
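The sensitivity and specificity figures treat the layperson-majority label as the gold standard and score the experts' binary jargon labels against it. A short sketch, with hypothetical toy labels (1 = jargon, 0 = not jargon):

```python
# Hypothetical sketch: sensitivity and specificity of expert jargon
# labels against a layperson-majority gold standard. Toy labels only,
# not the study's data.

def sensitivity_specificity(expert, gold):
    tp = sum(e and g for e, g in zip(expert, gold))          # both say jargon
    tn = sum((not e) and (not g) for e, g in zip(expert, gold))
    fp = sum(e and (not g) for e, g in zip(expert, gold))    # expert-only jargon
    fn = sum((not e) and g for e, g in zip(expert, gold))    # missed jargon
    return tp / (tp + fn), tn / (tn + fp)

expert = [1, 1, 0, 1, 0]
gold = [1, 1, 0, 0, 0]
sens, spec = sensitivity_specificity(expert, gold)
print(sens, round(spec, 3))  # → 1.0 0.667
```

An inclusive expert labeling strategy, as described above, trades some specificity (more expert-only flags) for high sensitivity (few missed jargon terms).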