Search Results

6 results for "Inter-Rater Concordance"
Development and analysis of quality assessment tools for different types of patient information – websites, decision aids, question prompt lists, and videos
Objective: Our working group has developed a set of quality assessment tools for different types of patient information material. In this paper we review and evaluate these tools and their development process over the past eight years. Methods: We compared the content and structure of quality assessment tools for websites, patient decision aids (PDAs), question prompt lists (QPLs), and videos. Using data from their various applications, we calculated inter-rater concordance using Kendall's W. Results: The assessment tools differ in content, structure, and length, but many core aspects have remained constant throughout their development. We found relatively large variance in the number of quality aspects combined into a single item, which may influence the weighting of those aspects in the final scores of evaluated material. Inter-rater concordance was good in almost all applications of the tools. Subgroups of similar expertise showed higher concordance than the overall agreement across all raters. Conclusion: All four assessment tools are ready to be used by assessors of differing expertise, although varying expertise may lead to some differences in the resulting assessments. The lay and patient perspective needs to be explored further and taken into close consideration when refining the instruments.
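Kendall's W, the coefficient of concordance used in the abstract above, can be computed directly from rank data. The following is a minimal pure-Python sketch with invented scores (not the paper's data); ties receive average ranks, but the tie-correction term in the denominator is omitted for brevity:

```python
# Kendall's W (coefficient of concordance) for m raters scoring the same n items.
# Illustrative sketch only; the example ratings are invented.

def rank(scores):
    """Convert raw scores to ranks (1 = lowest), averaging tied values."""
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    ranks = [0.0] * len(scores)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and scores[order[j + 1]] == scores[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average rank for the tied block
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def kendalls_w(ratings):
    """ratings: list of m rater score lists, each covering the same n items."""
    m, n = len(ratings), len(ratings[0])
    rank_rows = [rank(row) for row in ratings]
    totals = [sum(row[i] for row in rank_rows) for i in range(n)]
    mean_total = sum(totals) / n
    s = sum((t - mean_total) ** 2 for t in totals)  # squared deviations of rank sums
    return 12 * s / (m ** 2 * (n ** 3 - n))         # no tie correction applied

# Three raters in perfect agreement over four items give W = 1.0;
# two raters in exactly opposite order give W = 0.0.
print(kendalls_w([[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]]))  # 1.0
print(kendalls_w([[1, 2, 3, 4], [4, 3, 2, 1]]))                # 0.0
```

W ranges from 0 (no concordance) to 1 (complete concordance), which is why subgroup agreement can be compared against overall agreement on a common scale.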
The Use of Analytic Rubric in the Assessment of Writing Performance -Inter-Rater Concordance Study
In this study, the purpose was to determine whether there was concordance among raters in the assessment of writing performance using an analytic rubric; furthermore, factors affecting the assessment process were examined. The analytic rubric used in the study consists of three sections and ten properties: external structure (format, spelling, and punctuation), language and expression (vocabulary, sentences, paragraphs, and expression), and organization (title, introduction, story, and conclusion). The study is based on narrative texts written by 200 students attending the sixth and seventh grades of schools located on the Anatolian side of Istanbul (i.e., Beykoz, Kadikoy, Umraniye, and Uskudar). The texts were assessed by six raters in accordance with the analytic rubric. According to the results of the assessment, the concordance among the raters was determined to be sufficient. (Contains 2 tables.)
Yazma Performansını Değerlendirmede Çözümleyici Puanlama Yönergesi Kullanımı [The Use of the Analytic Rubric in Assessing Writing Performance – An Inter-Rater Concordance Study]
In this study, the aim was to determine whether there was concordance among raters in the assessment of writing performance using an analytic rubric; in addition, factors affecting the assessment process were examined. The analytic rubric used in the study consists of three sections and ten properties: external structure (format, spelling, and punctuation), language and expression (vocabulary, sentences, paragraphs, and expression), and organization (title, introduction, story, and conclusion). The study is based on narrative texts written by two hundred students attending the sixth and seventh grades of schools on the Anatolian side of Istanbul (Beykoz, Kadıköy, Ümraniye, and Üsküdar). The texts were assessed by six raters according to the analytic rubric. The results of the assessment showed that the concordance among the raters was sufficient.
High inter-rater reliability of Japanese bedriddenness ranks and cognitive function scores: a hospital-based prospective observational study
Background: The statistical validity of the official Japanese classifications of activities of daily living (ADLs), including bedriddenness ranks (BR) and cognitive function scores (CFS), has yet to be assessed. To this aim, we evaluated the ability of BR and CFS to assess ADLs using inter-rater reliability and criterion-related validity. Methods: New inpatients aged ≥75 years were enrolled in this hospital-based prospective observational study. BR and CFS were assessed once by an attending nurse, and then by a social worker/medical clerk. We evaluated inter-rater reliability between the different professions by calculating the concordance rate, kappa coefficient, Cronbach's α, and intraclass correlation coefficient. We also estimated the relationship of the Barthel Index and Katz Index with the BR and CFS using Spearman's correlation coefficients. Results: For the 271 patients enrolled, BR at the first assessment was normal in 66 patients, J1 in 10, J2 in 15, A1 in 18, A2 in 31, B1 in 37, B2 in 35, C1 in 22, and C2 in 32. The concordance rate between the two BR assessments was 68.6%, with a kappa coefficient of 0.61, Cronbach's α of 0.91, and an intraclass correlation coefficient of 0.83, showing good inter-rater reliability. BR was negatively correlated with the Barthel Index (r = −0.848, p < 0.001) and Katz Index (r = −0.820, p < 0.001), showing justifiable criterion-related validity. Meanwhile, CFS at the first assessment was normal in 92 patients, rank 1 in 47, 2a in 19, 2b in 30, 3a in 60, 3b in 8, 4 in 8, and M in 0. The concordance rate between the two CFS assessments was 70.1%, with a kappa coefficient of 0.62, Cronbach's α of 0.87, and an intraclass correlation coefficient of 0.78, likewise showing good inter-rater reliability. CFS was negatively correlated with the Barthel Index (r = −0.667, p < 0.001) and Katz Index (r = −0.661, p < 0.001), showing justifiable criterion-related validity.
Conclusions: BR and CFS could be reliable and easy-to-use grading scales of ADLs in acute clinical practice or large-scale screening, with high inter-rater reliability between different professions and significant correlations with well-established, though more complicated, instruments for assessing ADLs. Trial registration: UMIN000041051 (2020/7/10).
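The kappa coefficient reported above for two raters of different professions is Cohen's kappa, which corrects raw percent agreement for agreement expected by chance. A minimal pure-Python sketch with invented categorical ratings (not the study's data; the labels merely mimic bedriddenness ranks):

```python
# Cohen's kappa for two raters assigning categorical labels to the same items.
# Example ratings are invented for illustration only.
from collections import Counter

def cohens_kappa(a, b):
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n       # observed agreement
    ca, cb = Counter(a), Counter(b)
    p_e = sum(ca[k] * cb[k] for k in ca) / (n * n)    # chance agreement from marginals
    return (p_o - p_e) / (1 - p_e)

# Hypothetical assessments by a nurse and a medical clerk for six patients:
nurse = ["B1", "A2", "C1", "B1", "J1", "B1"]
clerk = ["B1", "A2", "C2", "B1", "J1", "B2"]
print(round(cohens_kappa(nurse, clerk), 3))  # 0.571
```

A kappa of 0.61, as in the BR results, is conventionally read as good (substantial) agreement; identical label sequences yield kappa = 1.0.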
Inter-rater agreement of HER2-low scores between expert breast pathologists and the Visiopharm digital image analysis application (HER2 APP, CE2797)
Inter-observer concordance data for the HER2 category as assessed by a group of 16 specialist breast pathologists on 50 diagnostic core biopsies were compared with those produced by digital image analysis (DIA) using the HER2 APP, CE2797 (VP APP; Visiopharm, Hoersholm, Denmark). Comparing pathologists' consensus scores and DIA scores, 36 cases (73.5%) agreed. Fleiss' kappa statistic was 0.433 (indicative of moderate agreement). Cohen's weighted kappa was used to compare the scores of individual raters to the consensus scores; for all 50 cases the kappa scores ranged from 0.412 to 0.854, and the VP APP was ranked 12th of 17 raters (kappa score 0.638, indicating substantial agreement). Results for HER2-low cases (N = 44) showed a kappa score range of 0.295 to 0.823; the VP APP ranked 12th of 17 (score 0.535, indicating moderate agreement). For high-agreement cases, the kappa score range was 0.664 to 1.000 for all HER2 scores (N = 24), and the VP APP scored 0.916 (indicating almost perfect agreement). For the HER2-low scores (N = 20), the kappa score range was 0.506 to 1.000, and the VP APP scored 0.860 (almost perfect agreement). DIA of the proportions of tumour cells showing expression within each of the HER2 categories demonstrated that the majority of cases with a low level of agreement between pathologists showed heterogeneity and/or a level of expression close to a decision-making cut-point. This study demonstrates that the VP APP produces results that are extremely well aligned with those of expert pathologists in cases with good overall agreement, and in difficult cases its reproducibility will outperform that of the visual scorer. The results also suggest that use of the VP APP has the potential to reduce the proportion of cases referred for gene amplification testing by reducing the number of cases incorrectly classified as HER2 2+.
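Fleiss' kappa, reported above for the pathologist panel, generalizes chance-corrected agreement from two raters to many. A minimal pure-Python sketch over an invented rating matrix (not the study's data; the three categories merely mimic HER2 score bins):

```python
# Fleiss' kappa: chance-corrected agreement for m raters over n items.
# The rating matrix below is invented for illustration only.

def fleiss_kappa(counts):
    """counts[i][j] = number of raters placing item i in category j."""
    n = len(counts)
    m = sum(counts[0])  # raters per item (assumed constant across items)
    k = len(counts[0])
    p_j = [sum(row[j] for row in counts) / (n * m) for j in range(k)]
    # Mean per-item agreement:
    p_bar = sum((sum(c * c for c in row) - m) / (m * (m - 1)) for row in counts) / n
    p_e = sum(p * p for p in p_j)  # expected agreement by chance
    return (p_bar - p_e) / (1 - p_e)

# Four hypothetical cases, five raters, three categories (e.g. HER2 0/1+, 2+, 3+):
counts = [
    [5, 0, 0],  # unanimous
    [0, 5, 0],  # unanimous
    [2, 3, 0],  # split
    [0, 1, 4],  # near-unanimous
]
print(round(fleiss_kappa(counts), 3))  # 0.606
```

Unlike Cohen's kappa, the input is a per-item count matrix rather than paired label lists, so the raters need not even be the same individuals for every case.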
Morphological concordance between CBCT and MDCT: a paranasal sinus-imaging anatomical study
Purpose: Cone-beam computed tomography (CBCT) is an imaging technique first developed for use during oral and pre-implant surgery. In sinonasal surgery, CBCT might represent a valuable tool for anatomical research given its high spatial resolution and low irradiation dose. However, clinical and anatomical evidence pertaining to its efficacy is lacking. This study assessed the morphological concordance between CBCT and multidetector computed tomography (MDCT) in the context of sinonasal anatomy. Methods: We performed an anatomical study using 15 fresh cadaver heads. Each head underwent both CBCT and MDCT. Two independent reviewers evaluated 26 notable anatomical landmarks. The primary outcome was the overall morphological concordance between the two imaging techniques. Secondary objectives included assessment of inter-rater agreement and comparison of the radiation doses received by different parts of the anatomy. Results: Overall morphological concordance between the two imaging techniques was excellent (>98%); the inter-rater agreement for CBCT was approximately 97%, very similar to that of MDCT, but achieved with a significantly lower irradiation dose. Conclusion: Our preliminary study indicates that CBCT represents a valid, reproducible, and safe technique for the identification of relevant sinonasal anatomical structures. Further research, particularly in pathological contexts, is required.