2,535 result(s) for "Rubric"
Development and Delivery of Species Distribution Models to Inform Decision-Making
Information on where species occur is an important component of conservation and management decisions, but knowledge of distributions is often coarse or incomplete. Species distribution models provide a tool for mapping habitat and can produce credible, defensible, and repeatable information with which to inform decisions. However, these models are sensitive to data inputs and methodological choices, making it important to assess the reliability and utility of model predictions. We provide a rubric that model developers can use to communicate a model’s attributes and its appropriate uses. We emphasize the importance of tailoring model development and delivery to the species of interest and the intended use, as well as the advantages of iterative modeling and validation. We highlight how species distribution models have been used to design surveys for new populations, inform spatial prioritization decisions for management actions, and support regulatory decision-making and compliance, tying these examples back to our model assessment rubric.
Rubric Design: A Designer’s Perspective
The rubric: a canonical matrix of criteria presented to students as the road map to academic success. An “ah-ha,” “that is what I’m looking for” utopia for the freshly minted instructor. While rubrics offer the possibility of solving some complex teaching problems, like many fuzzy oases we have come to know them as thinly veiled tools that are only as useful as they are well designed. If you have ever designed rubrics, you know well that they resolve some issues but present many more. They raise a conundrum of specific questions: What do I want my students to know? How do I want them to perform? How am I going to evaluate them? What am I attempting to transfer? Reveal? Show? What am I teaching? As many instructors would attest, these questions are not always easy to answer. Creating rubrics can be difficult and very time-consuming, and can feel like a paradox. But in the end, they can also provide real value to the student and the instructor. This reflective essay takes on the design and development of the rubric, challenges the notion that there are different types of rubrics or inherent evaluation methodologies, offers an alternative layout, and reviews rubrics through the designer’s lens, approaching them through concept development and design thinking.
Effects of Rubrics on Academic Performance, Self-Regulated Learning, and Self-Efficacy: A Meta-analytic Review
Rubrics are widely used as instructional and learning instruments. Though they have been claimed to have positive effects on students’ learning, these effects had not been meta-analyzed. Our aim was to synthesize the effects of rubrics on academic performance, self-regulated learning, and self-efficacy. The moderator effect of the following variables was also investigated: year of publication, gender, mean age, educational level, type of educational level (compulsory vs. higher education), number of sessions, number of assessment criteria, number of performance levels, use of self- and peer assessment, research design, and empirical quality of the study. Standardized mean differences (for the three outcomes) and standardized mean changes (SMC; for academic performance) were calculated from the retrieved studies. After correcting for publication bias, a moderate, positive effect was found in favor of rubrics on academic performance (g = 0.45, k = 21, m = 54, 95% CI [0.312, 0.831]; SMC = 0.38, 95% CI [0.02, 0.75], k = 12, m = 30), whereas small pooled effects were observed for self-regulated learning (g = 0.23, k = 5, m = 17, 95% CI [-0.15, 0.60]) and self-efficacy (g = 0.18, k = 3, m = 5, 95% CI [-0.81, 0.91]). Most of the moderator variables were not significant. Importantly, to improve the quality of future reports on the effects of rubrics, we provide an instrument to be filled out by rubric scholars in forthcoming studies.
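The effect sizes above are reported as g, i.e. a standardized mean difference with a small-sample correction (Hedges' g). A minimal sketch of that computation in plain Python; the function name and the example values are illustrative, not taken from the study:

```python
import math

def hedges_g(mean1, sd1, n1, mean2, sd2, n2):
    """Hedges' g: bias-corrected standardized mean difference of two groups."""
    # Pooled standard deviation across the two groups
    s_pooled = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (mean1 - mean2) / s_pooled  # Cohen's d
    # Small-sample correction factor J turns d into Hedges' g
    j = 1 - 3 / (4 * (n1 + n2 - 2) - 1)
    return j * d
```

For example, two groups of 30 with means 75 vs. 70 and a common SD of 10 give d = 0.50 and g ≈ 0.49; the correction matters most when group sizes are small.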
Experimental Evidence on Teachers’ Racial Bias in Student Evaluation: The Role of Grading Scales
A vast research literature documents racial bias in teachers’ evaluations of students. Theory suggests bias may be larger on grading scales with vague or overly general criteria versus scales with clearly specified criteria, raising the possibility that well-designed grading policies may mitigate bias. This study offers relevant evidence through a randomized Web-based experiment with 1,549 teachers. On a vague grade-level evaluation scale, teachers rated a student writing sample lower when it was randomly signaled to have a Black author, versus a White author. However, there was no evidence of racial bias when teachers used a rubric with more clearly defined evaluation criteria. Contrary to expectation, I found no evidence that the magnitude of grading bias depends on teachers’ implicit or explicit racial attitudes.
Rubric formats for the formative assessment of oral presentation skills acquisition in secondary education
Acquiring complex oral presentation skills is cognitively demanding for students and requires intensive teacher guidance. The aim of this study was twofold: (a) to identify and apply design guidelines in developing an effective formative assessment method for oral presentation skills during classroom practice, and (b) to develop and compare two analytic rubric formats as part of that assessment method. Participants were first-year secondary school students in the Netherlands (n = 158) who acquired oral presentation skills with the support of either a formative assessment method with analytic rubrics offered through a dedicated online tool (experimental groups) or a method using more conventional rating-scale rubrics (control group). One experimental group was provided text-based rubrics and the other video-enhanced rubrics. No prior research is known on analytic video-enhanced rubrics, but, based on research on complex skill development and multimedia learning, we expected this format to best capture the (non-verbal aspects of) oral presentation performance. Significant positive differences in oral presentation performance were found between the experimental groups and the control group. However, no significant differences were found between the two experimental groups. This study shows that a well-designed formative assessment method using analytic rubric formats outperforms formative assessment using more conventional rubric formats. It also shows that the higher cost of developing video-enhanced analytic rubrics is not justified by significantly greater performance gains. Future studies should address the generalizability of such formative assessment methods to other contexts and to complex skills other than oral presentation, and should lead to a more profound understanding of video-enhanced rubrics.
Comparison of Machine Learning Performance Using Analytic and Holistic Coding Approaches Across Constructed Response Assessments Aligned to a Science Learning Progression
We systematically compared two coding approaches for generating training datasets for machine learning (ML): (i) a holistic approach based on learning progression levels and (ii) a dichotomous, analytic approach targeting multiple concepts in student reasoning, deconstructed from the holistic rubrics. We evaluated four constructed response assessment items for undergraduate physiology, each targeting five levels of a developing flux learning progression in an ion context. Human-coded datasets were used to train two ML models: (i) an ensemble of eight classification algorithms implemented in the Constructed Response Classifier (CRC), and (ii) a single classification algorithm implemented in LightSide Researcher’s Workbench. Human coding agreement on approximately 700 student responses per item was high for both approaches, with Cohen’s kappas ranging from 0.75 to 0.87 for holistic scoring and from 0.78 to 0.89 for analytic composite scoring. ML model performance varied across items and rubric types. For two items, training sets from both coding approaches produced similarly accurate ML models, with differences in Cohen’s kappa between machine and human scores of 0.002 and 0.041. For the other two items, ML models trained on analytically coded responses and combined into a composite score performed better than models trained on holistic scores, with increases in Cohen’s kappa of 0.043 and 0.117. These items used a more complex scenario involving the movement of two ions. It may be that analytic coding is beneficial for unpacking this additional complexity.
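Cohen's kappa, the agreement statistic reported above, corrects raw percent agreement for the agreement two raters would reach by chance given their label frequencies. A minimal sketch in plain Python (function name and example labels are illustrative):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' labels over the same items."""
    n = len(rater_a)
    # Observed agreement: fraction of items both raters labeled identically
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement, from each rater's marginal label frequencies
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    p_e = sum(freq_a[label] * freq_b.get(label, 0) for label in freq_a) / n**2
    return (p_o - p_e) / (1 - p_e)
```

For ratings [1, 1, 2, 2] vs. [1, 2, 1, 2], observed agreement is 0.5, which equals chance agreement, so kappa is 0 even though the raters agree half the time; that chance correction is why kappa is preferred over raw agreement for rubric scoring.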
The Effect of Scoring Rubrics Use on Self-Efficacy and Self-Regulation
This meta-analysis explores the effect of using scoring rubrics on self-efficacy and self-regulation in K-16 formal learning settings, and its potential moderators. From the literature, we identified 14 relevant experimental or quasi-experimental primary studies conducted with a total of 2793 students, from which we retrieved 17 effect sizes for self-efficacy and 18 for self-regulation. Rubric use has a statistically significant, moderate to large positive effect on students’ self-efficacy (Hedges’ g = 0.39) and self-regulation (Hedges’ g = 1.00). Large within- and between-study variability of effect sizes is common: self-efficacy (Hedges’ g: −0.06 to 2.47) and self-regulation (Hedges’ g: −1.17 to 3.30). We found no significant moderation of the effect of rubric use by students’ level of education, the provision of feedback, or instruction in using the rubric. Although there is evidence of an effect of rubrics on self-efficacy and self-regulation, variability in theoretical approaches, measures, and implementation quality raises questions about best practices for rubric development and use.
8121 The impact of national PEWS on children and young people admitted for paediatric medical stabilisation of eating disorders
Why did you do this work? Physiological observation monitoring is a key aspect of care for children and young people (CYPs) admitted for medical stabilisation of eating disorders (EDs), used to assess severity of illness and plan care accordingly. Nottingham Children’s Hospital (NCH) utilises a local paediatric early warning system (PEWS) with a special-circumstance module designed for CYPs with EDs to account for physiological abnormalities associated with the condition. PEWS is set to be replaced by a national paediatric early warning system (NPEWS) standardised across hospitals.
What did you do? Our aim was to assess the impact of introducing NPEWS on our current patterns of trigger and escalation of paediatric care for CYPs with restrictive EDs currently monitored under local PEWS. This study consists of a retrospective clinical audit of CYPs admitted to NCH with restrictive EDs. Inclusion criteria: CYPs aged 12–18 years with restrictive ED diagnoses (including anorexia nervosa, atypical anorexia, and ARFID), admitted to NCH between 2019 and 2022, whose admission duration exceeded 1 day. The highest local PEWS scores during the first week of admission were obtained, and NPEWS scores were calculated from the same set of observation data.
What did you find? There is a correlation between the highest local PEWS and NPEWS scores (r² = 0.665), with limits of agreement on a Bland-Altman plot of -2.5 to +3 (95% CI). The change in scoring rubric between local PEWS and NPEWS resulted in more NPEWS scores in the 1–4 and 9–12 ranges and fewer NPEWS scores in the 5–8 range. When comparing the escalations triggered by local PEWS and NPEWS scores, there was a significant increase in overall escalations triggered (21/70 vs 70/70; Fisher’s exact test, two-tailed p < 0.0001) and a greater proportion of urgent and immediate reviews with NPEWS (χ²(1, N = 70) = 20.8, p < .00001).
What does it mean? NPEWS results in an increased number of triggers and an overall increase in the level of escalation of paediatric care for CYPs admitted with restrictive EDs when compared to local PEWS. NPEWS requires a special-circumstances consideration to account for the physiological differences in CYPs with EDs in order to prevent over-escalation of medical care.
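The escalation comparison in this abstract (21/70 vs 70/70) uses Fisher's exact test on a 2×2 table. A minimal two-tailed version can be sketched in plain Python from the hypergeometric distribution; the function name is illustrative, and a real analysis would use a vetted statistics library:

```python
from math import comb

def fisher_exact_two_tailed(a, b, c, d):
    """Two-tailed Fisher's exact p-value for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    row1, col1 = a + b, a + c

    def p_table(x):
        # Hypergeometric probability of observing x in the top-left cell,
        # with all row and column totals held fixed
        return comb(row1, x) * comb(n - row1, col1 - x) / comb(n, col1)

    p_obs = p_table(a)
    lo = max(0, col1 - (n - row1))  # smallest feasible top-left cell
    hi = min(row1, col1)            # largest feasible top-left cell
    # Sum the probabilities of every table at least as unlikely as the observed one
    return sum(p_table(x) for x in range(lo, hi + 1)
               if p_table(x) <= p_obs * (1 + 1e-9))
```

Applied to the table implied by the abstract, `fisher_exact_two_tailed(21, 49, 70, 0)` yields a p-value far below 0.0001, consistent with the reported significance.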
Does audience matter? Comparing teachers' and non-teachers' application and perception of quality rubrics for evaluating Open Educational Resources
While many rubrics have been developed to guide people in evaluating the quality of Open Educational Resources (OER), few studies have empirically investigated how different people apply and perceive such rubrics. This study examines how participants (22 teachers and 22 non-teachers) applied three quality rubrics (comprising 17 quality indicators in total) to evaluate 20 OER, and how they perceived the utility of these rubrics. Results showed that both teachers and non-teachers found some indicators more difficult to apply than others, and displayed different response styles on different indicators. In addition, teachers gave higher overall ratings to OER, but non-teachers’ ratings generally had higher agreement values. Regarding rubric perception, both groups perceived the rubrics as useful in helping them find high-quality OER, but differed in their preferences for quality rubrics and indicators.