Asset Details
Scoring with the computer: Alternative procedures for improving the reliability of holistic essay scoring
by Steier, Michael; Lewis, Will; Attali, Yigal
in Agreements / Ausdrucksfähigkeit (expressive ability) / Automation / Bewertungsskala (rating scale) / College Entrance Examinations / Computer / Computer Generated Language Analysis / Correlation / Educational evaluation / Educational research / Educational Testing / Entrance examinations / Essay Tests / Essay Writing / Essays / Graduate Record Examinations / High Stakes Tests / Humans / Interrater Reliability / Language / Language Tests / Leistungsbeurteilung (performance assessment) / Reliability / Schriftlicher Ausdruck (written expression) / Scores / Scoring / Scoring Rubrics / Test / Writing / Writing Evaluation / Writing Tests
2013
Journal Article
Overview
Automated essay scoring can produce reliable scores that are highly correlated with human scores, but is limited in its evaluation of content and other higher-order aspects of writing. The increased use of automated essay scoring in high-stakes testing underscores the need for human scoring that is focused on higher-order aspects of writing. This study experimentally evaluated several alternative procedures for eliciting distinct human scores and improving their reliability. Essays written in response to the argument and issue tasks of the Analytical Writing measure of the GRE General Test were scored by experienced raters under different conditions. Criteria for evaluation included inter-rater agreement, agreement with machine scores, and cross-task reliability. First, the use of a modified scoring rubric that focused on higher-order writing skills increased the reliability for one type of task but decreased it for another. Second, scoring in batches of similar-length essays did not have any effect on scores. Third, scoring with available automated essay scores increased the reliability of human scores, but also increased their similarity with automated scores. Finally, the use of a more refined 18-point scoring scale significantly increased reliability. (Publisher.)
Publisher
SAGE Publications; Sage Publications Ltd