A Comparison of the Generalizability of Scores Produced by Expert Raters and Automated Scoring Systems
- 1 July 1999
- journal article
- Published by Taylor & Francis in Applied Measurement in Education
- Vol. 12 (3), 281-299
- https://doi.org/10.1207/s15324818ame1203_4
Abstract
No abstract available.