Calibrating graded assessments: Rasch partial credit analysis of performance in writing
- 1 June 1987
- journal article
- research article
- Published by SAGE Publications in Language Testing
- Vol. 4 (1) , 72-92
- https://doi.org/10.1177/026553228700400107
Abstract
Language tests have until now followed one or other of two strategies, focusing either on the difficulty of the questions or on the quality of the performance, but never on both. This article describes the use of the partial credit form of the Rasch model in the analysis and calibration of a set of writing tasks. Qualitative assessments of components of writing competence are rescaled to take into account the empirical difficulty of each grade in each assessment, in order to provide more generalisable measures of writing ability. To exploit this kind of test analysis in assessing language production, it is necessary that the tasks be carefully controlled and that the assessment scales and criteria be adapted to suit the specific demands of each task. The analysis shows the pattern of grade difficulties across task components, and this pattern is discussed in some detail. For these tasks at least, it appears that the difficulty of minimally engaging with a task is highly task specific, whereas the difficulty of achieving competence in the task depends more on competence in the components of writing skill.
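For orientation, the partial credit form of the Rasch model mentioned in the abstract is conventionally stated as follows. This is a standard formulation, not taken from the article itself; the symbols \(\theta_n\) (person ability), \(\delta_{ij}\) (grade-step difficulties), and \(m_i\) (top grade on component \(i\)) are the customary ones.

```latex
% Probability that person n, with ability \theta_n, is awarded
% grade x (out of 0..m_i) on task component i, where \delta_{ij}
% is the difficulty of the j-th grade step of that component:
P_{nix} \;=\;
  \frac{\exp \sum_{j=0}^{x} (\theta_n - \delta_{ij})}
       {\sum_{k=0}^{m_i} \exp \sum_{j=0}^{k} (\theta_n - \delta_{ij})},
\qquad
\sum_{j=0}^{0} (\theta_n - \delta_{ij}) \equiv 0 .
```

Calibration in this framework means estimating the \(\delta_{ij}\) from the observed grades, so that a rater's qualitative grade on each component can be placed on a common ability scale.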