The Interplay of Evidence and Consequences in the Validation of Performance Assessments
- 1 March 1994
- journal article
- Published by American Educational Research Association (AERA) in Educational Researcher
- Vol. 23 (2), 13-23
- https://doi.org/10.3102/0013189x023002013
Abstract
Authentic and direct assessments of performances and products are examined in the light of contrasting functions and purposes having implications for validation, especially with respect to the need for specialized validity criteria tailored for performance assessment. These include contrasts between performances and products, between assessment of performance per se and performance assessment of competence or other constructs, between structured and unstructured problems and response modes, and between breadth and depth of domain coverage. These distinctions are elaborated in the context of an overarching contrast between task-driven and construct-driven performance assessment. Rhetoric touting performance assessments because they eschew decomposed skills and decontextualized tasks is viewed as misguided, in that component skills and abstract problems have a legitimate place in pedagogy. Hence, the essence of authentic assessment must be sought elsewhere, that is, in the quest for complete construct representation. With this background, the concepts of “authenticity” and “directness” of performance assessment are treated as tantamount to promissory validity claims that they offset, respectively, the two major threats to construct validity, namely, construct underrepresentation and construct-irrelevant variance. With respect to validation, the salient role of both positive and negative consequences is underscored as well as the need, as in all assessment, for evidence of construct validity.