Expert-System Scores for Complex Constructed-Response Quantitative Items: A Study of Convergent Validity

Abstract
This study investigated the convergent validity of expert-system scores for four mathematical constructed-response item formats. A five-factor model comprising four constructed-response format factors and a Graduate Record Examination (GRE) General Test quantitative factor was posed. Confirmatory factor analysis was used to test the fit of this model and to compare it with several alternatives. The five-factor model fit well, although a solution comprising two highly correlated dimensions, GRE-quantitative and constructed-response, represented the data almost as well. These results extend the meaning of the expert system's constructed-response scores by relating them to a well-established quantitative measure and by indicating that they signify the same underlying proficiency across item formats.
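For readers unfamiliar with the notation, the five-factor model can be sketched in standard confirmatory factor analysis form. This is a generic illustration only; the study's actual loading pattern, identification constraints, and error structure are assumptions here.

\[
  \mathbf{x} = \boldsymbol{\Lambda}\,\boldsymbol{\xi} + \boldsymbol{\delta},
  \qquad
  \boldsymbol{\xi} = (\xi_1,\dots,\xi_5)',
  \qquad
  \operatorname{Cov}(\boldsymbol{\xi}) = \boldsymbol{\Phi},
\]

where \(\mathbf{x}\) is the vector of observed scores, \(\xi_1,\dots,\xi_4\) denote the four constructed-response format factors, and \(\xi_5\) the GRE quantitative factor. The model implies the covariance structure

\[
  \boldsymbol{\Sigma} = \boldsymbol{\Lambda}\,\boldsymbol{\Phi}\,\boldsymbol{\Lambda}' + \boldsymbol{\Theta}_{\delta},
\]

and model comparison amounts to testing whether this five-factor structure reproduces the observed covariances better than more constrained alternatives, such as the two-dimensional solution noted above.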