A Comparison of the Reliability and Validity of Ratings of Student Performance on Essay Examinations by Professors of English and by Professors in Other Disciplines

Abstract
For two essay questions, Question 1 administered to one sample (N1 = 100) and Question 2 to a second sample (N2 = 100), ratings of student performance by professors of English and by professors in other disciplines were compared for reliability and concurrent validity. The analyses indicated that the reliability and validity of the ratings provided by professors outside English departments were closely comparable to those of the ratings provided by professors in English departments. Differences in the reliability and concurrent validity of ratings assigned to students' writing samples appeared to be more a function of the nature of the question posed, or of variations in the average ability level of the examinee groups, than of differences in the readers' level of expertise.