The reliability of 'expert judgement' in the marking of state nursing examinations

Abstract
A pilot study was undertaken to investigate the reliability of the 'expert judgement' exercised in marking state examination papers. Estimates of inter-rater reliability had a median coefficient of 0.76. Intra-rater reliability was marginally higher overall, with a median coefficient of 0.81. While marker reliability is only one source of variation in the assessment of extended answers, it is important that its level be known. This study provides useful information and guidelines for improvement.