Peer review and editorial decision-making

Abstract
Introduction: This paper describes and analyses the editor's decision-making process at the British Journal of Psychiatry (BJP), and investigates the association between reviewers' assessments and editorial decisions.

Method: Four hundred consecutive manuscripts submitted to the BJP over a six-month period were examined prospectively for assessors' comments and editorial decisions on acceptance or rejection. Interrater reliability of the assessments was calculated, and a logistic regression analysis investigated the effect of the rank allocated by assessors and the comprehensiveness of the assessments on the editor's decision.

Results: The editor sent 248/400 (62%) manuscripts to assessors for peer review. Kappa for the reliability of assessors' rankings was 0.1, indicating poor interrater reliability. Assessors agreed best on whether to reject a paper. A ranking of five (indicating rejection) had the strongest association with the editor's decision to reject (P < 0.001, odds ratio 0.079), and the mean ranking of assessments was also significantly associated with editorial acceptance or rejection (P = 0.004, odds ratio 0.24).

Conclusion: Assessors and editors tend to agree on what is clearly not acceptable for publication, but there is less agreement on what is suitable.
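To illustrate the kind of analysis the abstract describes, the sketch below shows how interrater reliability (Cohen's kappa) and a logistic regression of the editorial decision on assessors' rankings might be computed. It is a minimal illustration using simulated data; the variable names, the simulated rankings, and the unweighted kappa are assumptions, not the authors' actual dataset or code.

```python
# Illustrative sketch only: hypothetical data, not the authors' dataset or analysis code.
# Demonstrates the two analyses named in the abstract: Cohen's kappa for interrater
# reliability of assessors' rankings, and a logistic regression of the editorial
# decision on the mean ranking.
import numpy as np
import statsmodels.api as sm
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)
n = 248  # manuscripts sent out for peer review (from the abstract)

# Hypothetical rankings (1 = accept ... 5 = reject) from two assessors per manuscript.
rank_a = rng.integers(1, 6, size=n)
rank_b = rng.integers(1, 6, size=n)

# Interrater reliability of the two assessors' rankings (unweighted kappa assumed here).
kappa = cohen_kappa_score(rank_a, rank_b)
print(f"Cohen's kappa: {kappa:.2f}")

# Logistic regression: does the mean ranking predict acceptance (1) vs rejection (0)?
mean_rank = (rank_a + rank_b) / 2
accepted = rng.integers(0, 2, size=n)  # placeholder outcome for illustration
X = sm.add_constant(mean_rank)
model = sm.Logit(accepted, X).fit(disp=0)
print("Odds ratio for mean ranking:", np.exp(model.params[1]))
```

Exponentiating the logistic regression coefficient yields the odds ratio reported in the Results; an odds ratio below 1 for the ranking term would indicate, as in the study, that higher (worse) rankings reduce the odds of acceptance.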