Improving Participation and Interrater Agreement in Scoring Ambulatory Pediatric Association Abstracts: How Well Have We Succeeded?
- 1 April 1996
- research article
- Published by American Medical Association (AMA) in Archives of Pediatrics & Adolescent Medicine
- Vol. 150 (4), 380-383
- https://doi.org/10.1001/archpedi.1996.02170290046007
Abstract
Objective: To determine whether increasing the number and types of raters affected interrater agreement in scoring abstracts submitted to the Ambulatory Pediatric Association.

Methods: In 1990, all abstracts were rated by each of the 11 members of the board of directors of the Ambulatory Pediatric Association. In 1995, each abstract was reviewed by four to five raters drawn from a pool of 20 potential reviewers: eight members of the board of directors, two chairpersons of special interest groups, and 10 regional chairpersons. Submissions were divided into three categories for review: emergency medicine, behavioral pediatrics, and general pediatrics. Weighted percentage agreement and weighted κ scores were computed for the 1990 and 1995 abstract scores.

Results: Between 1990 and 1995, the number of abstracts submitted to the Ambulatory Pediatric Association increased from 246 to 407 and the number of reviewers increased from 11 to 20, yet the weighted percentage agreement between raters remained approximately 79% and weighted κ scores remained below 0.25. Agreement was not significantly better for the emergency medicine and behavioral pediatrics abstracts than for the general pediatrics abstracts, nor was it better for raters who reviewed fewer abstracts than for those who reviewed many.

Conclusions: The number and expertise of those rating abstracts increased from 1990 to 1995. However, interrater agreement did not change and remained low. Further efforts are needed to improve interrater agreement. (Arch Pediatr Adolesc Med. 1996;150:380-383)
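As background on the statistic the study reports: weighted κ penalizes disagreements between two raters in proportion to their distance on an ordinal scale, then corrects for chance agreement. The sketch below is a minimal, illustrative implementation of linearly weighted Cohen's κ for two raters; the function name, the 1-to-k scoring scale, and the example ratings are assumptions, not data from the article.

```python
def weighted_kappa(r1, r2, k):
    """Linearly weighted Cohen's kappa for two raters on a k-point scale (1..k).

    r1, r2: equal-length lists of integer scores in 1..k.
    Returns 1.0 for perfect agreement; 0.0 means chance-level agreement.
    """
    n = len(r1)
    # Observed joint distribution of score pairs.
    obs = [[0.0] * k for _ in range(k)]
    for a, b in zip(r1, r2):
        obs[a - 1][b - 1] += 1.0 / n
    # Marginal distributions for each rater.
    p1 = [sum(obs[i][j] for j in range(k)) for i in range(k)]
    p2 = [sum(obs[i][j] for i in range(k)) for j in range(k)]
    # Linear disagreement weights: w(i, j) = |i - j| / (k - 1).
    observed = sum(abs(i - j) / (k - 1) * obs[i][j]
                   for i in range(k) for j in range(k))
    expected = sum(abs(i - j) / (k - 1) * p1[i] * p2[j]
                   for i in range(k) for j in range(k))
    return 1.0 - observed / expected

# Hypothetical ratings on a 5-point abstract-scoring scale:
print(weighted_kappa([1, 2, 3, 4, 5], [1, 2, 3, 4, 5], 5))  # perfect agreement -> 1.0
```

A κ below 0.25, as reported here, indicates agreement only slightly above what chance would produce, even when raw percentage agreement looks high (~79%) on a scale with few categories.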