Reliability of a Structured Method of Selecting Abstracts for a Plastic Surgical Scientific Meeting
- 1 June 2003
- journal article
- research article
- Published by Wolters Kluwer Health in Plastic and Reconstructive Surgery
- Vol. 111 (7), 2215-2222
- https://doi.org/10.1097/01.prs.0000061092.88629.82
Abstract
There is no generally accepted method for assessing abstracts submitted to a medical scientific meeting. This article describes the development and prospective evaluation of such a method, applied to the 220 abstracts submitted for the 2000 Annual Meeting of the European Association of Plastic Surgeons. Structured abstracts were evaluated in three categories: aesthetic surgery, basic research, and clinical study. Each anonymized abstract was assessed independently by 10 reputable European plastic surgeons. These reviewers used a structured rating questionnaire that yielded a score between -6 and +6 from each reviewer for each abstract. The scores of all 10 reviewers were summed for each abstract, and papers were accepted in each of the three categories on the basis of this cumulative score. To evaluate the reliability of this structured method of selection, interrater agreement among the reviewers was tested by means of kappa analysis and the Cronbach alpha coefficient. The kappa values for agreement among reviewers regarding the acceptability of abstracts were low, but the alpha coefficient indicated an acceptable degree of reliability of the average reviewers' ratings in all categories. A structured questionnaire can be helpful in the objective assessment of abstracts for a scientific meeting and may facilitate comparison of abstracts. A meritocratic accept/reject dichotomy applied by the reviewers is advocated to further improve the reliability of the rating. Even though reliability generally increases with the number of reviewers, the annual increase in submitted abstracts may necessitate a decrease in the number of reviewers per abstract.
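The two reliability statistics named in the abstract can be computed directly from a reviewers-by-abstracts score matrix. The sketch below is illustrative only: the article does not specify which kappa variant was used or how scores were dichotomized, so the Fleiss' kappa choice, the acceptance threshold of 0, and the randomly generated data are all assumptions made for demonstration.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for an (n_abstracts x n_reviewers) score matrix.

    Each reviewer is treated as one 'item'; alpha estimates the
    reliability of the summed (or average) rating across reviewers.
    """
    n_abstracts, k = scores.shape
    item_vars = scores.var(axis=0, ddof=1)      # variance of each reviewer's ratings
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of the summed scores
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

def fleiss_kappa(labels: np.ndarray, n_categories: int) -> float:
    """Fleiss' kappa for an (n_abstracts x n_reviewers) matrix of category labels."""
    n_subjects, n_raters = labels.shape
    # counts[i, j]: how many raters assigned abstract i to category j
    counts = np.stack([(labels == j).sum(axis=1) for j in range(n_categories)], axis=1)
    p_i = (np.square(counts).sum(axis=1) - n_raters) / (n_raters * (n_raters - 1))
    p_bar = p_i.mean()                               # observed agreement
    p_j = counts.sum(axis=0) / (n_subjects * n_raters)
    p_e = np.square(p_j).sum()                       # chance agreement
    return (p_bar - p_e) / (1.0 - p_e)

# Hypothetical data: 220 abstracts, 10 reviewers, integer scores in [-6, +6]
rng = np.random.default_rng(0)
scores = rng.integers(-6, 7, size=(220, 10))
print(f"Cronbach's alpha: {cronbach_alpha(scores):.2f}")

# Dichotomize into reject (0) / accept (1) at an assumed threshold of 0
accept = (scores > 0).astype(int)
print(f"Fleiss' kappa:    {fleiss_kappa(accept, n_categories=2):.2f}")
```

With purely random scores both statistics hover near zero; the pattern the article reports (low kappa but acceptable alpha) is typical when individual accept/reject judgments disagree yet the averaged ratings remain stable, since alpha rises with the number of reviewers pooled.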