Reliability of a Structured Method of Selecting Abstracts for a Plastic Surgical Scientific Meeting

Abstract
There is no generally accepted method for assessing abstracts submitted to a medical scientific meeting. This article describes the development and prospective evaluation of such a method, applied to the 220 abstracts submitted for the 2000 Annual Meeting of the European Association of Plastic Surgeons. Structured abstracts were evaluated in three categories: aesthetic surgery, basic research, and clinical study. Each anonymized abstract was assessed independently by 10 reputable European plastic surgeons. These reviewers used a structured rating questionnaire that yielded, from each reviewer, a score between -6 and +6 for each abstract. The scores of all 10 reviewers were summed for each abstract, and papers were accepted in each of the three categories on the basis of this summed score. To evaluate the reliability of this structured method of selection, interrater agreement among the reviewers was tested by means of kappa analysis and the Cronbach alpha coefficient. The kappa values for agreement among reviewers on the acceptability of abstracts were low, but the alpha coefficient indicated an acceptable degree of reliability of the average reviewers' ratings in all categories. A structured questionnaire can aid the objective assessment of abstracts for a scientific meeting and may facilitate comparison of abstracts. A dichotomous, merit-based accept-or-reject judgment by the reviewers is advocated to further improve the reliability of the rating. Even though reliability generally increases with the number of reviewers, the annual increase in submitted abstracts may necessitate a decrease in the number of reviewers for each abstract.
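
For illustration, the following is a minimal Python sketch of the two computations described above: the per-abstract summed score and the Cronbach alpha coefficient of the reviewer ratings. The scores here are fabricated and the function name is illustrative only; the kappa analysis is omitted for brevity.

import numpy as np

def cronbach_alpha(ratings):
    # Cronbach's alpha for an (abstracts x reviewers) score matrix.
    ratings = np.asarray(ratings, dtype=float)
    k = ratings.shape[1]                          # number of reviewers
    item_var = ratings.var(axis=0, ddof=1).sum()  # sum of per-reviewer variances
    total_var = ratings.sum(axis=1).var(ddof=1)   # variance of the summed scores
    return (k / (k - 1)) * (1 - item_var / total_var)

# Hypothetical data: 5 abstracts, each scored from -6 to +6 by 10 reviewers.
rng = np.random.default_rng(0)
scores = rng.integers(-6, 7, size=(5, 10))
summed = scores.sum(axis=1)   # the summed score on which acceptance was based
print(summed, round(cronbach_alpha(scores), 3))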