INTER‐OBSERVER AGREEMENT OF THE ASSESSMENT OF COMA SCALES AND BRAINSTEM SIGNS IN NON‐TRAUMATIC COMA
- 1 September 1995
- journal article
- Published by Wiley in Developmental Medicine and Child Neurology
- Vol. 37 (9), 807-813
- https://doi.org/10.1111/j.1469-8749.1995.tb12064.x
Abstract
The authors evaluated inter-observer agreement between two experienced clinicians examining 19 unconscious children who were neither paralysed nor ventilated. Inter-observer reliability was assessed by proportion of agreement, disagreement rate and kappa statistics. Corneal reflexes, pupillary responses to light and motor responses were the most reliably elicited signs. Reducing the number of categories improved inter-observer agreement. Some of the disagreement may be attributable to changes in the child's condition during the period of assessment. There was more agreement on the five-category 0-IV scale than on the summated Adelaide (10-category) and Jacobi (13-category) scales. The ability of these scales to follow changes in a patient's condition and to predict outcome needs to be evaluated in a prospective trial.
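The kappa statistic used to assess reliability is not defined in this record; a common form is Cohen's kappa, κ = (p_o − p_e)/(1 − p_e), where p_o is observed agreement and p_e is chance-expected agreement from the raters' marginal frequencies. A minimal sketch for two raters over nominal categories (the category names below are illustrative, not taken from the paper):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters over nominal categories."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of items on which the raters match.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: product of each rater's marginal frequency per category.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical ratings of pupillary response by two clinicians.
a = ["brisk", "brisk", "sluggish", "absent", "brisk", "sluggish"]
b = ["brisk", "sluggish", "sluggish", "absent", "brisk", "brisk"]
print(round(cohens_kappa(a, b), 3))  # prints 0.455
```

Collapsing categories (e.g. merging "brisk" and "sluggish" into "present"), as the abstract describes, tends to raise both p_o and kappa because fewer boundaries are available to disagree across.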