Interobserver Agreement for the Bedside Clinical Assessment of Suspected Stroke
- 1 March 2006
- journal article
- research article
- Published by Wolters Kluwer Health in Stroke
- Vol. 37 (3), 776-780
- https://doi.org/10.1161/01.str.0000204042.41695.a1
Abstract
Background and Purpose— Stroke remains primarily a clinical diagnosis, with information obtained from history and examination determining further management. We aimed to measure inter-rater reliability for the clinical assessment of stroke, with emphasis on items of history, timing of symptom onset, and diagnosis of stroke or mimic. We also explored reasons for poor reliability.

Methods— The study was based in an urban hospital with an acute stroke unit. Pairs of observers independently assessed suspected stroke patients. Findings from history, neurological examination, and the diagnosis of stroke or mimic were recorded on a standard form. Reliability was measured by the κ statistic. We assessed the impact of observer experience and confidence, time of assessment, and the patient-related factors of age, confusion, and aphasia on inter-rater reliability.

Results— Ninety-eight patients were recruited. Most items of the history and the diagnosis of stroke were found to have moderate to good inter-rater reliability. There was agreement on the hour and minute of symptom onset in only 45% of cases. Observer experience and confidence improved reliability; the patient-related factors of confusion and aphasia made the assessment more difficult. There was a trend toward worse inter-rater reliability among patients assessed very early and very late after symptom onset.

Conclusions— Clinicians should be aware that inter-rater reliability of the clinical assessment is affected by a variety of factors and is improved by experience and confidence. Our findings have implications for the training of doctors who assess patients with suspected stroke and identify the more reliable components of the clinical assessment.