Measures of Interobserver Agreement and Reliability
- 28 July 2003
- book
- Published by Taylor & Francis
Abstract
Agreement among at least two evaluators is an issue of prime importance to statisticians, clinicians, epidemiologists, psychologists, and many other scientists. Measuring interobserver agreement is a method used to evaluate inconsistencies in findings from different evaluators who collect the same or similar information. Highlighting applications o…