Reliable Assessment Data in Multisite Programs

Abstract
Evaluation of long-term care programs has suffered from a lack of comparable, reliable data, both within and across programs. This article describes a methodology for developing clinical/research assessment tools, training assessment interviewers, and continuously checking interrater reliability, as applied to a multisite national health and long-term care demonstration. Audiotapes of case managers' clinical assessment interviews are used extensively, and reliability results are fed back into an ongoing training program. First-year results are reported, showing steady improvement across all sites in the reliability of data from a complex assessment instrument. The program has potential for wide application and for improving the databases that are important to the future development of public policy in long-term care.
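The abstract refers to ongoing checks of interrater reliability but does not name a statistic. As an illustration only, a chance-corrected agreement measure such as Cohen's kappa is one conventional choice for two raters coding the same taped interviews; the function and the example ratings below are hypothetical and not drawn from the article:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    assert len(rater_a) == len(rater_b) and rater_a, "need paired, nonempty ratings"
    n = len(rater_a)
    # Observed agreement: fraction of items both raters coded identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement, from each rater's marginal code frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical codes from two case managers rating the same six taped interviews.
a = ["impaired", "intact", "impaired", "intact", "impaired", "impaired"]
b = ["impaired", "intact", "intact",   "intact", "impaired", "impaired"]
print(round(cohens_kappa(a, b), 2))  # → 0.67
```

Kappa of 1.0 indicates perfect agreement and 0.0 agreement no better than chance, which makes it more informative than raw percent agreement when code frequencies are skewed.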