Performance Analysis of Manual and Automated Systemized Nomenclature of Medicine (SNOMED) Coding
- 1 March 1994
- research article
- Published by Oxford University Press (OUP) in American Journal of Clinical Pathology
- Vol. 101 (3), 253-256
- https://doi.org/10.1093/ajcp/101.3.253
Abstract
Many pathology departments rely on the accuracy of computer-generated diagnostic coding for surgical specimens. At present, there are no published guidelines to assure the quality of coding devices. To assess the performance of Systemized Nomenclature of Medicine (SNOMED) coding software, manual coding was compared with automated coding in 9,353 consecutive surgical pathology reports at the Baltimore Veterans Affairs Medical Center. Manual SNOMED coding produced 13,454 morphologic codes comprising 519 distinct codes, of which 209 were unique codes (assigned to only one report apiece). Automated coding produced 23,744 morphologic codes comprising 498 distinct codes, of which 129 were unique codes. In only 44 instances (0.5%) did automated coding miss key diagnoses on surgical case reports. Thus, automated coding compared favorably with manual coding. To achieve maximum performance, departments should monitor the output from automated coders. Modifications in reporting style, code dictionaries, and coding algorithms can lead to improved coding performance.
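
The abstract's headline metrics (total code assignments, distinct codes, "unique" codes assigned to only one report, and the rate of missed key diagnoses) amount to simple counting over per-report code sets. The sketch below illustrates that kind of tally; it is not the authors' software, and the report identifiers, SNOMED code values, and the "missed key diagnosis" criterion used here are hypothetical assumptions for demonstration only.

```python
# Illustrative sketch only; not the coding software evaluated in the paper.
from collections import Counter

# Hypothetical per-report code assignments: report_id -> set of SNOMED morphology codes.
manual_codes = {
    "S94-0001": {"M-40000", "M-80103"},
    "S94-0002": {"M-09450"},
    "S94-0003": {"M-80103", "M-74000"},
}
auto_codes = {
    "S94-0001": {"M-40000", "M-80103"},
    "S94-0002": set(),                          # automated coder assigned nothing here
    "S94-0003": {"M-80103", "M-74000", "M-43000"},
}

def summarize(coding):
    """Return (total assignments, distinct codes, unique codes),
    where 'unique' means assigned to only one report, as in the abstract."""
    code_report_counts = Counter()
    total = 0
    for codes in coding.values():
        total += len(codes)
        for code in codes:
            code_report_counts[code] += 1
    distinct = len(code_report_counts)
    unique = sum(1 for n in code_report_counts.values() if n == 1)
    return total, distinct, unique

def missed_key_diagnoses(manual, auto):
    """Count reports where automated coding shares no code with manual coding
    (a crude, assumed proxy for 'missed key diagnosis')."""
    return sum(1 for rid, codes in manual.items()
               if codes and not codes & auto.get(rid, set()))

print(summarize(manual_codes))   # (5, 4, 3): total, distinct, unique
print(summarize(auto_codes))     # (5, 4, 3)
misses = missed_key_diagnoses(manual_codes, auto_codes)
print(f"missed in {misses} of {len(manual_codes)} reports "
      f"({100 * misses / len(manual_codes):.1f}%)")
```

On the study's own figures, 44 missed reports out of 9,353 is roughly 0.47%, consistent with the 0.5% quoted in the abstract.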