Concept-Match Medical Data Scrubbing
Open Access
- 1 June 2003
- journal article
- Published in Archives of Pathology & Laboratory Medicine
- Vol. 127 (6) , 680-686
- https://doi.org/10.5858/2003-127-680-cmds
Abstract
Context.—In the normal course of activity, pathologists create and archive immense data sets of scientifically valuable information. Researchers need pathology-based data sets, annotated with clinical information and linked to archived tissues, to discover and validate new diagnostic tests and therapies. Pathology records can be used for research purposes (without obtaining informed patient consent for each use of each record), provided the data are rendered harmless. Large data sets can be made harmless through 3 computational steps: (1) deidentification, the removal or modification of data fields that can be used to identify a patient (name, social security number, etc); (2) rendering the data ambiguous, ensuring that every data record in a public data set has a nonunique set of characterizing data; and (3) data scrubbing, the removal or transformation of words in free text that can be used to identify persons or that contain information that is incriminating or otherwise private. This article addresses the problem of data scrubbing.

Objective.—To design and implement a general algorithm that scrubs pathology free text, removing all identifying or private information.

Methods.—The Concept-Match algorithm steps through confidential text. When a medical term matching a standard nomenclature term is encountered, the term is replaced by a nomenclature code and a synonym for the original term. When a high-frequency “stop” word, such as a, an, the, or for, is encountered, it is left in place. When any other word is encountered, it is blocked and replaced by asterisks. This produces a scrubbed text. An open-source implementation of the algorithm is freely available.

Results.—The Concept-Match scrub method transformed pathology free text into scrubbed output that preserved the sense of the original sentences, while it blocked terms that did not match terms found in the Unified Medical Language System (UMLS). The scrubbed product is safe, in the restricted sense that the output retains only standard medical terms. The software implementation scrubbed more than half a million surgical pathology report phrases in less than an hour.

Conclusions.—Computerized scrubbing can render the textual portion of a pathology report harmless for research purposes. Scrubbing and deidentification methods allow pathologists to create and use large pathology databases to conduct medical research.
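To make the Methods concrete, the following is a minimal Python sketch of the scrubbing step as described in the abstract. The nomenclature map, the codes in it, and the stop-word list are illustrative placeholders standing in for the UMLS vocabulary the paper uses, and this sketch matches only single-word terms; it is not the authors' released implementation.

```python
# Placeholder nomenclature: term -> (code, preferred synonym).
# Codes here are invented stand-ins, not real UMLS identifiers.
NOMENCLATURE = {
    "biopsy": ("C000X001", "biopsy"),
    "colon": ("C000X002", "large intestine"),
    "adenocarcinoma": ("C000X003", "adenocarcinoma"),
}

# High-frequency "stop" words that are left in place.
STOP_WORDS = {"a", "an", "the", "for", "of", "and", "in", "with", "shows"}


def concept_match_scrub(text: str) -> str:
    """Replace nomenclature terms with code + synonym, keep stop words,
    and block every other word with asterisks."""
    scrubbed = []
    for token in text.split():
        word = token.lower().strip(".,;:")
        if word in NOMENCLATURE:
            code, synonym = NOMENCLATURE[word]
            scrubbed.append(f"{code} {synonym}")
        elif word in STOP_WORDS:
            scrubbed.append(word)
        else:
            # Any word not recognized as a nomenclature term or stop word
            # is blocked, so identifiers and private details cannot survive.
            scrubbed.append("*" * len(word))
    return " ".join(scrubbed)


if __name__ == "__main__":
    print(concept_match_scrub("Biopsy of the colon for Mr Smith shows adenocarcinoma"))
    # -> C000X001 biopsy of the C000X002 large intestine for ** ***** shows C000X003 adenocarcinoma
```

The design is deliberately conservative: because only terms found in the standard nomenclature or on the stop-word list pass through, anything the vocabulary does not cover is blocked by default, which is what makes the output "safe in the restricted sense" described in the Results.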