An evaluation of GO annotation retrieval for BioCreAtIvE and GOA
Open Access
- Published: 24 May 2005
- Research article
- Published by Springer Nature in BMC Bioinformatics
- Vol. 6 (Suppl 1), S17
- https://doi.org/10.1186/1471-2105-6-s1-s17
Abstract
The Gene Ontology Annotation (GOA) database (http://www.ebi.ac.uk/GOA) aims to provide high-quality supplementary GO annotation to proteins in the UniProt Knowledgebase. Like many other biological databases, GOA gathers much of its content from careful manual curation of the literature. However, as the volumes of both literature and proteins requiring characterization increase, manual processing capacity can become overloaded. Consequently, semi-automated aids are often employed to expedite the curation process. Traditionally, electronic techniques in GOA depend largely on exploiting the knowledge in existing resources such as InterPro. In recent years, however, text mining has been hailed as a potentially useful tool to aid the curation process. To encourage the development of such tools, the GOA team at the EBI agreed to take part in the functional annotation task of the BioCreAtIvE (Critical Assessment of Information Extraction systems in Biology) challenge. BioCreAtIvE task 2 was an experiment to test whether automatically derived classification using information retrieval and extraction could assist expert biologists in annotating proteins in the UniProt Knowledgebase with terms from the GO vocabulary. GOA provided the training corpus of over 9000 manual GO annotations extracted from the literature. For the test set, we provided a corpus of 200 new Journal of Biological Chemistry articles used to annotate 286 human proteins with GO terms. A team of experts manually evaluated the results of 9 participating groups, each of which provided highlighted sentences to support their GO and protein annotation predictions. Here, we give a biological perspective on the evaluation, explain how we annotate GO using literature, and offer some suggestions to improve the precision of future text-retrieval and extraction techniques.
Finally, we provide the results of the first inter-annotator agreement study for manual GO curation, as well as an assessment of our current electronic GO annotation strategies. The GOA database currently extracts GO annotation from the literature with 91 to 100% precision and at least 72% recall. This sets a particularly high threshold for text mining systems: in the initial results of BioCreAtIvE task 2 (GO annotation extraction and retrieval), systems predicted GO terms precisely only 10 to 20% of the time. Improvements in the performance and accuracy of text mining for GO terms should be expected in the next BioCreAtIvE challenge. In the meantime, the manual and electronic GO annotation strategies already employed by GOA will continue to provide high-quality annotations.
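The precision and recall figures quoted above follow the standard information-retrieval definitions, computed over predicted versus curated GO term assignments. A minimal sketch of that calculation, using hypothetical GO term sets (the specific identifiers below are illustrative, not drawn from the evaluation data):

```python
def precision_recall(predicted, gold):
    """Precision and recall of predicted GO term assignments
    against a gold-standard (manually curated) set."""
    predicted, gold = set(predicted), set(gold)
    tp = len(predicted & gold)  # terms predicted correctly (true positives)
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    return precision, recall

# Hypothetical example: terms predicted for one protein vs. curated terms
predicted = {"GO:0005515", "GO:0008270", "GO:0006355"}
curated = {"GO:0005515", "GO:0006355", "GO:0003700", "GO:0005634"}
p, r = precision_recall(predicted, curated)
# Two of three predictions are correct (precision 2/3);
# two of four curated terms are recovered (recall 1/2).
```

In the BioCreAtIvE evaluation itself, a prediction additionally had to be supported by an appropriate evidence sentence, so the scoring was stricter than this set-overlap sketch.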