Metadata, recall, and abstracts: can abstracts ever be reliable indicators of document value?
- 1 August 1997
- journal article
- Published by Emerald Publishing in Aslib Proceedings
- Vol. 49 (8), 206-213
- https://doi.org/10.1108/eb051467
Abstract
Abstracts from seven Internet subject trees (Euroferret, Excite, Infoseek, Lycos Top 5%, Magellan, WebCrawler, Yahoo!), five Internet subject gateways (ADAM, EEVL, NetFirst, OMNI, SOSIG), and three online databases (ERIC, ISI, LISA) were examined for their subject content, treatment of various enriching features, physical properties such as overall length, and their readability. Considerable differences were measured, and consistent similarities among abstracts from each type of source were demonstrated. Internet subject tree abstracts were generally the shortest, and online database abstracts the longest. Subject tree and online database abstracts were the most informative, but the level of coverage of document features such as tables, bibliographies, and geographical constraints was disappointingly poor. On balance, the Internet gateways appeared to be providing the most satisfactory abstracts. The authors discuss the continuing role of abstracts, and of their functional analogues such as metadata, in networked information retrieval.