Counting citations

So how did a simple calculation become so influential? The impact factor was first proposed in the early 1960s by information scientist Eugene Garfield, now chairman emeritus of the multinational information company Thomson Scientific. It was conceived as a way to make better use of the reams of data generated by his Science Citation Index, set up in the 1950s to track the "subsequent history" of scientific ideas through their citations in future publications.

With the hundreds of thousands of references from scientific journals that Dr Garfield and his team at the Institute for Scientific Information (ISI) collected and categorised for their index, they were able to analyse the publication histories of individual authors, identify papers that caught the imagination of other scientists, and, importantly for publishing, rank journals according to their talent for picking popular papers.

Although initial efforts at journal rankings simply totted up the number of mentions each publication received in the reference lists of later papers, Dr Garfield quickly realised that this method favoured journals that published a lot but did not necessarily pick the best studies. He suggested that dividing the number of times a journal is cited by the number of articles it publishes would eliminate the bias towards big journals and produce a meaningful measure of a journal's importance: the impact of an average paper it publishes.

In 1975 ISI started publishing an annual summary of citations in journals, including the impact factor calculation, primarily as an aid for librarians making budget decisions, who needed to choose the most cost effective journals to buy.
The process involved loading the references from each published paper on to the Science Citation Index database and then, to get the impact factor for each journal, adding up the number of citations published in all journals in the current year to articles published in the journal of interest over the two previous years, and dividing that total by the number of "scholarly" items the journal published in those two years. The result was a number that quantified the average number of citations accrued by a paper published in a particular journal during a given year: the impact factor.

Three decades later, an almost identical system underlies the Journal Citation Reports still produced by ISI, now subsumed by Thomson Scientific. Rather than ranking just the 152 top journals Dr Garfield began with, ISI now produces yearly impact factor lists, grouped by specialty, for the 6088 journals in its Science Citation Index, which is growing by an astonishing 200 journals every year.

Inclusion in the index is something of a badge of honour for new journals, which must pass ISI's stringent assessment procedure before being incorporated. Suitable candidates have to meet basic publishing standards and have a fairly good chance of influencing the scientific record. "We take a look at what they have been able to do since the beginning of the year and whether the journal can attract authors that make an impact. If it passes that test we go on to quantitative analysis," says James Testa, senior director of editorial development for Journal Citation Reports.

But whereas the theory hasn't changed in 40 years, the mechanics of the calculation have. ISI has to take into account changes in the nature of scientific publishing, from print only to an increasing proportion of electronic publications. "We index everything from print to direct feed to FTP files," says Marie McVeigh, senior manager of Journal Citation Reports.
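The two-year calculation described above can be sketched in a few lines of code. The figures below are made up for a hypothetical journal; only the formula itself comes from the article.

```python
def impact_factor(citations_to_prev_two_years: int,
                  scholarly_items_prev_two_years: int) -> float:
    """Two-year impact factor as described above: citations received in
    the current year to articles from the two previous years, divided by
    the number of "scholarly" items published in those two years."""
    return citations_to_prev_two_years / scholarly_items_prev_two_years

# Hypothetical example: in the current year, articles the journal
# published over the previous two years were cited 1200 times, and the
# journal published 400 scholarly items in that period.
print(impact_factor(1200, 400))  # -> 3.0
```

As the rest of the article makes clear, the arithmetic is trivial; the contested part is deciding which items belong in the denominator.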
And a lot of work goes into keeping up with the journals' changing editorial content. "It's six months of pretty non-stop work," she says. "We have begun the first preparatory steps for year 2006 now and we'll be publishing [this year's impact factors] in mid to late June."

For ISI, one of the most difficult aspects of the indexing process is deciding which articles from each journal should count as part of the scholarly record and should therefore be added into the denominator for calculating the impact factor. Many scientific journals, and medical journals are particularly bad offenders in this respect, publish an eclectic mix of article types that marry journalism with research, narrative reviews with clinical cases. Editorial policy changes that create new sections, alter numbers of references, or reorganise article types are made with what seems like, at least from ISI's perspective, dizzying frequency. All of them can affect the eventual impact factor.

David Tempest, associate director of research academic relations for the scientific publisher Elsevier, which publishes the Lancet, says the denominator is a difficult thing for ISI to get right. "BMJ, JAMA, and the Lancet might not have the same article types, and ISI has to work out what should be included," he explains.

But whereas in the 1970s journals were uninterested enough in their rankings to let ISI do its calculations unimpeded ("they ignored them", says Dr Garfield), editors and publishers are now active participants, helping ISI make sure their numbers are correct at every step of the way. Tempest says he and his colleagues count the number of scholarly articles in Elsevier's journals to highlight any possible misclassifications by ISI. "What we try to do is work with ISI to get the citable items, the denominator, to be as accurate as possible. Things like news items and conference listings don't get a lot of citations, so they are seen as non-citable by ISI.
We work together to get the best outcome for journals," he explains.

But for many journal editors, particularly those outside the big publishing houses, checking on the accuracy of ISI's indexing of their own journal's content is no easy task. The first...
