More than 50 years ago, information scientist Eugene Garfield and colleagues described a simple publication indicator, known as the Journal Impact Factor (JIF). The JIF indicates the average number of citations received by papers published in a specific journal over a specific period of time. The calculation is elementary arithmetic: divide the number of citations to all papers published in a journal over a specific time period (say 1, 2, or 5 years) by the number of papers published, and… Eureka! You have your number.
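The arithmetic above can be sketched in a couple of lines. The figures below are hypothetical, chosen only to illustrate the ratio; they are not any real journal's counts:

```python
# Minimal sketch of the JIF arithmetic (hypothetical numbers).
citations = 9_600  # citations received, in the census period, by papers from the window
papers = 400       # citable papers published in that window

jif = citations / papers  # average citations per paper
print(jif)  # 24.0
```

The point of showing this is how little the number encodes: it is a single average, with no information about how those citations are distributed across papers.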
As with many other simple “discoveries”, Garfield did not anticipate that this calculation was destined to become his greatest hit. The story is reminiscent of numerous other unlikely “successes”, one example being the polymerase chain reaction – a simple molecular biology technique that went on to generate billions of dollars in commercial products and earn a Nobel prize. Yet why has the JIF enjoyed so much publicity in recent years? I believe that the JIF was born at just the right time, when explosive growth and revolutionary changes were taking place in academic publishing, mainly related to the electronic and digital era. Authors, publishers, funders, and other stakeholders were hungry for a metric that could separate, at a glance, published gold from published mediocrity. But is the JIF really telling you this?
Now, please allow me to digress and make an analogy that has many similarities with the issue at hand. Many know that my favorite sport is tennis. Let us ask how many tennis balls, on average, a player consumes over a certain period, say a year or two. Like the JIF, you can get your answer by dividing the number of balls sold by the number of customers of a tennis shop. This is the Tennis Ball Impact Factor! Let us now ask whether one shop should be considered ‘better’ than another because it sells more tennis balls per customer. It could indeed be that the better-selling shop has better service, but other explanations may be at play, such as better advertising or a better location! Thus, selling more balls per customer (and therefore having a higher Tennis Ball Impact Factor) may or may not be related to the quality of the shop. Let us now examine some other issues, related not so much to the shop as to its customers. Could a customer who buys balls from the best-selling shop claim to play better tennis than somebody who buys balls from a less popular shop? If the answer were yes (but unfortunately it is not), then I would rush to buy balls from that shop, in hopes of improving my game! Even better, I would buy balls from Roger Federer’s favorite shop, in hopes that my chances of winning Wimbledon would skyrocket! I wish it were that simple!
With these thoughts in mind, I came to the conclusion, in 2009, that the JIF would not endure the test of time and would soon be replaced. I predicted that Garfield’s simple calculation was not likely to become, or remain, the widely sought top quality indicator in scientific publishing, as Wimbledon is for tennis, the Masters for golf, or the Kentucky Derby for horse racing. My colleague, Dr. E.J. Favaloro, held the opposite opinion. Our debate on this issue has been published [2, 3].
Of course, I am not the first to criticize the JIF. Numerous editors, authors, and associations of editors have expressed concerns about the use and misuse of the JIF, and these concerns will not be repeated here [4–6]. Briefly, the consensus is that the JIF should not be used as a surrogate measure of the quality of individual research papers, for assessing an author’s scientific contribution, or for hiring, promoting, or funding individuals.
The JIF debate is closely monitored by authors, editors, and publishers. Some leading journals, including Nature and Science, are trying to devise improved indicators of journal performance [8, 9]. For example, it is well known that the distribution of citations among published papers is highly right-skewed (see data in ), so using mean citation values yields a much higher JIF than using medians would. Nature’s JIF, for example, went from 38 (with means) to 24 (with medians). It is rather appalling that this simple, and seemingly better, calculation has not already been widely implemented. After all, we teach our undergraduates never to use parametric statistics (including means) when the data distributions are not normal! Yet the reason may be obvious: I cannot imagine any editor or publishing executive adopting an improved measure if it implied that their current JIF would go down! The reverse is likely also true: nearly all editors would immediately adopt a revision of the current calculation if it were to improve their JIF. Nature, and some other elite journals, are a special case. Since they reign supreme in scientific publishing prestige, the JIF is of no consequence to them. Authors would still prefer them over any other journal with a much higher JIF.
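The mean-versus-median point is easy to demonstrate. Here is a minimal Python sketch using entirely hypothetical citation counts (not real journal data): in a right-skewed distribution, a single heavily cited paper inflates the mean, while the median stays close to what a typical paper actually receives.

```python
import statistics

# Hypothetical, right-skewed citation counts for ten papers:
# most are cited a handful of times, one is cited heavily.
citations = [0, 1, 1, 2, 3, 3, 4, 6, 10, 120]

print(statistics.mean(citations))    # 15.0 - pulled up by the one outlier
print(statistics.median(citations))  # 3.0  - the "typical" paper
```

A mean-based JIF of 15 would flatter this hypothetical journal; the median of 3 better describes what most of its papers achieve, which is exactly the argument for median-based indicators.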