This is the calculation used:
Figure 1: Calculation for journal impact factor.
A = total cites in 1992
B = 1992 cites to articles published in 1990-91 (this is a subset of A)
C = number of articles published in 1990-91
D = B/C = 1992 impact factor
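To make the arithmetic concrete, here is a minimal sketch of the Figure 1 calculation in Python. The citation and article counts are made up for illustration, not real journal data:

```python
# Worked example of the Figure 1 impact factor calculation.
# The counts below are hypothetical illustrative numbers.
b = 12000  # B: 1992 citations to articles published in 1990-91
c = 2400   # C: number of articles published in 1990-91

impact_factor = b / c  # D = B/C = 1992 impact factor
print(impact_factor)   # 5.0
```

Note that the total cites in 1992 (A in the figure) do not enter the calculation at all; only the subset B counting citations to the previous two years' articles does.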
All this means is that the impact factor tells you how popular a journal's articles from the previous two years were in the community (as measured by number of citations). Because the calculation is so simple, it is prone to manipulations that artificially inflate it. There have been quite a few articles and papers detailing the issues with the impact factor, including a recent one discussing some of the damage that impact factor mania leads to. It has gotten to the point where some simply accept science published in high impact factor journals like Science, Nature, and Cell without examining the science itself, despite the dangers of doing so. Even worse, the impact factor has become part and parcel of the process of getting grants, positions, and tenure, despite some people decrying the practice.
How did we get to the point where the impact factor is the be-all and end-all of academic progress and the sole measure of reliability? Part of the problem is that some institutions, grant panels, and science communicators use the impact factor as a quick proxy for reliability. Take what this blogger has said about the impact factor:
"One of the better ways to ascertain scientific research quality to examine the quality of the journal where the research was published. Articles in high quality journals are cited more often, because those journals attract the best scientific articles (which are cited more). Yes, it’s a self-fulfilling system, but that’s not all bad. Some of the most prestigious journals have been around for a century or more, and their reputation is deserved.
Obviously, the best articles are sent to these journals partially because of the prestige of the journal, but also because the peer review is so thorough. Journals use a metric called “impact factor” that essentially states how many times an average article is cited by other articles in an index (in this case for all medical journals)."
Not only is this an incorrect explanation of what an impact factor is (remember, it is the number of citations in a given year to articles published in the previous two years, divided by the number of articles published in those two years, not the average number of citations per article as stated), but it also sets the impact factor on the same level as quality of peer review and reliability. Although it might be true that journals like Cell, Nature, and Science are harder to publish in, they are also very specific about what they are interested in publishing (known in academia as scope) and tend to publish flashier pieces. For example, 30 years ago it was common to publish either the complete or partial genome of a virus in Science or Nature. These days you are more likely to publish such a paper in Genome Announcements or Archives of Virology. Does this mean that the peer review in GA or AoV is not rigorous, or that the research published there is of lesser quality than what used to appear in Science or Nature? That is unlikely: advancements in technology eliminated the novelty (a big draw for journals like Science and Nature), and in fact the genome coverage is probably higher and the sequences more reliable in recent papers than in the days when a researcher called the bases on a long polyacrylamide sequencing gel. Does this mean that one journal is better than the other? No, they just have different scopes and therefore foci.
The aforementioned blogger does mention that impact factors aren't the sole determinant of reliability; however, they then come back to impact factors as a shortcut for determining reliability.
"As an independent, objective method to judge the quality of published research, Impact Factor is one of the best available."
Sadly, nothing could be further from reality. This assumes that journals with high impact factors never have to retract articles due to fraud. That is not the case: high impact factor journals have more retractions on average than lower impact factor journals. One possible explanation is that high impact factor journals have more concrete plans for dealing with retractions; however, this has so far only been studied in high impact factor journals with similar editorial practices regarding retractions, and it does not account for the increase in retractions as the impact factor increases.
Photo caption: Correlation between impact factor and retraction index. The 2010 journal impact factor is plotted against the retraction index as a measure of the frequency of retracted articles from 2001 to 2010 (see text for details). Journals analyzed were Cell, EMBO Journal, FEMS Microbiology Letters, Infection and Immunity, Journal of Bacteriology, Journal of Biological Chemistry, Journal of Experimental Medicine, Journal of Immunology, Journal of Infectious Diseases, Journal of Virology, Lancet, Microbial Pathogenesis, Molecular Microbiology, Nature, New England Journal of Medicine, PNAS, and Science. Credit: Fang et al., 2011, Figure 1.
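As a rough sketch of the quantity plotted in that figure: the retraction index in Fang et al. is, as I understand it, the number of retractions over the study window scaled per 1,000 published articles. The counts below are hypothetical, not the paper's data:

```python
def retraction_index(retractions, articles_published):
    """Retractions per 1,000 published articles over a time window
    (my reading of the metric in Fang et al., 2011)."""
    return 1000 * retractions / articles_published

# Hypothetical counts for one journal over 2001-2010 (not real data):
# 4 retractions out of 40,000 published articles.
print(retraction_index(4, 40000))  # 0.1
```

Plotting this index against the impact factor for each journal gives the positive correlation the caption describes.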
So what can we do? There is no easy answer or shortcut for determining the quality of a particular paper. The only surefire way to judge an article's quality is to examine its methods, results, and conclusions. Another option is to look for a meta-analysis or a review covering the article in question, as these are often written by experts who have intimate knowledge of the topic and can gauge the impact of the article. However, this runs into the issue that scientists are human, with jealousies and grudges that can color their views on work by those they dislike. There really isn't an easy way around it. Sometimes you just have to go to the data to see the quality of a paper.
So what's the takeaway from this? From my research on the history and intended use of the impact factor, I found that:
- The impact factor does not measure reliability, just journal popularity
- Higher impact factor journals have higher retraction rates; differences in editorial practices cannot explain this away, since only high impact factor journals have been studied in that regard
- Just because a journal has a high impact factor it does not mean that the articles published there are of high quality
- The impact factor of journals in different disciplines and scopes cannot be directly compared without further mathematical adjustment
- There is no real shortcut for determining the quality of a research paper other than examining it critically