This is the calculation used (Figure 1: Calculation for journal impact factor):
- A = total cites in 1992
- B = 1992 cites to articles published in 1990-91 (this is a subset of A)
- C = number of articles published in 1990-91
- D = B/C = 1992 impact factor
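To make the arithmetic concrete, here is a minimal sketch of that calculation in Python; the citation and article counts are invented purely for illustration:

```python
# Minimal sketch of the Figure 1 calculation; all numbers are made up.
B = 8_000   # 1992 cites to articles published in 1990-91
C = 2_500   # number of articles published in 1990-91

impact_factor_1992 = B / C  # D = B/C
print(f"1992 impact factor: {impact_factor_1992:.3f}")  # 3.200
```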
All this means is that the impact factor tells you how popular a journal's articles from the previous two years were in the community (as measured by the number of citations they received). Because the calculation is so simple, it is prone to manipulations that artificially inflate it, as sketched below. There have been quite a few articles and papers detailing the issues with the impact factor, including a recent one describing some of the damage that impact factor mania leads to. It has gotten to the point where some simply accept science published in high impact factor journals like Science, Nature, and Cell without examining the science itself, despite the dangers of doing so. Even worse, the impact factor has become part and parcel of the process of getting grants, positions, and tenure, despite some people decrying this practice.
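As one illustration of how easily the number can be nudged, consider a journal that pressures authors to cite its own recent papers (coercive self-citation, a documented gaming tactic); the figures below are hypothetical and not drawn from any real journal:

```python
# Hypothetical example of inflating the ratio through coerced self-citation.
articles_2013_14 = 200        # C: articles published in the previous two years
outside_cites_2015 = 400      # B: 2015 citations to those articles from elsewhere
print(outside_cites_2015 / articles_2013_14)  # baseline impact factor: 2.0

# If each of ~100 papers published in 2015 is asked to add two citations to
# the journal's 2013-14 articles, B grows by 200 while C stays the same.
coerced_self_cites = 100 * 2
print((outside_cites_2015 + coerced_self_cites) / articles_2013_14)  # 3.0
```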
How did we get to the point where the impact factor is the be-all and end-all of academic progress and the sole measure of reliability? Part of the problem is that some institutions, grant panels, and science communicators have used the impact factor as a quick measure of reliability. Take what this blogger has said about the impact factor:
"One of the better ways to ascertain scientific research quality to examine the quality of the journal where the research was published. Articles in high quality journals are cited more often, because those journals attract the best scientific articles (which are cited more). Yes, it’s a self-fulfilling system, but that’s not all bad. Some of the most prestigious journals have been around for a century or more, and their reputation is deserved.
Obviously, the best articles are sent to these journals partially because of the prestige of the journal, but also because the peer review is so thorough. Journals use a metric called “impact factor” that essentially states how many times an average article is cited by other articles in an index (in this case for all medical journals)."
Not only is this an incorrect explanation of what an impact factor is (remember, it is a measure of the number of citations received in one year divided by the number of articles published in the previous two years, not the average number of citations per article as stated), but it sets the impact factor on the same level as quality of peer review and reliability. Although it might be true that journals like Cell, Nature, and Science are harder to publish in, they are also very specific about what they are interested in publishing (known in academia as scope) and tend to publish flashier pieces. For example, 30 years ago it would have been common to publish either the complete or partial genome of a virus in Science or Nature. These days you are more likely to publish such a paper in Genome Announcements or Archives of Virology. Does this mean that the peer review in GA or AoV is not rigorous, or that the research published there is of lesser quality than what was previously published in Science or Nature? That is not likely to be the case: advancements in technology eliminated the novelty (a big draw for journals like Science and Nature), and in fact the genome coverage is likely higher and the sequences more reliable in recent papers than in the days when a researcher called the bases on a long polyacrylamide sequencing gel. Does this mean that one journal is better than the other? No, they just have different scopes and therefore foci.
The aforementioned blogger does mention that impact factors aren't the sole determinant of reliability; however, they then come back to impact factors as a shortcut for determining reliability.
"As an independent, objective method to judge the quality of published research, Impact Factor is one of the best available."
Sadly, nothing could be further from reality. This makes the assumption that journals with high impact factors never have to retract articles due to fraud. That is not the case: high impact factor journals have more retractions on average than lower impact factor journals. One possible explanation is that journals with a high impact factor have more concrete plans for dealing with retractions; however, this has thus far only been studied in high impact factor journals with similar editorial practices regarding retractions, and it does not account for the increase in retractions as the impact factor increases.
Figure: Correlation between impact factor and retraction index. The 2010 journal impact factor is plotted against the retraction index as a measure of the frequency of retracted articles from 2001 to 2010 (see text for details). Journals analyzed were Cell, EMBO Journal, FEMS Microbiology Letters, Infection and Immunity, Journal of Bacteriology, Journal of Biological Chemistry, Journal of Experimental Medicine, Journal of Immunology, Journal of Infectious Diseases, Journal of Virology, Lancet, Microbial Pathogenesis, Molecular Microbiology, Nature, New England Journal of Medicine, PNAS, and Science. Credit: Fang et al., 2011, Figure 1.
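For readers who want to see what a "retraction index" might look like in practice, here is a rough sketch that assumes a Fang et al.-style definition of retractions per 1,000 articles published over the same window; the journal names and numbers are placeholders, not the data from the paper:

```python
# Rough sketch of a retraction index and its relationship to impact factor.
# All journal names and numbers below are placeholders for illustration only.
from statistics import correlation  # Pearson correlation, Python 3.10+

journals = {
    # name: (2010 impact factor, articles 2001-2010, retractions 2001-2010)
    "Journal A": (3.5, 12_000, 2),
    "Journal B": (9.2, 8_000, 6),
    "Journal C": (31.4, 9_000, 25),
}

impact_factors, retraction_indices = [], []
for name, (jif, articles, retractions) in journals.items():
    retraction_index = 1000 * retractions / articles  # retractions per 1,000 articles
    impact_factors.append(jif)
    retraction_indices.append(retraction_index)
    print(f"{name}: retraction index = {retraction_index:.2f}")

print(f"correlation with impact factor: {correlation(impact_factors, retraction_indices):.2f}")
```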
So how do the virology journals make out in this impact factor game? I've put together a list of the 2014 impact factors for the virology-specific journals I could find. Note: I did not include general focus journals that include papers on virology in this list to help illustrate how field-specific journals suffer from this system.
- Virology; IF 3.321
- Journal of Virology; IF 4.439
- Journal of General Virology; IF 3.183
- Archives of Virology; IF 2.390
- Virus Research; IF 2.324
- Virus Genes; IF 1.576
- Virology Journal; IF 2.181
- Journal of Virological Methods; IF 1.781
- Viruses*; IF 3.353
- Journal of Viruses*; IF None
- Annual Review of Virology; IF Brand new journal
- Viral Immunology; IF 1.446
- Current Opinion in Virology; IF 6.064
- Reviews in Medical Virology; IF 5.574
- Retrovirology; IF 4.19
- Current HIV/AIDS Reports; IF 3.795
- Antiviral Research; IF 3.938
*Journal publisher is on Beall's List of Predatory Publishers
If you notice, none of these journals, even the review journals, has an impact factor above 10, which people often use as their minimum for trustworthiness. So does this mean that these journals are automatically suspect? What are we to make of a journal published by a publisher on Beall's List having an impact factor as high as those of other highly respected virology journals? The answer to both of these questions is simple: there must be something else going on here. In order to determine exactly what is going on, we need to dig deeper into what the impact factor was meant to convey and how it originated.
The idea of the impact factor dates back to the 1920's, with a proposal to produce an index that university librarians could use to determine which journals were worth purchasing. In 1955, it was taken up by Dr. Eugene Garfield, who intended to build an index of scientific papers so that outdated or fraudulent work could be put to rest. This original index was built around cataloging individual papers and recording the number of citations each had received. That idea lives on in Google Scholar citations, ResearchGate, Web of Science, and several others. Take my Google Scholar page, for example: one can easily see which papers I have published and how many times they have been cited. The original idea for the impact factor was to make citing work easier for researchers. Over time, the impact factor evolved from comparing individual papers to comparing journals. Dr. Garfield originally intended the journal impact factor to be used to evaluate an author whose most recent papers had not had time to be cited. Dr. Garfield also had this to say about comparing journals across disciplines: "In any case, comparisons should not be made across disciplines unless the data are normalized in order to take into account the various parameters that affect impact." This is something that is not done, as raw impact factors are used for the comparisons.
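To make Garfield's caveat concrete, here is one simple way such a normalization might look; this is an illustrative approach rather than a method prescribed by Garfield, and every number in it is invented. The idea is to express each journal's impact factor as a multiple of the median impact factor in its own field before comparing across fields:

```python
# Illustrative field normalization (not Garfield's prescribed method):
# divide each journal's impact factor by the median impact factor of its
# own field before comparing across fields. All numbers are invented.
from statistics import median

field_impact_factors = {
    "cell biology": [31.4, 9.2, 5.1, 4.4, 3.9],
    "virology":     [4.4, 3.3, 2.4, 1.8, 1.6],
}

def normalized_if(journal_if: float, field: str) -> float:
    """Impact factor expressed as a multiple of the field's median."""
    return journal_if / median(field_impact_factors[field])

# A virology journal at IF 4.4 stands out more within its field than a
# cell biology journal at IF 5.1 does within its own, despite the lower raw number.
print(round(normalized_if(4.4, "virology"), 2))      # 1.83
print(round(normalized_if(5.1, "cell biology"), 2))  # 1.0
```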
The usefulness of the impact factor is hotly debated among bibliometricians, with some pointing out rather large flaws in the system. Among the flaws are the ease with which the impact factor can be manipulated, the evolution of the journal impact factor from a selection criterion for library purchasing into the be-all and end-all of academia, the fact that negative citations carry the same weight as positive citations, that the importance of a paper may only be recognized after the impact factor window has passed, and that some papers that are never cited still have an impact on the field. A recurring theme among the bibliometricians is that the impact factor is being misused to judge the quality of a journal rather than its influence. Some are quite blunt in their disapproval of this practice. Take what Dr. Anthony van Raan, director of the Center for Science and Technology Studies at Leiden University in the Netherlands, has said about this habit: "If there is one thing every bibliometrician agrees, it is that you should never use the journal impact factor to evaluate research performance for an article or for an individual — that is a mortal sin." Several other alternatives have been proposed, but they would suffer the same fate as the impact factor in that they would be misappropriated as a way to judge quality.
So what's the takeaway from this? From my research on the history and intended use of the impact factor, I found that:
- The impact factor does not measure reliability, just journal popularity
- Higher impact factor journals have higher retraction rates, and this cannot be explained away as a difference in editorial practices, since only high impact factor journals have been studied in that regard
- Just because a journal has a high impact factor does not mean that the articles published there are of high quality
- The impact factor of journals in different disciplines and scopes cannot be directly compared without further mathematical adjustment
- There is no real shortcut for determining the quality of a research paper other than examining it critically

So what can we do? There is no easy answer or shortcut in determining the quality of a particular paper. The only surefire way to judge an article's quality is to examine its methods, results, and conclusions. Another option is to look for a meta-analysis or a review covering the article in question, as these are often written by experts who have an intimate knowledge of the topic and can determine the impact of the article. However, this runs into the issue that scientists are human, with jealousies and grudges that can color their views on work by those they dislike. There really isn't an easy way around it. Sometimes, you just have to go to the data to see the quality of a paper.