Charlesworth Author Services

Is the h-index better than the impact factor?

When it comes to choosing a journal to submit your paper to, the journal’s impact is likely to be one of your first considerations. ‘Impact’ is often used as a synonym for quality, so publishing in a high-impact journal typically means that other researchers will perceive your study to be high-quality, whether in terms of the robustness of the study design, the significance of the results, or the importance of the findings to the rest of the field.

Many journals have a well-known reputation for either high- or low-quality output, whether it’s within a specific field or scientific research as a whole. That being said, there are times when you may not have a clear idea of the impact of a specific journal, or want more information about how to distinguish several journals with similar reputations from each other. This is where quality metrics come in.

Journal quality metrics

Journal quality metrics are tools that are used to quantify a journal’s impact. Each of these metrics has its own advantages and disadvantages, which we will go into in more detail later in this post. Two of the most well-known and widely used journal quality metrics are the impact factor and the h-index. So how do you decide which of these metrics to rely on when choosing a journal? Well, the first step is understanding what each of these metrics is and what it tells you about the journal.

What is the impact factor?

Impact factor is a well-known and commonly used way to denote the quality or impact of a scientific journal. An impact factor is based on the number of citations that articles published in that journal receive over time, and provides an idea of how highly other scientists think of the work (or how much they trust the results) based on how often they refer to it in their own work.

A journal’s impact factor is calculated from three years’ worth of data: count the citations the journal received in a given year to the articles it published in the two preceding years, then divide that count by the total number of articles published in those two years. So, for example, a journal’s 2020 impact factor is the number of times that articles it published in 2018 and 2019 were cited in 2020, divided by the total number of articles it published in 2018 and 2019.
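The arithmetic above can be sketched in a few lines of Python (the figures here are hypothetical, purely for illustration):

```python
def impact_factor(citations_this_year, articles_prev_two_years):
    """Impact factor for a given year: citations received this year to
    articles from the two preceding years, divided by the number of
    articles published in those two years."""
    return citations_this_year / articles_prev_two_years

# Hypothetical journal: 150 articles published in 2018 and 170 in 2019,
# which together received 800 citations during 2020.
if_2020 = impact_factor(800, 150 + 170)
print(round(if_2020, 2))  # 2.5
```

Note how a single heavily cited paper inflates the numerator for every other paper in the journal, which is exactly the distortion discussed below.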

One obvious disadvantage of the impact factor calculation is that the resulting value can be skewed by just one or a few highly cited papers, so it doesn’t necessarily reflect the quality of each individual paper published in the journal. It also requires at least three years’ worth of data, so new journals must wait at least three years before receiving their first impact factor. Many journals feature their impact factor prominently on their homepage.

What is the h-index?

The h-index is a more recent metric, designed to offer a simpler way to represent quality, and ideally one that reflects the majority of papers published in a journal rather than a potential handful of highly cited ones. A journal’s h-index is the largest number h such that the journal has published h papers that have each been cited at least h times. So, for example, if a journal has published twenty papers that have each been cited at least twenty times (but not twenty-one papers cited at least twenty-one times each), then the journal’s h-index is 20.

If you’re finding this a little hard to picture, think of it visually: rank the papers from most to least cited, plot papers against citations, and find the largest square that fits under the resulting curve from the lower left-hand corner of the graph; the side length of that square is the h-index. h-index scores are listed in most of the commonly used journal databases, such as SJR.
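To make the definition concrete, here is a minimal sketch of the calculation (the citation counts are hypothetical):

```python
def h_index(citation_counts):
    """Largest h such that at least h papers have h or more citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # the top `rank` papers all have >= `rank` citations
        else:
            break
    return h

# Hypothetical citation counts for a journal's six papers:
print(h_index([25, 22, 20, 20, 4, 1]))  # 4
```

Here the fourth-most-cited paper has 20 citations (at least 4), but the fifth has only 4 (fewer than 5), so the h-index is 4. Because a single blockbuster paper can raise h by at most one, the metric is less sensitive to outliers than the impact factor.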

The h-index is not limited solely to assessing journals. In fact, it is often used to score individual researchers, countries, etc., as it conveniently incorporates a measure of productivity with one of quality (by accounting for the number of papers, not just the number of citations).

Is the h-index a better metric than the impact factor?

There is considerable debate in the research field regarding the best metric for assessing journal quality. While it is generally accepted that impact factor is an imperfect system for scoring and comparing journal quality, it is also widely recognised, used, and accepted as at least a reasonable approximation.

As mentioned above, the h-index is gaining popularity because the way it is calculated makes it more representative of the average paper published in a given journal, rather than primarily reflecting the most popular (i.e., most frequently cited) paper or papers in that journal. That being said, because it is a relatively recent metric (first proposed in 2005), fewer researchers are familiar with it, and they are therefore less likely to know what the h-index of a specific journal is or what it means.

An intriguing article recently published by The Royal Society explores how both researchers and journals can ‘game the system’ to increase their impact factor or h-index. Among the strategies mentioned in this paper are citing your own work in a paper or ‘exchanging’ citations with a colleague, both of which can inflate the number of citations a specific work receives without genuinely reflecting its impact. An important point to note here is that these strategies can be used to manipulate both the h-index and the impact factor; so, while h-index scores may be considered more reliable or accurate in some ways, they are just as prone to misuse as the more widely used impact factor.

If you are primarily interested in using the impact factor or h-index to choose a journal to submit to, the best approach is most likely to consider both scores, as each will give you useful, and slightly different, information about that journal’s quality. The impact factor is primarily useful as a measure of what other researchers think about the quality of that journal; that is, the journal’s reputation in the field. In contrast, the h-index is likely to give you a more realistic perspective on the relative ‘success’ of your paper if you publish in that journal, meaning how many times it may be cited by other works in the future.

Conclusion

As the scientific publishing field continues to evolve, it is likely that journal quality metrics will adapt and change as well, as researchers and publishers work to identify more and more reliable ways to assess research quality.


Charlesworth Author Services is a trusted brand supporting the world’s leading academic publishers, institutions and authors since 1928.

To learn more about our services, visit: Our Services

Visit our new Researcher Education Portal, which offers articles and webinars covering all aspects of your research-to-publication journey! And sign up for our newsletter on the Portal to stay updated on all essential researcher knowledge and information!

Register now: Researcher Education Portal

Maximise your publication success with Charlesworth Author Services.
