
Course Research Impacts: Introduction


Impact measurement is becoming increasingly prominent in universities: impact indicators, such as the journal impact factor and the H-index (for individual researchers), are used as tools in the allocation of research funds, in national research reviews, and even in job applications.

It’s important for researchers to know about these indicators: how they are calculated, the contexts in which they can and cannot be used, which ones can be compared and which definitely cannot, and how you can influence them yourself. These are the goals of this course: to get to know the indicators in use and their possible pitfalls.
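To make the H-index concrete: a researcher has an H-index of h when h of their publications have each been cited at least h times. A minimal sketch in Python (the function name and the sample citation counts are our own, purely for illustration):

```python
def h_index(citations):
    """Largest h such that at least h publications
    have h or more citations each."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank  # this paper still supports an h-index of `rank`
        else:
            break
    return h

# Five papers cited 10, 8, 5, 4 and 3 times: four papers have at
# least 4 citations each, but there are no five papers with 5+.
print(h_index([10, 8, 5, 4, 3]))  # 4
```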

The sources covered

In this course we have chosen to cover the three most widely used data sources for impact measurement:

  • Web of Science and Journal Citation Reports – databases from Clarivate Analytics (formerly part of Thomson Reuters; the platform was previously known as ISI Web of Knowledge).
  • Scopus – a database from Elsevier.
  • Google Scholar – using Publish or Perish and Google Scholar Citations.

These databases all have their pros and cons: Web of Science and Scopus don’t cover all available journals and literature. Google Scholar covers more, but its boundaries are impossible to determine (and can change daily), and Google Scholar does not take the source of a citation into account (a citation in a bachelor’s thesis available online counts just as much as a citation in a journal article by a leading scholar in the field).

These sources are the ones most commonly used. Alternative methods of impact measurement are being developed, making use of the possibilities the internet has to offer, such as taking into account the number of downloads of articles. You can read more about these methods in the report 'Users, narcissism and control – tracking the impact of scholarly publications in the 21st century'. One of the main conclusions of this report is that these alternatives cannot legitimately be used in research assessments yet, because they do not meet sufficiently strict quality criteria.

Some practical notes about this course

The number of search results, cited references and the H-index may have changed since the last update of this course. Most pictures in the course can be enlarged by clicking them; they will then open in a new window.

Differences between disciplines

When looking at impact indicators you have to keep in mind that citation patterns differ per discipline:

  • In some disciplines the reference list of an average article is longer than in others: more citations are given, and thus more citations can be received within that discipline. This is called citation density.
  • Some disciplines cite recently published documents more frequently than others. This can influence the often-used Journal Impact Factor, which uses a two-year window. For some disciplines (like history) you should look at a longer time frame.
  • Publication channels differ per discipline. For example, conference proceedings are very important in scholarly communication in engineering and the applied sciences, and monographs in the humanities. In other disciplines journal articles are the most important channel.
  • For some research topics English is not the most used language, for example regional history or French philosophy.
  • A consequence of the last two points is that coverage in Web of Science and Scopus differs per discipline, since these databases mainly cover international, English-language journals.

For example: the coverage of the field of immunology in Web of Science is 93 percent: 93% of the references in this field in Web of Science refer to articles published in journals covered by Web of Science. For economics it is 47 percent; for history it is only 9 percent!

Source: Moed, H. F. (2005). Citation analysis in research evaluation. Dordrecht: Springer. pp. 129-130.

This influences the comparability of the impact measurements: the journal impact factor of the highest ranking journal in one discipline can be much lower than the journal impact factor of the highest ranking journal in another discipline.

For example: ‘Behavioral & Brain Sciences’ has the highest impact factor in the Journal Citation Reports (JCR) subject category Biological Psychology: its 2020 Journal Impact Factor is 12.579. The impact factor of the highest-ranking journal in the subject category History, ‘Journal of Economic History’, is 3.547. Both are the highest-ranking journal in their subject category, but looking at the numbers alone you would never notice!
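The two-year Journal Impact Factor behind these numbers is a simple ratio: citations received in year Y to items the journal published in years Y−1 and Y−2, divided by the number of citable items published in those two years. A sketch with made-up counts:

```python
def impact_factor(citations_to_prev_two_years, citable_items_prev_two_years):
    """Two-year JIF: citations in year Y to items from years Y-1 and Y-2,
    divided by the number of citable items published in Y-1 and Y-2."""
    return citations_to_prev_two_years / citable_items_prev_two_years

# Hypothetical journal: 150 citable items in 2018-2019, which together
# received 600 citations during 2020 -> a 2020 JIF of 4.0.
print(impact_factor(600, 150))  # 4.0
```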

Conclusion: within bibliometrics it’s important to ‘compare like with like’.

Examples of differences between disciplines

Suppose we have two articles:

  • Author X, an economist, published an article in 2005. Until 2012 this article received 37 citations within Web of Science.
  • Author Y, working in the field of genetics, published his article in 2007. This article received 44 citations within Web of Science, until 2012.

Can you say Author Y is performing better than Author X?

You have to take the citation cultures of their specific fields into account: how many citations does an article in those fields receive on average? These numbers are available in the Essential Science Indicators (ESI) of Clarivate Analytics, under Field Baselines.

How do you read these graphs?

  • The 1% line shows the minimum number of citations that the top 1% of papers, published in journals in this field in the year on the horizontal axis, have received.
    For example: in the field of Economics & Business, a paper published in 2005 that is cited at least 88 times belongs to the top 1% of this field.
  • The 10% line shows the minimum number of citations that the top 10% of papers, published in journals in this field in the year on the horizontal axis, have received.
    For example: in the field of Molecular Biology & Genetics, a paper published in 2008 has to have at least 37 citations to belong to the top 10%.
  • The Average Citation Rate line shows the number of times the average paper, published in that field in the year on the horizontal axis, has been cited.
  • The example shows the position of the example article in this graph.

The economist's article belongs to the top 10% within its field; the article in the field of genetics does not belong to the top 10% of its field yet (it is 5 citations short).
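The comparison above boils down to a threshold check against the ESI field baselines. A minimal sketch (the cutoff values below are not official ESI figures: the genetics cutoff of 49 is implied by the article being 5 citations short, and the economics cutoff is an assumption consistent with the example):

```python
def in_top_10_percent(citations, field_cutoff):
    """An article is in its field's top 10% when its citation count
    reaches the minimum (cutoff) from the ESI Field Baselines."""
    return citations >= field_cutoff

# Author X (economics, published 2005): 37 citations, assumed cutoff 37
print(in_top_10_percent(37, 37))  # True
# Author Y (genetics, published 2007): 44 citations, implied cutoff 49
print(in_top_10_percent(44, 49))  # False
```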

Please note: these figures only give an indication. The number of ESI fields is limited (only 22 fields are identified), so the baselines are not very precise.