Impact measurement is becoming increasingly prominent in universities: impact indicators, such as the journal impact factor and the H-index (for individual researchers), are used as tools in the allocation of research funds, in national research reviews, and even in job applications.
It’s important for researchers to know about these indicators: how they are calculated, the contexts in which they can and can’t be used, which ones can be compared and which definitely can’t, and how you can influence them yourself. These are the goals of this course: getting to know the indicators in use and their possible pitfalls.
The sources covered
In this course we have chosen to cover the three most used data sources for impact measurement:
- Web of Science
- Scopus
- Google Scholar
These databases all have their pros and cons. Web of Science and Scopus don’t cover all available journals and literature. Google Scholar covers more, but it’s impossible to know its boundaries (and they can change from day to day), and Google Scholar doesn’t take the source of a citation into account: a citation in a bachelor thesis available online counts just as heavily as a citation in a journal article by a leading scholar in the field.
These sources are the ones most commonly used. Alternative methods of impact measurement are being developed that make use of the possibilities the internet has to offer, such as taking into account the number of times articles are downloaded. You can read more about these methods in the report 'Users, narcissism and control – tracking the impact of scholarly publications in the 21st century'. One of its main conclusions is that these alternatives cannot legitimately be used in research assessments yet, because they don't meet sufficiently strict quality criteria.
Some practical notes about this course
The number of search results, cited references and the H-index may have changed since the last update of this course. Most pictures in the course can be enlarged by clicking them; they will then open in a new screen.
When looking at impact indicators you have to keep in mind that citation patterns differ between disciplines.
For example: the Web of Science coverage of the field of immunology is 93%, meaning that 93% of the references in this field in Web of Science refer to articles published in journals covered by Web of Science. For economics the coverage is 47%; for history it’s only 9%!
Source: Moed, H. F. (2005). Citation analysis in research evaluation. Dordrecht: Springer. pp. 129-130.
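Put as a formula (a sketch restating the definition in the example above), the internal coverage of a field is:

    \text{internal coverage} = \frac{\text{references in the field's Web of Science records that point to journals covered by Web of Science}}{\text{all references in the field's Web of Science records}} \times 100\%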
This influences the comparability of impact measurements: the journal impact factor of the highest-ranking journal in one discipline can be much lower than the journal impact factor of the highest-ranking journal in another discipline.
For example: ‘Behavioral & Brain Sciences’ has the highest impact factor in the Journal Citation Reports (JCR) subject category Biological Psychology. Its 2020 Journal Impact Factor is 12.579. The impact factor of the highest-ranking journal in the subject category History, ‘Journal of Economic History’, is 3.547. Both are the highest-ranking journal in their own subject category, but if you only look at the numbers you won’t notice!
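For reference, a journal impact factor for a given year is calculated as follows (this is the standard JCR definition, here written out for the 2020 edition used in the example above):

    \text{JIF}_{2020} = \frac{\text{citations received in 2020 by items the journal published in 2018 and 2019}}{\text{number of citable items the journal published in 2018 and 2019}}

A 2020 impact factor of 12.579 thus means that, roughly speaking, the journal’s 2018–2019 items were cited about 12.6 times each in 2020.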
Conclusion: within bibliometrics it’s important to ‘compare like with like’.
Suppose we have two articles: one by Author X, an economist, and one by Author Y, a geneticist. Author Y’s article has received more citations than Author X’s.
Can you say Author Y is performing better than Author X?
You have to take into account the citation cultures of their specific fields: how many citations does an article receive on average in those fields? These numbers are available in the Essential Science Indicators (ESI) of Clarivate Analytics, under Field Baselines.
How do you read these graphs?
The article of the economist belongs to the top 10% within its field; the article in the field of genetics doesn’t belong to the top 10% of its field yet (it’s 5 citations short).
Please note: these figures only give an indication. The number of ESI fields is limited (only 22 fields are identified), so the classification is not very precise.
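To make this kind of comparison concrete, here is a minimal sketch in Python of checking an article’s citation count against field percentile baselines. The field names and thresholds below are made up for illustration; the real values come from the ESI Field Baselines mentioned above and depend on the field and the publication year.

    # A sketch of comparing citation counts against field percentile baselines.
    # The thresholds below are hypothetical; real values come from the ESI
    # Field Baselines and differ per field and per publication year.
    BASELINES = {
        "Economics & Business": {1: 80, 10: 25, 50: 5},
        "Molecular Biology & Genetics": {1: 250, 10: 90, 50: 20},
    }

    def top_percentile(field, citations):
        """Return the best (smallest) percentile bracket the article reaches,
        or None if it does not even reach the top 50% of its field."""
        for percentile in sorted(BASELINES[field]):  # checks 1, then 10, then 50
            if citations >= BASELINES[field][percentile]:
                return percentile
        return None

    # With these hypothetical thresholds, an economics article with 30 citations
    # reaches the top 10% of its field, while a genetics article with 85 citations
    # does not (it is 5 citations short of the top-10% threshold of 90).
    print(top_percentile("Economics & Business", 30))           # -> 10
    print(top_percentile("Molecular Biology & Genetics", 85))   # -> 50

The point is the same as in the example above: the raw citation counts point one way, while the comparison against the field baselines can point the other.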