Journal level metrics measure the influence of a journal, taking into account the number of citations received by articles published in the journal.
These metrics, originally developed to investigate the scholarly communication system, are nowadays used for a variety of purposes.
Several journal level metrics exist, with different calculation methods and based on different underlying datasets. The main journal level metrics are the Journal Impact Factor, the CiteScore, the SJR (SCImago Journal Rank) and the SNIP (Source Normalized Impact per Paper). The chapters below introduce these metrics: how are they calculated, where can you find them, and how can you use them? Due to differences in citation patterns between disciplines, you can't always use a journal level metric to compare journals from different disciplines.
Be aware that journal level metrics are not an indicator of the quality of individual articles within a journal, or of the quality of a researcher. There is a lot of discussion about the use of journal level metrics. Watch, for example, how Nobel laureates speak out against the role of impact factors in research in the video below.
Why do journal level metrics matter for PhD-candidates?
The number of citations in the JCR year to items published in the two previous years, divided by the total number of articles and reviews published in those two years.
In 2017 the Journal of Happiness Studies received 425 citations to items published in 2015 and 2016. In 2015 and 2016 the journal published a total of 214 articles and reviews. The Journal Impact Factor of the Journal of Happiness Studies is therefore 425/214 = 1.986.
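The calculation above can be sketched in a few lines of code, using the Journal of Happiness Studies figures quoted here:

```python
# Worked example: the two-year Journal Impact Factor (JIF),
# using the Journal of Happiness Studies figures quoted above.
citations_2017 = 425        # citations in the JCR year (2017) to items from 2015-2016
citable_items = 214         # articles and reviews published in 2015 and 2016

jif = citations_2017 / citable_items
print(f"JIF 2017: {jif:.3f}")  # JIF 2017: 1.986
```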
The Journal Impact Factor (JIF) is published in the Journal Citation Reports (JCR) of Clarivate Analytics (formerly Thomson Reuters). You can search for a particular journal and create lists by categories. JIFs from previous years (going back to 1997) are available in the Journal Profile page of a journal.
Be aware that you often can't use the Journal Impact Factor to compare journals directly. The magnitude of the JIF depends on the citation culture within a discipline: the number of citations given and the age of those citations differ per discipline. The JIF doesn't take these differences into account: it's not field normalized. Therefore a JIF of 2.000 can be high in one discipline, but relatively low in another.
To compare journals from different disciplines, you can use the quartile or the JIF percentile. These metrics show how a journal is performing compared to the other journals within the same category. When the journal is in the highest quartile (Q1), it is within the top 25% of the journals in its category. When you check the quartiles of two journals that both have a JIF of 2.000, you can compare them.
You can find the quartile and percentile scores of a journal on the Journal Profile page in the JCR, under the header ‘Rank’ below the Key Indicators table. A journal can be in multiple categories; the quartile and percentile scores can differ per category.
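As a minimal sketch of how a rank within a category translates into a quartile and a percentile: the formula (N − R + 0.5) / N is, to my knowledge, the convention JCR uses for the JIF percentile, but treat the function names and the example numbers below as hypothetical illustrations.

```python
import math

def jif_percentile(rank: int, n_journals: int) -> float:
    """Percentile of a journal ranked `rank` (1 = highest JIF) among
    `n_journals` in its category; (N - R + 0.5) / N is the assumed convention."""
    return (n_journals - rank + 0.5) / n_journals * 100

def jif_quartile(rank: int, n_journals: int) -> str:
    """Quartile by rank position: Q1 = top 25% of the category."""
    return f"Q{math.ceil(4 * rank / n_journals)}"

# Hypothetical example: a journal ranked 12th out of 100 in its category.
print(jif_percentile(12, 100))  # 88.5
print(jif_quartile(12, 100))    # Q1
```

A journal listed in two categories would simply get two (rank, quartile, percentile) triples, one per category, which is why the same JIF can correspond to Q1 in one category and Q3 in another.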
CiteScore counts the citations received in a four-year window to articles, reviews, conference papers, book chapters and data papers published in those four years, and divides this by the number of those document types published in the same four years.
The Journal of Happiness Studies received 2,220 citations in the years 2016 to 2019 to articles, reviews, conference papers, book chapters and data papers published in 2016, 2017, 2018 and 2019. In 2016, 2017, 2018 and 2019 the journal published in total 476 articles, reviews, conference papers, book chapters and data papers, indexed by Scopus. The CiteScore 2019 of the Journal of Happiness Studies is 2,220/476 = 4.7.
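The CiteScore calculation above can be sketched in the same way, again using the Journal of Happiness Studies figures quoted here:

```python
# Worked example: CiteScore 2019 for the Journal of Happiness Studies,
# using the figures quoted above.
citations_2016_2019 = 2220  # citations in 2016-2019 to documents published 2016-2019
documents_2016_2019 = 476   # articles, reviews, conference papers,
                            # book chapters and data papers, indexed by Scopus

citescore = citations_2016_2019 / documents_2016_2019
print(f"CiteScore 2019: {citescore:.1f}")  # CiteScore 2019: 4.7
```

Note that CiteScore is reported to one decimal place, so the exact ratio (about 4.664) is rounded to 4.7.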
If a journal doesn't have a publication history of four years within Scopus, Scopus calculates the CiteScore based on the available articles and the citations those articles received. This shorter window is not indicated on the Source details page in Scopus; see for example the Annual Review of Criminology, which has been indexed in Scopus since 2018. Its 2019 CiteScore is based on the articles published in 2018 and 2019.
Please note: the calculation method of CiteScore was changed in June 2020, see https://blog.scopus.com/posts/citescore-2019-now-live for more information. The older CiteScores (before 2019) were also recalculated using this new method.
The CiteScore of a journal is available in Scopus from 2011, on the Source details page of the journal. You can also visit the freely available website https://www.scopus.com/sources.
The CiteScore of a journal doesn’t take differences in citation cultures between disciplines into account, it’s not field normalized. Therefore you can’t use the CiteScores to compare journals from different disciplines. To compare journals you can use the CiteScore rank or percentile – they indicate the relative standing of a journal in its subject field.
The CiteScore rank and percentile are available on the Source details page of the journal in Scopus. A journal can be assigned to multiple categories – in that case the journal will have multiple ranks and percentiles.
The video Scopus Tutorial: CiteScore metrics in Scopus (2.14 min) explains how CiteScore is calculated and how you can use it.
Please note: this video was created in 2019; the calculation method it describes is no longer used.
The SCImago Journal Rank is a measure of scientific influence of journals that accounts for both the number of citations received by a journal and the importance or prestige of the journals where the citations come from. The SJR indicator expresses the average number of weighted citations received in the selected year by the documents published in the selected journal in the three previous years, i.e. weighted citations received in year X to documents published in the journal in years X-1, X-2 and X-3. A detailed description of the calculation is available here.
Due to the iterative calculation method it’s impossible to calculate the SJR of a journal yourself.
The most recent SJR of a journal is available in Scopus, on the Source details page of the journal, or on the freely available website https://www.scopus.com/sources. The SJR website - https://www.scimagojr.com/ - shows the historical values of the SJR (from 1999).
See for example the SJR of the Journal of Happiness Studies.
The SJR is field normalized: the differences in citation practice between subject fields are evened out in the calculation. This means that you can use the SJR values to compare journals from different disciplines directly.
SNIP is the ratio of a source's average citation count per paper and the citation potential of its subject field. SNIP measures a source’s contextual citation impact by weighting citations based on the total number of citations in a subject field. Links to a detailed description of the methodology can be found here. It is not possible to calculate the SNIP of a journal yourself.
The most recent SNIP of a journal is available in Scopus, on the Source details page of the journal, or on the freely available website https://www.scopus.com/sources. The CWTS Journal Indicators website - http://www.journalindicators.com/ - shows the historical values of the SNIP (from 2006).
See for example the SNIP of the Journal of Happiness Studies.
SNIP is field normalized: it takes into account the characteristics of the subject field, here defined as the set of documents citing that source. It considers the length of the reference lists, the speed at which citation impact matures, and the extent to which the database used covers the field’s literature. This means that you can use the SNIP to compare journals from different disciplines directly.
Next to the Journal Impact Factor, CiteScore, SJR and SNIP you might encounter other journal level metrics, for example on the homepage of a journal or in an e-mail you receive from a publisher or editor.
Be aware that fake, bogus or predatory journal metrics also exist. Examples are the CiteFactor and the JIFACTOR. An example of a journal with a bogus impact factor on its homepage can be found here. In most cases editors can submit their journal to obtain such an impact factor, often for a fee. These metrics are sometimes used by predatory publishers to give a predatory journal an official touch.
When in doubt, check whether you can find a detailed description of how the impact factor is calculated and whether the underlying data is available. The publisher Wiley has offered some guidelines on how to spot fake metrics. You can also ask the University Library to do a check.