How journal impact metrics work

Journal metrics help evaluate the quality and impact of a journal. But which ones should you use? And how are they calculated?

This guide shows how you can use knowledge of journal metrics to increase the visibility of your research. We’ll look at a range of tools that, used carefully together, can help you assess journal quality and make informed decisions about where to publish.

What you'll learn:

  • What is the Journal Impact Factor?

  • What is CiteScore?

  • How do JIF and CiteScore compare to each other?

  • What other metrics are available?

  • What is DORA?

What is the Journal Impact Factor (JIF)?

Introduced in 1955, the Journal Impact Factor (JIF) is a metric used to evaluate the relative importance of a journal within its field. It is calculated annually by Clarivate and published in the Journal Citation Reports. It reflects the average number of times articles a journal published in the past two years have been cited in the current year.

Example: If a journal published 351 citable articles in 2020 and 2021, and received 1124 citations in 2022, its 2022 JIF is 3.2 (1124 ÷ 351).
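The calculation above can be sketched in a few lines of Python, using the invented figures from the worked example (not real journal data):

```python
def journal_impact_factor(citations_this_year, citable_items_prev_two_years):
    """JIF for year Y = citations received in Y to items published in
    Y-1 and Y-2, divided by the number of citable items from Y-1 and Y-2."""
    return citations_this_year / citable_items_prev_two_years

# Figures from the example: 1,124 citations in 2022 to 351 citable items
# published in 2020-2021.
jif_2022 = journal_impact_factor(1124, 351)
print(round(jif_2022, 1))  # 3.2
```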

[Infographic: how the Journal Impact Factor is calculated]

The JIF has become a widely used tool for gauging journal influence.

In 2024, the Journal Citation Reports made significant changes to improve transparency and inclusivity for JIFs, allowing fairer comparisons across publication platforms.

  • Percentiles and quartiles now apply to journals in the Emerging Sources Citation Index (ESCI).

  • Journals in the same subject category are now ranked together, regardless of which index they appear in. AHCI-only categories are excluded from this. See more about Clarivate’s categories for journals

What are the limitations of the Journal Impact Factor?

Journal Impact Factor is the most well-known metric. While it is widely used, it’s important to remember that it has limitations. It should be considered alongside other metrics for a comprehensive evaluation of journal quality and impact.

  • Time span: In some fields, two years may be too short to judge the impact of research.

  • Citation distribution: JIF is an average value and doesn't account for the spread of citations. A single highly cited article can disproportionately inflate the JIF, skewing the result.

  • Selective indexing: Only citations from journals indexed in Web of Science are counted, which includes only a portion of journals in a field, missing many citations.

  • Yearly variation: JIFs can significantly vary year by year, especially for smaller journals. This fluctuation is more prominent in journals that publish fewer articles annually.
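The citation-distribution point is easy to see with a toy example (invented numbers): one highly cited article pulls the average far above what a typical article in the journal received.

```python
# Per-article citation counts for a small hypothetical journal:
# six ordinary articles plus one "viral" one.
citations = [0, 1, 1, 2, 2, 3, 120]

mean = sum(citations) / len(citations)          # what a JIF-style average sees
median = sorted(citations)[len(citations) // 2]  # what a typical article got

print(round(mean, 1))  # 18.4
print(median)          # 2
```

This is why an average-based metric like JIF can be misleading for individual articles: most papers in this journal were cited at most three times.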

What is CiteScore?

Introduced in 2016, CiteScore offers a more comprehensive view of a journal’s influence. It uses a broader four-year citation window instead of the two-year period used by JIF. This gives an evaluation of a journal's performance over time.

CiteScore uses citation and publication data from Scopus and includes more document types, such as articles, reviews, and conference papers. This ensures a more accurate reflection of a journal's contribution to its field.

Example: A journal publishes 714 documents from 2019 to 2022 and receives 3,324 citations in that window. That would give it a 2022 CiteScore of 4.7 (3324 ÷ 714).
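As with JIF, the arithmetic is a simple ratio. A minimal sketch using the invented figures from the example above:

```python
def citescore(citations_in_window, documents_in_window):
    """CiteScore = citations received in a four-year window to documents
    published in that window, divided by the number of documents."""
    return citations_in_window / documents_in_window

# Figures from the example: 3,324 citations to 714 documents (2019-2022).
cs_2022 = citescore(3324, 714)
print(round(cs_2022, 1))  # 4.7
```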

[Infographic: how the CiteScore is calculated]

What are the limitations of CiteScore?

As with the JIF, there are several reasons CiteScore shouldn't be used as the sole measure of journal quality.

  • Long citation window: Looking at citations over four years might delay the recognition of new research in favor of older, highly cited articles.

  • Document type saturation: Including all document types, such as reviews and editorials, can inflate scores and make comparisons with research-focused metrics less accurate.

  • Selective data source: Dependence on Scopus data may lead to underrepresenting certain fields and journals.

  • Field variation: CiteScore can be affected by different citation practices across disciplines, as some fields naturally accumulate citations more slowly.

What’s the difference between JIF and CiteScore?

|                    | Journal Impact Factor           | CiteScore                                              |
| ------------------ | ------------------------------- | ------------------------------------------------------ |
| Data source        | Web of Science                  | Scopus                                                 |
| Citation window    | 2 years                         | 4 years                                                |
| Documents included | Peer-reviewed articles, reviews | More inclusive (e.g. conference papers and editorials) |
| Update frequency   | Annual                          | Annual                                                 |

JIF is typically preferred by:

  • established journals that have built a community.

  • specific disciplines with quicker citation turnover.

  • journals that publish fewer but more selective papers, providing more favorable statistics.

CiteScore is typically preferred by:

  • newer journals, as it accounts for four years’ worth of citations, not just the most recent two.

  • interdisciplinary research with varied citation practices.

  • fields where citations take longer to accumulate due to longer research cycles.

What are quartiles?

Quartiles are used to assess the impact of citations for academic journals. They do this by ranking journals in the same discipline – based on their JIF or CiteScore – into four groups: Q1 (top 25%), Q2 (25%-50%), Q3 (50%-75%), and Q4 (bottom 25%). Researchers can use quartiles as a quick way to determine how a specific journal compares to others in its field.
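The ranking rule can be sketched as follows, with invented JIF values for four hypothetical journals in one subject category:

```python
import math

def quartile(rank, total):
    """Assign a quartile from a journal's rank within its category:
    Q1 = top 25% by rank, Q4 = bottom 25%."""
    return "Q" + str(math.ceil(4 * rank / total))

# Invented JIF values for illustration only.
jifs = {"Journal A": 8.1, "Journal B": 3.2, "Journal C": 1.4, "Journal D": 0.6}

ranked = sorted(jifs, key=jifs.get, reverse=True)
for position, name in enumerate(ranked, start=1):
    print(name, quartile(position, len(ranked)))
# Journal A Q1, Journal B Q2, Journal C Q3, Journal D Q4
```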

Are there citation metrics other than JIF and CiteScore?

JIF and CiteScore are the most well-known methods, but there are others. It’s always important to use a range of tools together to assess the quality and merit of journals and articles.

Here’s a quick summary of some other tools:

| Metric | What it can tell you |
| --- | --- |
| Journal Citation Indicator (JCI) | A single journal-level metric that is field-normalized, making it easier to compare across disciplines. |
| Category Normalized Citation Indicator (CNCI) | Whether a publication has received more or fewer citations than expected, based on the average for its field. |
| H-index | The impact of an individual researcher’s productivity and citations. |
| Altmetric | A document’s popularity through social media, news, and online attention, rather than long-term academic value. |
| Eigenfactor | A journal’s influence in its field over five years; citations from highly cited journals carry more weight than those from less frequently cited ones. |
| Article Influence Score | The average influence of a journal’s articles over the first five years after publication; based on Web of Science citations and included in the Journal Citation Reports. It does not provide immediate feedback on recent publications. |
| Article-level metrics (ALMs) | The performance of individual articles, rather than the journal as a whole, based on factors such as downloads, citations, and social media mentions. |
| SCImago Journal Rank (SJR) | A journal’s standing based on the quality and reputation of the citing journals, using data from the Scopus database. |
| SNIP (Source Normalized Impact per Paper) | Citation impact, accounting for variations in citation practices across different disciplines. |

When are other metrics useful?

No single metric tells the whole story, so an awareness of different tools can help to highlight different types of influence.

For early-career researchers with no citation count: Tools like Altmetric data and article-level metrics can capture the immediate impact and engagement of your work, including online discussions and media coverage.

For interdisciplinary work: Field-normalized metrics such as SNIP or the Journal Citation Indicator (JCI) account for differing citation practices across fields, providing a fairer assessment of research impact.

For individual articles: ALMs are ideal for assessing the impact of individual articles rather than the journal as a whole. This highlights significant contributions that might be overlooked in a high-JIF journal.

For broadening impact assessment: Altmetric captures a wider range of impacts, including public engagement and policy influence, that traditional citation metrics can’t. Especially useful in fields that prioritize societal impact.

Using JIF and CiteScore alongside other qualitative and quantitative indicators helps you evaluate research impact and quality comprehensively, for several reasons.

  • Field bias: Citation rates vary significantly between disciplines.

  • Short-term focus: Two or four years may not capture long-term value.

  • Citation manipulation: Some journals boost metrics through excessive self-citation.

  • Averages are misleading: A few highly cited articles can distort a journal’s overall score.

What is the Journal Citation Reports (JCR)?

Journal Citation Reports (JCR) is an annual publication from Clarivate, giving details of the academic journals indexed in the Web of Science database.

JCR provides detailed citation information, helping users detect patterns and trends, and ranks journals within specific subject categories. In turn, this can help researchers make informed decisions about where to submit their work.

What is the San Francisco Declaration on Research Assessment (DORA)?

Signed by Frontiers and many other publishers and institutions, DORA is a set of recommendations created in 2012 to promote a more inclusive and fair system for evaluating research. Its main recommendations:

  • Avoid journal-based metrics like JIF in funding, hiring and promotion: DORA encourages assessing research on its own merits instead.

  • Recognize a broader range of outputs: Data, software and other non-traditional research products should be given appropriate credit.

  • Evaluate research on its own merits: Focus on the article, rather than the journal’s reputation.

  • Be transparent about how research is assessed: Institutions should clearly state their criteria for evaluating research.

The main point to take away about journal metrics

Understanding impact metrics is crucial for evaluating academic journals' quality and influence. However, all metrics measure and demonstrate something different.

Using multiple metrics together can give you a more comprehensive and fair understanding of research impact. This can help you evaluate journals and choose where to publish your work.
