
Metrics Matter: Getting Top Value from Research Performance Data

Written by the Research Solutions Marketing Team | Aug 29, 2019 4:34:00 PM

Like it or not, the world of scholarly research is fiercely competitive. To be successful, a range of research stakeholders—from librarians to journal publishers to scientists—must continuously demonstrate the quality and impact of their research outputs. As a result, research performance metrics have become essential tools.

Perhaps most significantly, research performance metrics are used to demonstrate the return on research investment, which in turn helps attract and secure more research funding.

But getting funding isn't the only thing research metrics are used for. Here are just a few examples of how librarians and information managers at scientific corporations and academic institutions use performance metrics to support their institutional goals:

  • Understand what's been done in a particular area of study
  • Identify the best opportunities for progressing research and making the greatest impact
  • Drive decisions on whether to allocate or withdraw personnel, space, and other resources
  • Build data that can be used to showcase and promote the institution externally

Research performance metrics are used to support career-level goals, too. Individual researchers and scientists, for example, use impact metrics to showcase their own scholarly influence, as well as to identify and attract collaborators. And some academic librarians even include research performance data in their dossiers when seeking a promotion (or if applicable, tenure).

Using Research Performance Metrics Effectively

Like any type of metric, research performance metrics are only worthwhile when they provide meaningful, accurate information. And extracting that information isn't easy.

Certain metrics provide easy-to-understand data that's helpful for making quick decisions (e.g., using the at-a-glance Altmetric Donut to decide whether to purchase a specific peer-reviewed article). But for those who depend on research performance metrics to secure funding and inform more strategic decisions, extracting relevant, high-value information is a complex task. With so many types of metrics available, just choosing the right tools to use can be a challenge.

Here’s a high-level look at some of the most popular research performance metrics available today.

Types of Performance Metrics

Journal Impact Factor (JIF)

TYPE: Journal-level (Citation-based)

OVERVIEW: Aims to reflect a journal's performance by measuring the average number of citations received in the current year by articles the journal published during the previous two years.

CALCULATION: Citations Journal X received in the current year to articles published in the two previous years ÷ total number of articles Journal X published during those two years
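
For illustration, here is a minimal sketch of that arithmetic in Python, using made-up figures rather than data from any real journal:

    # Hypothetical 2019 Journal Impact Factor for "Journal X" (invented numbers)
    citations_2019_to_2017_2018 = 450   # citations received in 2019 by articles published in 2017-2018
    articles_2017_2018 = 180            # articles Journal X published in 2017 and 2018

    jif_2019 = citations_2019_to_2017_2018 / articles_2017_2018
    print(round(jif_2019, 2))           # 2.5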

PROS:

  • Simple calculation based on historical data

CONS:

  • Doesn’t adjust for the distribution of citations, increasing potential for skewed results
  • 2-year publication window is too short, resulting in significant variation from year to year
  • Not comparable across different subject areas due to different citation patterns among disciplines

CiteScore

TYPE: Journal-level (Citation-based)

OVERVIEW: Aims to reflect a journal's performance by measuring the average citations per document that a title receives over a three-year period. It considers all content published in a journal (not just peer-reviewed articles).

CALCULATION: Citations Journal X received in a given year to documents published in the previous three years ÷ total number of documents Journal X published in those three years
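
The same back-of-the-envelope sketch works for CiteScore; the figures below are invented, and the point to notice is that every document type (editorials, letters, news items, and so on) counts in both the numerator and the denominator:

    # Hypothetical 2018 CiteScore for "Journal X" (invented numbers)
    citations_2018_to_2015_2017 = 900   # citations in 2018 to anything published 2015-2017
    docs_2015_2017 = 600                # all documents published 2015-2017, not just research articles

    citescore_2018 = citations_2018_to_2015_2017 / docs_2015_2017
    print(round(citescore_2018, 2))     # 1.5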

PROS:

  • Free, no subscription required
  • Increased transparency (numerator and denominator both include all document types).

CONS:

  • Creates bias against journals that publish a lot of rarely cited content, like editorials, news, and letters
  • Not comparable across different subject areas due to different citation patterns among disciplines

Source Normalized Impact per Paper (SNIP)

TYPE: Journal-level (Citation-based)

OVERVIEW: Aims to reflect a journal's performance while accounting for differences in citation potential across fields

CALCULATION: Journal X's average citations per paper ÷ the citation potential of its subject field
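
The normalization step can be sketched the same way. The citation-potential values below are invented, but they show how two journals with identical raw citations per paper receive different SNIPs once the citation density of their fields is taken into account:

    # Hypothetical SNIP values for two journals in different fields (invented numbers)
    raw_impact_per_paper = 4.0          # same average citations per paper for both journals

    citation_potential_biomed = 5.0     # citation-dense field (made-up value)
    citation_potential_math = 2.0       # citation-sparse field (made-up value)

    snip_biomed_journal = raw_impact_per_paper / citation_potential_biomed   # 0.8
    snip_math_journal = raw_impact_per_paper / citation_potential_math       # 2.0
    print(snip_biomed_journal, snip_math_journal)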

PROS:

  • Allows for cross-discipline comparisons

CONS:

  • Normalization reduces transparency