
Responsible Research Metrics

This page introduces the issues around responsible research evaluation and showcases some of the activities being undertaken by the University of Warwick to improve our practices in this area.

What are Responsible Metrics?

Essentially, this is about the use of numbers and data analysis in research assessment, particularly at the level of the individual researcher.

Familiar examples of ‘research metrics’ include research grant income, citation counts, and the journal impact factor (JIF).

If you are not familiar with the JIF, it is a measure of the mean number of citations to recent articles in a given journal. For example, the journal Nature has a very high JIF, as do Science and Cell.
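As a rough sketch of how this works, the JIF for a given year is the number of citations received that year to items the journal published in the previous two years, divided by the number of citable items from those two years. The figures below are invented purely for illustration, not real journal data:

    # Hypothetical illustration of a Journal Impact Factor calculation.
    # JIF for 2023 = citations received in 2023 to items published in 2021-2022,
    # divided by the number of citable items published in 2021-2022.
    citations_2023_to_2021_2022_items = 3500   # invented citation count
    citable_items_2021_2022 = 250              # invented count of articles and reviews
    jif_2023 = citations_2023_to_2021_2022_items / citable_items_2021_2022
    print(jif_2023)  # 14.0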

‘Responsible Use of Research Metrics’ is about setting out how we will use numerical measures and analysis appropriately and fairly, alongside more qualitative information, in research assessment to best support high quality research.

The SCOPE framework (from INORMS) for research evaluation is a five-stage model for evaluating responsibly. It is a practical step-by-step process designed to help research managers, or anyone involved in conducting research evaluations, in planning new evaluations as well as checking existing evaluations:

  • START with what you value
  • CONTEXT considerations
  • OPTIONS for evaluating
  • PROBE deeply, and,
  • EVALUATE your evaluation.

Benefits for Researchers

Metrics form part of an evolving and increasingly digital research environment, where data and analysis are important. However, the current description, production, and use of these metrics are experimental and open to misunderstanding. They can lead to negative effects and behaviours as well as positive ones.

Responsible metrics can be defined by the following key principles (outlined in The Metric Tide):

  • Robustness – basing metrics on the best possible data in terms of accuracy and scope.
  • Humility – recognising that quantitative evaluation should support, but not supplant, qualitative, expert assessment.
  • Transparency – ensuring that those being evaluated can test and verify the results.
  • Diversity – accounting for variation by research field, and using a range of indicators to reflect and support a plurality of research and researcher career paths across the system.
  • Reflexivity – recognising and anticipating the systemic and potential effects of indicators, and updating them in response.

Resources

Metrics Toolkit

The Metrics Toolkit provides evidence-based information about research metrics across disciplines, including how each metric is calculated, where you can find it, and how each should (and should not) be applied. You’ll also find examples of how to use metrics in grant applications, CVs, and promotion packages.

Snowball Metrics

Snowball Metrics is, crucially, a bottom-up initiative: it is owned by research-intensive universities, which ensures that its outputs are of practical use to them and are not imposed by organisations with potentially different aims, such as funders, agencies, or suppliers of research information.

University of Warwick and Metrics

The University of Warwick recognises the need to improve how the outputs of scholarly research are evaluated. These outputs are many and varied, including, but not limited to, research articles, reviews, books, monographs, data, reagents, software, intellectual property and trained young researchers. While institutions and funders need to be able to assess the quality and impact of research outputs, these must be measured accurately and evaluated wisely. There is a need to establish well-founded and academically supported criteria for evaluating primary research and other indicators of research that transparently inform hiring, probation and promotion policies across the University.

The University of Warwick wants to support staff in continuing to lead on the highest international standards for research assessment, and to enable the world-class research that drives innovation and benefits society.

Part of this is ensuring that we are using metrics responsibly. Central to this is basic good statistical practice: not relying on a single measure (where there’s a signal, there’s noise), recognising known and new limitations of available measurements (data quality and validity), and clearly communicating analysis undertaken (reproducibility).

Make sure metrics reflect the reach of your work

  • Check the WRAP to make sure all your research works are recorded there correctly
  • Maximise metrics by maximising the visibility of your research, and plan how and where to share your work from the earliest stage of your research
  • Make your work Open Access as soon as possible
  • Include Open Data reporting/references in the article
  • Register for and use an ORCiD to get consistent, reliable attribution of your work
  • Use a mixture of metrics and qualitative evidence; metrics are not yet at the stage where they can replace peer review or analysis of an output. Using the two in conjunction presents a more accurate picture of your work
  • Provide quantitative data in context and, where possible, appropriately normalised scores, which can give a better picture of what the number reflects (see the sketch after this list). For example:
    • a ‘3’ is meaningless on its own
    • a ‘3’ in a field of 100 ‘20s’ is a vastly different thing to a ‘3’ in a field of 100 ‘0.1s’.
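As a minimal sketch of what field normalisation means in practice, an output’s citation count can be divided by the average citations of comparable outputs (same field, publication year and output type). The averages below are invented for illustration; real services use more refined baselines:

    # Hypothetical field-normalisation sketch; all numbers are invented.
    my_citations = 3

    field_a_average = 20.0   # comparable papers in field A average 20 citations
    field_b_average = 0.1    # comparable papers in field B average 0.1 citations

    print(my_citations / field_a_average)   # 0.15 -> well below field A's average
    print(my_citations / field_b_average)   # ~30  -> far above field B's average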