Guiding Principles (Dos and Don'ts)

Metrics are limited and should be used only as an addition to a thorough expert assessment. Carefully selected metrics can provide supporting evidence in decision making as long as they are used in the right context and not in isolation.

Quick overview:
1. Never use journal-based metrics to evaluate individual outputs.
2. Never mix and match metrics from different providers.
3. Always use qualitative indicators such as peer review in conjunction with metrics.
4. Be aware of the caveats of each metric.
5. Be aware that citation indices only cover journals they have pre-approved; this includes an impressive number of journals, but certainly not all of them, and exclusion is not always a sign of low quality.

When we use metrics, we should:

  • Use metrics related to the individual output (article-based metrics, e.g. the Field-Weighted Citation Impact) rather than the venue of publication (journal-based metrics, e.g. Journal Impact Factor™, SJR or SNIP) or the author (e.g. the h-index).
  • Be clear and transparent about the metric methodology we use. If a source does not give information about the origins of its dataset (as is the case with Google Scholar, for example), it should not be treated as reliable.
  • Be explicit about any criteria or metrics being used and make it clear that the content of the paper is more important than where it has been published.
  • Use metrics consistently - don't mix and match the same metric from different providers or products in the same statement.

For example: don't use article metrics from Scopus for one set of researchers and article metrics from Web of Science for another set. Why? Because the providers may well use different data sources to reach their numbers, so it would be comparing giraffes to penguins: both are animals (metrics), but the reasons for their neck length (citation count) lie in very different evolutionary histories (data sources), and comparing their necks out of context tells you nothing except that one is shorter than the other.

  • Compare Like with Like - an early career researcher's output profile will not be the same as that of an established professor, so raw citation numbers are not comparable.

For example: the h-index does not compare like for like, as it favours researchers who have been working in their field for a long time with no career breaks.
Imagine evaluating football players solely on the number of matches they have played and the number of goals they have scored. This assessment, akin to the h-index, gauges a player's impact from the matches played (representing publications) and the goals scored (representing citations).
While this approach might give a broad indication of a player's contribution to the team's success, it overlooks crucial aspects of their skill, teamwork and versatility on the field, as well as their career stage. Just as a player might have an impressive goal tally yet lack defensive skills or teamwork, the h-index might highlight a strong publication and citation record without reflecting the overall influence, diversity or quality of a researcher's contributions to their field; and just as this approach disadvantages young players, or those who have had to take time out to recover from injuries, the h-index disadvantages early career researchers and those with career breaks. The sketch below makes this concrete.
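To make the analogy concrete, here is a minimal sketch of how the h-index is calculated, written in Python with invented citation counts (the numbers and researcher profiles are illustrative assumptions, not real data). It shows why the index is capped by the number of papers, so a researcher early in their career, or returning from a career break, cannot score highly however well cited each paper is.

```python
# Hypothetical illustration only: a minimal h-index calculation with
# invented citation counts (not real data).

def h_index(citations):
    """Return the largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# An established professor with many papers accumulated over a long career...
established = [120, 80, 45, 40, 33, 25, 20, 18, 15, 12, 10, 9, 7, 5, 3]
# ...versus an early career researcher with a few heavily cited papers.
early_career = [150, 90, 60]

print(h_index(established))   # 10
print(h_index(early_career))  # 3 -- capped at the number of papers,
                              # however heavily each one is cited
```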

  • Consider the value and impact of all research outputs, such as datasets, software and exhibitions, rather than focussing solely on research publications, and consider a broad range of impact, such as influence on policy, alongside alternative metrics.


Which metrics should I use and why?

For an idea of the pros and cons of each metric, please visit the Guide to Metrics page.


Using Metrics responsibly

What does this mean for me?

The responsible use of metrics is not a standalone activity. To achieve an inclusive and sustainable research environment, the responsible use of metrics should become embedded in our everyday activity as researchers and as a university. This means that you will see information about the responsible use of metrics across activities such as recruitment, PDR, grant applications and research assessment.

Key considerations when using metrics

There are a few fundamental questions you should ask yourself when looking towards using metrics:

  • Think about what you are looking to measure: "Can it be measured by metrics, or am I using them as a proxy for something other than what they actually measure?" Examples include using a citation count as a measure of impact, without taking into account the context of those citations; or using a Journal Impact Factor as a proxy for the quality of research outputs published in that journal.
  • "Am I using the most appropriate metrics for the situation?" Considerations include the type of metric used, the sample size, and the age of what you are measuring, and the limitations inherent to any metric.  As examples, any averaging metric (such as FWCI) will be easily skewed by outliers when working on a small sample size (such as an individual); citation-based metrics are not best used for measuring current performance, as citations take time to accrue.
  • "Am I using an appropriate range of metrics and other means of analysis?" The question that we are usually looking to answer when assessing research is not one that can be answered by a single metric.  A 'basket of metrics' approach means that we can look at the question from a number of angles, providing context, and should be supported by other qualitative means of assessment including expert opinion.

In Recruitment activity

Academic job descriptions should be inclusive in their language and requirements, allowing for applicants from a range of backgrounds and experience to demonstrate their suitability for the post.  This may take the form of asking candidates to demonstrate the reach and impact of their research rather than relying on bibliometrics, and in particular journal-based metrics, as proxy indicators of quality.

When shortlisting applicants, staff should be aware of implicit and unconscious biases about what constitutes "excellent" or "quality" research. In particular, in line with our commitment to DORA, journal-based metrics such as the Journal Impact Factor should not be used as a surrogate measure of the quality of an applicant's individual research outputs.


In PDR

Line managers of research staff can access a new section of the PDR training module, available online in Canvas. This short addition walks you through the principles of our responsible use of metrics policy and offers examples specific to the PDR discussion. The training module is also suitable for PS staff who may prepare material, including metrics, ahead of colleagues' PDRs.