The number of publications or research outputs produced.
Raw citation count
Can be sourced from SciVal/Scopus, Web of Science, Dimensions, PubMed, etc. Most publishers' websites will display a citation count, either sourced from a provider such as Scopus or from their own databases.
The number of citations a paper or set of papers has received. Citation-based metrics should not be interpreted as a direct measure of research quality.
A common assumption is that a high number of citations means that a paper is a good piece of research and/or has had a positive impact. However, this is not always the case, for the following reasons:
- Citations can be negative, positive, neutral or even essentially random; unless we examine the citing texts themselves, we cannot know their intent.
- Citation practices vary across fields, and counts in particular fields can be strongly influenced by self-citation.
- Some authors deliberately cite their own or colleagues' articles even if the relevance is rather questionable. It has been shown that men tend to self-cite more than women (Maliniak, Powers, & Walter, 2013); therefore the metric can be artificially inflated and – when used for research assessment – furthers the disadvantage against non-male groups.
- Certain types of output tend to attract more citations than others – reviews, for example, are cited more than traditional research articles. A single output of a highly cited type can therefore inflate an author's overall citation count significantly, and can create the impression that their other works have not done 'as well'.
- Citations are not reviewed and removed when the citing articles are retracted.
All the above leads to skewing of the numbers.
Field Weighted Citation Impact
Sourced from SciVal using Scopus data.
FWCI is a (mean) average citation metric calculated as the ratio of the citations a paper or set of papers has received to the total citations that would be expected based on the average for the subject field (for documents of a similar age, discipline and type). An FWCI of 1.00 means that the output performs exactly as expected against the global average; an FWCI of 1.44 means it is cited 44% more than expected. The citation window for inclusion in the calculation is 'received in the year in which an item was published and the following three years'.
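As a worked illustration of the ratio described above, the following sketch uses invented citation figures – the function name and the numbers are hypothetical, not real Scopus data:

```python
# Illustrative sketch of the FWCI calculation.
# The citation counts and expected value below are invented, not Scopus data.

def fwci(citations_received: int, expected_citations: float) -> float:
    """Ratio of actual citations to the field-, age- and type-matched expectation."""
    return citations_received / expected_citations

# A paper cited 18 times, where comparable papers average 12.5 citations:
print(round(fwci(18, 12.5), 2))  # 1.44, i.e. 44% more cited than expected
```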
FWCI is a better metric than a raw citation count or a journal-based metric because it measures the citation impact of the output itself, not the journal in which it is published, and it compares like with like (outputs of the same age and type as classed by Scopus). As a mean average, however, the FWCI is susceptible to skew from outlying values and can fluctuate considerably, and as such should only be used with large and stable (>300 papers) datasets.
The Field Weighted Citation Impact is not a robust metric when applied to an individual researcher profile. The nature of mean average citation-based metrics is that there are a few highly cited outputs and many that receive no citations; the dataset has a heavily skewed distribution.
See more about FWCI at the SciVal Support Centre.
Field Citation Ratio
Sourced from Dimensions using Dimensions data.
FCR is a citation average similar to FWCI, but sources data from the Dimensions database. It is calculated by dividing the number of citations a paper has received by the average number received by documents published in the same year and in the same Fields of Research (FoR) category. It is calculated for all publications in the Dimensions database which are at least 2 years old and were published from 2000 onwards.
As with FWCI, the FCR is not a robust metric when applied to an individual researcher profile. The nature of mean average citation-based metrics is that there are a few highly cited outputs and many that receive no citations; the dataset has a heavily skewed distribution.
Note that Fields of Research (FoR) is also a Dimensions categorisation.
Publications in top percentiles of cited publications - field-weighted
Sourced from SciVal using Scopus data.
The number of publications of a selected entity that are highly cited, having reached a particular threshold of citations received (top 1%, 5%, 10% or 25%).
This metric counts and ranks the number of citations for all outputs worldwide covered by the Scopus dataset for its publication year. Percentile boundaries are calculated for each year, meaning an output is compared to the percentile boundaries for its publication year, and can be normalised by field.
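A minimal sketch of how a percentile boundary might be derived, assuming a toy list of worldwide citation counts for a single publication year (real boundaries are computed from the full Scopus dataset, per year and per field):

```python
# Hypothetical sketch of flagging outputs in a top citation percentile.
# The small citation list below is invented for illustration only.

def top_percentile_threshold(citations_by_year, year, top_pct):
    """Minimum citation count needed to sit in the top `top_pct`% for `year`."""
    counts = sorted(citations_by_year[year], reverse=True)
    cutoff_index = max(1, round(len(counts) * top_pct / 100)) - 1
    return counts[cutoff_index]

world = {2021: [0, 0, 1, 2, 2, 3, 5, 8, 13, 40]}  # invented citation counts
threshold = top_percentile_threshold(world, 2021, 10)
print(threshold)        # only outputs with at least this many citations are top 10%
print(13 >= threshold)  # a paper with 13 citations falls outside the top 10% here
```

Note that each output is compared only against the boundary for its own publication year, which is what makes outputs of different ages comparable.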
Data are more robust as the sample size increases (comparing a unit to one of a similar size is more meaningful than comparing one researcher to another) and are normalised by field. It can be used to distinguish between entities where other metrics such as number of outputs or citations per output are similar.
When using this metric, ensure that you are working with field-weighted data, and with 'percentage of papers' rather than 'total value of papers', especially when benchmarking entities of different sizes.
Relative Citation Ratio
Sourced from Dimensions using Dimensions data for PubMed publications.
RCR is a citation-based measure of the scientific influence of a publication. It is calculated as the citations of a paper, normalised to the citations received by NIH-funded publications in the same area of research and year. The RCR is not available for all outputs, as it is calculated only for those listed in PubMed. Caution should therefore be applied to ensure that it is an appropriate metric for your dataset in terms of coverage.
The RCR is calculated for all PubMed publications which are at least 2 years old. Values are centred around 1.0, so that a publication with an RCR of 1.0 has received the same number of citations as would be expected based on the NIH norm, while a paper with an RCR of 2.0 has received twice as many citations as expected.
As with FWCI, the RCR is not a robust metric when applied to an individual researcher profile. The nature of mean average citation-based metrics is that there are a few highly cited outputs and many that receive no citations; the dataset has a heavily skewed distribution.
Use of the h-index is to be avoided at the University of Liverpool. It may be found in external material, sourced from Scopus, SciVal, Web of Science or Google Scholar.
The h-index is the number of publications (n) by a researcher which have each received at least that same number (n) of citations. An h-index of 10 indicates a researcher with 10 papers that have each received at least 10 citations; the researcher will reach an h-index of 11 when 11 of their papers have each received at least 11 citations.
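The definition above can be sketched as a short calculation; the citation counts below are invented for illustration:

```python
# Sketch of the h-index: the largest h such that the researcher has
# h papers with at least h citations each. Citation counts are invented.

def h_index(citations):
    """Return the h-index for a list of per-paper citation counts."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # this paper still has at least `rank` citations
        else:
            break
    return h

print(h_index([10] * 10))         # 10: ten papers with ten citations each
print(h_index([25, 8, 5, 3, 3, 0]))  # 3: three papers with at least 3 citations
```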
There are a number of issues with the h-index:
- It is based on cumulative productivity, and therefore favours those with long, uninterrupted careers. It discriminates against early career researchers, women and those with caring responsibilities, part-time researchers, and those who have taken a career break.
- It can be manipulated by self-citations. It has been shown that men tend to self-cite more than women (Maliniak, Powers, & Walter, 2013); therefore the metric can be artificially inflated and – when used for research assessment – furthers the disadvantage against non-male groups.
- It does not account for disciplinary differences.
- It favours senior researchers whose names appear by default on articles published by their junior colleagues.
For an informative infographic on the key issues with the h-index, please see: https://www.leidenmadtrics.nl/articles/halt-the-h-index.