Bibliometrics offer useful quantitative measures of citation impact, but on their own they do not give a complete picture of research impact.
There is increasing recognition worldwide of the importance of using bibliometrics responsibly in research assessment. Several frameworks have emerged to support this shift, amongst them ‘The Metric Tide’, the ‘Leiden Manifesto for Research Metrics’, and most notably ‘The San Francisco Declaration on Research Assessment’ (DORA).
It is therefore essential that bibliometrics are used responsibly.
Several researchers and organizations have proposed standards and recommendations for best practice. Specific recommendations vary, but the major frameworks below share common principles:
San Francisco Declaration on Research Assessment (DORA) - recommendations, organised along three themes, for funding agencies, institutions, publishers, researchers, and organizations that supply metrics.
The Leiden Manifesto for Research Metrics - 10 principles to guide research evaluation.
The Metric Tide: Final Report with Executive Summary of the Independent Review of the Role of Metrics in Research Assessment and Management, which sets out these dimensions of responsible metrics (p. X):
Robustness: basing metrics on the best possible data in terms of accuracy and scope;
Humility: recognising that quantitative evaluation should support – but not supplant – qualitative, expert assessment;
Transparency: keeping data collection and analytical processes open and transparent so that those being evaluated can test and verify the results;
Diversity: accounting for variation by field and using a range of indicators to reflect and support a plurality of research and researcher career paths across the system;
Reflexivity: recognising and anticipating the systemic and potential effects of indicators, and updating them in response.
The DORA organization has released a guide for research institutions, "SPACE, a rubric for analyzing institutional conditions and progress indicators," which assesses progress at foundation, expansion, and scaling levels. This recognises that the culture, policies, and practices of a researcher's affiliated organisation can drive choices about which metrics are chosen and how they are used.
The Metrics Toolkit provides definitions, scope, appropriate and inappropriate use cases, limitations, transparency information, and more for indicators covering authors, books, book chapters, datasets, journals, journal articles, and software/code/scripts.