Bibliometrics
Bibliometrics is the quantitative, statistical analysis of scholarly publications. It enables researchers to measure and evaluate their own publication output and scholarly performance.
Services Offered by the University Library
We advise and support you in the use of bibliometric tools and techniques and can also answer questions specific to your needs and academic field. Our services are available to individuals, research groups, institutes and faculties. We can answer questions such as:
- How have scholarly performance and research output developed over the last few years?
- Where to publish: which journals, conferences and books?
- How many times have publications been cited?
- What percentage of publications have been peer-reviewed?
- What percentage of publications have been published Open Access?
In addition, a range of other bibliometric indicators can be used to gather data and compile rankings. We are happy to advise you and to work with you in these areas; a small example of how citation counts can be retrieved programmatically is sketched below.
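As an illustration of the citation question above, the following minimal Python sketch queries the free OpenAlex API, one of several possible bibliometric data sources. The work ID used is the illustrative example from the OpenAlex documentation, not a reference to any specific publication of ours:

```python
import json
import urllib.request

# Minimal sketch: fetch one work's metadata from the OpenAlex API and
# print its citation count. The work ID is OpenAlex's own documentation
# example, used here purely for illustration.
url = "https://api.openalex.org/works/W2741809807"
with urllib.request.urlopen(url) as response:
    work = json.load(response)

print(work["display_name"])
print("Citations:", work["cited_by_count"])
```

In practice, the choice of data source (Web of Science, Scopus, OpenAlex, Google Scholar) strongly affects such counts, a point taken up in the limitations section below.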
Bibliometrics as a Tool in Research Evaluation
Bibliometric analysis was originally developed to improve and facilitate the retrieval of scientific literature from databases and to monitor the structure and development of individual disciplines.
The main aims of bibliometric analysis include:
- observing disciplines
- identifying new sub-disciplines
- identifying interdisciplinary activities
Bibliometric analysis is increasingly being used to evaluate scholarly performance. The main reason behind this is the need to assess scholarly performance as objectively as possible. A wide range of possible indicators can be used for this purpose, and they should always be selected according to the individual case and with specific objectives in mind.
Limitations of Bibliometrics
Informative Value
Bibliometric data is not without its limitations. Scholarly performance, and in particular the quality of research, is extremely difficult to assess through numbers and figures alone. For this reason, it is essential to take multiple indicators into consideration when conducting bibliometric analysis. Ideally, these indicators should be chosen according to the subject discipline concerned and in consultation with the researcher in question.
Example: Number of Publications
The number of publications produced (research output) gives the research community an overview of the productivity of an individual or institution. However, a high publication count is no guarantee of the quality of the research output or scholarly performance: regular publication activity does not always mean that good-quality work is consistently being produced. The quality of a publication is reflected in its ‘impact’, that is, its informative value and its significance within its field, and this cannot be conveyed through figures alone or through Journal Impact Factor rankings.
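For context, the Journal Impact Factor itself is only a journal-level average. Its standard two-year definition (a well-established formula, stated here for reference) is

$$\mathrm{JIF}_y = \frac{C_y}{N_{y-1} + N_{y-2}},$$

where $C_y$ is the number of citations received in year $y$ by items the journal published in years $y-1$ and $y-2$, and $N_{y-1}$ and $N_{y-2}$ are the numbers of citable items published in those two years. As an average over an entire journal, it says nothing about the impact of any individual article.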
Database Variation
Stephen Hawking achieved an h-index of 59 in the Web of Science database, compared with 76 in Google Scholar and only 19 in Scopus.
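The h-index behind these numbers is defined as the largest value h such that the author has h publications with at least h citations each. A minimal sketch of the computation, using made-up citation counts purely for illustration:

```python
def h_index(citations):
    """Return the largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)  # most-cited paper first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:  # this paper still supports an h-index of `rank`
            h = rank
        else:
            break
    return h

# Hypothetical citation counts for one author's papers (not real data):
print(h_index([10, 8, 5, 4, 3]))  # -> 4: four papers with >= 4 citations each
```

Because each database indexes a different set of journals and conference series, the citation counts fed into this calculation differ from database to database, which is exactly why a single author can end up with three very different h-indices.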
At present, no bibliometric database robustly covers all research areas or can be used consistently across subject areas. For this reason, all parties involved are advised to agree on a suitable database before bibliometric analysis is conducted.
Interdisciplinary Comparisons
Bibliometric indicators are often used to make comparisons across disciplines. However, direct comparisons across subject areas are not advisable, as scholarly publication cultures differ greatly from field to field. Instead, indicators should be customized for use within each specific field.
Negative Effects
Over the last few years, ‘strategic publication behavior’ and ‘bibliometric optimization’ have become increasingly common. Journal citation cartels (a practice often referred to as ‘citation stacking’), ‘salami slicing’ (the splitting of research findings into multiple publications) and other forms of bibliometric manipulation are no longer isolated cases.
Regulations introduced by the German Research Foundation (Deutsche Forschungsgemeinschaft, or ‘DFG’ for short) illustrate how far bibliometric manipulation has already progressed. In place since 2010, the regulations apply to funding proposals and final reports; their aim is to reduce the flood of publications in research and the pressure on researchers to publish. DFG President Matthias Kleiner addressed these issues in a statement in February 2010:
“Whether in performance-based funding allocations, postdoctoral qualifications, appointments, or reviewing funding proposals, increasing importance has been given to numerical indicators such as the H-index and the impact factor. The focus has not been on what research someone has done but rather how many papers have been published and where. This puts extreme pressure upon researchers to publish as much as possible and sometimes leads to cases of scientific misconduct in which incorrect statements are provided concerning the status of a publication. This is not in the interest of science.”