Simple Framework Helps to Evaluate Researchers’ Performance

Quantitative bibliometric indicators need to be overhauled to show the most realistic picture about the performance of a researcher. Source: Pixabay.com

Is a researcher doing a good job when he or she publishes many articles? Or does the key factor lie somewhere else? The question of how to evaluate researchers’ performance matters because resources are often allocated on this basis, and the future of every academic’s research and career depends on it.

Quantitative bibliometric indicators are widely used to evaluate the performance of researchers, but traditional indicators are not grounded in an analysis of the processes they are intended to measure or in the practical goals of the measurement, according to Endel Põder, senior research fellow at the Institute of Psychology, University of Tartu.

Recently, he proposed a simple framework for measuring and predicting an individual researcher’s performance, which considers the main regularities of publication and citation processes and the requirements of practical tasks. The statistical properties of the new indicator, a researcher’s personal impact rate, are illustrated by applying it to a sample of 1,356 Estonian researchers and all their articles from 1983 to 2012. Põder’s article about the framework was recently published in Trames: Journal of the Humanities and Social Sciences[1].

Everlasting Problem

“Bibliometrics is often used to evaluate researchers’ performance in the decision-making for allocating grants and academic positions, but it is often incorrect or biased and gives rise to wrong decisions,” Põder noted.

Põder discussed the main shortcomings of traditional bibliometrics years ago in an academic journal[2] and in the Estonian cultural newspaper Sirp[3]. In his view, standard bibliometrics is biased because each co-author of a publication with many authors is counted in the same way as the single author of a solo paper, although the work done per author is obviously not equal in the two cases. He added that some people questioned the system as long as 40 years ago, but nothing has changed.
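The counting bias Põder describes can be illustrated by contrasting standard “whole counting”, where every co-author receives full credit for each paper, with fractional counting, where credit is split equally among co-authors. This is a generic illustration of the problem, not Põder’s own indicator; the function names are hypothetical.

```python
def whole_count(author_counts):
    """Whole counting: each co-author gets full credit for every paper."""
    return len(author_counts)

def fractional_count(author_counts):
    """Fractional counting: credit for each paper is split equally
    among its authors."""
    return sum(1 / n for n in author_counts)

# Papers are represented by their number of authors.
solo_author = [1, 1, 1]     # three single-author papers
team_member = [10, 10, 10]  # co-author on three ten-author papers

print(whole_count(solo_author), whole_count(team_member))            # 3 3
print(fractional_count(solo_author), fractional_count(team_member))  # 3.0 0.3...
```

Under whole counting the two researchers look identical; under fractional counting the solo author’s credit is ten times larger, which is the inequality the quoted criticism points at.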

Traditional bibliometric indicators are also not designed for forecasting. “We have to estimate a researcher’s ability to produce new quality papers within some future period. This is fundamentally different from the prediction of one’s cumulative citation score or h-index, which is determined primarily on the basis of past performance,” Põder said. He added that the h-index is based on an amusing mathematical idea of combining publication and citation counts, which is, however, arbitrary and unsupported by any theory or data.
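For reference, the h-index that Põder criticises is defined as the largest h such that the researcher has h papers with at least h citations each. A minimal sketch of the computation:

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    h = 0
    for rank, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= rank:
            h = rank   # the rank-th most-cited paper still has >= rank citations
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4: four papers with at least 4 citations
print(h_index([25, 8, 5, 3, 3]))  # 3: only three papers with at least 3 citations
```

Note how the second researcher’s single highly cited paper barely moves the index: this mechanical coupling of publication and citation counts is exactly what the quote calls arbitrary.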

Power of Statistics

His framework, described in the article, shows how publications and citations evolve over time and how this is connected with a researcher’s productivity and talent. “This relatively classical logic has been applied in behavioural studies, but rarely in bibliometrics. The results of the study support this approach to a degree,” Põder explained.

He added that he wants to give a clear message about what is right and what is wrong, in his opinion. “Professional researchers of bibliometrics often make overly cautious, inconclusive and vague claims. This way, nobody can conclude anything, and the simplest and most wrong method is chosen in real life,” Põder noted.

What is more, it is important to realise that predictive power, although a desirable property of an indicator, says nothing about its validity. “Note that a wrong indicator can be as good a predictor of its own future values as a correct indicator (or even better), and it may also better predict the future values of other (wrong) indicators. The validity of the indicators proposed in my study relies on transparent theoretical assumptions and their logical implementation,” said Põder.

[1] The full article is available here: http://www.kirj.ee/public/trames_pdf/2017/issue_1/Trames-2017-1-3-14.pdf

[2] Põder, E. (2010). Let’s correct the small mistake. Journal of the American Society for Information Science and Technology, 61(12), 2593–2594. URL: http://onlinelibrary.wiley.com/wol1/doi/10.1002/asi.21438/full

[3] Põder, E. (2011). Teaduse mõõtmise raske probleem [The hard problem of measuring science]. Sirp, 03.11.2011. URL: http://www.sirp.ee/s1-artiklid/c9-sotsiaalia/teaduse-mootmise-raske-probleem/

This article was funded by the European Regional Development Fund through the Estonian Research Council.