A Decision Support System for Public Research Organizations Participating in National Research Assessment Exercises
We are witnessing a rapid trend towards the adoption of exercises for the evaluation of national research systems, generally based on the peer review approach. These exercises respond to two main needs: stimulating greater efficiency in the research activities of public laboratories, and achieving better allocative efficiency in government funding of such institutions. However, the peer review approach suffers from several limitations that raise doubts about whether these ultimate objectives can be achieved. In particular, the subjectivity of judgment that occurs when selecting the research outputs to be submitted for evaluation risks heavily distorting both the final ratings of the organizations evaluated and the funding they ultimately receive. These distortions become all the more relevant when the evaluation is limited to small samples of the scientific production of the research institutions. The objective of the current study is to propose a quantitative methodology, based on bibliometric data, that provides reliable support for the process of selecting the best products of a laboratory and thus limits such distortions. The benefits are twofold: individual research institutions can maximize the probability of receiving a fair evaluation, coherent with the real quality of their research, while broader adoption of the approach could also bring strong advantages at the macroeconomic level, since it grounds financial allocations in the real value of the institutions under evaluation. In this study, the proposed methodology is applied to the hard science sectors of the Italian university research system for the period 2004-2006.
💡 Research Summary
The paper addresses the growing reliance on national research assessment exercises, which are typically based on peer review, and highlights the inherent subjectivity and inefficiencies associated with the selection of research outputs for evaluation. The authors argue that the step in which institutions choose which publications to submit is especially vulnerable to bias, leading to distorted rankings and misallocation of public research funds. To mitigate these problems, they propose a quantitative Decision Support System (DSS) that uses bibliometric data—specifically citation counts and journal impact factors—to objectively rank and select the most valuable research products from each institution.
The methodology proceeds as follows: all publications indexed in the Web of Science (WoS) for the 2004‑2006 triennium were extracted for the 82 Italian universities. For each paper, two bibliometric indicators were calculated: (1) the number of citations received, normalized by the average citations of all papers in the same scientific sector, and (2) the impact factor of the journal, normalized by the sector’s average journal impact factor. These two normalized values are combined (with a simple weighted average) into a single “Normalized Impact Score” ranging from 0 to 1, which reflects the relative quality of a paper within its field.
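The summary does not spell out the exact combination formula, so the following is only a minimal sketch of such a scoring step. The `Paper` dataclass, the equal weights, and the rescaling of the weighted average to the 0–1 range are illustrative assumptions, not the authors' implementation.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Paper:
    title: str
    citations: int      # citations received in the observation window
    journal_if: float   # impact factor of the publishing journal

def normalized_impact_scores(papers: List[Paper],
                             w_cit: float = 0.5,
                             w_if: float = 0.5) -> List[float]:
    """Score the papers of one scientific sector on a 0-1 scale:
    normalize citations and journal impact factor by the sector averages,
    combine them with a weighted average (equal weights are an assumption),
    and rescale by the sector maximum so the result lies in [0, 1]."""
    avg_cit = sum(p.citations for p in papers) / len(papers) or 1.0
    avg_if = sum(p.journal_if for p in papers) / len(papers) or 1.0
    raw = [w_cit * (p.citations / avg_cit) + w_if * (p.journal_if / avg_if)
           for p in papers]
    top = max(raw) or 1.0
    return [r / top for r in raw]
```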
Institutions are required to submit a number of outputs proportional to their research staff (25 % of full‑time equivalents for universities, 50 % for public research labs). The DSS automatically selects the papers with the highest Normalized Impact Scores until the quota is met, thereby ensuring that the submitted set represents the institution’s best possible portfolio.
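A plausible selection step, reusing the hypothetical `Paper` dataclass and score list from the sketch above, is a simple greedy pick of the top-scoring papers until the quota is exhausted; the `quota_share` default of 0.25 matches the university quota mentioned above.

```python
import math
from typing import List

def select_portfolio(papers: List[Paper], scores: List[float],
                     research_staff_fte: float,
                     quota_share: float = 0.25) -> List[Paper]:
    """Greedily pick the highest-scoring papers until the submission quota
    (a share of full-time-equivalent research staff) is met."""
    quota = math.ceil(quota_share * research_staff_fte)
    ranked = sorted(zip(scores, papers), key=lambda pair: pair[0], reverse=True)
    return [paper for _, paper in ranked[:quota]]
```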
When applied to the Italian university system, the DSS revealed systematic shortcomings in the self‑selection processes used during the earlier VTR assessment. In several hard‑science disciplines—Agricultural and Veterinary Science, Industrial and Information Engineering, Mathematics and Computer Science—more than 25 % of the papers submitted by universities fell below the sector median impact factor. Similar patterns were observed in Earth Sciences, Physics, and Chemistry. These findings confirm that many institutions were not presenting their highest‑quality work, likely due to lack of objective selection tools and internal pressures such as seniority or departmental politics.
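As a rough illustration of the kind of check behind this finding (not the authors' actual computation), one could measure, for each institution, the share of submitted papers whose journal impact factor falls below the sector median:

```python
from statistics import median
from typing import List

def share_below_sector_median(submitted: List[Paper],
                              sector_papers: List[Paper]) -> float:
    """Fraction of an institution's submitted papers whose journal impact
    factor is below the median impact factor of all papers in the sector."""
    sector_median_if = median(p.journal_if for p in sector_papers)
    below = sum(1 for p in submitted if p.journal_if < sector_median_if)
    return below / len(submitted) if submitted else 0.0
```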
The authors acknowledge the limitations of bibliometric indicators: journal impact factors are journal‑level metrics and may not capture the intrinsic merit of individual articles; citations can be driven by negative attention or self‑citation. Nevertheless, they argue that bibliometrics provide a cost‑effective, non‑invasive, and scalable complement to peer review. By expanding the evaluated sample size and grounding selection in transparent quantitative criteria, the DSS reduces the risk of random or strategic distortions, improves the fairness of institutional rankings, and enhances the efficiency of public funding allocation.
Beyond the Italian case, the paper suggests that the DSS framework can be adapted to other national assessment schemes (e.g., the UK’s REF, Australia’s ERA) and to disciplines where research outputs include patents, books, or software, provided appropriate bibliometric or alt‑metric proxies are identified. The broader adoption of such a system could align funding more closely with actual research value, promote a culture of data‑driven decision making in academia, and ultimately strengthen the knowledge economy by ensuring that limited public resources support the most impactful scientific work.