A measure of authorship by publications


Measuring the publication success of a researcher is a complicated task: publications are often co-authored by multiple authors, so solo publications must be compared with joint publications. In this paper, like \cite{price1981multiple}, we argue for an egalitarian perspective on this task. More specifically, we justify the need for an ethical perspective in quantifying academic authorship by identifying certain ethical difficulties of some popular contemporary indices used for this purpose. We then show that, for any given dataset of research papers, the unique method satisfying the ethical notions of {\it identity independence} and {\it performance invariance} must be the egalitarian E-index proposed by \cite{bps} and \cite{price1981multiple}. In our setting, this egalitarian method divides authorship of joint projects equally among authors and sums across all publications of each author.


💡 Research Summary

The paper tackles the longstanding problem of how to fairly quantify an individual researcher's contribution when publications are frequently co-authored. It begins by critiquing the most widely used bibliometric indicators, the h-index and its variant the g-index, highlighting two major shortcomings: (1) they treat all papers equally regardless of the number of co-authors, rewarding quantity over quality and potentially inflating the scores of prolific co-authors; (2) they fail to incorporate the full citation information, especially the impact of highly cited papers, and they ignore ethical concerns such as identity bias. To address these issues, the authors introduce two normative axioms. "Identity independence" requires that a metric not depend on the author's name, affiliation, or any other personal characteristic; "performance invariance" demands that an increase in the number of papers or citations never reduce an author's score.

Building on the egalitarian ideas originally proposed by Price (1981) and later formalized by Bose et al. (2010), the authors define the E‑index as the sum over all papers of the paper’s citation count divided by the number of authors on that paper:

E(A) = Σ_{p∈A} (citations(p) / authors(p)).
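The formula above is simple to compute. As a minimal illustration (not code from the paper; the function name and input format are assumptions for this sketch):

```python
def e_index(papers):
    """E-index of one author.

    papers: list of (citation_count, num_authors) pairs, one per paper
    on which this author appears. Each paper's citations are split
    equally among its authors, then summed.
    """
    return sum(citations / num_authors for citations, num_authors in papers)

# A solo paper with 40 citations contributes 40 to its author's score;
# the same paper with 4 authors contributes 10 to each author's score.
```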

They then prove a uniqueness theorem: any metric that satisfies both axioms must be mathematically equivalent to this E‑index. The proof proceeds by first establishing that the metric must be additive (linear) across papers, then showing that the only way to allocate credit for a joint paper without violating identity independence is to split it equally among co‑authors. Finally, performance invariance forces the allocation to be proportional to the raw citation count, leading directly to the 1/k weighting. Any alternative weighting (e.g., quadratic, logarithmic) would breach at least one axiom.

The paper situates this contribution within a broad literature, contrasting it with works that start from an existing index and then derive its properties (e.g., Marchant 2009, Szwagrzak & Treibich 2019) and with studies that propose more complex, often computationally intractable, scoring systems. The authors argue that the E-index's closed-form expression makes it both theoretically sound and practically implementable.

An empirical illustration using a dataset of economics researchers demonstrates the index's behavior. In a scenario where a solo author has three papers each with 100 citations, while another researcher has four joint papers each with 25 citations (two authors per paper), the h-index ranks the joint author higher (h = 4 vs. h = 3). In contrast, the E-index yields 300 for the solo author versus 50 for the joint author, correctly reflecting the larger scholarly impact. The authors also discuss how the E-index aligns with labor-economics findings that citation returns diminish with additional co-authors, supporting the egalitarian split.
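The scenario above can be checked directly. The following sketch (hypothetical helper names; not from the paper) computes both indices for the two authors:

```python
def h_index(citations):
    """Largest h such that the author has h papers with >= h citations each."""
    ranked = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)

def e_index(papers):
    """papers: list of (citations, num_authors); equal split, then sum."""
    return sum(c / k for c, k in papers)

solo = [(100, 1)] * 3   # three solo papers, 100 citations each
joint = [(25, 2)] * 4   # four two-author papers, 25 citations each

# h-index sees only citation counts per paper, not co-authorship:
# solo -> 3, joint -> 4, so it ranks the joint author higher.
# The E-index splits credit: solo -> 300.0, joint -> 50.0.
```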

The discussion acknowledges limitations: citations are an imperfect proxy for quality, disciplinary citation practices vary, and not all co‑author contributions are truly equal. The authors suggest possible extensions, such as incorporating contribution statements, weighting by author order, or integrating other impact measures (patents, software, datasets).

In conclusion, the paper establishes that the egalitarian E‑index is the unique metric satisfying identity independence and performance invariance, offering a transparent, ethically motivated alternative to h‑ and g‑indices for academic evaluation, hiring, promotion, and funding decisions. Future work is encouraged to refine the index for fields with distinct authorship norms and to explore hybrid metrics that combine egalitarian credit with nuanced contribution data.

