Do the technical universities exhibit distinct behaviour in global university rankings? A Times Higher Education (THE) case study
Technical universities (TUs) exhibit distinct ranking performance in comparison with other universities. In this paper we identify 137 TUs included in the THE Ranking (2017 edition) and analyse their scores statistically. The results highlight the existence of clusters of TUs showing generally high performance in the Industry Income category and, in many cases, low performance in Research and Teaching. Finally, the global score weights were simulated under several scenarios, which confirmed that the majority of TUs (except those with world-class status) would increase their final scores if Industry Income were weighted at the levels parametrised.
💡 Research Summary
The paper investigates whether technical universities (TUs) display distinct behaviour in global university rankings, using the 2017 Times Higher Education (THE) World University Rankings as a case study. The authors first define a TU as an institution whose name contains “technical”, “technology”, or “polytechnic”, or whose curricula consist of at least 40-50% technical subjects. Applying this definition to the 981 institutions listed by THE, they identify 137 TUs (about 14% of the total). For each TU they collect the five THE dimension scores (Teaching, Research, Citations, International Outlook, and Industry Income) and construct a simple “TU Ranking” (TUR) based on these components.
Descriptive statistics reveal that TUs score highest on Industry Income (mean ≈ 57.4) and Citations (≈ 44.9), while Research (≈ 26.8) and Teaching (≈ 30.5) are markedly lower. Pearson correlation analysis (significance level α = 0.1) shows that Industry Income is largely uncorrelated with the other four dimensions, indicating that TUs’ strength in industry engagement is not captured by the traditional research‑centric metrics.
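The kind of pairwise correlation used here can be sketched from first principles. The scores below are invented toy values, not THE data; they merely illustrate the contrast the authors report between a research/teaching pair that co-moves and an industry dimension that does not.

```python
import math

def pearson_r(xs, ys):
    """Sample Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Toy dimension scores for five hypothetical TUs:
research = [20.0, 25.0, 30.0, 35.0, 40.0]
teaching = [22.0, 27.0, 29.0, 36.0, 41.0]   # tracks research closely
industry = [70.0, 40.0, 65.0, 45.0, 60.0]   # no clear relation to research

print(round(pearson_r(research, teaching), 3))  # 0.989 (strong positive)
print(round(pearson_r(research, industry), 3))  # -0.183 (weak)
```

In the paper the analogous computation over all 137 TUs is what shows Industry Income decoupled from the other dimensions.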
To explore heterogeneity among TUs, the authors apply k‑means clustering with five pre‑specified clusters, each intended to correspond to one of the THE dimensions. Using the det(W) criterion (the determinant of the within‑cluster dispersion matrix), random initialization, and ten repetitions, they identify distinct profiles: an “industry‑centric” cluster (high Industry Income, low Research/Citations), a “research‑centric” cluster, a “balanced” cluster, and others. Outlier analysis (Dixon test) finds that only a handful of institutions (e.g., Caltech, MIT, Imperial College) are extreme in multiple dimensions, while the majority of TUs cluster around the industry‑centric pattern.
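A bare-bones version of the clustering step can be written as Lloyd's algorithm. This is a deterministic teaching sketch: the paper uses random initialization with ten repetitions over five clusters and all five dimensions, whereas here two invented clusters in two dimensions (Industry Income, Research) are seeded from the first k points for reproducibility.

```python
def kmeans(points, k, iters=20):
    """Minimal k-means (Lloyd iterations); initial centroids = first k points."""
    centroids = list(points[:k])
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        # Assignment step: each point joins its nearest centroid.
        for p in points:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])))
            clusters[i].append(p)
        # Update step: centroids move to their cluster means.
        centroids = [
            tuple(sum(dim) / len(c) for dim in zip(*c)) if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
    return centroids, clusters

# Toy (Industry Income, Research) profiles: an industry-centric group
# and a research-centric group (illustrative values only).
points = [(80, 20), (78, 25), (85, 22), (30, 70), (28, 75), (35, 68)]
centroids, clusters = kmeans(points, k=2)
print(sorted(len(c) for c in clusters))  # [3, 3]
```

In practice one would rerun with different random seeds and keep the partition minimizing within-cluster dispersion, which is what the ten repetitions in the paper accomplish.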
The core contribution lies in the simulation of alternative weighting schemes for the overall THE score. The current THE weighting allocates 30 % each to Research, Citations, and Teaching, 7.5 % to International Outlook, and 2.5 % to Industry Income. The authors devise three scenarios:
- Soft scenario – Research and Citations reduced to 27.5 % each, Industry Income increased to 7.5 %.
- Strong scenario – Research and Citations reduced to 25 % each, Industry Income raised to 12.5 %.
- Tech scenario – Citations reduced to 27.5 % (Research unchanged), Industry Income set to 5 % (mirroring the subject‑specific weighting used by THE for engineering & technology).
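The three scenarios above (each summing to 100%) can be replayed as a small weighted-sum simulation. The TU profile below reuses the mean dimension scores reported earlier; the International Outlook value is an assumption, since the summary does not state it.

```python
# Current THE weights and the paper's three re-weighting scenarios.
THE_CURRENT = {"teaching": 0.30, "research": 0.30, "citations": 0.30,
               "international": 0.075, "industry": 0.025}

SCENARIOS = {
    "soft":   {"teaching": 0.30, "research": 0.275, "citations": 0.275,
               "international": 0.075, "industry": 0.075},
    "strong": {"teaching": 0.30, "research": 0.25, "citations": 0.25,
               "international": 0.075, "industry": 0.125},
    "tech":   {"teaching": 0.30, "research": 0.30, "citations": 0.275,
               "international": 0.075, "industry": 0.05},
}

def overall(scores, weights):
    """Weighted overall score, as in the THE composite."""
    return sum(weights[k] * scores[k] for k in weights)

# Illustrative TU with the mean scores cited above; "international"
# is an assumed value for the sketch.
tu = {"teaching": 30.5, "research": 26.8, "citations": 44.9,
      "international": 50.0, "industry": 57.4}

base = overall(tu, THE_CURRENT)
for name, w in SCENARIOS.items():
    print(f"{name}: {overall(tu, w) - base:+.2f}")  # gain over current weights
```

For a TU whose Industry Income score exceeds its Research and Citations scores, every scenario yields a positive shift, with the strong scenario largest; this mirrors the paper's finding that most TUs gain under re-balancing.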
Re‑computing overall scores under each scenario shows that the majority of TUs improve their rankings, especially under the strong scenario. Only world‑class universities (WCUs) already excelling across all dimensions experience little change. This demonstrates that the current weighting penalizes TUs by undervaluing their industrial income, and that modest re‑balancing can better reflect their true performance.
In the discussion, the authors argue that global rankings, by emphasizing research outputs and citations, create incentives for universities to prioritize basic research over industry collaboration, thereby disadvantaging institutions whose mission includes technology transfer, patents, spin‑offs, and other entrepreneurial activities. They suggest that ranking bodies should either increase the weight of industry‑related metrics or develop dedicated “technical university” rankings that capture the full spectrum of TU missions.
Limitations acknowledged include reliance on a single year’s data, potential inconsistencies in the Industry Income metric across regions, and the somewhat arbitrary choice of five clusters. Nonetheless, the study provides empirical evidence that TUs constitute a distinct group within global rankings and that alternative weighting schemes can substantially alter their perceived standing.
Overall, the paper contributes to the ongoing debate about the fairness and relevance of university rankings, highlighting the need for more nuanced, mission‑aware evaluation frameworks that recognize the diverse roles of technical institutions in the knowledge economy.