Palash Deb | Evolve the right parameters to judge varsities
Some of the criticism against Indian universities seems misplaced if we consider the complexities of global university rankings
Recent media reports have highlighted that, with the leading Indian Institutes of Technology (IITs) opting out of the Times Higher Education World University Rankings 2021, only two Indian universities feature among the world’s best 400 universities. At a time when the National Education Policy 2020 seeks to transform India into a global knowledge superpower by establishing world-class universities, the reluctance of established IITs to be associated with these rankings has drawn its fair share of criticism.
In a society still conscious of hierarchy, and where a growing sense of national pride demands global recognition in every sphere, conversations about rankings evoke opinionated arguments, with the media, policymakers, academics, and everyone else for that matter weighing in on the purportedly sad state of affairs in Indian higher education. Indeed, it has rightly been said that university rankings have not just captured the imagination of higher education but in many ways hold higher education itself hostage. Yet, some of the criticism against Indian universities seems misplaced if we consider the complexities of global university rankings, and the imperfect correlation rankings have with a university’s real performance.
University rankings are often a source of national pride and soft power. It appears that as more and more countries enter the global race to build so-called ‘world-class universities’, no self-respecting nation can afford to fall behind. There are about 20 global university league tables, of which the most popular are The Times Higher Education World University Rankings, The Academic Ranking of World Universities compiled by the Shanghai Jiao Tong University, and The Quacquarelli Symonds (QS) World University Rankings. Other notable rankings include the US News and World Report, the U-Multirank (which allows users to generate their own rankings based on their preferences), the Ranking Web or Webometrics Ranking developed by the Cybermetrics Lab, and the Universitas 21 (which ranks countries, rather than universities, on the strength of their higher education systems). These rankings differ widely in the parameters they use (such as research output or reputation among academics and employers) and the weights they assign to the various parameters.
Some ranking parameters, such as university reputation, are based on perception, in addition to being opaque (a main allegation against the Times rankings) and procedurally faulty (the QS ranking allows universities to nominate who ranks them). Even supposedly objective metrics that capture the quality of research and teaching, such as publications in select journals, citation count, student-faculty ratio, and graduation rates, can be misleading. For one thing, these metrics cannot capture intangible aspects of university education such as campus culture or student experience. The measures chosen are often based on ease of availability, and metrics such as student-faculty ratio or graduation rates are susceptible to manipulation by university officials.
Rankings tend to favour research over teaching, creating perverse incentives for faculty members in research-intensive universities, many of whom treat teaching as a chore. Publishing in the best journals in a field is not always a transparent process, and one cannot easily brush aside the role of such external factors as courting editors or choosing esoteric research topics that are well-received by academic journals but have no real relevance to society. Rankings also favour the citation-heavy hard sciences, and prefer peer-reviewed articles over books or conference presentations. All institutions, irrespective of mission, are judged by the same parameters, which reduces institutional diversity by forcing all of them to prioritise certain common metrics. Rankings also do not come cheap, as universities need to provide substantial research support, develop infrastructure, and spend large sums on marketing and brand promotion. Finally, rankings can foster social inequalities as the lion’s share of government grants goes to a select few universities that are often attended by students from well-to-do families.
Given these limitations of global university rankings, one could argue that institutional excellence is better gauged using socio-economic metrics such as the impact of faculty research on government policy, court rulings, or the development of critical indigenous technologies. One must recognise, though, that these may not necessarily improve a university’s global rank. But if we are forced to choose between the two, our social and economic priorities must always take precedence over university rankings. The main goals of Indian higher education policy should be human capital development, innovation, social equity, and economic growth. Rankings will eventually follow, but until they do, we should not lose sleep over them.
This implies that we must rethink our priorities. For instance, a scheme such as Institutes of Eminence, which is inspired by excellence initiatives in other countries such as the Double First Class University Plan in China, Project 5-100 in Russia, and the Exzellenzinitiative in Germany, must be reset to prioritise socio-economic objectives over ranking aspirations. Faculty incentives must also be changed accordingly. Eventually, we must develop our own global rankings, employing unique metrics that measure a university’s contribution to solving the problems faced by developing countries such as ours. Unless our universities focus on their broader purpose instead of chasing the chimera of global rankings, India will remain, to borrow a phrase from Philip G. Altbach, a gigantic periphery in the international knowledge system.