
The Far From Magnificent Obsession With Ranks At IITs

The National Institutional Ranking Framework (NIRF) ranks have just been released and there's much excitement in the air. Heads of educational institutions have released statements saying how they will work harder or how proud they are to be at the given rank; some have complained about how they lost out because of incorrect data. It is ironic that institutions are obsessing so anxiously about their ranks when they themselves advise students not to worry about marks and the rat(e) race and focus instead on learning.

IIT-Madras is No. 1 among the top 200; IIT-Kanpur is No. 4.

It is worth pondering how institutions have become participants in a race that reduces their multifaceted existence to a score and a rank which cannot capture the "character" of institutions, most notably their quality, creativity, and learning ethos.

Rankings are based on bean counting, typically: how many teachers per student, how many of them hold PhD degrees, how many papers were published and how many citations they attracted, and a numerical value attached to an institution's perceived reputation, obtained from targeted surveys of academia and industry. The weights given to these factors vary across organizations (NIRF, Times Higher Education (THE), Quacquarelli Symonds (QS)), but the schemes are all quite similar in that they count only what can be counted. Even "perception" is now countable.
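To make the mechanics concrete, here is a minimal sketch (in Python, with fictional institutions, made-up normalised metric scores and invented weights; it is not any agency's actual formula) of how such a composite score is assembled as a weighted sum, and of how the same data can produce opposite rank orders under two equally plausible weighting schemes.

```python
# Hypothetical illustration only: fictional institutions, made-up normalised
# metric scores (0-1) and invented weights; not NIRF's, THE's or QS's formula.

metrics = {
    "Institute A": {"faculty_ratio": 0.9, "publications": 0.4, "perception": 0.8},
    "Institute B": {"faculty_ratio": 0.5, "publications": 0.9, "perception": 0.6},
}

def composite_score(scores, weights):
    # A composite score is just a weighted sum of the individual metrics.
    return sum(weights[metric] * scores[metric] for metric in weights)

# Two equally defensible weighting schemes...
scheme_1 = {"faculty_ratio": 0.5, "publications": 0.3, "perception": 0.2}
scheme_2 = {"faculty_ratio": 0.2, "publications": 0.6, "perception": 0.2}

for weights in (scheme_1, scheme_2):
    ranked = sorted(metrics,
                    key=lambda name: composite_score(metrics[name], weights),
                    reverse=True)
    print(weights, "->", ranked)
# ...yield opposite rank orders: A beats B under the first, B beats A under the second.
```

The only point of the sketch is that the "winner" is an artefact of the weights chosen - which is exactly the arbitrariness discussed below.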

Having more teachers does not necessarily mean that teaching is better, or that the teachers are good. Having a larger fraction of students graduating does not imply that their degree is truly worth something; it can also mean that the university has set very low standards for passing students. Publishing more papers does not tell us much about the quality of research. In fact, the correlation can sometimes be inverse: too many publications may suggest a lot of incremental work, while fewer papers may signal that those few have something significant to say. A very impactful paper will have many citations, but a large number of citations does not imply that a paper is great. This is because research communities span a wide range of quality, and a large amount of mediocre, incremental research often simply cites similar research. In metrics-based calculations, an institution that publishes a large number of low-quality papers will almost always win against one that publishes a few high-quality papers.

The point is that numbers do not properly capture quality or originality. To moderate this weakness, most ranking schemes give weightage to "perception". This is a double-edged sword in that it too can be gamed (e.g. an institution can nominate the people who will respond to the survey). Given all this, a highly-ranked institution may actually be quite pedestrian, whereas a creative one may get tagged with a low rank. Administrators and bureaucrats who love these schemes often end up drawing the wrong conclusions, such as failing to distinguish a prolific, heavily self-cited publisher from a genuinely superb researcher.

In the end, we get a ranked list which simply reflects the arbitrariness of the chosen criteria and their weightage, but which is unable to incorporate the complexity of the "best-ness" it is attempting to represent.

Complex ranking schemes obviously lead to some other strange outcomes. First, the rankings have very poor granularity. This means that the entities ranked, say, 1 and 4 differ from each other in insignificant ways. For instance, the seven old IITs, in an overall sense, are not significantly different from each other. Therefore, those who rejoice at one IIT being rated first and those who rue that their IIT has been rated fourth are all reacting unnecessarily. Sure, there is likely to be a difference between entities at ranks 1 and 20, but it is impossible to estimate where this granularity sets in - the difference in ranks at which some "real" difference starts to show.

This brings us to the second caveat: the data on which these ranks are based varies widely in reliability and integrity. Because the data is self-reported and mostly unverified, some institutions provide scrupulously honest data while others dress it up in clever ways; in some cases, poor data is simply an outcome of the way it is collected.

Third, many of the comparisons are between apples and oranges. The Indian Institute of Science (IISc) is much closer in "type" to the IITs and the IISERs, yet it is placed in the category of "Universities"; the Institute of Chemical Technology appears under "Universities" and also under "Engineering" - it is not a university, and even within engineering it is restricted to "chemical". The IISERs make an appearance only in the "Overall" listing even though some of them have engineering departments. Similarly, there are problems in comparing institutions focused on undergraduate programmes with those focused on postgraduate ones.

Fourth, there are also lemon-to-melon comparisons, like pitting institutions with a few hundred or a few thousand students against those with student populations in the tens of thousands. The comparison between the new IITs and the old ones is a case in point. Last year, IIT Ropar and IIT Indore beat all the old IITs in the Times Higher Education rankings. While these new IITs may have done well over the last decade, comparing them with the old IITs makes no sense given the much larger number of faculty and students in the latter.

Similar arguments can be made about spending per student. IISc, TIFR, the IITs and the IISERs are all funded handsomely compared to even the most prestigious central universities. How fair does that make the comparisons?

Fifth, one important thing that "overall" institutional rankings obscure is the possible existence of schools and departments within a university that have a reputation for excellence. In most contexts, when students need to make a choice, it is the school or department rank that matters and not so much the rank of the institution. At IIT Bombay, for instance, the Chemical Engineering department is ranked 50th globally in the 2020 QS lists, whereas the institute's overall rank is 152.

Lastly, many private institutions, some of dubious quality (e.g. with a low NAAC rating or unrated) and solely focused on making profits, appear in these lists. Inclusion in the list itself grants them a certain legitimacy, and bragging rights for their advertisements. Not every institution that exists deserves to be ranked.

Employers do not use rankings to hire students. They rely on their judgement of the quality of the students they have hired in previous rounds. The same is the case with universities abroad.

In India, students seeking admission to public institutions have very little use for institutional ranks. Their choice of institution, and often of the subject of study, is dictated by their own ranks in entrance examinations like the JEE and NEET, and by the accumulated "wisdom" of what their seniors did in the past few years. Many admissions are shaped by affirmative action schemes, such as the "deprivation points" used in JNU admissions - metrics that help students from disadvantaged backgrounds. For students interested in private institutions, the dominant criteria for choosing one over another are tuition and living costs. In fact, whenever there is a choice, other things being equal, students like to look at placement statistics - the sort of salaries landed by graduating students - or at which universities gave past graduates admission and scholarships for further study.

When we did not do well on the global ranking lists, we created our own. Why and how this would help us climb the international ranking ladder, and why that was important, remains a mystery.

The low global rankings of our institutions should have triggered some soul-searching and an analysis of the measures needed to solve long-festering problems: crumbling infrastructure, faculty shortages, uninterested undergraduate students, research scholars without the necessary fundamentals and background, and an antiquated academic structure. I have written about how these factors play out in the IITs in these very columns (1, 2). The situation in other institutions, including reputed central and state universities, is likely to be even more difficult.

Instead, a significant amount of attention has been focused on "managing the rank". A good example is our recent enthusiasm for courting "international" faculty and students because global rankings gave this factor some weight. Many global institutions cannot survive without foreign students and foreign faculty, because they do not have enough domestic candidates. We are not in the same boat, yet we are trying to blindly emulate them.

The attempt to maximize numbers has led to a preponderance of low-quality publications and incremental research. Tragically, we continue to be the number one country for publishing in predatory/fake journals.

Rankings are not needed to make institutions better; rather, their specific problems need to be addressed. It should be a cause for worry that commercial international ranking agencies are able to nudge institutional behavior when we need not have bothered with them at all. Once we "get into the game", we are forced to manage the image it creates for us.

The world's best institutions do not worry about rankings; they focus on ensuring that they are respected by their peers for the quality of their graduates and research. They have not reached the top by trying to be "first". We should remember what we tell our students: if we study well, we are likely to get a good grade, but getting a good grade does not necessarily mean we actually know a lot.

(Anurag Mehra is a Professor of Chemical Engineering and Associate Faculty at the Center for Policy Studies, at IIT Bombay.)

Disclaimer: The opinions expressed within this article are the personal opinions of the author. The facts and opinions appearing in the article do not reflect the views of NDTV and NDTV does not assume any responsibility or liability for the same.
