Comparing apples and oranges

A year ago, the Union Ministry of Human Resource Development (MHRD) announced a National Institutional Ranking Framework (NIRF) to assign ranks to institutions of higher education and research (HE&R) in the country. The first round of results was published by April this year — in itself quite an achievement considering the size and complexity of the task. Even at the low Gross Enrolment Ratio (GER) in higher education (HE) of under 25 per cent, India has a massive HE system that turns out 8-10 million graduates a year from over 50,000 institutions coming under some 800 universities and employing over 1.5 million teachers. That such a vast enterprise should have its own system of institutional ranking ought to be beyond argument, and it is good that a beginning has been made in the matter.

A new ranking

This initiative, however, seems to have been strongly motivated by the fact that even our Centrally funded elite institutions like the IITs were not doing well in any of the accepted global ranking schemes such as the QS World University Rankings, the Times Higher Education World University Rankings, the Shanghai (ARWU) Ranking, etc. It has been a matter of quite some heartburn, and it is only recently that we have discovered the ‘unfairness’ of being compared with institutions in the West and decided to have our own ‘India-specific’ ranking systems that will do ‘justice’ at least to our elite institutions.

Like any ranking system, the NIRF scheme is also open to questioning and criticism, and it certainly has its share of limitations. For example, it is puzzling how institutions with a sharp focus on narrow areas like Information Technology or Space Technology have qualified to be called “universities”, whereas many institutions with a broad base in all domains of engineering, sciences and humanities are listed as “Engineering Institutions”. Also, what is the point in giving a national rank to institutions that have barely graduated three or four batches?

Perhaps a matter of far more concern is that the entire NIRF exercise was carried out largely by members from the Central government institutions; little effort was made to elicit views from the over 90 per cent of institutions that belong to the State system (including those in the affiliation framework). That would have forcefully brought out the need to normalise the performance of an institution with respect to the funding, resources and freedom available to it. Interestingly, the NIRF document does mention the care that has been taken to see that only apple-to-apple comparisons happened, but given the huge difference in funding and resources, how does comparing an IIT with a State university become apple-to-apple? Equally interestingly, there is a suggestion in the NIRF document, in the context of newly emerging private institutions, to “see how some of these institutions would perform on ‘outputs’ and ‘outcomes’ on a per Rupee investment scale”. Surely the same performance-per-rupee logic must also apply while comparing the Central and State university systems, both of which are publicly funded?

Coming back to the ranking process and its first round results, there are hardly any surprises here: Of the top 25 “Engineering Institutions” and “Universities” each, about three-fourths are Centrally funded institutions, with the rest divided equally between private and State institutions. Even without such rankings, the Central government and the corporate sector have already been dividing institutions into Tier-I, Tier-II, etc. categories, and eligibility for many Central funding schemes is already being restricted to “IITs, NITs, IIITs, IISERs and Central Universities”. The ranking would simply formalise and reinforce this perception. Caught in the vicious circle of low funding, poor performance, low ranks, lower funding, poorer performance, lower ranks, the State-level institutions will only go down further in comparison with the Central and private universities. A ranking system where it is already known that a well-defined section amounting to over 90 per cent of the total has no chance of doing well is both unethical and unscientific. It would simply demoralise these institutions instead of motivating and encouraging them to do better, which is what a healthy ranking process should attempt.

Constructive solutions

It is suggested here that, given the huge resource gap between them, a common ranking across Central and private institutions on the one hand and State-level institutions on the other does not make much sense in our country — it is simply not apple-to-apple! It can, however, make sense if the performance index values for an institution are normalised with respect to the investments and resources that have gone into it. Alternatively, the present NIRF scheme could be retained for Central and private autonomous institutions, and another suitable scheme evolved for the State-level institutions, each being made compulsory for its category. Being in comparable conditions, each institution will be motivated to do better within its category, which should in any case be the primary goal of any ranking exercise. Any State-level institution should also be free to join the elite ranking scheme additionally if it so chooses.

C.N. Krishnan is a retired professor of Anna University, Chennai.

