Reputation Race in Higher Education is Getting Bigger, but is it Getting Better?

The university ranking system gets bigger every year. More institutions are added, more heartburn is caused, more browbeating happens, more student lives are touched, and so on. But the question is: is it getting better?

The year-on-year rankings have created a race, maybe some sort of anxiety syndrome, in which institutions strive to climb up the charts even as new rankings spring up from different corners. The Academic Ranking of World Universities (ARWU) from Shanghai, published since 2003, has emerged as the most cited ranking. Last year the SIR World Report 2011, which ranked 3,042 research institutions from 104 countries, created a flutter. From India, CSIR ranks 73rd and IISc 364th in the SIR list.

Most of these rankings look at research output and select their own variables. As a consequence, the rankings vary.

In the latest issue of Current Science, two Belgrade University academics present another ranking system, using a so-called I-distance method that tries to integrate both quantitative and qualitative parameters. What’s interesting is that the authors, Zoran Radojicic and Veljko Jeremic, use the SIR World Report data of 2011 and arrive at a ranking that disrupts the positions of the rich and famous universities. For example, Harvard, Stanford and Imperial College London, which rank 4th, 19th and 36th in the SIR World Report, move up to ranks 1, 7 and 18 in the I-distance ranking.

The I-distance method itself has been proposed many times in the past; Jeremic says he has published on it extensively in the world’s leading journals. This time, though, he applied the I-distance exclusively to higher education institutions, so as to compare the results with the official SIR rankings and to point out potential inconsistencies with the ARWU ranking.
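For readers curious about what the I-distance actually computes, here is a minimal sketch in Python. It is not code from the paper: the toy data is made up, and plain Pearson correlations stand in for the partial correlation coefficients the full Ivanovic formulation prescribes. The idea is that each indicator is weighted down by how much of its information earlier indicators have already captured, so correlated indicators don’t get double-counted.

```python
import numpy as np

def i_distance(X):
    """Simplified I-distance over an (entities x indicators) matrix X.

    The published method uses partial correlations r_{ji.12...j-1};
    this sketch substitutes plain Pearson correlations as a
    first-order approximation. Columns are assumed non-constant.
    """
    X = np.asarray(X, dtype=float)
    d = X - X.min(axis=0)             # distance from a fictive minimal entity
    sigma = X.std(axis=0, ddof=1)     # per-indicator standard deviation
    r = np.corrcoef(X, rowvar=False)  # indicator correlation matrix
    n_entities, k = X.shape
    D = np.zeros(n_entities)
    for i in range(k):
        # damp indicator i by the information already carried by indicators 0..i-1
        weight = np.prod([1 - abs(r[j, i]) for j in range(i)])
        D += (np.abs(d[:, i]) / sigma[i]) * weight
    return D

# Toy example: three institutions, two perfectly correlated indicators.
# The second indicator contributes nothing new, so only the first counts.
scores = i_distance([[1, 2], [3, 4], [5, 6]])
```

Note that the result depends on the order in which indicators are entered, which is one reason the method’s critics (and its proponents) stress careful variable selection.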

The ARWU method, in particular, has two variables, ‘Alumni’ and ‘Award’, which count the Nobel prizes and Fields medals won by a university’s alumni (‘Alumni’) or by faculty members who were working at the institution at the time of winning (‘Award’). This, he argues, is more a “glorious past”-oriented approach than SIR’s “current performance evaluation” approach.

Even though these rankings publicize the achievements of universities in specific ranges of activities, they have clear weaknesses. At a deeper level they foster a culture of spotting success stories that others want to emulate; many even call it a homogenization race. But I’d argue that these rankings miss a key aspect: since they do not rank disciplines, they are probably not helping students select the best institutions. What if a university is great in neuroscience but weak in computer science, or ranks high in metallurgy but low in mathematics? These rankings shed no light on such questions.

In his paper, Jeremic draws a comparison between Chinese and Indian higher education institutions. From India, 111 institutions appear in the SIR 2011 list, of which 85 (77%) are in higher education. From China, 285 institutions make the list, of which 240 (84%) are in higher education. These statistics, says Jeremic, indicate that the Chinese higher education system is nearly three times the size of the Indian one. In total, Indian scientific output is smaller than Chinese output. (See the table.)

Incidentally, the Indian Institute of Science (IISc) tops the India list in both the SIR and the I-distance rankings. But more importantly, the Tata Institute of Fundamental Research (TIFR), which is placed 14th in SIR, is ranked second by I-distance.

Given these inconsistencies, the European Commission has devised yet another system: U-Multirank, a multi-dimensional, user-driven ranking tool (unlike the existing research-based systems) that uses yet another set of variables: research, education, knowledge exchange, regional engagement and international orientation. U-Multirank is slated for roll-out in the EU in 2013.

U-Multirank emerged largely from the growing concern that current ranking systems distort research priorities and that local, or regionally relevant, research goes unnoticed. At the recent Euro Science Open Forum (July 11-15) in Dublin, Ireland, Ellen Hazelkorn, head of the Higher Education Policy Research Unit at the Dublin Institute of Technology, said these rankings have created a “knowledge hierarchy” in which certain types of knowledge are considered more important than others. She implied that disciplines like the life, physical and medical sciences get more weightage than the arts, humanities and social sciences.

Many university ranking systems, including India’s NAAC, are flawed because they also give emphasis to input indicators, not just output/outcome indicators, says Gangan Prathap, director of NISCAIR and former vice-chancellor of Cochin University. He publishes frequently on this subject and has earlier analysed the SIR 2011 data in Current Science using a new X-ranking.

“I’m not convinced that the I-distance is fault-free. It emphasizes the quality indicators over the quantity indicators. I think to evaluate performance, you need both,” he says. He agrees, however, that it would be a good idea to do a discipline-wise ranking.

But the director of India’s top-ranking institution, IISc’s Prof P Balaram, is critical of these rankings. He believes Indian institutions slip down the lists owing to their small size.

Prof Balaram is right, says Prathap, as the current ranking systems are based on a composite value multiplying quality and quantity indicators and so favour large universities. Typically, a world-class university in North America or Europe is five times the size of IISc and they have budgets which are 20-100 times IISc’s, he says.

All these arguments aside, I have yet another issue with the ranking system. Why rank, when the world is moving to rating? Why not rate the institutions?

Ranking universities is an important exercise, but the selection of variables is an issue that can hardly win the consensus of all interested parties, says Jeremic. “I completely agree that some form of ‘rated’ universities approach could be an interesting alternative to the quantitative ranking approach.”

2 comments to “Reputation Race in Higher Education is Getting Bigger, but is it Getting Better?”

  1. Akshat Rathi says:

    I don’t buy into the rankings hype, but I don’t see how you could come up with a fault-free (or even less faulty) rating method. Who will rate those institutes? The people who graduated or worked there will tend to have biased views, the people who are outside won’t get the right picture. In that sense a mix of quality and quantity, as is the case in some rankings, seems like a better idea. No?

    • Seema Singh says:

      @Akshat: I don’t have any good idea ‘how’ these can or should be rated. All I know is that ratings optimise for different variables, and there are different variables here which are sometimes in tension with each other. Who will rate? I think it will depend on what specific challenge or problem the rating agencies and institutions want to address, how they define success, etc. I haven’t asked many institution heads how they benefit from ranking, but I can see that students don’t benefit as much as they should. Perhaps, along with ranking, they need a rating system too.
