The GEM Report 2017 will look at how we can improve accountability in education. Hoping to engage people in the types of issues our Report will address, we are running a series of twitter polls to accompany our online consultation. The third in our twitter poll series asked whether people felt that international rankings of universities make higher education more accountable. The answers sat more firmly on the fence than in our previous two polls:
Since they first appeared in the 1980s, university rankings have continued to grow in number and popularity as basic reference points for the performance of tertiary education institutions. In 2015, eleven global ranking systems produced updated lists.
The initial idea behind university rankings was twofold: first, to help keep university programmes relevant, and second, to push universities to provide the best quality education by creating a form of competition. Students were then meant to use these rankings to explore higher education options beyond their own countries’ borders and to compare key aspects of institutions’ research and teaching missions. By design, therefore, rankings are set up to help students and governments hold universities to account; in reality, they are hotly contested. Why?
Rankings attract attention because they are simple to understand. However, they have methodological flaws. First, they exclude the vast majority of universities around the world, collecting information only on those whose faculties have produced at least a few hundred publications in the prior year. This creates a near obsession with the status of the top 100 universities, none of which are in Africa, Latin America or the Arab world, for instance.
In addition, governments frequently allocate resources according to these rankings. This raises the question of whether the assessments lead universities to think strategically about partnerships, programmes, exchanges and academic disciplines, rather than simply about providing a quality higher education. With resources tied to them, the rankings may also reinforce or deepen divides, leaving lower-ranked universities struggling to secure the funds that would help them improve. Indeed, many believe the rankings end up being about resources rather than having anything to do with the provision of quality education.
Opponents also argue that, despite methodological improvements, university rankings are still primarily marketing tools that rely heavily on institutional reputation and faculty publications. As currently designed, many feel that rankings are not based on indicators of teaching quality or student learning that are reliable, valid, standardised and internationally comparable. Others feel that the rankings should at the very least also reflect important differences in the national or regional context in which universities offer specialized degree programs.
Weighted rankings undermine the likelihood of collaboration between universities in high income and low income countries. They also encourage the international migration of educated young scholars (brain drain) and, ultimately, increase inequity. What might have been set up to increase accountability, therefore, many argue is doing the exact opposite.
It’s a provocative debate. What do you think? Join the consultation.