Higher education is not like football, and universities are not like football clubs.
The comparison would be risible were it not for the myriad ways the press, policy-makers and universities themselves behave as if learned institutions are playing in a beggar-thy-neighbour league table. As we enter the annual flurry of competing university ranking tables, there is a collective obsession with who’s up and who’s down. Every institution highlights carefully selected measures on which it can claim to have outplayed its competitors, or at least to have improved its relative standing. Its latest position in international rankings is then widely paraded as evidence of the successes and/or failings (often both) of government policies towards higher education.
There is a thriving international business in formulating and publicising university rankings, with at least 16 competing international ranking products, each spawning numerous spin-offs and a plethora of commercial conferences and publications. Universities and national governments have been only too willing to join the game by prioritising improved league table rankings in their strategic plans and policy goals. But as the university system faces unprecedented criticism over its relevance and value to 21st century lives, it is time to ask whether the obsession with rankings has fuelled the problems facing the sector. Instead of holding a mirror up to the variety of universities and their missions, have rankings worked to homogenise their priorities and offers?
Football league tables are based on the principle that similar teams compete in the same game, with the same measures of success or excellence. While different university ranking schemes may choose slightly different measures, they each assume that all institutions are playing in the same reputation game, and can validly be judged and compared on the same set of criteria and weightings.
Even if this were true, the validity of such rankings would be undermined by serious methodological problems: questionable data specifications and verification, doubts over whether the chosen metrics really measure excellence, ample scope for gaming the input data, and the inherent bias of such systems towards the established winners.
But these are not the main reasons for wanting to end the tyranny of rankings. Management guru Peter Drucker is widely credited with observing that “what gets measured gets managed”, adding “even when it’s pointless to measure and manage it, and even if it harms the purpose of the organisation to do so”. The depressing similarities in the targets set out in most universities’ published strategies suggest that they have fallen into the trap Drucker identified. They are implicitly allowing those doing the rankings to set their institutional agendas around measures which mean little to the outside world and on which many struggle to succeed.
A particular problem created by most ranking schemes is the assumption that the primary business of universities is the production of academic research papers. Reputation among academic peers (including citations) is the dominant measure of excellence in all the league tables, and the quality of the education provided to students is both judged by and outweighed by research factors. As Bahram Bekhradnia, President of the Higher Education Policy Institute, has observed, “the only way of improving performance in international rankings is to improve research performance” – with much the same true for domestic comparisons. In consequence, university priorities and national policies are too often focused on academic research reputation rather than on the needs of students, employers and wider society. Ironically, university strategies that prioritise those needs, for example by emphasising access opportunities and practical solutions, are likely to be penalised in their league table standings.
The introduction of the Teaching Excellence Framework was intended as an antidote to universities’ preoccupation with academic research, but has already generated yet another hierarchy of winners (gold rated) and losers (bronze rated) based on arbitrary and generic metrics.
Comparative rankings and league tables perpetuate an outmoded and inward-looking view of universities, ignoring the external demands on them to be relevant to a huge diversity of individual, corporate and societal needs. The obsession with inter-university rankings only feeds the perception of university education and research as an exclusive club, for which there is a growing number of more cost-effective alternatives.
It is a widely held view that the fixation on league table success at all costs has led to professional football ‘losing its way’, neglecting its fans and community roots in the pursuit of short-term results and financial gain. So perhaps the comparison with universities is not so risible after all?
Mike Boxall is a higher education expert at PA Consulting Group