Rankings of higher education institutions are everywhere these days, and almost every diligent international student or anxious parent will refer to them. But such lists have been controversial from the get-go...
Eddie Levisman is an educational counsellor and international education consultant, who specialises in helping students to make the transition from high school to university.
Last week, during his inauguration of the 136th ordinary session of Congress, President Mauricio Macri forcefully suggested that the education system in Argentina should implement some sort of ranking system for its institutions. “We must be able to know how the schools our children attend are performing,” the president declared.
The proposal immediately drew objections and criticism from many sides, and although this particular call for rankings is quite different from, say, university rankings around the world, two things are evident. First, people want to be able to measure and quantify the value of activities in which they invest time and resources. Naturally, we want to know if we are getting a good return on our investment, and education is no exception. Second, it is clear that the very issue of ranking educational institutions is, by definition, controversial and complex.
Ranking universities is a difficult task because of the sheer quantity and diversity of factors involved. This may sound irrelevant in Argentina, where the number of higher-education institutions is approximately 100, and there is something approaching a consensus about the relative quality of such places. Consider, however, that somewhere like the United States has close to 4,000 post-secondary institutions! Comparing and rating so many institutions, which represent enormous diversity – in terms of size, location, weather, their respective curriculums, resources, age, philosophy and whatnot – is, some say, an impossible task to accomplish objectively and with any degree of utility. The debate about rankings is as old as rankings themselves, yet it hasn't stopped them thriving. Data backs this up: in the month that such rankings come out, for example, the US News & World Report website records more than 10 million visitors!
Those who support ranking systems generally hold to the idea that it is impossible to place a value on a product without a scientific basis for comparing it with its competition. Those who oppose that notion, however, maintain that ranking institutions is like comparing apples and oranges, because each one is unique, just as each student is.
The main reason ‘ranketeers’ are popping up everywhere nowadays is that in today’s market there is so much educational supply that choice is growing by the day. People have become smart consumers when it comes to selecting higher-education opportunities. Both parents and their kids today are technologically adept, and they are quick to research options, compare data and make decisions – based on analysis, not just intuition. The purpose of ranking institutions, therefore, is to provide hard data to consumers, to aid them in making a choice.
The question that remains, of course, is not whether rankings should exist but whether they accomplish what they set out to do. Are rankings reliable? Are they accurate? Are they free of methodological bias or intentional, self-interested distortions? Which company should you trust? What about scam ranking publications that lack professional credibility, of which the consumer is unaware? All these questions and many more still linger and, I dare say, will always be with us.
The methodology for creating such lists varies among companies, which is one of the problems to start with. The largest, if not the oldest, ranking company – The US News & World Report – relies on seven weighted variables in this order: undergraduate academic reputation (22.5 percent), graduation and freshman retention rates (20 percent), faculty resources (20 percent), student selectivity (15 percent), financial resources (10 percent), graduation rate performance (7.5 percent), alumni donations (5 percent).
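To make the mechanics concrete, the composite score behind such a list is essentially a weighted sum. The sketch below uses the US News weights cited above; the per-variable scores for the institution are entirely hypothetical, and the real methodology involves far more normalisation than this.

```python
# Illustrative sketch of a weighted composite ranking score.
# Weights are the US News & World Report weights cited in the article;
# the institution's per-variable scores below are hypothetical.

WEIGHTS = {
    "academic_reputation": 0.225,
    "retention_and_graduation": 0.20,
    "faculty_resources": 0.20,
    "student_selectivity": 0.15,
    "financial_resources": 0.10,
    "graduation_rate_performance": 0.075,
    "alumni_donations": 0.05,
}

def composite_score(scores: dict) -> float:
    """Weighted sum of per-variable scores (each on a 0-100 scale)."""
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)

# A hypothetical institution scoring 80 on every variable ends up at 80,
# since the seven weights sum to exactly 1.0 (i.e. 100 percent).
uniform = {k: 80 for k in WEIGHTS}
print(round(composite_score(uniform), 1))  # 80.0
```

Note that everything hinges on the chosen weights: shift a few percentage points between variables and institutions reshuffle, which is one reason different publishers produce different orderings from similar data.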
Without going into too detailed an analysis, let me point out, by way of example, how rankings can be distorted by two of these variables. The first is academic reputation. This variable is measured by surveying the presidents of universities across the land, asking them to rate institutions on a variety of categories. Now here is the question: what does any given university president really know about the qualities of another institution, other than what they hear or read in the media (which you can also do for yourself)? The second example involves the variable of student selectivity. The ranking considers the ratio of students admitted to applications received: the lower the ratio, the higher the rank. Two questions come to mind at once. First, what is the relationship between low admission rates and the quality of education? The answer is: probably not much of one. Second, can universities manipulate this ratio by investing in massive advertising campaigns, thereby raising the number of applicants while admitting the same number of students and so lowering the coveted ratio? The answer, of course, is a resounding yes!
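The selectivity manipulation described above is simple arithmetic, and a few hypothetical numbers show it plainly: double the applicant pool while admitting the same class, and the acceptance ratio halves with no change whatsoever in educational quality.

```python
# Hypothetical numbers illustrating how an advertising campaign can lower
# the acceptance ratio (and so raise the "selectivity" score) without any
# change in who is actually admitted.

def acceptance_rate(applicants: int, admitted: int) -> float:
    """Admitted students divided by applications received."""
    return admitted / applicants

before = acceptance_rate(10_000, 2_000)  # 0.20 — before the campaign
# The campaign doubles applications; the admitted class is unchanged.
after = acceptance_rate(20_000, 2_000)   # 0.10 — looks twice as selective
print(before, after)
```

The admitted class, the faculty and the classrooms are identical in both scenarios; only the denominator has changed.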
Status and hierarchy
David Larabee, a historian specialising in education at Stanford University, provides four basic rules that affect the status and hierarchy of an institution’s ranking.
The first rule asserts that age trumps youth, which is why it is no surprise that the oldest institutions in the United States are overrepresented in the top 20 percent of rankings. Some years ago I had the honour of lunching with the Dean of Admissions at Harvard University, who offered a memorable statement: “Even if – and that’s very unlikely – Harvard became a second-rate school,” he said, “it would take the world 100 years to notice it.” The second rule states that the strongest rewards always go to those at the top of the list, thus generating a self-perpetuating loop. The third rule, a clever observation, points to the fact that when lesser schools try to imitate the top ones (which they often do), they may gain some points but, more than that, they perpetuate the very advantages the top schools already have! Finally, the fourth rule observes that the system expands by adding institutions rather than by increasing enrolment numbers at existing institutions, thus creating lesser schools and, again, perpetuating the leadership of those already at the top.
“Essentially, then, the US News rankings simply tell us what we already know intuitively: status comes with age; rewards go to the best situated; schools will always be jockeying (sometimes successfully) for position up to a point; and the system of higher education will always act to protect this hierarchy by creating ways to take the pressure off it.”
In other words, the rankings nourish the myth that the richest, most selective colleges have cornered the market on superior education; they do not adequately recognise public institutions that prioritise access and affordability, nor the particular virtues of individual campuses. As the writer Malcolm Gladwell once put it: “Who comes out on top, in any ranking system, is really about who is doing the ranking.”
My advice is to adopt a middle ground when looking at rankings. They can be useful instruments to support your decision-making process, but they should not be the single factor you consider in your quest. Use rankings as a guideline, and expand your search to more specialised lists – by major, quality of life, international student support – and to as many other factors as form part of what you value.
A student can be happy and thrive in many universities, but equally, there are many factors to consider when making those determinations – ones that you will not find in any list of rankings.