Please note! The Swedish National Agency for Higher Education was closed down on 31 December 2012. It has been replaced by two new agencies: the Swedish Council for Higher Education and the Swedish Higher Education Authority. This website will continue to operate, as the new agencies link to information it contains.

Report 2009:27 R

Ranking of universities and higher education institutions for student information purposes?

Ranking of universities and other higher education institutions has become increasingly common in recent years. The purpose of rankings is frequently to make complex circumstances more transparent, especially for students. In this report, we describe the ranking phenomenon in general and, more specifically, a number of existing ranking systems. We also discuss the prerequisites for, and the pros and cons of, ranking as a source of information for students. This report presents an assignment by the Government to Högskoleverket [the Swedish National Agency for Higher Education] to survey and analyse the ranking of universities and other higher education institutions. The report comprises three main sections. In the first chapter, we describe the ranking phenomenon in general terms; in the second, we survey a number of existing ranking systems; and in the third and final chapter, we discuss ranking as a source of information for students.

The ranking phenomenon has expanded in line with the supply of higher education. Rankings are sometimes described as a way of simplifying and clarifying a complex scenario for students and other interested parties. Even if the ranking concept may cover many aspects, most rankings share a common factor: they present indicators of quality — explicit or implicit — that are weighted to produce an outcome which, in turn, is ranked against all other such results. In other words, ranking is an attempt to measure the quality of higher education or research, but it differs from many other forms of quality assessment because it is relative — there are no absolute criteria or norms for what may be regarded as “minimum quality”. This is partly because the ranking designers are often commercial entities — newspapers or magazines — with no direct responsibility for the higher education sector.

There is extensive discussion of the ranking concept. Ranking is frequently defended on the grounds that it is useful for students and for other (less well-informed) stakeholders. There is, however, widespread criticism of which factors are measured and of the way measurements are carried out. It is hard to determine the extent of the impact of rankings on higher education institutions and on students. The relatively few studies that have been conducted do not provide any unequivocal answers: there are both positive and negative effects on higher education institutions, and some students use rankings when making their choices, but most do not. The international agenda involves, for example, questions regarding the quality of rankings, European challengers to the international ranking systems, and multidimensional, interactive rankings.

Universities and other higher education institutions are ranked all over the world. A few rankings are international, in the sense that they rank higher education institutions in several countries. Such international rankings are perhaps subject to the toughest criticism, but they have considerable impact. We have decided to study in more detail a number of national rankings that are designed to provide information for students. The United States has a wide range of ranking systems and a relatively long tradition of ranking, but considerable protests against ranking have also been voiced; the same applies to Canada. Australia has a long tradition of extensive student questionnaires, which have ultimately become both a source of ranking information and a basis for the allocation of resources. In Great Britain, higher education institutions are ranked by several major newspapers, with the object of informing students before they make their choices. Germany represents perhaps the best-known example of multidimensional, interactive ranking. Sweden has a rather short and very limited history of ranking universities and other higher education institutions.

Students invest both time and money in their education. As a result, it is important for potential students to have access to comprehensive and relevant information about higher education before they choose. Based on our survey of rankings in general, and a couple of specific ranking systems, we conclude that most rankings do not provide all-round, relevant or reliable information about the quality of higher education.

Weighted rankings of entire higher education institutions are particularly problematic. Quite simply, they provide too little information. They also assume that all the potential target groups share a common and very precise definition of quality — a definition which may be seriously questioned in terms of indicators, methods, reliability and, in particular, relevance. Multidimensional rankings, or rather student information systems, in which students or other stakeholders are themselves permitted to define quality, are a more attractive alternative. Quality and the selection of indicators present problems in this context too, however.

There is a great deal of information about higher education in Sweden, and much of this information is of satisfactory quality. But the quality of information to students does not merely depend on quality-assured data sources and the reliability of the information. It is also important that the information is used for the purpose for which it was intended. Very little quality-assured information is suitable for ranking of the kind that we have studied.
It is obviously desirable that the information is transparent and accessible for students who are interested in comparing different higher education institutions or different programmes, and there is certainly potential for further development in this area. A number of issues require further investigation, however. The question of what the target group — the students themselves — wants is also crucial. An awareness is also required that simplifying information entails a loss of information, and that information simplified to the extent called for in weighted ranking lists is hardly information worth the name, in the context of the quality of higher education.
