
Academic Rankings—Where Are They Heading?

Waldemar Siwinski

Received in November 2016

Waldemar Siwinski is President of the Perspektywy Education Foundation and Vice President of the IREG Observatory on Academic Ranking and Excellence. Address: Nowogrodzka 31, 00-511 Warsaw, Poland. E-mail: w.siwinski@perspektywy.pl

Abstract. The global ranking system is in a state of violent transformation. We can already see the emerging contours of a new ranking system with four distinguished elements: regional systems, customer-centered systems, multi-league systems and discipline-based systems. To reflect regional characteristics, including language and culture, global ranking systems should become regional ranking systems. To satisfy readers' different expectations towards rankings, ranker-centered systems should become customer-centered systems. To reflect different institutional missions, sizes and locations, current unified ranking systems should become multiple ranking systems. Institutional ranking systems should become discipline-based ranking systems in order to reflect disciplinary differences. One of the most significant directions of change in rankings is the search for a way to include missions other than research in the international rankings; especially important here are such aspects as excellence in teaching and the so-called third mission, or the university's social mission.

Keywords: universities, rankings, regional rankings, national rankings, global rankings, multiple ranking systems, rankings by subject, university's social mission.

DOI: 10.17323/1814-9545-2017-1-158-166

If you asked me whether there is a place for a new ranking in the global higher education landscape, my answer would be: YES. The global ranking system is in a state of violent transformation. Researchers and experts on higher education, as well as those involved in producing rankings, can see it.

In order to illustrate my thesis, I refer to the findings of two researchers in the field of higher education: Jung Shin (South Korea) and Robert Toutkoushian (US). You can find their analysis in their book University Rankings: Theoretical Basis, Methodology and Impacts on Global Higher Education (Springer, 2011). They point to two different but complementary approaches to quality in higher education.

The first, egalitarianism (from the French égalité), is closely linked to the phenomenon of the massification of higher education. Since up to 50 per cent of high school graduates in many countries continue education at a higher level, it is necessary to assure at least a minimum quality of education at this level. This can be achieved by the introduction of a quality assurance system through the process of accreditation. But accreditation is now, in fact, a kind of certificate confirming that a given higher education institution meets only the minimum required standards. It sets, in other words, a "bottom line" for the level of quality in higher education.

The other, elitism (from the French élitisme), aims to stimulate the highest quality and answers the call for excellence. Rankings have become a tool that stimulates quality. The combination of a simple message and effectiveness contributes to rankings' growing presence and popularity. The simplicity of rankings is often described by critics as an oversimplification and a shortcoming.

All in all, I think the positive sides of rankings outweigh their shortcomings and limitations.

Comparing these two approaches, we can clearly see that accreditation alone does not solve the issue of quality in higher education. Despite the existence of several dozen accreditation committees and organisations in Europe, the European Commission has recently sounded the alarm. The Commission has realized that the gap between European universities and American and Asian universities is widening and hence some radical efforts are required. This means that accreditation has failed in this respect: it is effective, but only at establishing the minimum quality level.

Accreditation does not assure competitiveness. Also, the accreditation system suffers from a good deal of inertia. Rankings do not have such limitations.

In fact, ranking provides a fuller picture of universities, since it takes into account more factors and indicators and analyzes them in greater depth. Rankings, updated annually, are also more up to date.

The analysis done by Shin and Toutkoushian shows that, at the moment, we are in a period of transition. We can already see the emerging contours of a new ranking system with four distinguished elements:

Regional systems. To reflect regional characteristics, including language and culture, global ranking systems should become regional ranking systems.

Customer-centered systems. To satisfy readers' different expectations towards rankings, ranker-centered systems should become customer-centered systems.

Multi-league systems. To reflect different institutional missions, sizes and locations, current unified ranking systems should become multiple ranking systems.

And the most interesting element today is the discipline-based system. Institutional ranking systems should become discipline-based ranking systems in order to reflect disciplinary differences.

I am referring to the interesting analysis of these two researchers because I believe their findings have been confirmed in recent years.

As an author of rankings, an analyst and a person involved in the practical side of the ranking process, let me present some of the trends I see emerging in rankings.

It is fascinating how rankings and their role have grown. It must be remembered that rankings are still rather young. Interestingly, their timing seems to be correlated with other innovations of our era.

The first professional national ranking, the famous US News Best Colleges, appeared in 1983, at the same time as the Internet emerged. The first global ranking, the Shanghai Ranking, 2003, is a contemporary of Facebook.

The ranking family is growing fast; on average, every year one new international, two regional and three national rankings are published. The growth is impressive.

Analysis of national rankings shows a striking increase in numbers. During the past 15 years, 45 new national rankings have appeared. You will find all these rankings on the IREG Observatory website under "IREG Inventory of National Rankings". The Inventory is constantly updated.

All these new rankings—national, regional and global—try to distinguish themselves from each other through a modified methodology. This generates strong activity in the field of methodology.

Of course, the changes would not be possible if not for the new, ever-improving databases. The availability of electronic databases, especially the Web of Science offered by Clarivate Analytics (formerly Thomson Reuters) and Scopus by Elsevier, has created new possibilities. The very existence of these databases and easy access to them have radically altered the system of information on science and higher education. They facilitate the process of ranking.

Another example: the IREG List of International Academic Awards published by the IREG Observatory on Academic Ranking and Excellence. This is an attempt to go beyond the narrow group of Nobel Prize and Fields Medal winners. The first edition of the List includes 99 awards of an international character and is an instrument to be used by ranking organizations worldwide.

Changes in the ranking methodologies are also a reflection of the expectations of various groups of stakeholders.

For prospective students, rankings serve as a tool for making educated choices about an institution and a field of study.

Researchers use rankings to compare where they stand against researchers in other institutions or countries.

For university managers, rankings are a tool for instilling a culture of competitiveness in the staff. They also help monitor the progress of the implementation of reforms.

Employers expect that rankings will tell them which universities to look to for the best future employees.

Politicians, with the help of rankings, hope to limit the risk of their investment decisions.

Rankings also play an important role in creating the image and prestige of the country. The number of universities a country has in its top group serves as one of the key indicators of the country's standing in the international community. This is why the struggle to occupy high positions in rankings has a special meaning for politicians.

Let's spend a moment on the expectations of politicians and universities. Politicians want to have a simple tool to evaluate institutions and monitor the implementation of reforms. Accreditation cannot serve as such a tool since by its very nature it is a slow and bureaucratic process. Its other disadvantage comes from the fact that accreditation allows for establishing only the lowest acceptable level of the quality of teaching.

And here comes the growing role of the annually published rankings. They offer a handy tool for monitoring reform. They also mobilize and motivate institutions to be better, to be the best!

But rankings have limitations too; they cannot embrace the entire complexity of a higher education institution. They have their weak sides. They can even be harmful—brandishing tremendous power while suffering from substantial, though unavoidable, simplification.

We are also witnessing a race in methodology that brings about some interesting results. It should be mentioned that the improvement and perfection of ranking methodology is, in considerable measure, linked to the needs of the so-called Excellence Initiatives that governments in a number of countries created to accelerate the development of a select group of institutions.

Jamil Salmi and Isak Froumin, international experts in the field of higher education, calculated that since 2000 over 30 such excellence programs have been launched in 20 countries. Their total cost exceeds 40 billion US dollars. As a consequence, a group of so-called "Accelerated" World-Class Universities has emerged. These institutions received additional funding to speed up, not unlike booster rockets used in take-off by military jets. Many Excellence Initiatives, including Russia's 5-100 Project, consider rankings a useful tool for monitoring the implementation of reforms. The Excellence Initiatives have forced rankings to adapt their methodologies. These changes were discussed at the International Conference on Excellence Initiatives organized at the initiative of Prof. Froumin in St Petersburg last June.

Here are the main directions of change in rankings:

Trend # one. The academic community in many countries has, for a long time, been suggesting that rankings should include a larger group of institutions. For the first decade of their existence, the international rankings had been operating within the magic circle of the "Top-100", "Top-200" or, at best, "Top-500". At the same time, there are close to 20,000 higher education institutions in the world. Analysis of a group of the leading 100 (0.5% of the total number of institutions) may very well be a subject of great interest for higher education experts and the press, but it is grossly unfair to a large number of universities as well as the countries where these universities function.

The limit in the number of institutions that are ranked is a result of the methodology these rankings are based on. This is particularly true in the case of the Shanghai Ranking. However, rankings have appeared, such as the University Ranking by Academic Performance (URAP) of the Middle East Technical University in Ankara, that have overcome this limitation. The URAP ranking includes 2,000 institutions.

Thanks, to a great extent, to pressure from the Russian universities of the 5-100 Project, some of the main players such as Times Higher Education and QS have also significantly increased the number of institutions in their rankings. This year THE published a list of 900 universities (it started off with 200). QS published a list covering 800 universities, doubling the original number. The US News Global Universities Ranking published a list of the 1,000 best universities earlier this year.

This trend will only strengthen. In one year's time, the ranking of 1,000 universities will be standard, and in three years international rankings will cover up to 2,000 institutions, or 10 per cent of all higher education institutions. This, I think, will satisfy the ranking ambitions of many countries and their universities.

Trend # two. The emergence and development of rankings "by subject". The benefits of rankings "by subject" seem so obvious that it is hard to understand why the main ranking institutions ignored this group of rankings for so long. It is quite natural that in every university there are some strong and some weaker departments. In the overall rankings these differences get lost. Several months ago, I published an article in University World News under the title "The Era of Rankings by Subject is Coming". I am glad my prediction appears to have been accurate.

Two questions emerge here. How many disciplines and how many universities should we analyze?

We can note, with satisfaction, that the number of subjects has been growing fast. This year QS has published a ranking of 43 subjects, URAP Ranking—41, and US News Global Ranking—27. Even the Shanghai Ranking has increased the number of ranked disciplines from 5 (mathematics, physics, chemistry, computer science and economics/business) to include, for the first time, 7 engineering disciplines.

This year THE published a ranking in eight broad fields, but it has already announced its intention to publish, in the future, a ranking covering several dozen fields of study.

I think it is realistic to assume that in the next few years rankings will include a minimum of 50 subjects and they will cover no less than 500 institutions or faculties.

In spite of the progress in the sphere of rankings, there is still much to be done, especially regarding rankings "by subject".

The main challenge facing the authors of rankings by subject is how to define the critical characteristics of a given discipline and then to find indicators that best reflect these characteristics.

The professional literature on quality in higher education is in agreement that international rankings are only doing well in the area of "science". This is quite natural and intuitive, as the results in this area take the form of publications. By comparing the numbers of publications and calculating the Hirsch index, it is possible to compare institutions or faculties in such fields as mathematics, physics, chemistry or others falling into the "science" group.
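For readers less familiar with the metric, here is a minimal sketch (in Python, with made-up citation counts) of how the Hirsch index is computed: it is the largest number h such that an author, department or institution has at least h publications cited at least h times each.

```python
def h_index(citations):
    """Return the largest h such that at least h papers have >= h citations each."""
    ranked = sorted(citations, reverse=True)  # most-cited papers first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # the top `rank` papers all have at least `rank` citations
        else:
            break
    return h

# Hypothetical department with five papers cited 12, 6, 4, 2 and 1 times:
print(h_index([12, 6, 4, 2, 1]))  # -> 3
```

Because the input is nothing more than citation counts, such comparisons are straightforward for publication-driven fields; this is precisely why the approach becomes less convincing for the practice-oriented disciplines discussed below.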

The use of indicators based on publications as the main criterion to assess quality in other fields of research seems less obvious, especially when we prepare rankings addressed to prospective students.

If we want to build the house of our dreams and are looking for a good architect, we do not ask him for his number of citations or his Hirsch index. We would rather ask him to show us what he has already built, and we would ask people whether they are comfortable living in those houses.

The same is true in medicine. In looking for a good hospital, we are not interested in the publications and Hirsch indexes of the doctors. Instead, we want to know the patients' opinions and any assessment made by a professional medical association.

Such examples can easily be multiplied, but what matters most is the conclusion that each discipline has its own hierarchy of values. Building new rankings "by subject" will not be easy, but if we want rankings "by subject" to meet expectations, we absolutely have to do it.

Trend # three. More and more regional rankings are appearing. This is quite understandable, as both student and staff mobility and academic cooperation primarily take place within a region.

Most attractive, from a marketing point of view, are the regional rankings of Asian and Arab institutions. Also interesting is the Latin American region; less so Africa.

The main problem with the current generation of regional rankings has to do with methodology. They look like twin brothers of the global rankings: in practical terms, they are extracts from the global rankings on which they are based. I find it difficult to consider them autonomous, self-standing rankings.

Trend # four. Worth noting is the renaissance of the national rankings. Every year a few new national rankings appear; one such ranking has recently been published in India. Their strength comes from the fact that they can cover practically all of the institutions in a country. In addition, institutions can be evaluated through criteria and indicators that can be selected more accurately, since all the institutions function in the same cultural and legal environment. There are also attempts under way to build "bridges" between national and global rankings.

Trend # five. New dimensions. This is the search for a way to include missions other than research in the international rankings; especially important here are such aspects as excellence in teaching and the so-called third mission, or the university's social mission.

This is, perhaps, the biggest challenge ahead for the rankings. So far, we do not see any easy answers here. There are no internationally agreed standards either. However, some attempts to find possible solutions are being made.

I am fully aware that this unappreciated or missing dimension of the rankings is particularly hurting Russian universities. Ranking organizations addressing this dimension mostly dance around the numbers related to teaching staff. The search is on for a new approach to the problem.

Speaking of the search for ways to properly reflect the third mission in the rankings, it is worth mentioning that, on the initiative of the European Commission, an interesting project called the Third Mission Ranking Project (E3M) has been carried out. The project did not lead to a new ranking, but a number of findings and conclusions gathered in its "Green Paper" are worth studying. More information on the project can be found at www.e3mproject.eu and http://he-ranking.blogspot.com

The task taken up by the Russian academic community, represented by the Union of Rectors, to create a new ranking that will, in significant measure, reflect the third mission aligns well with the global trend in ranking. It also offers a chance, though not a risk-free one, to widen the range of criteria now used in rankings.
