

INTERNATIONAL UNIVERSITY RANKINGS: REVIEW AND FUTURE PERSPECTIVES

1PhD in Management Nino Tandilashvili, 2Professor of Economics Marina Tabatadze

1France, Paris, Paris Ouest Nanterre La Défense University;

2Georgia, Tbilisi, Ivane Javakhishvili Tbilisi State University.

Abstract. International university rankings have become particularly important over the past couple of decades. When only national university rankings existed, they were popular and important in some countries, while others did not pay much attention to them. But with the introduction of international rankings from the early 2000s (the Shanghai Jiao Tong University ranking in 2003, for example), the importance of rankings grew significantly worldwide.

The present article reviews the methodologies of international university rankings and proposes a typology of them. It also analyses the main criticisms that academia has expressed towards them.

Keywords: University rankings, performance evaluation, higher education management.

1. Introduction

Evaluation and performance appraisal in higher education have always been contentious. Ranking authors claim that rankings are an objective means to judge university quality, improve transparency and allow students to make informed choices. The main advantages of university rankings for university management seem to be (a) information on the performance of universities for students and junior scientists; (b) a comparative assessment of universities at the national and international level; and (c) an account of the universities, which are being given more and more autonomy (Hazelkorn, 2011). From this point of view, university managements assume that high rankings can boost an institution's "competitive position in relationship to government" (Hazelkorn, 2011, p. 91). However, critics argue that the results of rankings are not that neutral and that they strongly depend on the choice of indicators. They also argue that rankings do not address the various important functions of higher education and that the indicators used in rankings measure distant proxies rather than quality itself (Van Raan, 2005; Brooks, 2005; Salmi and Saroyan, 2007; Marginson, 2007; Hazelkorn, 2011; EUA, 2011; Ter Bogt and Scapens, 2012).

However, as modern society likes to see everything arranged in neat league tables, as is done for sports teams, artists, products and organizations, in order to make a choice between them, rankings have been widely implemented in higher education. The literature is rich in describing the reasons for the introduction of rankings. One of the main reasons concerns the increasing demands for external accountability and public transparency (Allen & Bresciani, 2003; Schneider, 2002; Shavelson & Huang, 2003). To answer this demand, a number of national and international actors became involved in the production of different types of assessments. The need for accountability and responsible behavior was stressed through a greater emphasis on output controls (Hood, 1991). Another reason is that the idea of reputation gained importance and was translated into a good position in different classifications, rather than into an original contribution to knowledge (Willmott, 1995).

Higher education institutions (HEIs), like other commercial organizations, are now competing to attract funding and customers (Gioia and Thomas, 1996). Evaluating their performance includes judging the quality of teaching and research performed by the institution. And as "quality" is a very subjective and highly debatable concept, university rankings have become a topic of discussion. Hetzel P. (2009) even defines ranking as a "system of values", which is more than a simple communication or decision-making tool. It is rather a "domination of the ideas of those who contribute to the production and diffusion" of rankings1. The external assessments conducted by a third party, as is the case for international rankings, rely on "questionable, amendable, transposable and explicit methods". Rankings thereby become an effective vehicle for stimulating reflection by professionals and experts on teaching and research activities, and a channel for power politics in fundamental debates.

The report of the European Commission defines ranking as an indicator of strategic positioning in a competitive environment. "Europe is no longer setting the pace in the global race for knowledge and talent while emerging economies are rapidly increasing their investment in higher education... too few European higher education institutions are recognized as world class in the current, research oriented global university rankings... And there has been no real improvement over the past years." (European University Association, 2011). Competition is at the heart of the European HE objective: "The Union has today set itself a new strategic goal for the next decade: to become the most competitive and dynamic knowledge-based economy in the world capable of sustainable economic growth with more and better jobs and greater social cohesion"2.

2. Historical background of international university rankings

Many authors identify two periods of university rankings: before the 1980s and after. Ter Bogt and Scapens (2012) discuss the shift from traditional Performance Management Systems (PMS), which had a developmental role, to systems of rankings and accreditations, which are of a more judgmental type. Modern university rankings are an example of judgmental evaluation, as they seek to evaluate universities in a quantitative way, vis-à-vis other universities. Another difference between earlier assessment methods and modern ones is that earlier assessments were mostly focused on a single evaluation dimension: reputation, faculty research or student experience, and were more subjective, as they were based on opinion surveys. More recent assessments cover more multidimensional areas of evaluation and tend to be more objective by introducing quantitative methods (especially for research activities). However, because of a lack of sufficient theoretical clarity or methodological precision, "many assessments have fallen short of their larger goal of measuring quality" (Brooks, 2005, p. 4).

The very first university classification had already been compiled in the US in 1870, when the Commission of the US Bureau of Education began publishing an annual report of statistical data classifying institutions. However, a more widely accepted account of the origin of university quality classification is the 1920s, when academics themselves initiated comparisons between HEIs in the US. The most widely quoted first study was made in 1925 by Raymond Hughes, president of Miami University of Ohio. Hughes asked his faculty for the names of distinguished scholars in 20 fields to create a list of respondents to his reputational survey. From this survey he created a ranking of the 38 top Ph.D.-granting institutions out of the 65 then in existence. In 1934, he prepared a similar ranking of graduate departments at 59 institutions. Each of his studies was based on the opinions of 20 to 60 faculty members. In 1959 Hayward Keniston also used a reputational survey to determine how the University of Pennsylvania ranked compared to 25 other leading institutions

1 Hetzel P., 2009, Rapport du Sénat n° 577, 2009-2010, p. 50.

2 Lisbon European Council 23 and 24 March 2000, Presidency Conclusions.

of that time. He asked the chairmen of 24 departments at each of the institutions analyzed to develop his ranking of the top departments (Cartter, 1966; Roose & Andersen, 1970).

As we can see, early assessments were mostly of the reputational type (Brooks, 2005). They served to show how one institution was placed relative to others according to its reputation. The methodologies of these assessments were very weak by today's standards (Cartter, 1966): scope, population, frame of reference, etc. were insufficiently defined. In 1966, the American Council on Education began more systematic assessments of graduate programs (Cartter, 1966). By expanding the number and ranks of faculty members interviewed for its opinion survey, the assessment aimed to surmount the methodological weaknesses of the previous studies. In 1970 Kenneth Roose and Charles Andersen expanded Cartter's methodology by increasing the number of fields and programs rated, to provide updated results for comparison with his analysis. These studies tried to create a more comprehensive assessment, in parallel with a change in the selection of survey participants. However, the ranking system remained unchanged: the assessments were initiated by academia for internal decision-making purposes, mostly used opinion-based surveys, and aimed to analyze the reputation of institutions and programs.

A more important change in ranking practices started in the 1980s in the US, when the National Research Council (NRC) produced the next reputational assessment of research-doctorate programs, evaluating 2,699 programs in 32 disciplines (Jones, Lindzey, & Coggeshall, 1982). This was the first time that a ranking was initiated from outside academia and, thus, aimed to combine a reputational survey with more objective, quantitative measures of quality (program size, characteristics of the graduates, support available for research, and publication numbers). Since 1982 rankings have been extended to undergraduate education.

International university rankings as we know them today appeared from the 1980s. Since the 1980s and 1990s indicator-based evaluations have been implemented for the evaluation of research and teaching (Daniel, Mittag, & Bornmann, 2007; Hazelkorn, 2011). One of the first international university rankings, the so-called Shanghai ranking, was published in 2003 and was quickly followed by further large-scale, indicator-based assessments of universities, which were published either as rankings (individual institutions ranked according to certain criteria) or as ratings (individual institutions assessed according to certain criteria).

Because these modern international rankings1 are intended to objectively assess universities' quality (Lukman et al., 2010), they are accepted as institutionalized mechanisms. They contribute to forming and diffusing an abstract model (Strang and Meyer, 1993) of international HEIs by setting criteria that evaluate organizations and individuals. Thus, the institutionalization of norms and values plays an important role in developing and legitimizing an abstract model (Strang and Meyer, 1993).

3. Ranking typology

For Hetzel P. (2009) there are two main types of international rankings: one type that aims to help students make a choice between HEIs, and another type that measures the intensity of the research carried out by HEIs. A study by the European University Association (EUA) proposes another typology according to the main tendencies in international ranking methodology and goals2:

• University rankings whose main purpose is to produce league tables of top universities only - the Shanghai Academic Ranking of World Universities (ARWU) ranking, mainly based on research indicators; the Times Higher Education (THE) ranking; the Russian Reitor ranking, etc.

• University rankings concerning research performance only - with or without league tables - the Leiden Ranking with no composite score, the Taiwan Higher Education Accreditation Evaluation Council university ranking (HEEACT) with a league table based on a composite score, and the EU Assessment of University-Based Research (AUBR), which is a research assessment methodology targeted at transparency for various purposes, rather than a ranking.

• University rankings and classifications using a number of indicators with no intention of producing composite scores or league tables - the original German Centre of Higher Education Development (CHE) university ranking was designed to help potential students choose a university according to their requirements, the EU U-Map classification to allow them to find and compare universities with similar profiles, and the EU U-Multirank ranking to compare the performance of universities in various aspects of their activities.

• Rankings that benchmark universities according to the actual learning outcomes demonstrated by students - the OECD Assessment of Higher Education Learning Outcomes (AHELO).

• Rankings of universities only according to their visibility on the web - Webometrics.

1 We will refer to these modern indicator-based international university rankings as International rankings or Rankings from now on in this paper.

2 European University Association, Rauhvargers, Andrejs (2011), "Global university rankings and their impact", p. 12.

Most global league tables also publish lists concerning the 'performance' of countries. These comparisons are made by counting each country's universities in the list of top universities, usually assigning a different number of points depending on whether the university appears in the Top 100, Top 100-200 or following top hundreds. The leading countries in the published lists are then the USA, the UK, Germany and France. However, if the published lists are 'normalized' by dividing the number of top universities by the number of inhabitants, new leaders appear, such as Switzerland, Sweden, Finland and Denmark (Salmi, 2010).
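
To illustrate the normalization described by Salmi (2010), the short Python sketch below divides each country's number of top-ranked universities by its population; the figures used are hypothetical placeholders, not actual ranking data.

# Illustrative sketch of per-capita normalization of league-table counts
# (hypothetical figures, not actual ranking data).
top500_counts = {"USA": 150, "UK": 40, "Germany": 40, "Switzerland": 8}
population_millions = {"USA": 310, "UK": 62, "Germany": 82, "Switzerland": 7.8}

per_capita = {
    country: top500_counts[country] / population_millions[country]
    for country in top500_counts
}

# Countries ordered by top universities per million inhabitants.
for country, score in sorted(per_capita.items(), key=lambda x: x[1], reverse=True):
    print(f"{country}: {score:.2f} top-500 universities per million inhabitants")

With such invented figures, a small country with only a handful of top-ranked universities can overtake much larger systems, which is the reordering effect the paragraph above refers to.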

4. Main criticisms of rankings' methodologies

Criticism of rankings mainly concerns their methodology. According to many authors, rankings neglect missions other than research and thus do not evaluate all universities under the same conditions (Brooks, 2005; Salmi and Saroyan, 2007; Marginson, 2007; Hazelkorn, 2011; Ter Bogt and Scapens, 2012). Global university rankings reflect university research performance far more accurately than teaching. The bibliometric indicators used for measuring research performance in most rankings also have their biases and flaws, but they are still direct measurements. Furthermore, these indicators do not show any direct link with the teaching and research quality of universities. One method is measuring the quality of education by the number of Nobel Prize winners among a university's alumni (ARWU) - this indicator can be considered as linked to the quality of education, but in a very special and rather indirect way. Judging teaching quality using staff/student ratios alone, without examining teaching and learning itself (THE-QS), is another extreme. Measuring the quality of research in all disciplines with similar bibliometric indicators is also questionable, due to important differences in research traditions between the social and natural sciences (EUA report, 2011). The table below indicates the criteria, indicators and weights used in the ARWU (Shanghai) ranking.

Table 1. Evaluation methodology of Shanghai Jiao Tong University ranking

Criteria | Indicator | Weight
Quality of Education | Alumni of an institution winning Nobel Prizes and Fields Medals | 10%
Quality of Faculty | Staff of an institution winning Nobel Prizes and Fields Medals | 20%
Quality of Faculty | [Top 200] highly cited researchers in 21 broad subject categories | 20%
Research Output | Papers published in Nature and Science | 20%
Research Output | Papers indexed in Science Citation Index-Expanded and Social Science Citation Index | 20%
Per Capita Performance | Per capita academic performance of an institution, calculated from the above indicator scores and institution size | 10%
Total | | 100%
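
To make the weighting scheme of Table 1 concrete, the Python sketch below combines indicator scores into a single ARWU-style composite using the weights listed above; the indicator values are invented for illustration and are assumed to be already scaled (as in the published ranking) so that the best-scoring institution receives 100 on each indicator.

# Sketch of an ARWU-style composite score: indicator scores (assumed already
# scaled to 0-100, best institution = 100) combined with the Table 1 weights.
weights = {
    "alumni_awards": 0.10,   # Quality of Education
    "staff_awards": 0.20,    # Quality of Faculty
    "highly_cited": 0.20,    # Quality of Faculty
    "nature_science": 0.20,  # Research Output
    "indexed_papers": 0.20,  # Research Output
    "per_capita": 0.10,      # Per Capita Performance
}

def composite_score(indicator_scores: dict) -> float:
    """Weighted sum of (hypothetical) indicator scores."""
    return sum(weights[name] * indicator_scores.get(name, 0.0) for name in weights)

example_university = {
    "alumni_awards": 35.0, "staff_awards": 40.0, "highly_cited": 55.0,
    "nature_science": 60.0, "indexed_papers": 80.0, "per_capita": 45.0,
}
print(f"Composite score: {composite_score(example_university):.1f}")

The weighted-sum design makes the criticism discussed below easy to see: an institution's final position is dominated by whichever indicators carry the largest weights, here research output and faculty awards.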

Quality of research is generally measured by the number of publications in ranked journals. The quality of teaching is measured by the number of students and the quality of education provided. Many authors have criticized the importance given to publication counts as an indicator of research performance. This practice is criticized mainly because the top-ranked international journals are mostly North American, which narrows the research fields and methodologies (Hetzel, 2009; Lukka, 2010; Merchant, 2010; Ter Bogt and Scapens, 2012). Merchant (2010) argues that the emphasis on the mainstream in the USA is "essentially closing the door on many potentially important research undertakings". He goes on to demonstrate that research outside the mainstream is being increasingly marginalized in the top North American journals. He argues that the consequence of this marginalization of non-mainstream research is a further decline in the number of researchers working outside the mainstream, which inevitably reinforces the dominance of mainstream research and "creates a tendency towards homogenization in (North American) accounting research" (Merchant, 2010, p. 116).

Privileging bibliometric indicators to measure the quality of research favors the natural and medical sciences over the human and social sciences. Different scientific fields have different publication and citation cultures. There are more publications and more citations per publication in the natural sciences and especially in medicine, in particular because the main citation databases have little coverage of books. By contrast, the social sciences have a tradition of publishing books more than articles.

One of the characteristics of these international rankings is that the most popular global rankings (ARWU, THE-QS and THE-Thomson Reuters, US News and World Report Ranking (USNWR), HEEACT, Reitor and others) concern the world's top universities only. First of all, they include roughly 1% to 3% of universities (200-500 universities) out of approximately 17,000 universities in the world (EUA report on rankings, 2011). Secondly, it is important to note that the rankings use methodologies that simply cannot produce stable results for more than 700-1,200 universities, and just around 300 universities in subject area rankings. Jamil Salmi's (2010) rhetorical question "How many universities can be among the top 500?" and his answer "five hundred" make this point clear.

Fig. 1. University representation in the international rankings. Source: EUA report on rankings, 2011, p. 13.

There is also an issue with language. It has been noted that international rankings favor universities from English-speaking nations, as non-English-language work is both published and cited less. A recent study by the Leiden Ranking team has shown that the citation impact of publications of French and German universities written in French or German, respectively, was smaller than the citation impact of publications of the same universities published in English (van Raan et al., 2010).

Attempts have been made to compensate for such lacunae in ranking methodologies, for example the "crown indicator" of the Leiden Ranking and, more recently, the mean normalized citation score (MNCS). However, both have attracted a number of new criticisms concerning the calculation method. Despite these attempts to improve assessment methodologies, authors still argue that the improvements are more technical than conceptual (Brooks, 2005; van Raan et al., 2010; EUA, 2011).
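
As a rough sketch of what such a field-normalized indicator computes, the snippet below divides each paper's citations by the average citation count expected for its field and publication year and then averages the ratios; the figures are invented for illustration, and the actual Leiden calculation differs in detail.

# Sketch of an MNCS-style indicator: each paper's citations are divided by the
# world-average citations for its (field, year), then the ratios are averaged.
# All figures here are invented for illustration only.
papers = [
    {"citations": 12, "field": "medicine", "year": 2008},
    {"citations": 3,  "field": "sociology", "year": 2009},
    {"citations": 0,  "field": "sociology", "year": 2010},
]

# Hypothetical world-average citations per paper by (field, year).
expected = {
    ("medicine", 2008): 15.0,
    ("sociology", 2009): 4.0,
    ("sociology", 2010): 2.5,
}

def mncs(papers, expected):
    ratios = [p["citations"] / expected[(p["field"], p["year"])] for p in papers]
    return sum(ratios) / len(ratios)

print(f"MNCS = {mncs(papers, expected):.2f}")  # values above 1 mean above world average

Such normalization is meant to address the field-bias problem discussed above, since a paper is compared only with the citation culture of its own discipline and year.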

5. Conclusion

The importance of international rankings grew fast in the era of globalization. Students started to pay more and more attention to rankings in their decision-making. Later, the media started referring to different rankings and thus participated in advertising them to society at large. Then governments also took rankings into account when elaborating public policies. Finally, universities themselves started to take ranking results into account and even included them in their administrative and scientific management.

The scientific literature is rich in criticism of ranking methods, according to which rankings give stable results for only 5% of the world's universities. The reason for this lies in the methodology of rankings: privileging large, scientific, research universities, the English language, etc. Besides this, most universities strive to improve their positions in the rankings, as a result of which they are strongly tempted to improve their performance specifically in those areas measured by ranking indicators. Examples can be found in recent strategies undertaken by some French, German, Italian, Spanish, Dutch and other, particularly European, universities. A number of cases can be cited where, instead of improving performance, data have been manipulated, for instance: merging universities just to get onto league tables by increasing size, or hiring a Nobel Prize-winning professor. The growth of interest in the results of rankings has changed the context in which universities function: for a university to be seen as 'successful' it has now become necessary to be present in international rankings. As a result, higher education institutions around the world have been largely influenced by rankings, both directly and indirectly.

REFERENCES

1. Aguillo, I., Ortega, J., Fernandez, M. (2008). Webometric Ranking of World Universities: Introduction, Methodology, and Future Developments. Higher Education in Europe, Vol. 33, No. 2/3, July-October 2008, pp. 233-244.

2. ARWU. (2009). Methodology. Retrieved on 12 Jan. 2011, http://www.arwu.org/ARWUMethodology2009.jsp

3. Baty, P. (2010a). THE University rankings, 25 March 2010. Berlin, 7 October 2010.

4. Boulton, G. (2010). University rankings: diversity, excellence and the European initiative. Advice paper, No. 3, League of European Research Universities (LERU). Retrieved 31 Dec.

5. Casey J., Gentile P. and Bigger S., 1997, "Teaching appraisal in higher education", Higher Education, Volume 34, Issue 4, pp 459-482

6. Daniel, H.-D., Mittag, S. & Bornmann, L. (2007). The potential and problems of peer evaluation in higher education and research. In: A. Cavalli (Ed.), Quality Assessment for Higher Education in Europe (p. 71-82). London, UK: Portland Press.

7. Doneckaja, S. (2009). Российский подход к ранжированию ведущих университетов мира [The Russian approach to ranking the world's leading universities]. ЭКО, No. 8, pp. 137-150. http://www.sibupk.su/stat/publicat_nir/smk_4.pdf

8. Drucker-Godard C., Gollety M., Fouque T., Le Flanchec A. (2012b), « Paroles d'enseignants chercheurs : entre passion et mal être », Colloque AIRMAP, 5/6 décembre, Paris.

9. European University Association, Rauhvargers, Andrejs; (2011), "Global university rankings and their impact".

10. Griffith, A. and Rask, K. (2007). The influence of the US News and World Report collegiate rankings on the matriculation decision of high-ability students: 1995-2004, Economics of Education Review 26: 244-255

11. Hazelkorn, E. (2011). Rankings and the reshaping of higher education. The battle for world-class excellence. New York: Palgrave Macmillan.

12. Horstschräer, J. (2011), University Rankings in Action? The Importance of Rankings and an Excellence Competition for University Choice of High-Ability Students, Discussion Paper No. 11-061.

13. IREG. (2010). IREG-Ranking Audit: Purpose, Criteria and Procedure, Draft version,

14. Leiden ranking. (2008). http://www.cwts.nl/ranking/LeidenRankingWebSite.html

15. Marginson, S., and van der Wende, M. (2007). To Rank or To Be Ranked: The Impact of Global Rankings in Higher Education. Journal of Studies in International Education, Vol. 11, No. 3/4, Fall/Winter 2007, pp. 306-329.

16. Monks, J. and Ehrenberg, R. G. (1999). The Impact of U.S. News & World Report College Rankings on Admissions Outcomes and Pricing Policies at Selective Private Institutions, NBER Working Paper 7227.

17. Musselin CH, (2008), « Les politiques d'enseignement supérieur », in Borraz O. et Guiraudon V., 2008, Politiques Publiques, 1. La France dans la gouvernance européenne, Paris, Presses de Sciences Po, p. 147-172

18. Naszályi P. 2010, « « Lorsque le sage montre la lune, l'imbécile regarde le doigt... ou le classement de Shanghai» (proverbe chinois) », La Revue des Sciences de Gestion, 2010/3-4 (n°243-244)

19. OECD. (2008). Measuring Improvements in Learning Outcomes: Best Practices to Assess the Value-Added of Schools. ISBN: 9789264050228.

20. Salmi, J., and Saroyan, A. (2007). League tables as policy instruments: Uses and misuses. Higher. Education Management and Policy, Vol. 19, No. 2, p. 31-68.

21. Sanderson, I. (2001). 'Performance Management, Evaluation and Learning in 'Modern' Local Government', Public Administration, Vol. 79, No. 2, pp. 297-313.

22. Shattock,M. (2003). Managing Successful Universities. Maidenhead: Society for Research into Higher Education & Open University Press.

23. Ter Bogt, Henk J.; Scapens, Robert W. (2012), "Performance Management in Universities: Effects of the Transition to More Quantitative Measurement Systems", European Accounting Review. Vol. 21 Issue 3, p451-497.

24. THE (2009b). Rankings 09: Talking points.

25. U-Map. (2009b). Overview of dimensions and indicators.

26. van Raan, A. F. J., van Leeuwen, T. N., and Visser, M. S. (2011), "Germany and France are wronged in citation-based rankings", Scientometrics, 88 (2), pp. 495-498.
