Maria Angela Ferrario 1, Zoltán Bajmócy 2,3, Will Simm 1, Stephen Forshaw 1. 1 Lancaster University, School of Computing and Communications, Lancaster, UK;
2 University of Szeged, Faculty of Economics, Research Centre, Szeged, Hungary; 3 Community-based Research for Sustainability Association (CRS), Szeged, Hungary. Corresponding author email:
Harnessing the Power of Business Webs, Harvard Business School Press; ISBN 1578511933 (May 2000). James Moore, The Death of Competition:
Small organizations, universities, chambers of commerce; basic e-services, simple services: accounting systems, payment systems, groupware systems. Group
building a virtual learning community with training and competence center, a shared knowledge base, e-learning modules, benchmarking,
and by mobilising all local players including local authorities, innovation and research centres, universities, consumers and trade associations, and NGOs.
The research and innovation centres and the universities; the entrepreneur community and small organizations, through their representative organizations; the local government and the public administration.
Creation of local competence centres on e-business and on the local sectors of activity (e.g. for improving quality); building virtual learning communities; sharing e-learning and e-training modules and knowledge bases, including models;
universities, research organizations and innovation centres; enterprises (in particular SMEs) and enterprise organizations; government and public administration. The regions (or local areas) which succeed in the application of digital sectoral ecosystems,
Stanford CA 94305, USA; b Edinburgh University Business School, Edinburgh, Lothian EH8 9JS, UK; c Department of Management, Birkbeck College, University of London
650-725-2166. Abstract: This paper introduces the concept of Triple Helix systems as an analytical construct that systematizes the key features of university-industry-government (Triple Helix) interactions into an 'innovation
Keywords: university-industry-government interaction; innovation systems; regional innovation strategies. 2. Introduction. Recent decades have seen a shift from an earlier focus on innovation sources confined to a single institutional sphere,
In this paper, we introduce the Triple Helix systems as a novel analytical concept that systematizes the key features of university-industry-government interactions,
(i) components (the institutional spheres of University, Industry and Government, with a wide array of actors);
From this analytical framework, empirical guidelines for policy-makers, university and business managers can be derived, in order to strengthen the collaboration among Triple Helix actors
The concept of the Triple Helix of University-Industry-Government relationships, developed in the 1990s by Etzkowitz (1993) and Etzkowitz and Leydesdorff (1995),
encompassing elements of precursor works by Lowe (1982) and Sábato and Mackenzie (1982), interprets the shift from a dominating industry-government dyad in the Industrial Society to a growing triadic relationship between university
universities, research institutes and other Swedish innovation actors-a mission adopted in the early 2000s, shortly after the agency's inception,
public universities and research centres, allows grants to innovative firms, the setup of private firms'incubation facilities in public universities and the shared use of university infrastructure.
University-industry-government cooperation also has a central role in European Union (EU) innovation policies, such as the Innovation Union flagship initiative of the Europe 2020 Strategy,
and is perceived as a solution to the innovation emergency that Europe now faces (European Commission, 2011;
higher education and training institutions to develop educational material, the European Institute of Technology, which supports the full integration of the Knowledge Triangle (education,
which looks at university, industry and government as co-evolving sub-sets of social systems that interact through market selections,
and provide the analytical foundation for a new vision of university-industry-government interactions. The paper is organized as follows:
CONCEPTUAL FRAMEWORK The Triple Helix thesis is that the potential for innovation and economic development in a Knowledge Society lies in a more prominent role for the university and in the hybridisation of elements from university,
but also the creative renewal that arises within each of the three institutional spheres of university, industry and government,
The enhanced role of the university in the Knowledge Society arises from several specific developments.
the recent addition of the university 'third mission', involvement in socioeconomic development, next to the traditional academic missions of teaching and research, is the most notable,
This is to a large extent the effect of stronger government policies to strengthen the links between universities and the rest of society, especially business,
but also an effect of firms' tendency to use universities' research infrastructure for their R&D objectives,
which provides a large part of university funding (Slaughter and Leslie, 1997). Collaborative links with the other Triple Helix actors have enhanced the central presence of universities in the production of scientific research over time (Godin and Gingras,
2000) disproving former views that increasing diversification of production loci would diminish the role of universities in the knowledge production process (Gibbons et al. 1994).
Secondly, the university's continuous capacity to provide students with new ideas, skills and entrepreneurial talent has become a major asset in the Knowledge Society.
Students are not only the new generations of professionals in various scientific disciplines, business, culture etc. but they can also be trained
and encouraged to become entrepreneurs and firm founders, contributing to economic growth and job creation (see, for example, StartX, Stanford's student start-up accelerator, which in less than a year trained 90 founders and 27 companies,
or the Team Academy, the Entrepreneurship Centre of Excellence of JAMK University of Applied Sciences in Jyväskylä, Finland, where students run their own cooperative businesses based on real-life projects).
Universities are also extending their capabilities of educating individuals to educating organizations, through entrepreneurship and incubation programmes and new training modules at venues such as interdisciplinary centres, science parks, academic spin-offs, incubators (Etzkowitz, 2008;
Almeida, Mello and Etzkowitz, 2012). Thirdly, universities' capacity to generate technology has changed their position, from a traditional source of human resources and knowledge to a new source of technology generation and transfer,
with ever increasing internal organizational capabilities to produce and formally transfer technologies rather than relying solely on informal ties.
and comparative historical analyses that explore different configurations arising from the positioning of the university, industry and government institutional spheres relative to each other and their movement and reorientation, with one as a gravitational centre around
university acting mainly as a provider of skilled human capital, and government mainly as a regulator of social and economic mechanisms.
whereby university and other knowledge institutions play an increasing role, acting in partnership with industry and government and even taking the lead in joint initiatives (Etzkowitz, 2008).
Through this creative process, the relationships among the institutional spheres of university, industry and government are reshaped continuously in an endless transition to enhance innovation (Etzkowitz and Leydesdorff,
which sees the University, Industry and Government as co-evolving sub-sets of social systems. Interaction between them occurs through an overlay of recursive networks and organizations
and an institutional one, between private and public control at the level of universities, industries and government,
such as industrial liaison offices in universities or strategic alliances among companies, creating new network integration mechanisms (Leydesdorff and Etzkowitz, 1998).
the institutional spheres of University, Industry and Government, each encompassing a wide-ranging set of actors;
Helix literature focuses on the institutional spheres of university, industry and government as holistic 'block' entities, without going deeper to the level of specific actors within each sphere,
albeit not always harmonious coexistence of tacit and codified knowledge and is translated into different modes of learning and innovation, e.g. the Science, Technology and Innovation (STI) mode, based on the production and use of codified scientific and technical knowledge,
o R&D innovators can be found in each of the University, Industry and Government institutional spheres,
In universities, key R&D performers are the academic research groups and interdisciplinary research centres; in the business sector, the company R&D divisions or departments;
and can also be found in various forms in the Government and University spheres, as well as in the nonprofit sector.
be it University or Industry or Government (e.g. education). For example, the members of The Kitchen in New York City's SoHo District invent new forms of conceptual art,
The fashion department of the Antwerp Academy in Belgium encourages students to create and explore innovative forms,
etc. o Multi-sphere (hybrid) institutions operate at the intersection of the University, Industry and Government institutional spheres and synthesize in their institutional design elements of each sphere,
Technology transfer offices in universities, firms and government research labs, industrial liaison offices, business support institutions (science parks, business and technology incubators), financial support institutions (public and private venture capital firms
Also, institutional boundaries are more permeable (Etzkowitz, 2012) as the single institutional spheres of University,
For example, in 1930s New England, MIT's President Compton was the Innovation Organizer who played a key role in getting support for a new model of knowledge-based economic development relying heavily on university-originated technologies.
In 2011, New York's Mayor Bloomberg re-took the Innovation Organizer role with an initiative to attract leading technological universities to the city to fill the gap in the region's innovation environment.
as in the case of Birmingham University's consortium of Triple Helix actors who projected the post-Rover,
Similarly, universities, in addition to their teaching and research activities, often engage in technology transfer and firm formation, providing support
Industry can also take the role of the university in developing training and research, often at the same high level as universities.
such as universities and firms, may come forward to set forth a future achievable objective (playing an Innovation Organizer role,
for example when vocational training institutions take the lead over universities in engaging in joint initiatives with local firms (especially with low-tech,
low/non-R&D small firms) that prefer the more practical, shorter-term oriented opportunities of the vocational training institutions to the more complex, long-term programmes of the university (Ranga et al. 2008).
depending on the network's age, scope, membership, activities and visibility in the public domain (e.g. the Association of University Technology Managers AUTM, the European Technology Platforms and Joint Technology Initiatives,
) o Attraction of leading researchers through the foundation of a science-based university, as in San Diego, where a new branch of the University of California was gestated in the 1950s
and eventually became the basis for a leading high-tech complex. The attraction of leading researchers in fields with commercial potential, like molecular biology, was recognized early as an economic development strategy by the coalition of academic,
The strategy of the University of California San Diego campus was replicated by the Merced campus, which has recently been established as an entrepreneurial university to promote high-tech development in an agricultural region.
The strategy aimed to create and then leverage location-specific knowledge assets to induce new investment
and create new value. o Creation of new university resources to support the development of new industries or raise the existing ones to a higher level.
rather than simply training support personnel for existing firms, as might have happened in an undergraduate campus. In Norrköping,
and decided to create a university campus with advanced academic research groups in order to revive the paper industry, one of the local traditional industries (Svensson, Klofsten and Etzkowitz, 2011).
o Virtual congregation of geographically dispersed groups from university and industry around common research themes, with government support,
This strategy is exemplified in Sweden by the founding of the Stockholm School of Entrepreneurship as a joint initiative of Stockholm University
and more recently also including the Royal Art College. The Oresund project linking southern Sweden (Skane)
and Copenhagen included the creation of Oresund University, an organisation that encourages collaboration and joint projects between universities on both sides of the strait that previously divided this cross-border region.
Karolinska Institute initiated a university-building strategy of incorporating a series of small schools in the biological sciences,
nursing and other loosely related fields scattered across Sweden and even across the Norwegian border
or for military or other specific purposes, to encouraging university, industry and government institutional spheres to work more closely together to promote innovation.
of which were oriented to the older universities and traditional academic disciplines. The foundations changed a rigid innovation system both by providing alternative sources of funds
o Creation of a university in a region without higher education capacity, as a means of raising the technological level of existing clusters or as a source of new ones.
MIT is the classic instance of a university founded to raise the technological level of existing clusters.
In the 1950s, the regional leadership of San Diego deployed this explicit model of a science-based entrepreneurial university as a strategy for the creation of a new science-based industry in a region that was heretofore known as a naval base.
With a charter for a new campus of the University of California, leading scientists were recruited in emerging areas of polyvalent knowledge, with both theoretical and practical potential,
o Building an integrated environment for university technology transfer and entrepreneurship activities. When a university establishes a liaison
or technology transfer office, it soon realizes that a much broader range of services and support structures are required
A good example of this approach to building an Innovation Space is the Flemish Catholic University of Leuven (KUL).
universities and local government actors begin to see themselves as part of a larger whole, or in some cases of newly-invented identities like Oresund (linking Copenhagen in Denmark and Skane in Southern Sweden) or the Leuven-Aachen-Eindhoven Triangle,
An example in this sense is the 1930s New England Council, representing university, industry and government leadership in the region,
which universities would play a greater role, moving on from the position of R&d labs for industry they had played earlier.
(i) First stage: formation of a stem cell space through interaction of the university, industry and government spheres. Triple Helix spheres get closer together in a gradual process
showing four configurations of the transition from independent to overlapping spheres that are equivalent to the transition from the laissez-faire to the balanced model represented previously in Fig. 1. This is a simplified representation of the interaction among the university,
and comparative advantage-its high concentration of academic resources, including MIT, Harvard and a wide range of other academic institutions,
where many successful firms had outgrown their university links, or were spinoffs of an early generation of firms
Indeed, by this time, many of the Valley's high-tech firms tended to view themselves as a self-generated phenomenon, a cluster of interrelated firms, rather than as part of a broader university-industry-government complex.
or reconnect to academic institutions and local government in order to move the region forward. A new organization, Joint Venture Silicon Valley, was established for this purpose.
In 2002, the IT University opened as a joint venture between the Royal Institute of Technology (KTH) and the University of Stockholm,
In 2010, Kista Science City counted over 1,000 ICT companies and over 5,000 ICT students and scientists, a high concentration of expertise, innovation and business opportunities within ICT
6. RELEVANCE OF TRIPLE HELIX SYSTEMS FOR KNOWLEDGE-BASED REGIONAL INNOVATION STRATEGIES. Regional innovation policies have focused traditionally on the promotion of localized learning processes
Other priorities included enhancing interactions between different innovation stakeholders, such as firms, universities and research institutes,
and better work conditions to attract distinguished researchers rather than develop young researchers. The Brazilian popular cooperative incubator model was invented bottom-up by a university incubator
The large-scale research programmes in data mining funded by the Defence Advanced Research Projects Agency (DARPA) at Stanford and a few other universities provided the context for the development of the Google search algorithm that soon became the basis
and economic growth in evolutionary systems where institutions and learning processes are of central importance (Freeman, 1987,1988;
The concept was refined as 'national innovation systems' (NIS), delineated by a set of innovation actors (firms, universities, research institutes, financial institutions, government regulatory bodies, etc.
and localised learning (Lundvall, 1992), but became increasingly blurred due to business and technology internationalisation extending technological capabilities beyond national borders,
a set of regional actors aiming to reinforce regional innovation capability and competitiveness through technological learning (Doloreux and Parto, 2005),
the former through the 'single-sphere' and 'multi-sphere' (hybrid) organizational formats associated with the university,
8. CONCLUSIONS AND POLICY IMPLICATIONS. This paper introduced the concept of Triple Helix systems as an analytical construct that systematizes the key features of university-industry-government (Triple Helix) interactions into an 'innovation
2. Assessing the performance of Triple Helix systems by means of hybrid indicators that capture dynamic processes at the intersection of the university, industry and government institutional spheres rather than within single spheres.
For example, among the 25 indicators of the 2011 Innovation Union Scoreboard (see details at http://ec.europa.eu/enterprise/policies/innovation/files/ius-2011_en.pdf), only one, public-private co-publications, captures the effect of collaboration between the university and industry spheres,
The OECD Science,
This indicator is part of the University-Industry Research Collaboration Scoreboard produced by Leiden University, which provides an internationally comparative framework based on co-publications of at least one university
and one private sector organization, usually business firms in manufacturing and services or for-profit contract research organizations.
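To make the kind of co-publication indicator described above concrete, here is a minimal illustrative sketch (the data model and function names are assumptions for illustration, not the Scoreboard's actual methodology) that computes the share of publications co-authored by at least one university and one private-sector organization:

```python
# Hypothetical sketch: a simple public-private co-publication indicator,
# computed from publication records whose author affiliations are tagged by sector.
from dataclasses import dataclass

@dataclass
class Publication:
    title: str
    sectors: set  # e.g. {"university", "industry", "government"}

def co_publication_share(publications):
    """Share of publications with at least one university and one private-sector affiliation."""
    if not publications:
        return 0.0
    joint = sum(1 for p in publications
                if "university" in p.sectors and "industry" in p.sectors)
    return joint / len(publications)

pubs = [
    Publication("Sensor networks for smart grids", {"university", "industry"}),
    Publication("Regional innovation policy review", {"university", "government"}),
    Publication("New catalyst for fuel cells", {"university", "industry"}),
]
print(f"Public-private co-publication share: {co_publication_share(pubs):.2f}")  # 0.67
```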
and average citations received per patent cited (industry-university interface). Also, the design of indicators that characterize the specific dynamics of each space may be a challenging process, especially for the Innovation and Consensus spaces.
For example, the number of spin-offs graduated from university incubators could be a relevant indicator for the Innovation space,
mapping regional/national actors (public and private research labs, firms, universities, arts and cultural organizations, etc.
Paper for The Elgar Companion to Neo-Schumpeterian Economics (downloaded on 9 April 2012 from http://faculty.weatherhead.case.edu/carlsson/documents/Innovationsystemssurveypaper6.pdf). Casas, R.
Improving Industry Science Links through University Technology Transfer Units: An Analysis and a Case. Research Policy 34, 321-342.
The Triple Helix of University-Industry-Government Relations. Social Science Information 42, 293-338. Etzkowitz, H. 2008.
University-Industry-Government Innovation in Action. Routledge, London. Etzkowitz, H. 2012. Triple Helix Clusters: Boundary Permeability at University-Industry-Government Interfaces as a Regional Innovation Strategy.
Environment & Planning C: Government and Policy, in press. Etzkowitz, H., Klofsten, M. 2005. The Innovating Region:
University-Industry-Government Relations: A Laboratory for Knowledge-Based Economic Development. EASST Review 14, 14-19.
A "triple helix" of university-industry-government relations. Minerva 36, 203-208. Etzkowitz, H., Leydesdorff, L. 2000.
from National Systems and "Mode 2" to a Triple Helix of university-industry-government relations.
Pathways to the Entrepreneurial University: Towards a Global Convergence. Science and Public Policy 35, 1-15. Etzkowitz, H., Mello, J. M. C., Almeida, M. 2005.
The place of universities in the system of knowledge production. Research Policy 29, 273-278. Hamilton, W. B. 1966.
The evolution of university-industry-government relationships during transition. Research Policy 33, 975-995. Jensen, M. B., Johnson, B., Lorenz, E., Lundvall, B. A. 2007.
The new communication regime of university-industry-government relations, in: Etzkowitz, H., Leydesdorff, L. (Eds), Universities and the Global Knowledge Economy:
A Triple Helix of University-Industry-Government Relations. Cassell Academic, London. Leydesdorff, L. 2000. The triple helix:
an evolutionary model of innovations. Research Policy 29, 243-255. Leydesdorff, L. 2003. The mutual information of university-industry-government relations:
An indicator of the Triple Helix dynamics. Scientometrics 58, 445-467. Leydesdorff, L. 2006. The Knowledge-Based Economy:
Emergence of a Triple Helix of University-Industry-Government Relations. Science and Public Policy 23, 279-286.
Localized Learning and Industrial Competitiveness. Cambridge Journal of Economics 23, 167-185. Mason, C. and Harrison, R. 1992.
Stanford University Business School and Joint Venture Silicon Valley. Interview with Henry Etzkowitz. Morris, M. H. 1998. Entrepreneurial intensity:
an experience-based perspective, Working Paper SACSJP, University of Aveiro. Rubin, H. 2009. Collaborative Leadership:
The emerging role of universities in socioeconomic development through knowledge networking. Science and Public Policy 38, 3-6. Sábato, J., Mackenzie, M., 1982.
Politics, Policies and the Entrepreneurial Universities. Johns Hopkins University Press, Baltimore. Spittle, A. 2010. 'The changing nature of work' (downloaded on 9 April from http://andrewspittle.net/2010/02/18/the-changing-nature-of-work/). Steinmueller, W. E. 1994.
, University of Eastern Finland (Kuopio Campus), Kuopio, Finland. Abstract. Purpose: The purpose of this paper is to examine the information sourcing practices of small- to medium-sized enterprises (SMEs) associated with the development of different types of innovation (product/process/market/organizational).
which is a typical starting point in many of the related approaches, such as learning regions or innovative milieus.
(1 = Insignificant to 5 = Very important). REGKNOWA: sum-variable measuring the importance of regional knowledge organizations for innovation (University of Kuopio; Savonia University of Applied Sciences; organizations of vocational education).
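As an illustration of how such a sum-variable can be constructed, the following is a minimal sketch under the assumption that the 1-5 Likert ratings for the listed regional knowledge organizations are simply summed; the paper's exact operationalization may differ:

```python
# Illustrative sketch (an assumption, not the paper's code): building a
# REGKNOWA-style sum-variable from 1-5 Likert ratings of regional knowledge organizations.
LIKERT_MIN, LIKERT_MAX = 1, 5  # 1 = Insignificant ... 5 = Very important

def regknowa(ratings: dict) -> int:
    """Sum the Likert ratings across the listed regional knowledge organizations."""
    for org, score in ratings.items():
        if not LIKERT_MIN <= score <= LIKERT_MAX:
            raise ValueError(f"Rating for {org} out of range: {score}")
    return sum(ratings.values())

answers = {
    "University of Kuopio": 4,
    "Savonia University of Applied Sciences": 3,
    "Organizations of vocational education": 5,
}
print("REGKNOWA =", regknowa(answers))  # 12
```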
Within SIS, the creation, selection and transformation of knowledge takes place within a complex matrix of interactions between different actors (firms, universities and other research organizations, educational organizations, financial organizations, public support
and related external relations is sometimes associated with sophisticated skills acquired through formal education, technical and vocational qualifications are often more important in this respect (Gray, 2006).
Over 58 percent of the entrepreneurs participating in this study had not been educated beyond elementary school. However, about 57 percent of the entrepreneurs had been educated in vocational school.
Only 6 percent of the entrepreneurs had no vocational training at all. 3.2 Variables and measures. 3.2.1 Dependent variable.
Antonelli, C. and Quéré, M. (2002), The governance of interactive learning within innovation systems, Urban Studies, Vol. 39 Nos 5-6, pp. 1051-63.
a new perspective on learning and innovation, Administrative Science Quarterly, Vol. 35 No. 1, pp. 128-52.
Towards a Theory of Innovation and Interactive Learning, Pinter, London. Macpherson, A. and Holt, R. (2007), Knowledge, learning and small firm growth:
a systematic review of the evidence, Research Policy, Vol. 36 No. 2, pp. 172-92.
Malmberg, A. and Maskell, P. (2006), Localized learning revisited, Growth and Change, Vol. 37 No. 1, pp. 1-18.
and Action, Harvard Business School Press, Boston, MA, pp. 288-308. Nonaka, I. (1991), The knowledge-creating company, Harvard Business Review, Vol. 69 No. 6, pp. 96-104.
Appropriateness of knowledge accumulation across growth studies, Entrepreneurship Theory & Practice, Vol. 33 No. 1, pp. 105-23.
Tidd, J., Bessant, J. and Pavitt, K. (2002), Learning through alliances, in Henry, J. and Mayle, D. (Eds), Managing Innovation and Change, 2nd ed., Sage
Utterback, J. M. (1994), Mastering the Dynamics of Innovation, Harvard Business School Press, Boston, MA. Vega-Jurado, J., Gutiérrez-Gracia, A., Fernández-de-Lucio,
About the authors Miika Varis, after graduating from the University of Kuopio, acted as a research and teaching assistant in SME management (2001-2003) and in entrepreneurship and local economic development (2003-2005),
and lecturer in entrepreneurship (2005-2009) at the Department of Business and Management, University of Kuopio, Finland,
and from 2009 as a lecturer in entrepreneurship at the Department of Health Policy and Management, University of Kuopio, Finland (from 1.1.2010 Department of Health and Social Management,
University of Eastern Finland, Kuopio Campus). He is currently finishing his doctoral dissertation on regional systems of innovation.
Varis@uef.fi Hannu Littunen, after graduating from the University of Jyväskylä, was a researcher at the University of Jyväskylä, School of Business and Economics, Centre for Economic Research, Finland,
and a professor of entrepreneurship and regional development at the Department of Business and Management, University of Kuopio, Finland (2003-2009), and from 2009 a professor of entrepreneurship and regional development at the Department of Health Policy and Management
, University of Kuopio, Finland (from 1.1.2010 Department of Health and Social Management, University of Eastern Finland, Kuopio Campus). He completed his doctoral thesis in leadership
and management entitled The birth and success of new firms in a changing environment in the year 2001.
Prior to starting work at the University, he worked in various organizations in both public and private sectors in Finland.
how innovation shapes perceptions about universities and public research organisations. The Journal of Technology Transfer 39, 454-471.
Journal of Vocational Education & Training 65, 256-276. Murat Atalay, Nilgün Anafarta, Fulya Sarvan. 2013.
Design and Testing the Feasibility of a Multidimensional Global University Ranking. Final Report. Frans van Vught & Frank Ziegele (eds.)
Consortium for Higher Education and Research Performance Assessment (CHERPA Network), June 2011. CONTRACT-2009-1225/001-001. This report was commissioned by the Directorate General for Education
The CHERPA Network, in cooperation with the U multirank project team. Project leaders: Frans van Vught (CHEPS), Frank Ziegele (CHE), Jon File (CHEPS). Project co
Jiao Tong University), Simon Marginson (Melbourne University), Jamil Salmi (World Bank), Alex Usher (IREG), Marijk van der Wende (OECD/AHELO), Cun-Mei Zhao (
U multirank final report. Table of contents:
Tables; Figures; Executive Summary
1 Reviewing current rankings (1.1 Introduction; 1.2 User-driven rankings as an epistemic necessity; 1.3 Transparency, quality and accountability in higher education; 1.4 Impacts of current rankings; 1.5 Indications for better practice)
2 Designing U multirank (Methodological standards; User-driven approach; U-Map and U multirank; Grouping; Design context)
3 Constructing U multirank: Selecting indicators (3.1 Introduction; 3.2 Stakeholders' involvement; 3.3 Overview of indicators; Teaching and learning)
4 Constructing U multirank: databases and data collection tools (4.1 Introduction; 4.2 Databases; Existing databases; Bibliometric databases; questionnaire; Student survey; Pretesting the instruments; Supporting instruments; 4.4 A concluding perspective)
5 Testing U multirank: pilot sample and data collection (5.1 Introduction; 5.2 The global sample; 5.3 Data collection; Institutional self-reported data)
6 Testing U multirank: results (6.1 Introduction; 6.2 Feasibility of indicators; Teaching & Learning; Research; Knowledge transfer; International orientation; Regional engagement; 6.3 Feasibility of data collection; Self-reported institutional data; Student survey data; Bibliometric and patent data; 6.4 Feasibility of up-scaling)
7 Applying U multirank (combining U-Map and U multirank; 7.3 The presentation modes; Interactive tables; Personalized ranking tables)
8 Implementing U multirank: the future (8.1 Introduction; 8.2 Scope: global or European; 8.3 Personalized and authoritative rankings; 8.4 The need for international data systems; 8.5 Content and organization of the next; models of implementation; 8.7 Towards a mixed implementation model; 8.8 Funding U multirank; 8.9 A concluding perspective)
9 List
Tables: Classifications and rankings considered in U multirank; Table 1-2: Indicators and weights in global university rankings; Table 2-1: Conceptual grid U multirank; Table 3-1: Indicators for the dimension Teaching & Learning in the Focused Institutional and Field-based Rankings; Table 3-2: Primary form of written communications by discipline group; Table 3-3: Indicators for the dimension Research in the Focused Institutional and Field-based Rankings; Data elements shared between EUMIDA and U multirank: their coverage in national databases; Table 4-2: Availability of U multirank data elements in countries' national databases according to experts in 6 countries (Argentina/AR, Australia/AU, Canada/CA, Saudi Arabia/SA, South Africa/ZA); Teaching & Learning; Table 6-2: Field-based ranking indicators: Teaching & Learning (departmental questionnaires); Table 6-3: Field-based ranking indicators: Teaching & Learning (student satisfaction scores); Table 6-4: Focused institutional ranking indicators: Research; Table 6-5: Field-based ranking indicators: Research.
Figures: U multirank data collection process; Figure 5-2: Follow-up survey: assessment of data procedures and communication; Combining U-Map and U multirank; Figure 7-2: User selection of indicators for personalized ranking tables; Assessment of the four models for implementing U multirank; Figure 8-2: Organizational structure for phase 1 (short term).
Preface. On 2 June 2009 the European Commission announced the launching of a feasibility study to develop a multidimensional global university ranking.
Its aims were to look into the feasibility of making a multidimensional ranking of universities in Europe,
transparent and comparable information would make it easier for students and teaching staff, but also parents and other stakeholders, to make informed choices between different higher education institutions and their programmes.
It would also help institutions to better position themselves and improve their quality and performance.
"and ignore the performance of universities in areas like humanities and social sciences, teaching quality and community outreach.
While drawing on the experience of existing university rankings and of EU-funded projects on transparency in higher education, the new ranking system should be:
In a first phase running until the end of 2009 the consortium would design a multidimensional ranking system for higher education institutions in consultation with stakeholders.
In a second phase ending in June 2011 the consortium would test the feasibility of the multidimensional ranking system on a sample of no less than 150 higher education and research institutions.
and business studies and should have a sufficient geographical coverage (inside and outside of the EU) and a sufficient coverage of institutions with different missions. In undertaking the project the consortium was assisted greatly by four groups that it worked closely with:
Education and Culture but other experts drawn from student organisations, employer organisations, the OECD, the Bologna Follow-up Group and a number of Associations of Universities.
ranking and transparency instruments in higher education and research. The international panel was consulted at key decision making moments in the project.
Stakeholder workshops were held four times during the project with an average attendance of 35 representatives drawn from a wide range of organisations including student bodies, employer organisations, rectors' conferences, national university associations and national representatives.
The consortium members benefitted from a strong network of national higher education experts in over 50 countries who were invaluable in suggesting a diverse group of institutions from their countries to be invited to participate in the pilot study.
This is the Final Report of the multidimensional global university ranking project. Readers interested in a fuller treatment of many of the topics covered in this report are referred to the project web-site (www.u-multirank.eu) where the project's three Interim Reports can be found.
The web-site also includes a 30-page Overview of the major outcomes of the project.
Executive Summary
The need for a new transparency tool in higher education and research
The project encompassed the design and testing of a new transparency tool for higher education and research.
More specifically, the focus was on a transparency tool that will enhance our understanding of the multiple performances of different higher education
and research institutions across the diverse range of activities they are involved in: higher education and research institutions are multipurpose organisations
and different institutions focus on different blends of purposes and associated activities. Transparency is of major importance for higher education and research worldwide
which is expected increasingly to make a crucial contribution to the innovation and growth strategies of nations around the globe.
Obtaining valid information on higher education within and across national borders is critical in this regard, yet higher education and research systems are becoming more complex and at first sight less intelligible for many stakeholders.
The more complex higher education systems become, the more sophisticated our transparency tools need to be.
Sophisticated tools can be designed in such a way that they are user-friendly and can cater to the different needs of a wide variety of stakeholders.
An enhanced understanding of the diversity in the profiles and performances of higher education and research institutions at a national, European and global level requires a new ranking tool.
Existing international transparency instruments do not reflect this diversity adequately and tend to focus on a single dimension of university performance: research.
The new tool will promote the development of diverse institutional profiles. It will also address most of the major shortcomings of existing ranking instruments, such as language and field biases, the exaggeration of small differences in performance and the arbitrary effects of indicator weightings on ranking outcomes.
We have called this new tool U multirank as this stresses three fundamental points of departure: it is multidimensional,
recognising that higher education institutions serve multiple purposes and perform a range of different activities; it is a ranking of university performances
(although not in the sense of an aggregated league table like other global rankings); and it is user-driven (as a stakeholder with particular interests,
you are enabled to rank institutions with comparable profiles according to the criteria important to you).
The design and key characteristics of U multirank
On the basis of a carefully selected set of design principles we have developed a new international ranking instrument that is user-driven,
multidimensional and methodologically robust. This new on-line instrument enables its users first to identify institutions that are sufficiently comparable to be ranked and, second,
to design a personalised ranking by selecting the indicators of particular relevance to them. U multirank enables such comparisons to be made both at the level of institutions as a whole and in the broad disciplinary fields in
which they are active. The integration of the already designed and tested U-Map classification tool into U multirank enables the creation of the user-selected groups of sufficiently comparable institutions.
This two-step approach is completely new in international and national rankings. On the basis of an extensive stakeholder consultation process (focusing on relevance) and a thorough methodological analysis (focusing on validity, reliability and feasibility),
U multirank includes a range of indicators that will enable users to compare the performance of institutions across five dimensions of higher education and research activities:
Teaching and learning Research Knowledge transfer International orientation Regional engagement On the basis of data gathered on these indicators across the five performance dimensions,
U multirank could provide its users with the on-line functionality to create two general types of rankings:
Focused institutional rankings: rankings on the indicators of the five performance dimensions at the level of institutions as a whole Field-based rankings:
rankings on the indicators of the five performance dimensions in a specific field in which institutions are active U multirank would also include the facility for users to create institutional
and field performance profiles by including (not aggregating) the indicators within the five dimensions (or a selection of them) into a multidimensional performance chart.
At the institutional level these take the form of 'sunburst charts', while at the field level these are structured as 'field tables'.
In the sunburst charts, the performance on all indicators at the institutional level is represented by the size of the rays of the 'sun':
a larger ray means a higher performance on that indicator. The colour of a ray reflects the dimension to which it belongs.
This personalised interactive ranking table reflects the user-driven nature of U multirank.
In order to be able to apply the principle of comparability we have integrated the existing transparency tool the U-Map classification into U multirank.
It is a user-driven higher education mapping tool that allows users to select comparable institutions on the basis of 'activity profiles' generated by the U-Map tool.
These activity profiles reflect the diverse activities of different higher education and research organisations using a set of dimensions similar to those developed in U multirank.
The underlying indicators differ as U-Map is concerned with understanding the mix of activities an institution is engaged in
while U multirank is concerned with an institution's performance in these activities (how well it does
Integrating U-Map into U multirank enables the creation of user-selected groups of sufficiently comparable institutions that can then be compared in focused institutional
or field based rankings. Figure legend: indicators shown include student staff ratio, graduation rate, qualification of academic staff, research publication output, external research income, citation index, % income from third party funding, CPD courses offered, start-up firms, % international academic staff, % international students, joint international publications, graduates working in the region, student internships in the region, and regional co-publications, compared across example institutions (Institution 1 to Institution 8).
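The two-step logic described above (first select sufficiently comparable institutions via their U-Map activity profiles, then rank the selected group on user-chosen performance indicators, without aggregating them into one score) can be sketched as follows; the data, profile labels and indicator names are illustrative assumptions, not the actual U-Map or U multirank implementation:

```python
# Hedged sketch of the two-step approach:
# step 1 - select institutions with a comparable activity profile (U-Map-style);
# step 2 - rank the selected group per user-chosen indicator (no composite score).

institutions = [
    {"name": "Institution 1", "profile": "research-intensive",
     "citation_index": 1.4, "graduation_rate": 0.82, "startup_firms": 12},
    {"name": "Institution 3", "profile": "research-intensive",
     "citation_index": 1.1, "graduation_rate": 0.88, "startup_firms": 5},
    {"name": "Institution 4", "profile": "regionally-oriented",
     "citation_index": 0.7, "graduation_rate": 0.91, "startup_firms": 20},
]

def comparable(insts, profile):
    # Step 1: keep only institutions with the selected activity profile.
    return [i for i in insts if i["profile"] == profile]

def personalised_ranking(insts, indicators):
    # Step 2: order the group separately on each user-selected indicator.
    ranking = {}
    for ind in indicators:
        ordered = sorted(insts, key=lambda i: i[ind], reverse=True)
        ranking[ind] = [i["name"] for i in ordered]
    return ranking

group = comparable(institutions, "research-intensive")
print(personalised_ranking(group, ["citation_index", "graduation_rate"]))
```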
The findings of the U multirank pilot study
U multirank was tested in a pilot study involving 159 higher education institutions drawn from 57 countries:
and faculties performing very differently across the five dimensions and their underlying indicators. The multidimensional approach makes these diverse performances transparent.
While indicators on teaching and learning, research, and internationalisation proved largely unproblematic, in some dimensions (particularly knowledge transfer
and it is clear that, as U multirank is a Europe-based project, this represents a strong expression of interest.
organisational and financial challenges, there are no inherent features of U multirank that rule out the possibility of such future growth.
and operational feasibility we have developed a U multirank 'Version 1.0' that is ready to be implemented in European higher education
The further development and implementation of U multirank
The outcomes of the pilot study suggest some clear next steps in the further development of U multirank and its implementation
The refinement of U multirank instruments: Some modifications need to be made to a number of indicators and to the data gathering instruments based on the experience of the pilot study.
Roll out of U multirank across European countries: Given the need for more transparent information in the emerging European higher education area all European higher education
and research institutions should be invited to participate in U multirank in the next phase. Many European stakeholders are interested in assessing
and comparing European higher education and research institutions and programmes globally. Targeted recruitment of relevant peer institutions from outside Europe should be continued in the next phase of the development of U multirank.
Developing linkages with national and international databases. The design of specific authoritative rankings: Although U multirank has been designed to be user-driven,
this does not preclude the use of the tool and underlying database to produce authoritative expert institutional and field based rankings for particular groups of comparable institutions on dimensions particularly relevant to their activity profiles.
In terms of the organisational arrangements for these activities we favour a further two year project phase for U multirank.
In the longer term on the basis of a detailed analysis of different organisational models for an institutionalised U multirank our strong preference is for an independent nonprofit organisation operating with multiple sources of funding.
This organisation would be independent both from higher education institutions (and their associations) and from higher education governance and funding bodies.
Its noncommercial character will add legitimacy, as will external supervision via a Board of Trustees.
1 Reviewing current rankings
classifications, and rankings-from the point of view of the information they could deliver to assist different stakeholders in their different decisions regarding higher education and research institutions.
The conceptual frameworks behind sports league tables are usually well accepted: the rules of the game define who the winner is
changing the tactics in the game (more attacks late in a drawn match),
and sparking off debates among commentators of the sport for and against the new rule. In university rankings,
what is 'the best university'. But unlike sports, there are no officially recognised bodies that are accepted as authorities that may define the rules of the game.
that, e.g., the Shanghai ranking is simply a game that is as different from the Times Higher game as rugby is from football.
The issue with some of the current university rankings is that they tend to be presented
of being guided by a (nonexistent) theory of the quality of higher education. We do not accept that position.
Our alternative to assuming an unwarranted position of authority is to reflect critically on the different roles that higher education
to define explicitly our conceptual framework regarding the different functions of higher education institutions, and in turn to derive sets of indicators from this framework.
In this sense, we want to democratise rankings in higher education and research. Based on the epistemological position that any choice of sets of indicators is driven by their makers'conceptual frameworks,
quality and accountability in higher education. It is widely recognized that although the current transparency tools, especially university league tables, are controversial,
they seem to be here to stay, and that especially global university league tables have a great impact on decision-makers at all levels in all countries,
including in universities (Hazelkorn, 2011). They reflect a growing international competition among universities for talent and resources;
at the same time they reinforce competition by their very results. On the positive side they urge decision-makers to think bigger and set the bar higher,
especially in the research universities that are the main subjects of the current global league tables.
Yet major concerns remain as to league tables'methodological underpinnings and to their policy impact on stratification rather than on diversification of mission.
Let us first define the main concepts that we will be using in this report. Under vertical stratification we understand distinguishing higher education and research institutions as 'better' or 'worse' in prestige or performance;
horizontal diversification is the term for differences in institutional missions and profiles. Regarding the different instruments, transparency tool is the most encompassing term in our use of the word:
it denotes all manner of providing insight into the diversity of higher education. Transparency tools are instruments that aim to provide information to stakeholders about the efforts and performance of higher education and research institutions.
A classification is a systematic, nominal distribution among a number of classes or characteristics without any (intended) order of preference.
Classifications give descriptive categorizations of characteristics intending to focus on the efforts and activities of higher education and research institutions, according to the criterion of similarity.
Rankings are intended hierarchical categorizations to render the outputs of the higher education and research institutions according to the criterion of best performance.
Most existing rankings in higher education take the form of a league table. A league table is a single-dimensional,
Quality assurance, evaluation or accreditation also produces information for stakeholders (review reports, accreditation status) and in that sense helps to achieve transparency.
As the information function of quality assurance is not very elaborate (usually only informing whether basic quality,
e.g. the accreditation threshold, has been reached) and as quality assurance is too ubiquitous to allow for an overview on a global scale in this report,
of which information they could deliver to assist users in their different decisions regarding higher education and research institutions.
Classifications and rankings considered in U multirank (Type / Name):
Classifications: Carnegie Classification (USA); U-Map (Europe).
Global league tables and rankings: Shanghai Jiao Tong University's (SJTU) Academic Ranking of World Universities (ARWU); Times Higher Education (Supplement) (THE); QS (Quacquarelli Symonds Ltd) Top Universities; Leiden Ranking.
National league tables and rankings: US News & World Report (USN&WR); University ranking (CHE; Germany); Studychoice123 (SK123; The Netherlands).
Specialized league tables and rankings: Financial Times ranking of business schools and programmes (FT; global); Businessweek (business schools; USA + global); The Economist (business schools; global).
The major dimensions along which we analysed the classifications, rankings and league tables included:
Level: e.g. institutional vs. field-based; Scope: e.g. national vs. international; Focus: e.g. education vs. research; Primary target group: e.g. students vs. institutional leaders vs. policy-makers; Methodology and producers: which methodological principles are applied and
what sources of data are used and by whom? We concluded from our review that different rankings
implying but often not explicating different conceptions of quality of higher education and research. Most are presented as league tables;
especially the most influential ones, the global university rankings are all league tables. The relationship of indicators collected
and their weights in calculating the league table rank of an institution are not based on explicit, let alone scientifically justifiable, conceptual frameworks.
2010) and even already anticipating the current U multirank project, the situation has begun to change: ranking producers are becoming more explicit and reflective about their methodologies and underlying conceptual frameworks.
while most rankings give only a single ranking. The problem of ignoring diversity within higher education and research institutions:
ignoring education and other functions of higher education and research institutions (practice-oriented research, innovation, 'third mission'). The problem of composite overall indicators:
ignoring that they are about different dimensions and sometimes use different scales. The problem of league tables:
most rankings are presented as league tables, assigning each institution, at least those in the top 50, unique places, suggesting that all differences in indicators are valid and of equal weight (equidistant positions).
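A toy illustration of the last two problems (the numbers are invented, not taken from any actual ranking): a composite league table assigns unique, apparently equidistant places and reorders entirely when the arbitrary weights change, whereas grouping nearly equal scores avoids exaggerating small differences:

```python
# Illustrative sketch: how arbitrary weights in a composite score reorder a
# league table, and how grouping avoids presenting tiny differences as meaningful.
scores = {  # two normalised indicators per institution (toy data)
    "A": {"research": 0.90, "teaching": 0.60},
    "B": {"research": 0.85, "teaching": 0.70},
    "C": {"research": 0.84, "teaching": 0.69},
}

def league_table(weights):
    composite = {k: sum(weights[i] * v[i] for i in weights) for k, v in scores.items()}
    return sorted(composite.items(), key=lambda kv: kv[1], reverse=True)

print(league_table({"research": 0.7, "teaching": 0.3}))  # A ranked first
print(league_table({"research": 0.3, "teaching": 0.7}))  # B ranked first with different weights

def grouped(weights, band=0.05):
    # Institutions whose composite scores differ by less than `band` share a group.
    table = league_table(weights)
    groups, current = [], [table[0]]
    for name, score in table[1:]:
        if current[-1][1] - score < band:
            current.append((name, score))
        else:
            groups.append(current)
            current = [(name, score)]
    groups.append(current)
    return [[n for n, _ in g] for g in groups]

print(grouped({"research": 0.7, "teaching": 0.3}))  # all three share one group: scores are near-equal
```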
while practically all informed the design of U multirank. We already mentioned some of them. The full list includes:
The Berlin Principles on Ranking of Higher Education Institutions (International Ranking Expert Group, 2006), which define sixteen standards
and focusing on performance. Rankings for students, such as those of CHE and Studychoice123, which have a clear focus based on a single target group,
user-oriented manner enabling custom-made rankings rather than dictating a single one. Focused institutional rankings, in particular the Leiden Ranking of university research, also with a clear focus,
and with a transparent methodology. Qualifications frameworks and Tuning Educational Structures, showing that at least qualitatively it is possible to define performances regarding student learning,
thus strengthening the potential information base for other dimensions than fundamental research. Comparative assessment of higher education students' learning outcomes (AHELO):
this feasibility project of the OECD to develop a methodology extends the focus on student learning introduced by Tuning and by national qualifications frameworks into an international comparative assessment of undergraduate students,
much like PISA does for secondary school pupils. Recent reports on rankings such as the report of the Assessment of University-Based Research Expert Group (AUBR Expert Group, 2009) which defined a number of principles for sustainable collection of research data,
such as purposeful definition of the units or clusters of research, attention to the use of non-obtrusive measurement e g. through digital repositories of publications,
to ensure that in the development of the set of indicators for U multirank we would not overlook any dimensions,
The global rankings that we studied limit their interest to several hundred preselected universities, estimated to be no more than 1% of the total number of higher education institutions worldwide. The criteria used to establish a threshold generally concern the research output of the institution. Although it could be argued that world-class universities may act as role models (Salmi, 2009), the evidence that strong institutions inspire better performance across whole higher education systems is so far mainly found in the area of research rather than that of teaching (Sadlak & Liu, 2007), if there are positive system-wide spillovers at all (Cremonini, Benneworth & Westerheijden, 2011). From our overview of the indicators used in the main global university rankings (summarised in Table 1-2) we concluded that they indeed focus heavily on the research aspects of higher education institutions (research output, impact as measured through citations, and reputation in the eyes of academic peers) and that efforts to include the education dimension remain weak. Similarly, the EUA in a recent overview also judged that these global rankings provide an 'oversimplified picture' of institutional mission, quality and performance, as they focus mainly on indicators related to the research function of universities (Rauhvargers, 2011).
Table 1-2: Indicators and weights in global university rankings (HEEACT 2010, ARWU 2010, THE 2010, QS 2011, Leiden Ranking 2010).
Research output and citation impact: articles published over the past 11 years (10%); ... per faculty member (20%); two versions of a size-independent, field-normalized average impact (the 'crown indicator' CPP/FCSm and the alternative calculation MNCS2); a size-dependent 'brute force' impact indicator (multiplication of P with the university's field-normalized average impact): P*CPP/FCSm; a citations-per-publication indicator (CPP).
Quality of education: alumni of an institution winning Nobel Prizes and Fields Medals (10%); PhDs awarded per staff (6%); undergraduates admitted per staff (4.5%); income per staff (2.25%); ratio of PhD awards to bachelor awards (2.25%); faculty-student ratio (20%).
Staff of an institution winning Nobel Prizes and Fields Medals (20%); highly cited researchers in 21 broad subject categories (20%).
Reputation: peer review survey (19.5 + 15 = 34.5%); international staff score (5%); international students score (5%); international staff and students (5%); industry income per staff (2.5%); international faculty (5%); international students (5%).
Websites: http://ranking.heeact.edu.tw/en-us/2010/Page/Indicators; http://www.arwu.org/ARWUMETHODOLOGY2010.jsp; http://www.timeshighereducation.co.uk/world-university-rankings/2010-2011/analysis-methodology.html; http://www.topuniversities.com/university-rankings/world-university-rankings; http://www.socialsciences.leiden.edu/cwts/products-services/leiden-ranking-2010
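To make the bibliometric notions in Table 1-2 more concrete, the sketch below shows one plausible reading of these indicators (it is our own illustration in Python with invented data, not the CWTS implementation): citations per publication, a ratio-of-averages field-normalized score of the CPP/FCSm type, an average-of-ratios score of the MNCS type, and the size-dependent variant obtained by multiplying by the number of publications P.

```python
# Illustrative sketch (not the official CWTS code): citation-impact indicators
# for one institution, given (citations, expected_field_citations) per paper.

def impact_indicators(publications):
    """publications: list of (citations, expected_field_citations) tuples."""
    n = len(publications)
    citations = [c for c, _ in publications]
    expected = [e for _, e in publications]

    cpp = sum(citations) / n                                  # citations per publication
    cpp_fcsm = (sum(citations) / n) / (sum(expected) / n)     # ratio of averages ('crown indicator' reading)
    mncs = sum(c / e for c, e in publications) / n            # average of ratios (MNCS reading)
    brute_force = n * cpp_fcsm                                 # size-dependent variant: P * (CPP/FCSm)
    return {"CPP": cpp, "CPP/FCSm": cpp_fcsm, "MNCS": mncs, "P*CPP/FCSm": brute_force}

# Toy example: three papers with 10, 4 and 0 citations, in fields where the
# average comparable paper receives 5, 4 and 2 citations respectively.
print(impact_indicators([(10, 5), (4, 4), (0, 2)]))
```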
National databases on higher education and research institutions cover different information, are based on different national definitions of items, and are therefore not directly comparable across countries.
Self-reported data collected by higher education and research institutions participating in a ranking. This source is used regularly, though not in all global rankings, due to the lack of externally available and verified statistics (Thibaud, 2009). The drawback is the high expense for the ranking organisation and for the participating higher education and research institutions.
Surveys among stakeholders such as staff members, students, alumni or employers. Surveys are strong methods to elicit opinions such as reputation or satisfaction. Student satisfaction, and to a lesser extent the satisfaction of other stakeholders, is used in national rankings, but not in the existing global university rankings. Reputation surveys are used globally, but have been proven to be very weak cross-nationally (Federkeil, 2009), even if the sample design and response rates were acceptable, which is not often the case in the current global university rankings. Manipulation of opinion-type data has also surfaced in surveys for rankings.
U-Map has tested 'pre-filling' higher education institutions' questionnaires, i.e. data available in national public sources are entered into the questionnaires sent to higher education institutions for data gathering.3 This should reduce the effort required from higher education institutions and give them the opportunity to verify the 'pre-filled' data as well. The U-Map test with 'pre-filling' from national data sources in Norway proved successful and resulted in a substantial decrease of the data-gathering burden at the level of higher education institutions. (Footnote 3: The beginnings of European data collection, as in the EUMIDA project, may help to overcome this problem for the European region in years to come.)
1.4 Impacts of current rankings
According to many commentators, rankings have substantial impacts on higher education, for instance by encouraging higher education and research institutions to improve their performance. Impacts may affect amongst other things:
Student demand. There is evidence that student demand and enrolment in study programmes rise after positive statements in national, student-oriented rankings. Both in the US and in Europe, rankings are not used equally by all types of students (Hazelkorn, 2011): less by domestic undergraduate entrants, more at the graduate and postgraduate levels. Especially at the undergraduate level, rankings appear to be used particularly by students of high achievement and by those coming from highly educated families (Cremonini, Westerheijden & Enders, 2008; Heine & Willich, 2006; McDonough, Antonio & Perez, 1998).
Institutional management. Rankings strongly impact management in higher education institutions.
The majority of higher education leaders report that they use potential improvement in rank to justify claims on resources (Espeland & Sauder, 2007; Hazelkorn, 2011). In institutional actions to improve ranking positions, they tend to focus on targeting the indicators in league tables that are most easily influenced, e.g. the institution's branding, institutional data, and the choice of publication language (English) and channels (journals counted in the international bibliometric databases). Moreover, there are various examples of cases in which leaders' salaries or positions were linked to their institution's position in rankings (Jaschik, 2007). In nations across the globe, global rankings have prompted the desire for 'world-class universities', both as symbols of national achievement and prestige and supposedly as engines of the knowledge economy (Marginson, 2006).
It remains an open question whether redirecting funds to a small set of higher education and research institutions to make them 'world class' benefits the whole higher education system; research on this question is lacking until now.
The higher education 'reputation race'. The reputation race (van Vught, 2008) implies the existence of an ever-increasing search by higher education and research institutions and their funders for higher positions in the league tables. In Hazelkorn's survey of higher education institutions, 3% were ranked first in their country, but 19% wanted to get to that position (Hazelkorn, 2011).
The reputation race has costly implications. The problem of the reputation race is that the investments do not always lead to better education and research,
and that the resources spent might be used more efficiently elsewhere. Besides, the link between quality in research and quality in teaching is not particularly strong (see Dill & Soo, 2005).
Quality of higher education and research institutions. Rankings' incomplete conceptual and indicator frameworks tend to become rooted as definitions of quality (Tijssen, 2003). This standardization process is likely to reduce the horizontal diversity in higher education systems.
The 'Matthew effect': a situation where already strong institutions are able to attract more resources, e.g. from students (by increasing tuition fees). Institutional leaders are under great pressure to improve their institution's position in the league tables.
Most of the effects discussed above are rather negative for students, institutions and the higher education sector. Yet rankings need not be harmful per se: they may provide useful stimuli to institutions to improve their performance. Similarly, rankings may provide useful stimuli to students to search for the best-fitting study programmes and to policy-makers to consider where in the higher education system investment should be directed for the system to fulfil its social functions optimally. Our conclusion is therefore not that rankings as such are undesirable, but that the current rankings and league tables seem to invite overreactions on too few dimensions.
For some target groups, in particular students and researchers, information has to be field-based; for others, e.g. university leaders and national policy-makers, information about the higher education institution as a whole has priority (related to the strategic orientation of institutions); a multilevel set of indicators must reflect these different needs. In rankings, comparisons should be made between higher education and research institutions with similar characteristics, leading to the need for a pre-selection of a set of more or less homogeneous institutions. Rankings that include very different profiles of higher education and research institutions are non-informative and misleading. Rankings have to be multidimensional. The various functions of higher education and research institutions for a heterogeneity of stakeholders and target groups can only be addressed adequately in a multidimensional approach. There are neither theoretical nor empirical reasons for assigning fixed weights to individual indicators to calculate a composite overall score. Rankings should not use league tables from 1 to n but should differentiate between clear and robust differences in levels of performance.
These general conclusions have been an important source of inspiration for how we designed U-Multirank, a new, global, multidimensional ranking instrument. In this chapter we describe the general design of the multidimensional global ranking tool that we have called 'U-Multirank'. First, we present the general design principles that to a large extent have guided the design process. Next, we describe the conceptual framework from which the ranking's dimensions are derived. Finally, we outline a number of methodological choices that have a major impact on the operational design of U-Multirank.
2.2 Design Principles
U-Multirank aims to address the challenges identified as arising from the various currently existing ranking tools. A number of basic principles have therefore been taken into account when designing and constructing U-Multirank. Our fundamental epistemological argument is that, as all observations of reality are theory-driven (formed by conceptual systems), an 'objective ranking' cannot be developed (see chapter 1). Every ranking will reflect the normative design and selection criteria of its constructors.
Higher education and research institutions are predominantly multipurpose, multiple-mission organizations undertaking different mixes of activities (teaching and learning, research, knowledge transfer, regional engagement, etc.). It makes no sense to compare the research performance of a major metropolitan research university with that of a remotely located university of applied sciences, or the internationalization achievements of a national humanities college, whose major purpose is to develop and preserve its unique national language, with those of an internationally oriented European university with branch campuses in Asia.
The fourth principle is that higher education rankings should reflect the multilevel nature of higher education. With very few exceptions, higher education institutions are combinations of faculties, departments and programs of varying strength.
Producing only aggregated institutional rankings disguises this reality and does not produce the information most valued by major groups of stakeholders:
students, potential students, their families, academic staff and professional organizations. These stakeholders are interested mainly in information about a particular field.
A further principle is that the ranking should avoid the pitfalls discussed in chapter 1, such as the production of league tables and the denial of contextuality. In addition, it should minimise the incentives for strategic behaviour on the part of institutions to 'game the results'. These principles underpin the design of U-Multirank, resulting in a user-driven, multidimensional and methodologically robust ranking instrument. In addition, U-Multirank aims to enable its users to identify institutions and programs that are sufficiently comparable to be ranked.
For the design of U-Multirank we specify our own conceptual framework in the following section.
2.3 Conceptual framework
A meaningful ranking requires a conceptual framework that guides the selection of indicators.
We found a number of points of departure for a general framework for studying higher education and research institutions in the higher education literature.
First, a common point of departure is that processing knowledge is the general characteristic of higher education and research institutions (Clark 1983). This knowledge processing may concern the creation of new knowledge (research), or its transfer to stakeholders outside the higher education and research institutions (knowledge transfer), or its transfer to various groups of 'learners' (education). Of course, a focus on the overall objectives of higher education and research institutions in the three well-known primary processes or functions of 'teaching and learning, research, and knowledge transfer' is a simplification of the complex world of higher education and research institutions. These institutions are, in varying combinations of focus, committed to the efforts to discover, conserve, refine, transmit and apply knowledge. Nevertheless, the three functions are a useful way to describe conceptually the general purposes of these institutions and the broad range of activities in which higher education and research institutions are involved.
The second conceptual assumption is that the performance of higher education and research institutions may be directed at different 'audiences'. In the current higher education and research policy arena, two main general audiences have been prioritised: the first through the international orientation of higher education and research institutions, the second through their engagement with the region in which a higher education institution operates. In reality these 'audiences' are of course often combined in the various activities of higher education and research institutions. It is understood that the functions higher education and research institutions fulfil for international and regional audiences are manifestations of their primary processes, i.e. the three functions of education, research and knowledge transfer mentioned before. What we mean by this is that there may be educational, research and knowledge transfer elements in the activities directed at international as well as regional audiences.
A major issue in higher education and research institutions, as in many social systems, has been that the transformation from inputs to performances is not self-evident.
One of the reasons why there is so much criticism of league tables is exactly the point that from similar sets of inputs,
different higher education and research institutions may reach quite different types and levels of performance. We make a general distinction between the 'enabling' stages of this overall process (inputs and processes) on the one hand and the 'performance' stages (outputs and impacts) on the other.
Ranking information is produced to inform users about the value of higher education and research, which is necessary as it is not obvious that they are easily able to take effective decisions without such information. Higher education is not an ordinary 'good' for which the users themselves may assess the value a priori. Higher education is to be seen as an experience good (Nelson 1970): the users may assess the quality of the good only while or after 'experiencing' it (i.e. the higher education program), but such 'experience' is ex post knowledge. Prospective students cannot know beforehand whether the educational program meets their standards or criteria; ex ante they can only refer to the perceptions of previous users. Some even say that higher education is a credence good (Dulleck and Kerschbamer 2006): the value of the good cannot be assessed with certainty even after it has been experienced.
If users are interested in the value added of a degree program on the labor market, information on how well a class is taught is not relevant.
They need information on how the competences acquired during higher education will improve their position on the career or social ladder.
Some users are interested in the overall performance of higher education and research institutions (e.g. policy-makers) and for them the internal processes contributing to performance are of less interest. The institution may well remain a 'black box' for these users.
Other stakeholders (students and institutional leaders are prime examples) are interested precisely in what happens inside the box.
For instance, students may want to know the quality of teaching in the field in which they are interested.
as they may consider this as an important aspect of their learning experience and their time in higher education (consumption motives).
Students might also be interested in the long-term impact of taking the program, as they may see higher education as an investment in their future career (investment motives).
Users engage with higher education for a variety of reasons and therefore will be interested in different dimensions
and performance indicators of higher education institutions and the programs they offer. Rankings must be designed in a balanced way to serve these different information needs.
For different dimensions (research, teaching & learning, knowledge transfer) and different stakeholders/users the relevance of information about different aspects of performance may vary.
Filtering higher education and research institutions into homogeneous groups requires contextual information rather than only input data. Contextual information for higher education and research institutions relates to their positioning in society and their specific institutional appearances. A substantial part of the relevant context is captured by applying another multidimensional transparency tool (U-Map) in preselecting higher education and research institutions for comparison.
Conceptual grid of U-Multirank:
Stages: enabling (input, process) and performance (output, impact), set within the institutional context.
Functions: Teaching & Learning; Research; Knowledge Transfer.
Audiences: International Orientation; Regional Engagement.
Using this conceptual framework we have selected the following five dimensions as the major content categories of U-Multirank: Teaching & Learning, Research, Knowledge Transfer, International Orientation and Regional Engagement. In chapter 3 we will discuss the various indicators to be used in these five dimensions.
An important factor in the argument against rankings and league tables is the fact that often their selection of indicators is guided primarily by the (easy) availability of data rather than by relevance.
This often leads to an emphasis on indicators of the enabling stages of the higher education production process, rather than on the area of performance, largely because the governance of higher education and research institutions has traditionally concentrated on the bureaucratic (in Weber's neutral sense of the word) control of budgets, personnel, students, facilities, etc. Then too, inputs and processes can be influenced by the managers of higher education and research institutions. They can deploy their facilities for teaching, but in the end it rests with the students to learn and, after graduation, to work successfully with the competencies they have acquired. Similarly, higher education and research institution managers may make facilities and resources available for research, but they cannot guarantee that scientific breakthroughs are created. Inputs and processes are the parts of a higher education and research institution's system that are documented best. But assessing the performance of these institutions implies a more comprehensive approach than a narrow focus on inputs and processes. The dissatisfaction among users of most current league tables and rankings arises because they are often more interested in institutional performance, while the information they get is largely about inputs. In our design of U-Multirank we therefore focused on the selection of output and impact indicators. U-Multirank intends to be a multidimensional performance assessment tool and thus needs to employ indicators that relate to the performances of higher education and research institutions.
2.4 Methodological aspects
There are a number of methodological aspects that have a clear impact on the way a new,
multidimensional ranking tool like U multirank can be developed. In this section we explain the various methodological choices made when designing U multirank.
2.4.1 Methodological standards
In addition to the content-related conceptual framework, the new ranking tool and its underlying indicators must also be based on the methodological standards of empirical research: validity and reliability in the first instance. In addition, because U-Multirank is an international comparative transparency tool, it must deal with the issue of comparability across cultures and countries. Finally, in order to become sufficiently operational, U-Multirank has to address the issue of feasibility.
Validity. (Construct) validity refers to the evidence about
whether a particular operationalization of a construct adequately represents what is intended by the theoretical account of the construct being measured.
When characterizing, for example, the internationality of a higher education institution, the percentage of international students is a valid indicator only if the scores are not heavily influenced by citizenship laws. Using the nationality of the qualifying diploma on entry therefore has a higher validity than using the citizenship of the student.
Reliability. Reliability refers to the consistency of a set of measurements or of a measuring instrument.
A measure is considered reliable if, repeatedly applied to the same population, it would always arrive at the same result. This is particularly an issue with the survey data (e.g. among students, alumni, staff) used in rankings.
Comparability. National higher education systems are based on national legislation setting specific legal frameworks, including legal definitions (e.g. what or who counts as a professor). Additional problems arise from differing national academic cultures. Indicators, data elements and underlying questions therefore have to be defined in such a way that they can be answered comparably across countries. For example, if we know that doctoral students are counted as academic staff in some countries and as students in others, we need to ask for the number of doctoral students counted as academic staff in order to harmonise data on academic staff (excluding doctoral students).
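A minimal sketch of this kind of harmonisation step is given below; the field names and figures are invented for illustration and are not U-Multirank's actual data model. The reported headcount is corrected for doctoral students wherever the national convention counts them as academic staff.

```python
# Hypothetical illustration of harmonising academic staff counts across
# countries that treat doctoral students differently (field names invented).

def harmonised_academic_staff(record):
    """record: dict with reported staff numbers from one institution."""
    staff = record["academic_staff_reported"]
    # Only subtract doctoral students if the national convention counts them as staff.
    if record["phd_students_counted_as_staff"]:
        staff -= record["phd_students_in_staff_count"]
    return staff

country_a = {"academic_staff_reported": 1200,
             "phd_students_counted_as_staff": True,
             "phd_students_in_staff_count": 250}
country_b = {"academic_staff_reported": 950,
             "phd_students_counted_as_staff": False,
             "phd_students_in_staff_count": 0}

print(harmonised_academic_staff(country_a))  # 950
print(harmonised_academic_staff(country_b))  # 950
```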
Feasibility. The objective of U-Multirank is to design a multidimensional global ranking tool that is feasible in practice.
The ultimate test of the feasibility of our ranking tool has to be empirical: can U multirank be applied in reality
and can it be applied with a favourable relation between benefits and costs in terms of financial and human resources?
We report on the empirical assessment of the feasibility of U multirank in chapter 6 of this report.
2.4.2 User-driven approach
To guide the reader's understanding of U-Multirank, we now briefly describe the way in which we have methodologically worked out the principle of being user-driven (see section 2.2). We propose an interactive web-based approach in which users can, amongst other things, choose whether to focus the ranking on higher education and research institutions as a whole (focused institutional rankings) or on fields within these institutions (field-based rankings). Compared to existing league tables we see this as one of the advantages of our approach.
2.4.3 U-Map and U-Multirank
The principle of comparability (see section 2.2) calls for a method that helps us find institutions that are comparable for the purposes of ranking. Such a method can be found in the connection of U-Multirank with U-Map (see www.u-map.eu). U-Map describes ('maps') higher education institutions on a number of dimensions, each representing an aspect of their activities. U-Map can prepare the ground for U-Multirank in the sense that it helps identify those higher education institutions that are comparable and for which, therefore, performance can be compared by means of the U-Multirank ranking tool. A detailed description of the methodology used in this classification can be found on the U-Map website (http://www.u-map.eu/methodology doc/) and in the final report of the U-Map project.
U multirank focuses on the performance aspects of higher education and research institutions. U multirank shows how well the higher education institutions are performing in the context of their institutional profile.
Thus, the emphasis is on indicators of performance, whereas in U-Map it lies on the enablers of that performance: the inputs and activities. U-Map and U-Multirank share the same conceptual model. This conceptual model provides the rationale for the selection of the indicators in both U-Map and U-Multirank, which are complementary instruments for mapping diversity: horizontal diversity in the classification and vertical diversity in the ranking.
2.4.4 Grouping
U-Multirank does not calculate league tables. As has been argued in chapter 1, league table rankings have severe flaws. As an alternative, U-Multirank uses a grouping method: instead of calculating exact league table positions, we assign institutions to a limited number of groups.
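As an illustration of the general idea (not the exact U-Multirank procedure, which is described later in this report), the following sketch assigns institutions to a small number of performance groups on a single indicator instead of giving them exact league-table positions; the group boundaries here are simple quantile cut points chosen purely for the example.

```python
# Illustrative grouping sketch: assign institutions to a few broad groups
# on one indicator rather than ranking them 1..n (thresholds are invented).

def assign_groups(scores, n_groups=3):
    """scores: dict mapping institution name -> indicator value.
    Returns dict mapping institution name -> group label (1 = top group)."""
    ordered = sorted(scores.values())
    # Cut points at equal quantiles of the observed score distribution.
    cuts = [ordered[int(len(ordered) * k / n_groups)] for k in range(1, n_groups)]
    groups = {}
    for name, value in scores.items():
        group = n_groups - sum(value >= c for c in cuts)  # higher score -> lower group number
        groups[name] = group
    return groups

example = {"Uni A": 0.82, "Uni B": 0.79, "Uni C": 0.41, "Uni D": 0.40, "Uni E": 0.15}
print(assign_groups(example))  # Uni A and B in group 1, C and D in group 2, E in group 3
```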
2.4.5 Design context
In this chapter we have described the general aspects of the design process regarding U-Multirank: we have presented the design principles, described the conceptual framework from which the five dimensions of U-Multirank are deduced, and outlined a number of methodological approaches to be applied in U-Multirank. Together these elements form the design context from which we have constructed U-Multirank. The design choices made here are in accordance with both the Berlin Principles and the recommendations by the Expert Group on the Assessment of University-Based Research.5 The Berlin Principles4 emphasize, among other things, the importance of being clear about the purpose of rankings and their target groups, and of recognising the diversity of institutional profiles. (Footnote 4: http://www.ireg-observatory.org/index.php?)
Based on our design context, in the following chapters we report on the construction of U-Multirank. (Footnote 5: Expert Group on Assessment of University-Based Research (2010), Assessing Europe's University-Based Research, European Commission, DG Research, EUR 24187 EN, Brussels.)

3 Constructing U-Multirank: Selecting indicators

3.1 Introduction
Having set out the design context for U-Multirank in the previous chapter,
we now turn to a major part of the process of constructing U multirank: the selection and definition of the indicators.
These indicators are assumed to enable us to measure the performances of higher education and research institutions, both at the institutional and at the field level, in the five dimensions: teaching & learning, research, knowledge transfer, international orientation and regional engagement. This chapter provides an overview of the sets of indicators selected for the five dimensions.
The other important components of the construction process for U-Multirank are the databases and the data collection tools that allow us to actually 'fill' the indicators. These will be discussed further in chapter 4, where we explain the design of U-Multirank in more detail. In chapters 5 and 6 we report on the U-Multirank pilot study, during which we analysed the quality and availability of the data.
Various categories of stakeholders (student organizations, employer organizations, associations and consortia of higher education institutions, government representatives, international organizations) have been involved in an iterative process of consultation to come to a stakeholder-based assessment of the relevance of the indicators presented to them as potential items in the five dimensions of U-Multirank (see 3.3). In addition,
we invited feedback from international experts in higher education and research and from the Advisory board of the U multirank project.
Among the selection criteria were that the indicator focuses on the performance of (programs in) higher education and research institutions and is defined in such a way that it measures 'relative' characteristics (e.g. controlling for the size of the institution), and that the required data to construct the indicator are either available in existing databases and/or can be obtained from higher education and research institutions. On this basis, the indicators selected for the pre-test phase in U-Multirank (see 6.2) were then grouped into three categories.
The outcome of the pre-test was then used as further input for the wider pilot, in which the actual data were collected to quantify the indicators for U-Multirank at both the institutional and the field level. The five subsections that follow present the indicators for the five dimensions (teaching & learning, research, knowledge transfer, international orientation, regional engagement).
3.3.1 Teaching and learning
Education is the core activity in most higher education and research institutions. Education comprises all processes that transmit knowledge, skills and values to learners (colloquially: students). Education can be conceived as a process subdivided into enablers (inputs,6 process7) and performance (outputs and outcomes8).
Teaching and learning ideally lead to the impacts or benefits that graduates will need for a successful career in the area studied and a successful, happy life as an involved citizen of a civil society.
Career and quality of life are complex concepts, involving lifelong impacts. Moreover, the pace of change of higher education and research institutions means that long-term performance is of low predictive value for judgments on the future of those institutions.
All we could aspire to in a ranking is to assess 'early warning indicators' of higher education's contribution, i.e. outcomes and outputs. Students' learning outcomes after graduation would be a good measure of outcomes.
However measures of learning outcomes that are internationally comparable are only now being developed in the AHELO project (see chapter 1) 9. At this moment such measures do not exist,
but if the AHELO project succeeds they would be a perfect complementary element in our indicator set.
Other indicators therefore have to be used in order to reflect performance in the teaching and learning dimension. Teaching & learning can be looked at from different levels and different perspectives. As one of the main objectives of our U-Multirank project is to inform stakeholders such as students, their perspective is important too. From their point of view, the output to be judged is the educational process itself.
(Footnotes: 6 — inputs include, amongst other things, student quality and quantity; 7 — the process of education includes the design and implementation of curricula, with formal teaching, self-study, peer learning, counselling services, etc.; 8 — outputs are direct products of a process, while outcomes relate to achievements due to the outputs; 9 — http://www.oecd.org/document/22/0,3343,en_2649_35961291_40624662_1_1_1_1,00.html.)
Another approach to get close to learning outcomes lies in assessing the quality of study programs.
Quality assurance and accreditation procedures, even if they have become almost ubiquitous in the world's higher education, are too diverse to lead to comparable indicators (see chapter 1): some quality assurance procedures focus on programs, others on entire higher education institutions; they have different foci and use different data.
The qualifications frameworks currently being developed in the Bologna process and in the EU may come to play a harmonising role with regard to educational standards in Europe, but they are not yet effective (Westerheijden et al., 2010) and of course they do not apply in the rest of the world. Besides, measures of students' progress through their programs can be seen as indicators of the quality of their learning.
Indicators for quality can also be sought in student and graduate assessments of their learning experience. The student/graduate experience of education is conceptually closer to what those same students learn than judgments by external agents could be. Students' opinions may derive from investment or from consumption motives, but it is an axiom of economic theories as well as of civil society that persons know their own interest (and experience) best.
Therefore we have chosen indicators reflecting both. An issue might be whether student satisfaction surveys are prone to manipulation:
do students voice their loyalty to the institution rather than their genuine (dis)satisfaction? This is not seen as a major problem, as studies show that loyalty depends on satisfaction (Athiyaman, 1997; Brown & Mazzarol, 2009; OECD, 2003). Nevertheless we should remain vigilant to uncover signs of university efforts to manipulate their students' responses; in our experience, including control questions in the survey on how and with which additional information students were approached to participate gives a good indication.
Non-plausible student responses (for instance an extremely short time to complete the online questionnaire) could be eliminated.
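A minimal sketch of such a plausibility filter is given below; the threshold and field names are invented for illustration and are not U-Multirank's actual cleaning rules. It simply discards questionnaire submissions completed implausibly fast.

```python
# Illustrative cleaning step for student-survey data: drop responses whose
# completion time is implausibly short (threshold is an arbitrary example).

MIN_COMPLETION_SECONDS = 180  # assumed minimum plausible time for the questionnaire

def plausible_responses(responses):
    """responses: list of dicts with at least a 'completion_seconds' field."""
    return [r for r in responses if r["completion_seconds"] >= MIN_COMPLETION_SECONDS]

survey = [
    {"student_id": 1, "completion_seconds": 540, "overall_satisfaction": 4},
    {"student_id": 2, "completion_seconds": 45,  "overall_satisfaction": 5},  # implausibly fast
    {"student_id": 3, "completion_seconds": 820, "overall_satisfaction": 3},
]
print(plausible_responses(survey))  # response from student 2 is removed
```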
Another issue about using surveys in international comparative studies concerns differences in culture that affect tendencies to respond in certain ways. Experience with European student surveys has shown, however, that student surveys can give valid and reliable information in a European context.10 One of the questions that we will return to later in this report is whether a student survey about their own program/institution can produce valid and reliable information on a global scale. (Footnote 10: http://www.eurostudent.eu:8080/index.html.)
Table 3-1 below presents the Teaching & Learning indicators that were selected for the pilot test of U-Multirank. The column on the right-hand side includes some of the comments made on these indicators.
Table 3-1: Indicators for the dimension Teaching & Learning in the Focused Institutional and Field-based Rankings

Focused Institutional Ranking (indicator: definition; comments):
1 Expenditure on teaching: Expenditure on teaching activities. Comments: stakeholders questioned its relevance.
2 Graduation rate: The percentage of a cohort that graduated x years after entering the program (x is the stipulated, 'normal', time expected for completing all requirements for the degree, times 1.5). Comments: regarded by stakeholders as the most relevant indicator; shows the effectiveness of the schooling process. More selective institutions score better than (institutions in) open access settings. Sensitive to the discipline mix of the institution and to economic circumstances.
3 Interdisciplinarity of programs: The number of degree programs involving at least two traditional disciplines as a percentage of the total number of degree programs. Comments: based on objective statistics. Relevant indicator according to stakeholders: shows whether teaching leads to broadly-educated graduates. But sensitive to regulatory (accreditation) and disciplinary context. Data collection and availability problematic.
4 Relative rate of graduate (un)employment: The rate of unemployment of graduates 18 months after graduation as a percentage of the national rate of unemployment of graduates (18 months after graduation), for bachelor and master graduates. Comments: reflects the extent to which the institution is 'in sync' with its environment. Sensitive to the discipline mix of the institution and to (regional) economic circumstances. Data availability poses a problem.
5 Time to degree: Average time to degree as a percentage of the official length of the program (bachelor and master). Comments: reflects the effectiveness of the teaching process. Availability of data may be a problem. Depends on the kind of programs.

Field-based Ranking (indicator: definition; comments):
6 Student-staff ratio: The number of students per FTE academic staff. Comments: fairly generally available.
It is an input indicator; depends on educational approaches; sensitive to definitions of 'staff' and to the discipline mix of the institution.
7 Graduation rate: The percentage of a cohort that graduated x years after entering the program (x is the stipulated, 'normal', time expected for completing all requirements for the degree, times 1.5). Comments: see above, institutional ranking.
8 Investment in laboratories (Engineering, field-based ranking): Investment in laboratories (average over the last five years, in millions in national currencies) per student. Comments: high-standard laboratories are essential for offering high quality education. International comparisons difficult.
9 Qualification of academic staff: The number of academic staff with a PhD as a percentage of the total number of academic staff (headcount). Comments: proxy for teaching staff quality. Generally available. Input indicator. Depends on national regulations and definitions of 'staff'.
10 Relative rate of graduate (un)employment: The rate of unemployment of graduates 18 months after graduation as a percentage of the national rate of unemployment of graduates (18 months after graduation), for bachelor and master graduates. Comments: see above, institutional ranking.
11 Interdisciplinarity of programs: The number of degree programs involving at least two traditional disciplines as a percentage of the total number of degree programs. Comments: see above, institutional ranking.
12 Inclusion of issues relevant for employability in curricula: Rating of the inclusion into the curriculum (minimum levels/standards) of: project-based learning; joint courses/projects with business students (engineering); business knowledge (engineering); project management; presentation skills; existence of an external advisory board (including employers). Comments: problems with regard to availability of data.
13 Inclusion of work experience into the program: Rating based on duration (weeks/credits) and modality.
14 ...: ... access to computer support. Comments: data easily available.
15 Student gender balance: Number of female students as a percentage of total enrolment. Comments: indicates social equity (a balanced situation is considered preferable), but it is an indicator of social context, not of educational quality.
Student satisfaction indicators: indicators reflecting students' appreciation of several items related to the teaching & learning process.
Student satisfaction is of high conceptual validity. It can be made available in a comparative manner through a survey.
An issue might be whether student satisfaction surveys are prone to manipulation: do students voice their loyalty to the institution rather than their genuine (dis)satisfaction? Global comparability is also problematic: cross-cultural differences may affect the students' answers to the questions.
16 Student satisfaction: overall judgment of the program: Overall satisfaction of students with their program and the situation at their higher education institution. Comments: refers to a single question giving an 'overall' assessment; not a composite indicator.
17 Student satisfaction: research orientation of the educational program: Index of four items: research orientation of the courses, teaching of relevant research methods, opportunities for early participation in research, and stimulation to give conference papers.
18 Student satisfaction: evaluation of teaching: Satisfaction with regard to the students' role in the evaluation of teaching, including the prevalence of course evaluation by students, the relevance of issues included in course evaluation, information about evaluation outcomes, and the impact of evaluations.
19 Student satisfaction: facilities: The satisfaction of students with respect to facilities, including classrooms/lecture halls (index including availability/access for students, number of places, technical facilities/devices), laboratories (index including availability/access for students, number of places, technical facilities/devices) and libraries (index including availability of the literature needed, access to electronic journals, support services/e-services).
20 Student satisfaction: organization of the program: The satisfaction of students with the organization of a program, including the possibility to graduate in time, access to classes/courses, class size, and the relation of examination requirements to teaching.
21 Student satisfaction: promotion of employability (inclusion of work experience): Index of several items: students assess the support during their internships, the organization, preparation and evaluation of internships, and the links with the theoretical phases.
22 Student satisfaction: quality of courses: Index including the range of courses offered, coherence of modules/courses, didactic competencies of staff, stimulation by teaching, quality of learning materials, and quality of laboratory courses (engineering).
23 Student satisfaction: social climate: Index including interaction with other students, interaction with teachers, attitude towards students in the city, and security.
24 Student satisfaction: support by teachers: Included items: availability of teachers/professors (e.g. during office hours, via email); informal advice and coaching; feedback on homework, assignments and examinations; coaching during laboratory/IT tutorials (engineering only); support during individual study time (e.g. through learning platforms); suitability of handouts.
25 Student satisfaction: opportunities for a stay abroad: Index made up of several items: the attractiveness of the university's exchange programs and of the partner universities; availability of exchange places; support and guidance in preparing for the stay abroad; financial support (scholarships, exemption from study fees); transfer of credits from the exchange university; integration of the stay abroad into studies (no time loss caused by the stay abroad); and support in finding internships abroad.
26 Student satisfaction: student services: Quality of a range of student services, including general student information, accommodation services, financial services, career service, international office, and student organizations/associations.
27 Student satisfaction: university webpage: Quality of information for students on the website. Index of several items, including general information on the institution and admissions, information about the program, information about classes/lectures, and English-language information (for international students in non-English-speaking countries).

One indicator dropped from the list during the stakeholder consultation is graduate earnings.
Although the indicator may reflect the extent to which employers value the institution's graduates,
it was felt that this indicator is very sensitive to economic circumstances and institutions have little influence on labor markets.
In addition, data availability proved unsatisfactory for this indicator and comparability issues negatively affect its reliability.
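To illustrate how two of the rate indicators defined in Table 3-1 could be operationalised, the sketch below computes a graduation rate within 1.5 times the nominal program length and a graduate unemployment rate expressed relative to the national rate for graduates. It is a simplified illustration under our own assumptions, with invented numbers, not the project's data-processing code.

```python
# Illustrative computation of two Table 3-1 style rate indicators
# (all input numbers are invented; definitions simplified for the example).

def graduation_rate(cohort_size, graduated_within_limit):
    """Share of an entry cohort graduating within 1.5 x the nominal program length."""
    return 100.0 * graduated_within_limit / cohort_size

def relative_unemployment(inst_unemployed, inst_graduates,
                          national_unemployed, national_graduates):
    """Institutional graduate unemployment 18 months after graduation,
    as a percentage of the national graduate unemployment rate."""
    inst_rate = inst_unemployed / inst_graduates
    national_rate = national_unemployed / national_graduates
    return 100.0 * inst_rate / national_rate

print(graduation_rate(cohort_size=400, graduated_within_limit=310))  # 77.5
print(relative_unemployment(30, 500, 12000, 150000))                 # 75.0
```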
For our field-based rankings, subject-level approaches to quality and educational standards do exist. In business studies, the 'triple crown' of specialized voluntary accreditation by AACSB (USA), AMBA (UK) and EQUIS (Europe) creates a build-up of expectations on study programs in the field. In the field of engineering, the Washington Accord is an 'international agreement among bodies responsible for accrediting engineering degree programs. It recognizes the substantial equivalency of programs accredited by those bodies and recommends that graduates of programs accredited by any of the signatory bodies be recognized by the other bodies as having met the academic requirements for entry to the practice of engineering' (www.washingtonaccord.org).
In general, information on whether programs have acquired one or more of these international accreditations presents an overall, distant proxy to their educational quality.
However, the freedom to opt for international accreditation in business studies may differ across countries, which makes an accreditation indicator less suitable for international comparative ranking.
In engineering, adherence to the Washington Accord depends on national-level agencies, not on individual higher education institutions' strategies.
These considerations have contributed to our decision not to include accreditation-related indicators in our list of Teaching & Learning performance indicators.
Instead, the quality of the learning experience is reflected in the student satisfaction indicators included in Table 3-1. These indicators can be based on a student survey carried out among a sample of students from Business studies and Engineering.
As shown in the bottom half of Table 3-1, this survey focuses on provision of courses, organization of programs and examinations, interaction with teachers, facilities, etc.
Stakeholders'feedback on the student satisfaction indicators revealed that they have a positive view overall of the relevance of the indicators on student satisfaction.
However, it was also felt that the total number of indicators is quite high and should be reduced in the final indicator set. In the field-based rankings, objective indicators are used in addition to the student satisfaction indicators. Most are similar to the indicators in the focused institutional rankings. Some additional indicators are included to pay attention to the facilities and services provided by the institution to enhance the learning experience (e.g. laboratories, curriculum).
3.3.2 Research
Selecting indicators for capturing the research performance of a higher education and research institution or a disciplinary unit (e.g. department, faculty) within that institution has to start with a definition of research. We take the definition set out in the OECD's Frascati Manual:11 'Research and experimental development (R&D) comprise creative work undertaken on a systematic basis in order to increase the stock of knowledge, including knowledge of man, culture and society, and the use of this stock of knowledge to devise new applications. The term R&D covers three activities: basic research, applied research and experimental development.' Given the increasing complexity of the research function of higher education institutions and its extension beyond PhD-awarding institutions, U-Multirank adopts a broad definition of research, incorporating elements of both basic and practice-oriented (applied) research. There is a growing diversity of research missions across the classical research universities and the more vocationally oriented institutions (university colleges, institutes of technology, universities of applied sciences, Fachhochschulen, etc.). This is reflected in the wide range of research outputs and outlets mapped across the full spectrum, from discovery to knowledge transfer to innovation. (Footnote 11: http://browse.oecdbookshop.org/oecd/pdfs/browseit/9202081e.PDF)
Research performance indicators may be distinguished into:
Output indicators, measuring the quantity of research products. Typical examples are the number of papers published or the number of PhDs delivered. Outcome indicators, relating to a level of performance or achievement. Given that in most disciplines publications are often seen as the single most important research output of higher education institutions, bibliometric data are a central source for research indicators. The Expert Group on Assessment of University-Based Research12 defines research output as referring to individual journal articles, conference publications, book chapters, artistic performances, films, etc.,
that is, works within academic standards. (Footnote 12: see http://www.kowi.de/Portaldata/2/Resources/fp/assessing-europe-university-based-research.pdf)
Table 3-2: Primary form of written communication by discipline group (Expert Group on Assessment of University-Based Research, 2010). The table distinguishes natural sciences, life sciences, engineering sciences, social sciences & humanities, and arts; journal articles are marked as a primary form in all five discipline groups.
Apart from using existing bibliometric databases, ...
... Recommended by the Expert Group on Assessment of University-Based Research; it is difficult to separate teaching and research expenditure in a uniform way.
2 Research income from competitive sources: Income from European research programs + income from other international competitive research programs + income from research councils + income ... Allows ... and art schools to be covered in the ranking; the data suffer from a lack of agreed definitions and from limited availability.
... awards and scholarships won by employees for research work and in (international) cultural competitions, including awards granted by academies of science. However, research findings are published not just in journals.
12 Doctorate productivity: Number of completed PhDs per number of professors (headcount) x 100 (three-year average); indicates aspects of the quantity and quality of a unit's research.
... An indicator reflecting arts-related output is included in U-Multirank as well. However, data availability poses some challenges here. Therefore it was decided to keep these indicators in the list of indicators for U-Multirank's institutional ranking.
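As a simple numeric illustration of the doctorate productivity indicator as defined above (completed PhDs per professor, headcount, times 100, averaged over three years), consider the following sketch; the input figures are invented.

```python
# Illustrative doctorate productivity: completed PhDs per professor (headcount)
# * 100, averaged over three years (all figures invented).

def doctorate_productivity(phds_per_year, professors_per_year):
    """phds_per_year, professors_per_year: sequences covering the same three years."""
    yearly = [100.0 * p / staff for p, staff in zip(phds_per_year, professors_per_year)]
    return sum(yearly) / len(yearly)

print(doctorate_productivity([42, 38, 45], [120, 118, 125]))  # roughly 34.4
```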
3.3.3 Knowledge transfer
Knowledge transfer has become increasingly relevant for higher education and research institutions as many nations and regions strive to make more science output readily available for economic, social and cultural development. Knowledge transfer has been described as 'the process by which the knowledge, expertise and intellectually linked assets of higher education institutions are applied constructively beyond higher education for the wider benefit of the economy and society, through two-way engagement with business, the public sector, cultural and community partners'.
Assessing the knowledge transfer (or knowledge exchange) process in higher education and research institutions, and ultimately its impact on users, i.e. business and the economy, has now become a preoccupation of many governing and funding bodies, as well as policy-makers.
Traditionally, technology transfer (TT) is concerned primarily with the management of intellectual property (IP) produced by universities and other higher education and research institutions. Higher education and research institutions often have technology transfer offices (TTOs) (Debackere & Veugelers, 2005), which are units that liaise with industry and assist higher education and research institutions' personnel in the commercialisation of research results. TTOs provide services in terms of assessing inventions, patenting, licensing IP, developing and funding spin-offs and other start-ups, and approaching firms for contract-based arrangements.
A typical classification of mechanisms and channels for knowledge transfer between higher education and research institutions and other actors would include four main interaction channels for communication between higher education and research institutions and their environment: texts; people, including students and researchers; artefacts, including equipment, protocols, rules and regulations; and money. Texts are an obvious knowledge transfer channel; scientific publications are, however, already covered under the research dimension in U-Multirank. In the case of texts, it is customary to distinguish between two forms: publications and patents. While publications are part of the research dimension in U-Multirank, patents will be included under the Knowledge Transfer dimension. Knowledge transfer through people is partly reflected in the Teaching & Learning and Regional Orientation dimensions included in U-Multirank. Knowledge transfer through people also takes place through networks, usually through a range of short and long training programmes (offered by education institutions), some of which have an option of accreditation. Money flows are an important interaction channel, next to texts and people. Unlike texts and people, money is not a carrier of knowledge, but a way of valuing the knowledge transferred in its different forms.
Data on licensing and related money flows are collected in surveys such as the one carried out by the US-based Association of University Technology Managers (AUTM) for its Annual Licensing Survey. Artistic outputs such as films and exhibition catalogues have been included in the scholarly outputs covered in the Research dimension of U-Multirank. Broader surveys of knowledge transfer also exist, such as the Higher Education-Business and Community Interaction (HE-BCI) Survey in the UK.14 This UK survey began in 2001.
U-Multirank particularly wants to capture aspects of knowledge transfer performance. However, given the state of the art in measuring knowledge transfer (Holi et al. 2008), ... (Footnote 14: http://ec.europa.eu/invest-in-research/pdf/download en/knowledge transfer web.pdf. The HE-BCI survey is managed by the Higher Education Funding Council for England (HEFCE) and used as a source of information to inform the funding allocations to reward the UK universities' third stream activities.) ... aims to create a ranking methodology for measuring university third mission activities along three subdimensions:
... is regarded as a relevant indicator by the EGKTM.
3 University-industry joint publications: Relative number of research publications that list an author affiliate address referring to a business enterprise or a private sector R&D unit. Comments: less relevant for HEIs oriented to the humanities and social sciences; ISI databases available; used in the CWTS University-Industry Research Cooperation Scoreboard. (Footnote 16: see also the brief section on the EUMIDA project included in this report; one of EUMIDA's findings is that data on technology transfer activity ...)
4 ... for which the university acts as an applicant, related to the number of academic staff. Comments: widely used in KT surveys; depends on the disciplinary mix of the HEI; data are available from secondary (identical) data sources.
5 Size of Technology Transfer Office: Number of employees (FTE) at the Technology Transfer Office related to the number of FTE ... Comments: the KT function may be dispersed across the HEI; not regarded as a core indicator by the EGKTM.
6 CPD courses offered: Number of CPD courses offered per academic staff (FTE). Comments: captures outreach to the professions; relatively new indicator; CPD difficult to describe uniformly.
7 Co-patents: Percentage of university patents for which at least one co-applicant is a firm, as a proportion of all patents. Comments: reflects the extent to which the HEI shares its IP with external partners; depends on the disciplinary mix of the HEI; data available from secondary sources (Patstat).
8 Number of spin-offs: The number of spin-offs created over the last three years per academic staff (FTE). Comments: the EGKTM regards spin-offs as a core indicator.
Field-based Ranking (indicator: definition; comments):
9 Academic staff with work experience outside higher education: Percentage of academic staff with work experience outside higher education within the last 10 years. Comments: signals that the HEI's staff is well placed to bring work experience into their academic work.
... HEIs not doing research in the natural sciences, engineering or medical sciences are hardly covered.
11 Co-patents: Percentage of university patents for ... HEIs not doing research in the natural sciences, engineering or medical sciences are hardly covered; the number of licences is more robust than licensing income.
14 Patents awarded: The number of patents awarded to the university related to the number of academic staff. Comments: widely used KT indicator; data available from secondary (identical) data sources; patents with an academic inventor but another institutional applicant are not taken into account; not relevant for all fields.
15 University-industry joint publications: Number of research publications that list an author affiliate address referring to a business enterprise or a private sector R&D unit ...
The number of collaborative research projects (university-industry) is another example of a knowledge transfer indicator that was not selected for the Focused Institutional Ranking.
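For concreteness, the co-patents indicator described above, the share of a university's patents that have at least one firm among the co-applicants, could be computed along the following lines; the patent records and field names are invented for the example and are not drawn from Patstat.

```python
# Illustrative co-patent share: percentage of university patents with at least
# one firm as co-applicant (records and field names are invented).

def co_patent_share(patents):
    """patents: list of dicts, each with an 'applicants' list of (name, type) tuples."""
    if not patents:
        return 0.0
    with_firm = sum(
        1 for p in patents
        if any(kind == "firm" for _, kind in p["applicants"])
    )
    return 100.0 * with_firm / len(patents)

portfolio = [
    {"applicants": [("University X", "university"), ("Acme Ltd", "firm")]},
    {"applicants": [("University X", "university")]},
    {"applicants": [("University X", "university"), ("Beta GmbH", "firm")]},
    {"applicants": [("University X", "university")]},
]
print(co_patent_share(portfolio))  # 50.0
```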
3.3.4 International orientation
Internationalization is a widely discussed and complex phenomenon in higher education. The rise of globalization and Europeanization has put growing pressure on higher education and research institutions to respond to these trends and to develop an international orientation in their activities. Important aspects include: activities to encourage and promote the international mobility of students and staff; activities to develop and enhance international cooperation; the increasing emphasis on the need to prepare students for international labor markets and to increase their international cultural awareness; the increasing internationalization of curricula; and the wish to increase the international position and reputation of higher education and research institutions (Enquist, 2005).
Indicators for the dimension International Orientation in the Focused Institutional and Field-based Rankings

Focused Institutional Ranking (indicator: definition; comments):
1 Educational programs in foreign language: The number of programs offered in a foreign language as a percentage of the total number of programs offered. Comments: signals the commitment to international orientation in teaching and learning.
2 International academic staff: Academic staff of foreign nationality, employed by the institution or working on an exchange basis. Comments: nationality is not the most precise way of measuring international orientation.
3 International doctorate graduation rate: The number of doctorate degrees awarded to students ...
4 ...: ... Comments: ... but biased towards certain disciplines and languages.
5 Number of joint degree programs: The number of students in joint degree programs with a foreign university (including an integrated period at the foreign university) as a percentage of total enrolment. Comments: integration of international learning experiences is a central element of internationalization. Data available. Indicator not often used.
Field-based Ranking: definition and comments
6 Incoming and outgoing students: incoming exchange students as a percentage of the total number of students, and the number of students going abroad as a percentage of the total number of students enrolled. Important indicator of the 'international atmosphere' of a faculty/department. Addresses student mobility and curriculum quality. Data available.
7 International graduate employment rate: the number of graduates employed abroad or in an international organization as a percentage of the total number of graduates employed. Indicates the students' preparedness for the international labor market. Data not readily available. No clear international standards for measuring.
8 International academic staff: percentage of international academic staff in the total number of (regular) academic staff. See above, institutional ranking.
9 International research grants: research grants attained from foreign and international funding bodies as a percentage of total income. Proxy of the international reputation and quality of research activities. Stakeholders question its relevance.
10 Student satisfaction: internationalization of programs: index including the attractiveness of the university's exchange programs, the attractiveness of the partner universities, the sufficiency of the number of exchange places, support and guidance in preparing the stay abroad, financial support, the transfer of credits from the exchange university, and the integration of the stay abroad into studies (no time loss caused by the stay abroad). Addresses quality of the curriculum. Not used frequently.
11 [...] but no problems of disciplinary distortion because comparison is made within the field.
12 Percentage of international students: the number of degree-seeking students with a foreign diploma on entrance as a percentage of total enrolment in degree programs. Reflects attractiveness to international students. Data available but sensitive to the location (distance to border) of the HEI. Stakeholders consider the indicator important.
13 Student satisfaction: international orientation of programs: rating including several issues: the existence of joint degree programs, the inclusion of mandatory stays abroad, international students (degree and exchange), the international background of staff, and teaching in foreign languages. Good indicator of the international orientation of teaching; composite indicators depend on the availability of each data element (a sketch of one possible aggregation rule is given below).
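Indicators 10 and 13 are composite indices whose feasibility depends on the availability of each component. As a minimal sketch of one possible aggregation rule, assuming Likert-style component scores, a simple averaging of the available components with a minimum-coverage threshold could look as follows; the component names, the averaging rule and the threshold are illustrative assumptions, not the method prescribed by U multirank.

from statistics import mean

# Illustrative component scores (1-5) for one respondent; None marks a missing element.
components = {
    "exchange_programme_attractiveness": 4,
    "partner_university_attractiveness": None,   # missing data element
    "number_of_exchange_places": 3,
    "support_preparing_stay_abroad": 5,
    "financial_support": None,                   # missing data element
    "credit_transfer": 4,
    "integration_of_stay_abroad": 4,
}

def composite_index(scores: dict, min_components: int = 4):
    """Average the available component scores; return None if too few are present."""
    available = [v for v in scores.values() if v is not None]
    if len(available) < min_components:
        return None  # indicator not computed when too many elements are missing
    return round(mean(available), 2)

print(composite_index(components))  # 4.0 for this example

Such a rule makes explicit when a composite indicator should be reported as missing rather than computed from too few elements, which is the concern noted in the comments above.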
It should be pointed out here that one of the indicators is a student satisfaction indicator: 'Student satisfaction: internationalisation of programs'. This describes the opportunities for students to go abroad. Students' judgments about the opportunities to arrange a semester or an internship abroad are an aspect of the internationalization of programs. This indicator is relevant for the field level.
An indicator that was considered but dropped during the stakeholders' consultation process is 'Size of international office'. While this indicates the commitment of the higher education and research institution to internationalization, and data is available, stakeholders consider this indicator not very important. Moreover, its validity is questionable, as the size of the international office, a facilitating service, is a very distant proxy indicator.
'International partnerships', that is, the number of international academic networks a higher education and research institution participates in, [...]
Higher education and research institutions can play an important role in the process of creating the conditions for a region to prosper.
How well a higher education and research institution is engaged in the region is increasingly considered to be an important part of the mission of higher education institutions.
The latter two dimensions are covered in the U multirank dimension 'Knowledge Transfer'. Indicators for the social dimension of the third mission comprise indicators on international mobility (which are covered in the U multirank dimension International Orientation) and a very limited number of indicators on regional engagement.
Activities and indicators on regional and community engagement can be categorized in three groups: outreach, partnerships and curricular engagement.18
Outreach covers, among other things, the provision of institutional resources for regional and community use, benefiting both the university and the regional community.
Partnerships focus on collaborative interactions with the region/community and related scholarship for the mutually beneficial exchange, exploration, discovery and application of knowledge, information and resources.
Curricular engagement concerns teaching, learning and scholarship that engage faculty, students and region/community in mutually beneficial and respectful collaboration.
Are there visible structures that function to assist with region-based teaching and learning? Is there adequate funding available for establishing and deepening region-based activities?
Are there courses that have a regional component (such as service-learning courses)? Are there sustained, mutually beneficial partnerships with the region?
How much does the institution draw on regional resources (students, staff, funding) and how much does the region draw on the resources provided by the higher education and research institution (graduates and facilities)?
Clarification is required as to what constitutes a region. U multirank has suggested starting with the existing list of regions in the Nomenclature of Territorial Units for Statistics (NUTS) classification developed and used by the European Union,19 in particular the NUTS 2 level. For non-European countries a different region classification will need to be used. In our feasibility study, we have allowed higher education and research institutions to specify their own delimitation of their region.
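To make the NUTS-based delimitation concrete, a minimal sketch of how an address can be assigned to a NUTS 2 region and compared with the region of the institution is given below; the postal-code lookup table is a stand-in for a real correspondence file, and the example codes are only illustrative.

from typing import Optional

# Hypothetical postal-code -> NUTS 2 lookup; in practice this would be loaded
# from a published postal-code/NUTS correspondence table.
POSTCODE_TO_NUTS2 = {
    "7500AE": "NL21",   # Overijssel (example)
    "7511JM": "NL21",
    "1012WX": "NL32",   # Noord-Holland (example)
}

def nuts2(postcode: str) -> Optional[str]:
    return POSTCODE_TO_NUTS2.get(postcode.replace(" ", "").upper())

def same_region(institution_postcode: str, other_postcode: str) -> bool:
    """True if both addresses map to the same NUTS 2 region (unknown codes never match)."""
    a, b = nuts2(institution_postcode), nuts2(other_postcode)
    return a is not None and a == b

print(same_region("7500 AE", "7511 JM"))  # True: same NUTS 2 region
print(same_region("7500 AE", "1012 WX"))  # False: different regions

The same comparison can be made at NUTS 3 level for indicators such as regional participation in continuing education; only the lookup table changes.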
Indicators for the dimension Regional Engagement in the Focused Institutional and Field-based Rankings
Focused Institutional Ranking: definition and comments
1 Graduates working in the region: the number of graduates working in the region as a percentage of all graduates employed. Frequently used in benchmarking exercises. Stakeholders like the indicator.
Sensitive to the way public funding for HEIs is organized (national versus regional/federal systems). Availability of data problematic.
3 Regional joint research publications: number of research publications that list one or more author-affiliate addresses in the same NUTS2 or NUTS3 region. New type of indicator.
5 Student internships in local/regional enterprises: the number of student internships in regional enterprises as a percentage of total enrolment (with a defined minimum of weeks and/or credits). Internships open up communication channels between the HEI and regional/local enterprises. Stakeholders see this as an important indicator. Indicator hardly ever used.
Field-based Ranking: definition and comments
7 Graduates working in the region: the number of graduates working in the region as a percentage of all graduates employed. See above, institutional ranking.
8 Regional participation in continuing education: the number of regional participants (coming from the NUTS3 region where the HEI is located) as a percentage of the total population aged 25+ in that NUTS3 region. Indicates how much the HEI draws on the region and vice versa. Indicator hardly ever used.
9 Student internships in local/regional enterprises: number of internships of students in regional enterprises as a percentage of total students. See above, institutional ranking, but disciplinary bias not problematic at field level.
10 Summer school/courses for secondary education students: number of participants in schools/courses for secondary school students as a percentage of total enrolment.
'Co-patents with regional firms' reflect cooperative research activities between higher education institutions and regional firms. While data may be found in international patent databases, [...]
The same holds for measures of the regional economic impact of a higher education institution, such as the number of jobs generated by the university.
Assessing what the higher education and research institution 'delivers' to the region (in economic terms) is seen as most relevant, but data constraints prevent us from using such an indicator. Public lectures that are open to an external audience [...]
A high percentage of new entrants from the region may be seen as the result of the high visibility of regionally active higher education and research institutions. It may also be a result of engagement with regional secondary schools. This indicator, however, was not included in our list.
In the following chapters the pilot study on the empirical feasibility assessment of the U multirank tool and its various indicators will be discussed. As a result of this pilot assessment the final list of indicators will be presented.
4 Constructing U multirank: databases and data collection instruments
This chapter presents the databases and data collection instruments used in constructing U multirank. The first part is an overview of existing databases, mainly on bibliometrics and patents; the second deals with the instruments used to collect data from the institutions and from students.
4.2 Databases
4.2.1 Existing databases
One of the activities in the U multirank project was to review existing rankings.
If existing databases can be relied on for quantifying the U multirank indicators this would be helpful in reducing the overall burden for institutions in handling the U-Multirank data requests.
For other aspects and dimensions, U multirank will have to rely on self-reported data. Regarding research output and impact, there are worldwide databases on journal publications and citations.
To further assess the availability of data covering individual higher education and research institutions, the results of the EUMIDA project were also taken into account.21 The EUMIDA project (see www.eumida.org) seeks to develop the foundations of a coherent data infrastructure (and database) at the level of individual higher education institutions.
Our analysis of data availability was completed with a brief online consultation with the group of international experts connected to U multirank (see section 4.2.5). The international experts were asked to give their assessment of the situation with respect to data availability in some of the non-EU countries included in U multirank. (21 The U multirank project was granted access to the preliminary EUMIDA results in order to learn about data availability in the countries covered by EUMIDA.)
4.2.2 Bibliometric databases
There are a number of international databases
which can serve as a source of information on the research output of a higher education and research institution (or one of its departments).
The production of publications by a higher education and research institute not only reflects research activities in the sense of original scientific research,
but usually also the presence of underlying capacity and capabilities for engaging in sustainable levels of scientific research.22 The research profile of a higher education [...]
The bibliometric methodologies applied in international comparative settings such as U multirank usually draw their information from publications that are released in scientific and technical journals.
U multirank therefore makes use of international bibliometric databases to compile some of its research performance indicators
To compile the publications-related indicators in the U multirank pilot study, bibliometric data was derived from the October 2010 edition of the Web of Science bibliographical database.
This dedicated bibliometric database is maintained and operated by the CWTS (one of the CHERPA Network partners) under a full license from Thomson Reuters. It includes the 'standardized institutional names' of higher education and research institutions. This data processing of address information is done at the aggregate level of the entire 'main' organization (not for sub-units such as departments or faculties).
All the selected institutions in the U multirank pilot study produced at least one Web of Science-indexed research publication during the years 1980-2010. These publications mainly refer to discovery-oriented 'basic' research of the kind that is conducted at universities and research institutes.
For the following six indicators selected for inclusion in the U multirank pilot test (see chapter 6) one can derive data from the CWTS/Thomson Reuters Web of Science database:
1. total publication output
2. university-industry joint publications
3. international joint publications
4. field-normalized citation rate
5. share of the world's most highly cited publications
6. regional joint publications
Four of these indicators were constructed specially for U multirank and have never been used before in any international classification or ranking.
4.2.3 Patent databases
As part of the indicators in the Knowledge Transfer dimension, U multirank selected the number of patent applications for which a particular higher education and research institution acts as an applicant and (as part of that) the number of co-patents applied for by the institution together with a private organization.
For U multirank, patent data were retrieved from the European Patent Office (EPO). Its Worldwide Patent Statistical Database (version October 2009),25 also known as PATSTAT, is designed [...] and by DG Research. (25 This version of PATSTAT is held by the K.U. Leuven (Catholic University Leuven).)
4.2.4 Data availability according to EUMIDA
Like the U multirank project, the EUMIDA project (see http://www.eumida.org) collects data on individual higher education and research institutions. The EUMIDA and U multirank project teams agreed to share information on issues such as definitions of data elements.
The overlap lies mainly in the area of data related to the inputs (or activities) of higher education and research institutions, since U-Map aims to build activity profiles for individual institutions whereas U multirank constructs performance profiles.
The findings of EUMIDA point to the fact that for the more research intensive higher education institutions, data for the dimensions of Education and Research are covered relatively well
Table 4-1 below shows the U multirank data elements that are covered in EUMIDA and whether information on these data elements may be found in national databases (statistical offices, ministries, rectors' associations, etc.).
The table shows that EUMIDA primarily focuses on the Teaching & Learning and Research dimensions. It also illustrates that information on only a few U multirank data elements is available from national databases.
Table 4-1: Data elements shared between EUMIDA and U multirank and their coverage in national databases
Teaching & Learning / relative rate of graduate unemployment: CZ, FI, NO, SK, ES
Research / expenditure on research: AT*, BE, CY, CZ*, DK, EE, FI, GR*, HU, IT, LV*, LT*, LU, MT*, NO, PL*, RO*, SI*, ES, SE, CH
Research / research publication output: IE*, IT, LU, MT*, NO, NL (p), PL*, SI, ES, UK
International Orientation: no overlap between U multirank and EUMIDA
Regional Engagement: no overlap between U multirank and EUMIDA
* indicates: there are confidentiality issues (e.g. national statistical offices may not be prepared to make data public without consulting individual HEIs)
(p) indicates: data are only partially available (e.g. only for public HEIs, or only for (some) research universities)
The list of EUMIDA countries with abbreviations: Austria (AT), Belgium (BE), Belgium-Flanders community (BE-FL), Bulgaria (BG), Cyprus (CY), Czech Republic (CZ), Denmark (DK), Estonia (EE), Finland (FI), France (FR), Germany (DE), [...]
4.2.5 Expert view on data availability in non-European countries
The Expert Board of the U multirank project was consulted to assess, for their six countries (all from outside Europe), the availability of data related to the U multirank indicators.27 They gave their judgement on the question whether data was available in national databases and/or in the institutions themselves.
Table 4-2 shows that the Teaching and Learning dimension scores best in terms of data availability.
Table 4-2: Availability of U multirank data elements in countries' national databases according to experts in six countries (Argentina/AR, Australia/AU, Canada/CA, Saudi Arabia/SA, South Africa/ZA, United States/US). For each data element, the first list gives the countries where it is available in national databases and the second the countries where it is available in institutional databases.
Teaching & Learning: graduation rate (national: AR, CA, US, ZA; institutional: AR, AU, SA, ZA); relative rate of graduate unemployment (national: AU, CA; institutional: AU, CA, SA, ZA)
Knowledge Transfer: incentives for knowledge exchange (national: AR; institutional: AR, AU, CA, SA); CPD courses offered (AU, CA, SA, ZA); university-industry [joint publications] (ZA); patents (national: AR; institutional: AR, CA, US, ZA)
International Orientation: educational programs in foreign language (national: ZA; institutional: AR, AU, CA, SA, ZA); international academic staff (national: ZA, US; institutional: AR, AU, CA, SA, US, ZA); joint degree programmes (national: AR; institutional: AR, AU, CA, US); international doctorate graduation rate (national: US; institutional: AR, CA, SA, US)
Regional Engagement: income from regional sources (AU, CA, SA, ZA); student internships in local/regional enterprises (AU, SA, US, ZA); graduates working in the region (US); research contracts with regional business (AR, CA, ZA); co-patents with regional firms (national: ZA; institutional: CA, ZA)
Source: based on the U multirank expert survey.
If we look at the outcomes, it appears that for the Teaching
& Learning indicators the situation is rather promising (graduation rate, time to degree). In the Research dimension, Expenditure on Research and Research Publication Output data are represented best in national databases.
For data elements that are not available in existing databases, or that have definitions that differ from the ones used for the questionnaires applied in U multirank (see next section), the U multirank project had to rely largely on self-reported data (both at the institutional and at the field level), collected directly from the higher education and research institutions. The main instruments to collect data from the institutions were four online questionnaires: three for the institutions and one for students. The four surveys are: the U-Map questionnaire, the institutional questionnaire, the field-based questionnaire and the student survey.
In designing the questionnaires, emphasis was placed on the way in which questions were formulated. It is important that they can only be interpreted in one way.
4.3.1.1 U-Map questionnaire
The U-Map questionnaire is an instrument for identifying similar subsets of higher education institutions within the U multirank sample. It collects data on: students: numbers; modes of study and age; international students; students from region; graduates: by level of program;
subjects; orientation of degrees; graduates working in region; staff data: fte and headcount; international staff; income:
total income; income by type of activity; by source of income; expenditure: total expenditure; by cost centre;
use of full cost accounting; research & knowledge exchange: publications; patents; concerts and exhibitions; start-ups. The academic year 2008/2009 was selected as the default reference year.
Respondents from the institutions were advised to complete the U-Map questionnaire first before completing the other questionnaires.
4.3.1.2 Institutional questionnaire
By means of U multirank's institutional questionnaire28,
data is collected on the performance of the institution. Like the U-Map questionnaire, this questionnaire is structured along the lines of different data types to allow for a more rapid data collection by the institution's respondents.
The questionnaire is therefore divided into the following categories: general information: name and contact; public/private character and age of institution;
university hospital; students: enrolments; programme information: bachelor/master programmes offered; CPD courses; graduates: graduation rates;
graduate employment; staff: fte and headcount; international staff; technology transfer office staff; income: total; income from teaching;
income from research; income from other activities; expenditure: total expenditure; by cost centre; coverage; research & knowledge transfer:
publications; patents; concerts and exhibitions; start-ups. As the institutional questionnaire and the U-Map questionnaire partly share the same data elements,
Data elements from U-Map are transferred automatically to the U multirank questionnaire using a 'transfer tool'. The academic year 2008/2009 was selected as the default reference year.
4.3.1.3 Field-based questionnaire
The field-based questionnaire includes information on individual faculties/departments
and their programmes in the pilot fields of business studies, mechanical engineering and electrical engineering. Like the institutional questionnaire, the field-based questionnaire is structured along the different types of data requested to reduce the administrative burden for respondents.
The data requested include: staff: number of professors; international visiting/guest professors; professors offering lectures abroad; professors with work experience abroad; number of PhDs; number of post-docs; funding: external research funds; license agreements/income; joint R&D projects with local enterprises; students: total number (female/international; degree and exchange students); internships made; degree theses in cooperation with local enterprises;
regional engagement: continuing education programmes/professional development programmes; summer schools/courses for secondary education students; description:
accreditation of department; profile with regard to teaching & learning, profile with regard to research. A second part of the questionnaire asks for details of the individual study programmes to be included in the ranking.
In particular the following information was collected: basic information about the programme (e.g. degree, length); interdisciplinary characteristics;
full time/part time; number of students enrolled in the programme; number of study places and level of tuition fees;
periods of work experience integrated in programme; international orientation; joint study programme; credits earned for achievements abroad;
number of exchange students from abroad; courses held in foreign language; special features; number of graduates;
information about labor market entry.
4.3.2 Student survey
For measuring student satisfaction (see section 3.3.1), the main instrument is an online student survey. In order to ensure that students are not pressured by their institution/teachers to rate their own institution favorably,
the institutions were asked to invite their students individually to participate in the survey either by mail or email rather than having them complete the survey in the classroom.
Access to the questionnaire was controlled by individual passwords. The student questionnaire uses a combination of open questions
and predefined answers and asks for the students'basic demographic data and information on their programme.
The main focus of the survey is on the assessment of the teaching and learning experience
and on the facilities of the institution. In order to control for possible manipulation by institutions,
a number of control questions were included in the questionnaire. Students were asked for information on how they received the invitation
and whether there were any attempts by teachers, deans or others to influence their ratings. In relation to the student survey, the delimitation of the sample is important.
As students were asked to rate their own institution and programme, students who had just started their degree programme were excluded from the sample.
Hence students from the second year onwards in bachelor and master programmes and from the third year onwards in long (pre-Bologna) programmes were meant to be included.
In order to have a sample size that allows for analysis, the survey aimed to include up to 500 students by institution and field.
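As a minimal sketch of these delimitation rules (second year onwards for bachelor and master programmes, third year onwards for long pre-Bologna programmes, at most 500 students per institution and field), the following could be used; the record layout, the random draw and the seed are illustrative assumptions rather than the procedure actually applied.

import random

def eligible(student: dict) -> bool:
    """Eligibility rules described in the text."""
    if student["programme_type"] in ("bachelor", "master"):
        return student["year_of_study"] >= 2
    if student["programme_type"] == "long_pre_bologna":
        return student["year_of_study"] >= 3
    return False

def draw_sample(students: list, cap: int = 500, seed: int = 42) -> list:
    """Keep eligible students and draw at most `cap` per institution and field."""
    random.seed(seed)
    by_group = {}
    for s in students:
        if eligible(s):
            by_group.setdefault((s["institution"], s["field"]), []).append(s)
    sample = []
    for group in by_group.values():
        sample.extend(group if len(group) <= cap else random.sample(group, cap))
    return sample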
4.3.3 Pretesting the instruments
A first version of the three new data collection instruments (the institutional questionnaire,
department questionnaire and student questionnaire) was tested between June and September 2010. The U-Map questionnaire had already been tested.
The U multirank questionnaires were tested in terms of cultural/linguistic understanding, clarity of definitions of data elements and feasibility of data collection.
Ten institutions were invited to complete and comment on the institutional and departmental questionnaire and to distribute 20 student questionnaires.
The selection was based on the list of institutions that had expressed their interest in participating in the project.
In selecting the institutions for the pre-test the U multirank team considered the geographical distribution and the type of institutions.
Since not all institutions responded fully to the pre-test, a 'light' version was sent to an additional 18 institutions.
Instead of asking them to provide all the data on a relatively short notice, these institutions were contacted to offer their feedback on the clarity of the questions and on the availability of data.
According to the pre-test results, the general format and structure of the institutional questionnaire seemed to be clear and user-friendly.
The pre-test showed, however, two types of problems for some indicators. First, several indicators require a more precise specification, definition and/or examples. Respondents worried that for some indicators the definitions might not be sufficient for internationally comparable results.
Secondly, several indicators presented difficulties to respondents because the required data was not collected centrally by the institution.
Some of the frequently mentioned availability problems are presented separately for each dimension. Teaching and learning.
Questions about student numbers and study programmes seem to be unproblematic in most cases. Problems emerge, however, with some output-related data elements such as graduate employment, where data is often not collected at the institutional level. Interdisciplinarity of programs proved to be another problematic indicator.
Availability problems were also reported for regional engagement data: this was the case for 'graduates working in the region' and 'student internships in regional enterprises'.
Information on international students and staff, as well as on programmes in a foreign language was largely available.
As expected, the question of how to define an 'international student' came up occasionally. In sum, the institutional questionnaire worked well in terms of its structure and usability.
The respondents did not find the questionnaire excessive or burdensome. The pre-test did reveal a need for clearer definitions for some data elements.
Problems with regard to the availability of data were reported mainly on issues of academic staff (e.g. fte data, international staff), links to business (in education/internships and research) and the use of credits (ECTS).
The definition of the categories of academic staff ('professors', 'other academic staff') clearly depends on national legislation.
The student survey was pretested on a sample of over 80 students. In general, their comments were very positive: the questionnaire was clear and captured relevant issues of the students' teaching and learning experience/environment. Some students would have preferred more questions about the social climate at the institution
and about the city or town in which it was situated; a number of reactions (also from pre-test institutions) indicated that the questionnaire should not be any lengthier, however.
A major challenge deduced from these comments is how to compare across cultures students'assessment of their institutions.
Based on approved instruments from other fields (e.g. surveys on health services) we have used 'anchoring vignettes' to test sociocultural differences in assessing specific constellations of services/conditions in higher education with respect to teaching and learning.
A balance had to be found between the wish to cover all relevant issues on the five dimensions of U multirank and the need to limit the questionnaire in terms of length.
In order to come to a meaningful and comprehensive set of indicators at the conclusion of the U multirank pilot study we had to aim for a broad data collection to cover a broad range of indicators.
For the student questionnaire the conclusion was that there is no need for changes in the design.
In addition, a number of supporting instruments were prepared for the four U multirank surveys. These instruments ensure that respondents will have a common understanding of definitions and concepts.
A glossary of indicators for the four surveys was published on the U multirank website. Throughout the data collection process the glossary was updated regularly.
Questions could also be asked concerning the questionnaires, and contact could be made with the U multirank team on other matters.
A technical specifications protocol for U multirank was developed introducing additional functions in the questionnaire to ensure that a smooth data collection could take place:
the option to transfer data from the U-Map to the U multirank institutional questionnaire, and the option to have multiple users access the questionnaire at the same time.
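The transfer option mentioned above amounts to pre-filling answers that the two questionnaires have in common. A minimal sketch of that idea is shown below; the question identifiers and the mapping are invented for illustration and do not reproduce the actual U multirank implementation.

# Hypothetical mapping from U-Map question IDs to U multirank question IDs
SHARED_ELEMENTS = {
    "umap_total_enrolment": "umr_students_enrolled",
    "umap_fte_academic_staff": "umr_staff_fte",
    "umap_total_income": "umr_income_total",
}

def prefill(umap_answers: dict, umr_answers: dict) -> dict:
    """Copy shared answers from U-Map into the U multirank institutional
    questionnaire without overwriting anything the respondent already entered."""
    merged = dict(umr_answers)
    for umap_id, umr_id in SHARED_ELEMENTS.items():
        if umr_id not in merged and umap_id in umap_answers:
            merged[umr_id] = umap_answers[umap_id]
    return merged

umap = {"umap_total_enrolment": 21000, "umap_total_income": 350000000}
print(prefill(umap, {"umr_staff_fte": 1800}))
# {'umr_staff_fte': 1800, 'umr_students_enrolled': 21000, 'umr_income_total': 350000000}

Not overwriting existing answers keeps the respondent in control when the two questionnaires use slightly different reference periods or definitions.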
We updated the U multirank website regularly and provided information about the steps/time schedules for data collection.
All institutions had clear communication partners from the U multirank team.
4.4 A concluding perspective
This chapter, providing a quick survey of existing databases, has shown that for most dimensions one will have to rely to a large extent on data collected by means of questionnaires sent to representatives of institutions, their students and possibly their graduates.
The data collection could be extended further to include employers and other clients of higher education and research institutions, but that would make the task even bigger.
Such a data collection involves several groups of respondents: institutions, representatives of departments in the institution and students. Sampling techniques (selecting/identifying institutions, departments/programmes,
their representatives and their students) are crucial, as is the intelligent use of technology (internet, visualisation techniques, supporting tools).
The U multirank questionnaires were therefore accompanied by a glossary of definitions and an FAQ facility to improve the reliability of the answers.
However, as a result of differences in national higher education systems, different accounting systems, as well as different national customs and definitions of indicators, there are limits to the comparability of data.
5 Testing U multirank: pilot sample and data collection
Having described the design and construction process for U multirank, we will now describe the feasibility testing of this multidimensional ranking tool.
This test took place in a pilot study specifically undertaken to analyse the actual feasibility of U multirank on a global scale.
In addition we needed to ensure sufficient overlap between the institutional ranking and the field-based rankings in business studies and two fields of engineering.
One of the basic ideas of U multirank is the link to U-Map. U-Map is an effective tool to identify institutional activity profiles of institutions similar enough to compare them in rankings. At this stage, however, its coverage is still limited, which makes it insufficiently applicable for the selection of the sample of pilot institutions for the U multirank feasibility test.
We do not (and cannot) claim that we have designed a sample that is representative of the full diversity of higher education in the world (particularly as there is no adequate description of this diversity).
The existing set of higher education institutions in the U-Map database was included. This offered a clear indication of a broad variety of institutional profiles. Some universities applied through the U multirank website to participate in the feasibility study.
Their broad profiles were checked as far as is possible against the U-Map dimensions in order to be able to describe their profiles.
In most countries 'national correspondents' (a network created by the research team) were asked to suggest institutions that would reflect the diversity of higher education institutions in their country and that offer programmes in the fields addressed by the pilot study (business studies and two fields of engineering).
The resulting sample includes, among others, an Institute for Water and Environment, an agricultural university, a School of Petroleum and Minerals, a military academy, several music academies and art schools, universities of applied sciences and a number of technical universities.
The 159 institutions that agreed to take part in the U multirank pilot are spread over 57 countries.
Our national correspondents explained that Chinese universities are reluctant to participate in rankings when they cannot predict the outcomes of participation.
In the US the U multirank project is perceived as strongly European-focused, which kept some institutions from participating.
The problems with some countries are an important aspect regarding the feasibility of a global implementation of U multirank.
All in all the intention to attain a sufficient international scope in the U multirank pilot study by means of a global sample can be seen as successful.
[Table: Regional distribution of participating institutions. Columns: initial proposal for number of institutions (July 2010); institutions in the final pilot selection (February 2011); institutions that confirmed participation (April 2011); institutions which delivered U multirank institutional data (April 2011); institutions which delivered U multirank institutional data and U-Map data. Entries include, for example: Netherlands (16m): 3, 7, 3, 3, 3; Poland (38m): 6, 12, 7, 7, 6; under Other Asia: the Philippines, Taiwan and Vietnam.]
19 institutions are in the top 200 of the Times Higher Education ranking, 47 in the top 500 of the ARWU ranking and 47 in the top 500 of the QS ranking.
Since the exact number of higher education institutions in the world is not known, we use a rough estimate of 15,000 institutions worldwide. In that case the top 500 comprises only about 3% of all higher education institutions (500/15,000), while in our sample 29% of the participating institutions are in the top 500.
57 departments in business studies participated, 50 in electrical engineering and 58 in mechanical engineering. Many institutions participated in more than one field
As has been explained, the field pilot study included a student satisfaction survey. Participating institutions were asked to send invitations to their bachelor
and master's students to take part in a survey. 106 departments agreed to do so. Some institutions decided to submit the information requested in the departmental questionnaire
but not to participate in the student survey, as they did not want it to compete with their own surveys or affect participation in national surveys. In other cases, students were on holiday or taking examinations during the pilot study survey window. In some cases the response rate was very low.
770 students provided data via the online questionnaire. After data cleaning we were able to include 5,901 student responses in the analysis:
45% in business studies; 23% in mechanical engineering; and 32% in electrical engineering.
5.3 Data collection
The data collection for the pilot study took place via two different processes:
the collection of self-reported data from the institutions involved in the study (including the student survey) and the collection of data on these same institutions from existing international databases on publications/citations and patents.
In the following sections we discuss these data collection processes.
5.3.1 Institutional self-reported data
5.3.1.1 The process
The process of data collection from the organizations was organised in a sequence of steps and used the following instruments: for the focused institutional ranking, the U multirank institutional questionnaire; for the field-based ranking, the U multirank field-based questionnaires and the U multirank student survey. (Figure 5-1: U multirank data collection process.)
The institutions were given seven weeks to collect the data, with deadlines set according to the dates the institutions confirmed their participation. The 'grouping' criterion for this was the successful submission of the contact form. A further step to ensure a high response rate concerned the order in which the questionnaires were completed: we advised the institutions to start with the U-Map questionnaire and then move on to the U multirank questionnaires, since a tool had been developed to facilitate the transfer of overlapping information from the U-Map questionnaire to the U multirank institutional questionnaire.
Organising a survey among students on a global scale was one of the major challenges in U multirank.
There are some international student surveys (such as 'Eurostudent'), but these usually focus on general aspects of student life and their socioeconomic situation.
To the best of our knowledge there is no global survey asking students to assess aspects of their own institutions and programmes.
So we had no way of knowing whether students from different countries and cultures would assess their institutions in comparable ways.
In Chapter 8 (8. 2) we will discuss the flexibility of our approach to a global scale student survey.
The data collection through the student survey was organized by the participating institutions. They were asked to send invitation letters to their students,
either by regular mail or by email. We prepared a standard letter to students explaining the purpose of the survey/project
and detailing the URL and personal password they needed to access the online questionnaire. Institutions were able to download a package including the letter and a list of passwords (for email invitation) and a form letter (for printed mail invitations).
If the letters were sent by post, institutions covered the costs of postage. No institution indicated that it did not participate in the student survey because of the cost of inviting the students.
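A minimal sketch of how such an invitation package with one password per student might be generated is shown below; the password format, the file layout and the URL are illustrative assumptions, not the tooling actually used in the pilot.

import csv
import secrets

def make_invitations(student_emails: list, survey_url: str, out_path: str) -> None:
    """Write a CSV with one single-use password per student, suitable for a mail merge."""
    with open(out_path, "w", newline="", encoding="utf-8") as fh:
        writer = csv.writer(fh)
        writer.writerow(["email", "survey_url", "password"])
        for email in student_emails:
            token = secrets.token_urlsafe(8)  # individual access password
            writer.writerow([email, survey_url, token])

# Example with placeholder addresses and URL:
make_invitations(["a.student@example.edu", "b.student@example.edu"],
                 "https://survey.example.org/umr", "invitations.csv")

Issuing one password per invited student is what allows access to the questionnaire to be controlled without having the survey completed in the classroom.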
In some countries (e.g. Australia) the students were taking examinations or were on vacation at the time the survey started.
As a consequence some institutions decided not to participate in the survey; others decided to postpone the survey.
770 students participated in the survey; of this total, 5,901 could be included in the analysis.
5.3.1.2 Follow-up survey
After the completion of the data collection process we asked those institutions that submitted data to share their experience of the process.
[Table: reported time and effort for completing the field questionnaires (business studies, electrical engineering, mechanical engineering) and for organizing the student survey.] The analysis also showed that European institutions [...]
Critical comments indicated some confusion about the relationship between the U-Map and U multirank institutional questionnaires.
Most pilot institutions reported no major problems with regard to student, graduate and staff data. If they had problems these were mostly with research and third mission data (knowledge transfer,
regional engagement) (see Figure 5-4). [Figure: institutions' ratings, from 'very good' to 'very poor', of the general procedures and of the communication with U multirank.]
We shared supporting documents and the U multirank technical specification email (see appendices 10 and 11) with the institutions to ensure that a smooth data collection could take place: all universities had clear communication partners in the U multirank team. The main part of the verification process consisted of the data cleaning procedures after receiving the data.
The student survey. For the student survey, after data checks we omitted the following cases from the gross student sample:
missing data on the student's institution;
missing data on their field of study (business studies, mechanical engineering, electrical engineering);
students enrolled in programs other than bachelor/short national first degree programs and master/long national first degree programs;
students who had spent too little time on the questionnaire and had not responded adequately (students had to answer at least part of the questions that are used to calculate indicators and give the necessary information about their institution);
students who reported themselves as formally enrolled but not actively studying;
students reporting that they had just moved to their current institution;
students who obviously did not answer the questionnaire seriously.
In addition we performed a recoding exercise for those students who reported their field of study as 'other'. Based on their explanation and on the name of the programme they reported, the field was recoded manually in all cases where a clear attribution was possible.
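A minimal sketch of these checks and of the recoding step is given below; the record layout, the time threshold and the recoding map are illustrative assumptions rather than the rules actually applied.

VALID_FIELDS = {"business studies", "mechanical engineering", "electrical engineering"}
VALID_PROGRAMMES = {"bachelor", "short_first_degree", "master", "long_first_degree"}
# Hypothetical manual recoding of free-text 'other' answers to one of the pilot fields
RECODE_OTHER = {"industrial engineering": "mechanical engineering",
                "mechatronics": "electrical engineering"}

def clean(responses: list, min_seconds: int = 180) -> list:
    kept = []
    for r in responses:
        if r.get("field") == "other":
            r["field"] = RECODE_OTHER.get(r.get("programme_name", "").lower(), "other")
        if not r.get("institution") or r.get("field") not in VALID_FIELDS:
            continue  # missing institution, or field outside the three pilot fields
        if r.get("programme_type") not in VALID_PROGRAMMES:
            continue  # not a bachelor/master or national first-degree student
        if r.get("seconds_spent", 0) < min_seconds:
            continue  # too little time spent on the questionnaire
        if not r.get("actively_studying", True) or r.get("just_moved", False):
            continue  # formally enrolled only, or just moved to the institution
        kept.append(r)
    return kept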
As a result of these checks, the data of about 800 student questionnaires have been omitted from the sample.
5.3.2 International databases
The data collection regarding the bibliometric and patent indicators took place by studying the relevant international databases.
Note that this set includes four new performance indicators that have never been used before in any international ranking of higher education institutions.
The following four indicators have especially been designed for U multirank: International joint research publications; University-industry joint research publications;
Regional joint research publications; Highly cited research publications. Further information on each of the six bibliometric indicators used in the pilot study is presented below (a short illustrative sketch of how such address-based counts can be computed follows the remarks on measurement).
1) Total publication output: frequency count of research publications with at least one author address referring to the selected main organization.
2) International joint research publications: this is an indicator of research collaboration with partners located in other countries.
3) University-industry joint research publications: frequency count of publications with at least one author address referring to the selected main organization and at least one author address referring to a business enterprise or a private-sector R&D unit. Statistical information on 500 universities worldwide is freely available at the CWTS website: www.socialsciences.leiden.edu/cwts/products-services/scoreboard.html
4) Regional joint research publications: frequency count of publications with at least one author address referring to the selected main organization and at least one other author-affiliate address in the same NUTS2 or NUTS3 region.
In a possible next stage of U multirank we expect to apply a different, and more flexible, way of delineating regions.
The bibliometric data in the pilot version of the U multirank database refer to one measurement per indicator. These data refer to database years. The research publications in the three fields of our pilot study (business studies, electrical engineering and mechanical engineering) [...]
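As announced above, a minimal sketch of how the address-based frequency counts can be derived from publication records is shown here; the record structure (with 'org', 'country', 'sector' and 'nuts2' attributes per address) is a simplification invented for illustration and does not reproduce the CWTS/Web of Science data model.

def count_if(publications: list, condition) -> int:
    """Frequency count of publications whose address list satisfies `condition`."""
    return sum(1 for p in publications if condition(p["addresses"]))

def for_org(org):
    return lambda addrs: any(a["org"] == org for a in addrs)

def joint_with_industry(org):
    return lambda addrs: (any(a["org"] == org for a in addrs)
                          and any(a.get("sector") == "business" for a in addrs))

def joint_international(org, home_country):
    return lambda addrs: (any(a["org"] == org for a in addrs)
                          and any(a["country"] != home_country for a in addrs))

def joint_regional(org, region):
    return lambda addrs: (any(a["org"] == org for a in addrs)
                          and any(a["org"] != org and a.get("nuts2") == region for a in addrs))

pubs = [{"addresses": [{"org": "Univ A", "country": "NL", "nuts2": "NL21"},
                       {"org": "Firm X", "country": "NL", "sector": "business", "nuts2": "NL21"}]}]
print(count_if(pubs, for_org("Univ A")),
      count_if(pubs, joint_with_industry("Univ A")),
      count_if(pubs, joint_international("Univ A", "NL")),
      count_if(pubs, joint_regional("Univ A", "NL21")))  # prints: 1 1 0 1

Field-normalized citation rates and highly cited publication shares additionally require citation counts and world reference values, and are not sketched here.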
Although all the HEIs that participated in the U multirank pilot study produced at least one WoS-indexed research publication during the years 1980-2010, for some of them the publication volumes were too low to compute reliable bibliometric indicators.
In follow-up stages of U multirank we plan to lower the threshold values for WoS-indexed publication output.
Depending on the severity of the problem within a HEI, we can then either remove the institution from all indicators that involve bibliometric data, or [...]
The development of patent indicators on the micro-level of specific entities such as universities is complicated by the heterogeneity of patentee names that appear in patent documents within and across patent systems.
First, use was made of a harmonization of patentee names that was developed by ECOOM (Centre for R&D Monitoring, Leuven University; a partner in CHERPA), in partnership with Sogeti,29 in the framework of the EUROSTAT work on Harmonized Patent Statistics.
Second, and specifically for the U multirank pilot, keyword searches were designed and tailored for each institute individually.
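A minimal sketch of such a tailored keyword search over applicant names is given below; the institution name, the name-variant patterns and the applicant records are invented for illustration and are far simpler than the harmonized name tables and searches used in practice.

import re

# Hypothetical name variants for one institution; real searches are tailored per institute.
PATTERNS = [
    re.compile(r"\buniv(ersity|\.)?\s+of\s+exampleton\b", re.I),
    re.compile(r"\bexampleton\s+univ(ersity|\.)?\b", re.I),
    re.compile(r"\bexampleton\s+inst(itute)?\s+of\s+technology\b", re.I),
]

def matches_institution(applicant_name: str) -> bool:
    return any(p.search(applicant_name) for p in PATTERNS)

applicants = ["UNIV. OF EXAMPLETON", "Exampleton University Hospital", "Example Corp."]
print([name for name in applicants if matches_institution(name)])
# ['UNIV. OF EXAMPLETON', 'Exampleton University Hospital']

The second hit illustrates why such searches have to be checked manually: related but legally distinct applicants (a university hospital, a holding, a foundation) may or may not belong to the institution as defined in the ranking.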
Several, mostly European, studies have compared the volumes of such 'university-invented' patents (invented by an academic scientist) versus 'university-owned' patents (with the university registered as applicant). Evidence from studies in France suggests that about 60% of university-invented patents are not university-owned. [Figure: distribution of the pilot institutes (N=165) by annual average patent volume, 2000-2007.] The available evidence from some US studies indicates much smaller percentages (approximately 20%) of university-invented patents that are not university-owned (Thursby et al., 2007). Moreover, national and institutional differences in culture and legislation regarding intellectual property rights on university-created knowledge will cause the size of the consequential 'bias' to vary between countries.
Institutional and national differences may concern the autonomy of institutions and the control they exercise over their academic staff.
In general, academic patents in Europe (i.e. patents invented by academic scientists) are much less likely to be 'owned' by universities (i.e. with the university registered as applicant) than in the USA, as European universities have lower incentives to patent or generally have less control over their scientists' activities (Lissoni et al., 2008). This does not mean that European academic scientists do not effectively contribute to the inventive activity taking place in their countries, as one might presume from considering only the statistics on university-owned patents. On the contrary, the data suggest otherwise: in contrast with the USA, where universities own the majority of academic patents, Europe witnesses the dominance of business companies as owners of academic patents.
When interpreting the patent indicators, one should at all times bear in mind the relatively sizable volume of university-invented patents that is not retrieved by the institution-level search strategy, and the institutional and national variations in the size of the consequential limitation bias.
We have argued that the field-based rankings of indicators in each dimension contribute significantly to the value and the usability of U multirank.
At present, however, a breakdown of the patent indicators by the fields defined in the U multirank pilot study (business studies, electrical and mechanical engineering) is not possible. Patent classifications delineate technology fields, whereas the overview of higher education fields is based on educational programs, research fields and other academically-oriented criteria.
Due to the consequential large difference in the notions that underlie 'higher education field' versus 'technology field', a concordance between both is meaningless. Therefore we were unable to produce patent analyses at the field-based level of U multirank.
6 Testing U multirank: results
6.1 Introduction
The main objective of the pilot study was to empirically test the feasibility of the U multirank instrument.
This concerns the feasibility of the individual indicators, the feasibility of the data collection, and the potential upscaling of U multirank to a globally applicable multidimensional ranking tool.
6.2 Feasibility of indicators
In the pilot study we analyzed the feasibility of the various indicators that were selected after the multi-stage process of stakeholder consultation. The indicators had been rated beforehand on criteria such as relevance, concept/construct validity (the indicator focuses on the performance of (programs in) higher education and research institutions and is defined in such a way that it measures 'relative' characteristics, e.g. controlling for the size of the institution), face validity, robustness and availability, in consultation with stakeholders and the Advisory Group.
6.2.1 Teaching & Learning
The first dimension of U multirank is Teaching & Learning.
Table 6-1: Focused institutional ranking indicators: Teaching & Learning. For each indicator the table gives the pre-pilot rating (relevance, concept/construct validity, face validity, robustness, availability and a preliminary rating) and the post-pilot feasibility assessment (data availability, conceptual clarity, data consistency, feasibility score and a recommendation). Recoverable entries (preliminary rating / feasibility score): Graduation Rate A / b; Time to Degree B / b; Relative Rate of Graduate (Un)employment [...]
Much to our surprise there were few comments on the indicators on graduation rate and time to degree.
The fact that in many countries/institutions different measurement periods (other than 18 months after graduation) are used seriously hampers the interpretation of the results on the graduate (un)employment indicator.
The field-based ranking adds the indicators that have been built using the information from the departmental questionnaires and the indicators related to student satisfaction data.
Table 6-2: Field-based ranking indicators: Teaching & Learning (departmental questionnaires). Columns as in Table 6-1. Recoverable entries (preliminary rating / feasibility score): student/staff ratio A / a; graduation rate A / b; qualification of academic staff [...]
Countries and institutions use different time periods in measuring employment status (e.g. six, 12 or 18 months after graduation). As the rate of employment normally increases continuously over time, particularly during the first year after graduation, results measured at different points in time are hard to compare.
Table: Field-based ranking indicators: Teaching & Learning (student satisfaction scores). Columns as in Table 6-1. Recoverable entries (preliminary rating / feasibility score): organization of programme A / a; inclusion of work experience A / a; evaluation of teaching A / a.
The pilot revealed no major problems with regard to the feasibility of individual indicators from the student survey. General aspects of the feasibility of a global student survey are discussed in section 6.3.
6.2.2 Research
Indicators on research include bibliometric indicators (institutional and field-based) as well as indicators derived from institutional and field-based surveys. In general the feasibility of the research indicators [...]
Stakeholders, in particular representatives of art schools, stressed the relevance of the indicator on cultural research outputs despite the poor data situation. In their view, efforts should be made to enhance the data situation on cultural research outputs of higher education institutions. This cannot be done by producers of rankings alone.
Table: Research indicators. Columns as in Table 6-1. Recoverable entries (preliminary rating / feasibility score): external research income A / a; total publication output* A / a; student satisfaction: [...]
In general, the data delivered by faculties/departments revealed some problems in clarity of definition of staff data.
The data on post-docs proved to be more problematic in business studies than in engineering.
Table: Knowledge Transfer indicators, focused institutional ranking. Columns as in Table 6-1. Recoverable entries (preliminary rating / feasibility score): incentives for knowledge transfer A / a; patents awarded** A / b; university-industry joint research publications* A / a; CPD courses offered per fte academic staff B / b; start-ups per fte academic staff [...]
Table: Knowledge Transfer indicators, field-based ranking. Columns as in Table 6-1. Recoverable entries: university-industry joint research publications* A / a; academic staff with work experience outside higher education [...]
Some availability problems were reported, but primarily in business studies and less in engineering. The indicators based on data from patent databases are feasible only for the institutional ranking due to discrepancies in the definition and delineation of fields in the databases.
Table: International Orientation indicators, focused institutional ranking. Columns as in Table 6-1. Recoverable entries (preliminary rating / feasibility score): percentage of programs in foreign language A / a; international joint research publications* A / a; percentage of international staff B / A; percentage of students in international joint degree programs A / b; international doctorate graduation rate B / A; percentage of foreign degree-seeking students (new indicator) / B; percentage of students coming in on exchanges (new indicator) / A; percentage of students sent out on exchanges (new indicator) / A. * Data source: bibliometric analysis.
There were some problems reported with availability of information on nationality of qualifying diploma and students in international joint degree programs.
Table: International Orientation indicators, field-based ranking. Columns as in Table 6-1. Recoverable entries (preliminary rating / feasibility score): percentage of international students A / a; incoming and outgoing students A / a-B; opportunities to study abroad (student satisfaction) A / b; international orientation of programs A / b; international academic staff B / A-B; international joint research publications* B / A; international research grants B / b; international doctorate graduation rate B / A. * Data source: bibliometric analysis.
Observations from the pilot test:
Not all institutions have clear data on outgoing students. In some cases only those students participating in institutional or broader formal programs (e.g. ERASMUS) are registered, and institutions do not record the numbers of students with self-organized stays at foreign universities.
Availability of data was relatively low for the student satisfaction indicator, as only a few students had already participated in a stay abroad and could assess the support provided by their university. The indicator 'international orientation of programs' is a composite indicator referring to several data elements;
feasibility is limited by missing cases for some of the data elements. Some institutions could not identify external research funds from international funding organizations.
Some universities had difficulties identifying their international staff on the basis of this definition.
6.2.5 Regional engagement
Up to now the regional engagement role of universities has not been included in rankings.
There are a number of studies on the regional economic impact of higher education and research institutions,
either for individual institutions and their regions or on higher education in general. Those studies do not offer comparable institutional indicators or indicators disaggregated by fields.
Table 6-10: Focused institutional ranking indicators: Regional Engagement. Columns as in Table 6-1, with the recommendation indicating whether the indicator stays in or drops out of the final set. Recoverable entries (preliminary rating / feasibility score / recommendation): percentage of income from regional sources A / c / in; percentage of graduates working in the region B / c / in; research contracts with regional partners B / b; regional joint research publications* B / A; percentage of students in internships in local enterprises B / c / in. * Data source: bibliometric analysis.
The regional indicators refer to NUTS regions, which caused some problems in non-European higher education institutions. But even within Europe NUTS regions are seen as problematic by some institutions.
Both in institutional and in field-based data collection information on regional labor market entry of graduates could not be delivered by most institutions.
This is regrettable given the relevance of higher education and research to the regional economy and to regional society at large.
Table: Field-based ranking indicators: Regional Engagement. Columns as in Table 6-1. Recoverable entries (preliminary rating / feasibility score / recommendation): graduates working in the region B / c / in; regional participation in continuing education B / c / out; student internships in local enterprises B / b-C / in; degree theses in cooperation with regional enterprises B / b-C / in; summer schools C / C.
While far from good, the data situation on student internships in local enterprises and degree theses in cooperation with local enterprises turned out to be less problematic in business studies than that found in the engineering field.
Such cooperation allows the expertise and knowledge of local higher education institutions to be utilized in a regional context, in particular in small- and medium-sized enterprises. Moreover, in many non-metropolitan regions they play an important role in the recruitment of higher education graduates.
6.3 Feasibility of data collection
As explained in section 5.3, data collection during the pilot was based on three questionnaires: the U-Map questionnaire to identify institutional profiles, the U multirank institutional questionnaire and the U multirank field-based questionnaire. We supported this data collection with extensive data cleaning processes.
The parallel institutional data collection for U-Map and U multirank caused some confusion. Although a tool was implemented to pre-fill data from U-Map into U multirank,
some confusion remained concerning the link between the two instruments. In order to test some variants, institutional and field-based questionnaires were implemented with different features (e.g. the definition of international staff).
In some countries the U multirank student survey conflicted with existing national surveys, which in some cases are highly relevant for institutions.
It should be evaluated how far U multirank and national surveys could be harmonized in terms of questionnaires and, at least, in terms of timing.
While a field period of four to six weeks after sending out invitations to students seems appropriate at individual institutions, the time window to organize a student survey across all institutions has to be at least six months, given the differences in academic calendars across countries. With growing experience and data quality this problem will be mitigated.
6.3.2 Student survey data
One of the major challenges regarding the feasibility of our global student survey is
whether the subjective evaluation of their own institution by students can be compared globally or whether there are differences in the levels of expectations or respondent behavior.
In our student questionnaire we used 'anchoring vignettes' to control for such effects. Anchoring vignettes are a technique designed to ameliorate problems that occur when respondents with different backgrounds interpret survey questions and rating scales differently; up to now they have not been used in comparative higher education research. Hence we had to develop our own approach to this research technique.
(For a detailed description see appendix 9.) Our general conclusion from the anchoring vignettes analysis was that no correlation could be found between the students' evaluation of the situation in their own institutions and their assessments of the standardized vignette situations.
This implies that the student assessments were not systematically influenced by differences in levels of expectation (related to different national backgrounds or cultures), and thus that data collection through a global-level student survey is feasible in this respect.
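As a minimal illustration of the kind of check involved, the sketch below correlates students' ratings of their own institution with their ratings of one fixed vignette; a near-zero correlation is consistent with the conclusion that expectation levels do not systematically shift the own-institution ratings. The toy data and the use of a simple Pearson correlation are assumptions; the actual analysis is described in appendix 9 of the report.

from statistics import correlation  # Pearson's r (Python 3.10+)

# Toy data: (rating of own institution, rating of the same fixed vignette) per student
ratings = [(4, 3), (2, 3), (5, 4), (3, 2), (4, 4), (3, 3), (2, 2), (5, 3)]

own = [o for o, _ in ratings]
vignette = [v for _, v in ratings]

r = correlation(own, vignette)
print("Pearson r between own-institution and vignette ratings:", round(r, 2))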
To assess the feasibility of our bibliometric data collection we studied the potential effects of a bottom-up verification process via a special case study of six French universities.
For example, even a seemingly universal name such as 'university' may not describe the same institutional reality in different systems: in England or in the US
some so-called 'universities' may in fact be umbrella organizations covering several autonomous universities, while in France many universities are thematic and derive from one comprehensive 'root' university.
Second, most national research systems tend to become more complex under the pressure of the 'funding on project' policies that induce the setup of various network-like institutions such as consortia, platforms and 'poles'.
For communication purposes, authors may prefer to replace the name of the university with the name of a network.
In addition, in some countries like France, universities, schools and research institutions can be interwoven by many joint labs,
Several studies have shown that the volume of such university-invented patents is sizable (Azagra Caro et al.
is it possible to extend U multirank to a comprehensive global coverage and how easy would it be to add additional fields?
In terms of the feasibility of U multirank as a potential new global ranking tool, the results of the pilot study are positive,
and taking into account that U multirank is clearly a European initiative, this represents a strong expression of worldwide interest.
Our single caveat concerns an immediate global-level introduction of U multirank. The pilot study suggests that a global multidimensional ranking is unlikely to prove feasible in the sense of achieving extensive coverage levels across the globe in the short term.
Higher education and research institutions in the USA showed very limited interest in the study, while in China formal conditions appeared to hamper the participation of institutions.
From their participation in the various stakeholder meetings, we can conclude that there is broad interest in the further development and implementation of U multirank.
And we believe that there are opportunities for the targeted recruitment of groups of institutions from outside Europe of particular interest to European higher education.
A final aspect of feasibility in terms of institutional participation is the question of institutional dropout and non-completion rates.
and 2) the conceptual interpretation of the 'user-driven approach' applied in U multirank. It would be interesting to involve LERU again during a follow-up project
The other aspect of the potential up-scaling of U multirank is the extension to other fields.
Any extension of U multirank to new fields must deal with two questions: the relevance and meaningfulness of existing indicators for those fields,
While the U multirank feasibility study focused on the pilot fields of business studies and engineering, some issues of up-scaling to other fields have been discussed in the course of the stakeholder consultation.
While students can be asked about their learning experience in the same way across different fields
and clinical education are relevant indicators in the teaching and learning dimension. Following the user-and stakeholder-driven approach of U multirank,
we suggest that field-specific indicators for international rankings should be developed together with stakeholders from these fields.
and indicators. In the two pilot fields of business studies and engineering we were able to use 86% of the final set of indicators in both fields.
when additional fields are addressed in U multirank, some specific field indicators will have to be developed. Based on the experience of the CHE ranking this will vary by field with some fields requiring no additional indicators
we conclude that up-scaling in terms of addressing a larger number of fields in U multirank is certainly feasible. 7 Applying U multirank:
A few rankings (e g. the Taiwanese College Navigator published by HEEACT30 and CHE ranking) implemented tools to produce a personalised ranking, based on user preferences and priorities with regard to the set of indicators.
This approach implies the user-driven notion of ranking which also is a basic feature of U multirank.
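A minimal sketch of what such a user-driven, group-based table could look like computationally is given below; the institutions, scores and the simple median-based group boundaries are invented for illustration and do not reproduce the grouping procedure described in chapter 2.

# A minimal sketch of a user-driven ranking table: the user picks the indicators,
# institutions are assigned to top/middle/bottom groups per indicator, and the table is
# ordered by group on one indicator. Data and group boundaries are illustrative only.
from statistics import median

institutions = {
    "Institution 1": {"citation index": 1.4, "graduation rate": 0.81},
    "Institution 2": {"citation index": 0.9, "graduation rate": 0.62},
    "Institution 3": {"citation index": 1.1, "graduation rate": 0.74},
    "Institution 4": {"citation index": 0.7, "graduation rate": 0.88},
}

def rank_groups(indicator: str) -> dict:
    """Assign each institution to a group on one indicator, here simply by distance
    from the median (illustrative thresholds, not the U multirank procedure)."""
    values = {name: data[indicator] for name, data in institutions.items()}
    mid = median(values.values())
    return {name: ("top" if v >= 1.15 * mid else "bottom" if v <= 0.85 * mid else "middle")
            for name, v in values.items()}

selected = ["citation index", "graduation rate"]   # chosen interactively by the user
sort_by = "citation index"                         # the indicator the table is ordered by
table = {ind: rank_groups(ind) for ind in selected}
order = {"top": 0, "middle": 1, "bottom": 2}
for name in sorted(institutions, key=lambda n: order[table[sort_by][n]]):
    print(name, {ind: table[ind][name] for ind in selected})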
The presentation of U multirank results outlined in this chapter strictly follows this user-driven approach. But by relating institutional profiles (created in U-Map) with multidimensional rankings
U multirank introduces a second level of interactive ranking beyond the user-driven selection of indicators:
internationally-oriented research universities. U multirank has a much broader scope and intends to include a wider variety of institutional profiles.
We argue that it does not make much sense to compare institutions across diverse institutional profiles.
Hence U multirank offers a tool to identify and select institutions that are truly comparable in terms of their institutional profiles. 7.2 Mapping diversity:
combining U-Map and U multirank From the beginning of the U multirank project one of the basic aims was that U multirank should be, in contrast to existing global rankings,
which brought about a dysfunctional short-sightedness on 'world-class research universities', a tool to create transparency regarding the diversity of higher education institutions.
and decreasing diversity in higher education systems (see chapter 1). Our pilot sample includes institutions with quite diverse missions, structures and institutional profiles.
We have applied the U-Map profiling tool to specify these profiles. 30 College Navigator: http://cnt.heeact.edu.tw/site1/index2.asp?
The combination of U-Map and U multirank offers a new approach to user-driven rankings.
and hence the sample of institutions to be compared in U multirank. Figure 7-1: Combining U-Map and U multirank. Our user-driven interactive web tool will include both steps, too.
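The two-step logic of Figure 7-1 can be sketched roughly as follows: U-Map-style profiles are used to select a set of comparable institutions first, and only then is a ranking computed within that set. The profile dimensions, the similarity threshold and the data below are illustrative assumptions, not U-Map's actual dimensions.

# A minimal sketch of the two-step approach of Figure 7-1 (assumed data structures).
from dataclasses import dataclass

@dataclass
class Institution:
    name: str
    profile: dict   # U-Map style activity profile, e.g. {"research intensity": "major", ...}
    scores: dict    # U multirank indicator scores

def comparable(a: Institution, b: Institution, min_shared: int = 3) -> bool:
    """Step 1: treat two institutions as comparable if their profiles agree on at least
    `min_shared` profile dimensions (an illustrative similarity rule)."""
    shared = sum(1 for key, value in a.profile.items() if b.profile.get(key) == value)
    return shared >= min_shared

def focused_ranking(selected: Institution, pool: list, indicator: str) -> list:
    """Step 2: rank only the institutions whose profile matches the selected one."""
    peers = [inst for inst in pool if comparable(selected, inst)]
    return sorted(peers, key=lambda inst: inst.scores.get(indicator, 0.0), reverse=True)

In the interactive web tool the same two steps would be driven by the user: first selecting a profile, then choosing indicators for the ranking of the resulting peer group.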
Users will be offered the option to decide if they want to produce a focused institutional ranking or a field-based ranking,
U multirank includes different ways of presenting the results. 7.3 The presentation modes Presenting ranking results requires a general model for accessing the results,
In U multirank the presentation of data allows for both: a comparative overview on indicators across institutions,
U multirank produces indicators and results on different levels of aggregation leading to a hierarchical data model:
In U multirank we present the results alphabetically or by rank groups (see chapter 2). In the first layer of the table (field-based ranking),
[Table: example of a field-based ranking table, sorted by indicator. Rows: Institution 1 to Institution 9; columns: the indicators of the five dimensions Teaching & Learning, Research, Knowledge transfer, International orientation and Regional engagement, including student staff ratio, graduation rate, qualification of academic staff, research publication output, external research income, citation index, % income third party funding, CPD courses offered, start-up firms, % international academic staff, % international students, joint international publications, graduates working in the region, student internships in local enterprises and regional co-publications.]
In chapter 1 we discussed the necessity of multidimensional and user-driven rankings for epistemological reasons.
7.3.2 Personalized ranking tables The development of an interactive user-driven approach is a central feature of U multirank.
when applying U multirank. An intuitive, appealing visual presentation of the main results will introduce users to the performance ranking of higher education institutions.
Results at a glance presented in this way may encourage users to drill down to more detailed information.
so that there is a recognizable U multirank presentation style and users are not confused by multiple visual styles.
and discussed at a U multirank stakeholder workshop; there was a clear preference for the 'sunburst' chart similar to the one used in U-Map.
The colours symbolize the five U multirank dimensions, with the rays representing the individual indicators. In this chart the grouped performance scores of institutions on each indicator are represented by the length of the corresponding rays:
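A rough sketch of how such a chart could be drawn is given below: each ray is one indicator, its colour marks the dimension and its length encodes the grouped score (1 = bottom, 3 = top group). The indicators, colours and scores are invented examples, not U multirank output or the actual chart code.

# A rough sketch of a sunburst-style chart using a polar bar plot (illustrative data only).
import numpy as np
import matplotlib.pyplot as plt

indicators = {  # indicator -> (dimension colour, grouped score 1-3)
    "student staff ratio": ("tab:blue", 3),
    "graduation rate": ("tab:blue", 2),
    "research publication output": ("tab:orange", 3),
    "citation index": ("tab:orange", 2),
    "CPD courses offered": ("tab:green", 1),
    "start-up firms": ("tab:green", 2),
    "% international students": ("tab:red", 3),
    "joint international publications": ("tab:red", 2),
    "graduates working in the region": ("tab:purple", 1),
    "regional co-publications": ("tab:purple", 2),
}

labels = list(indicators)
colors = [indicators[k][0] for k in labels]
scores = [indicators[k][1] for k in labels]
angles = np.linspace(0.0, 2 * np.pi, len(labels), endpoint=False)

ax = plt.subplot(projection="polar")
ax.bar(angles, scores, width=2 * np.pi / len(labels) * 0.9, color=colors, bottom=0.2)
ax.set_xticks(angles)
ax.set_xticklabels(labels, fontsize=6)
ax.set_yticks([])   # the grouped score is conveyed by ray length, not by a numeric axis
plt.show()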
An example is a detailed view on the results of a department (the following screenshot shows a sample business administration study program at bachelor and master's level).
faculty/department (field) and program. Figure 7-4: Text format presentation of detailed results (example). 7.4 Contextuality Rankings do not
Context variables affecting the performance of higher education institutions. Context factors that may affect decision-making processes of users of rankings (e.g. students, researchers) although not linked to the performance of institutions.
For individual users rankings reveal that there are differences in reality. For instance: for prospective students intending to choose a university or a study program,
low student satisfaction scores regarding the support by teaching staff in a specific university or program are relevant information,
although the indicator itself cannot explain the reasons behind this judgment. Rankings also have to be sensitive to context variables that may lead to methodological biases.
size and field structure of the institution. The (national) higher education system as a general context for institutions:
this includes legal regulations (e.g. concerning access) as well as the existence of legal/official 'classifications' of institutions (e.g. in binary systems, the distinction between universities and other forms of non-university higher education institutions).
The structure of national higher education and research: the organization of research in different higher education systems is an example.
While in most countries research is integrated largely in universities, in some countries like France or Germany non-university research institutions undertake a major part of the national research effort.
A particular issue with regard to the context of higher education refers to the definition of the unit of analysis. The vast majority of rankings in higher education are comparing higher education institutions.
A few rankings explicitly compare higher education systems, either based on genuine data on higher education systems, e.g. the University Systems Ranking published by the Lisbon Council31,
or by simply aggregating institutional data to the system level (e.g. the QS National System Strength Ranking).
In this latter case global institutional rankings are used more or less implicitly to produce rankings of national higher education systems,
thereby creating various contextual problems. Both the Shanghai ranking and the QS rankings, for instance, include universities only.
The fact that they do not include non-university research institutions, which are particularly important in some countries (e.g. in France,
Germany), produces a bias when their results are interpreted as a comparative assessment of the performance or quality of national higher education and research systems.
U multirank addresses the issues of contextuality by applying the design principle of comparability (see chapter 2). In U multirank rankings are created only among institutions that have sufficiently similar institutional profiles.
Combining U-Map and U multirank produces an approach in which comparable institutions are identified before they are compared in one or more rankings.
By identifying comparable institutions, the impact of contextual factors may be assumed to be reduced. In addition, U multirank intends to offer relevant contextual information on institutions and fields.
Contextual information does not allow for causal analyses but it offers users the opportunity to create informed judgments of the importance of specific contexts
while assessing performances. During the further development of U multirank the production of contextual information will be an important topic. 31 See www.lisboncouncil.net 7.5 User-friendliness U multirank is conceived as a user-driven and stakeholder-oriented instrument. The development of the concept, the definition of the indicators, the processes of data collection and the discussion on modes of presentation have been based on intensive stakeholder consultation.
In U multirank a number of features are included to increase the user-friendliness. In the same way as there is no one-size-fits-all-approach to rankings in terms of indicators,
U multirank, as any ranking, will have to find a balance between the need to reduce the complexity of information on the one hand and, at the same time,
U multirank wants to offer a tailor-made approach to presenting results, serving the information needs of different groups of users and taking into account their level of knowledge about higher education and higher education institutions.
Basic access is provided by the various modes of presentation described above (overview tables, personalised rankings and institutional profiles).
In addition, access to and navigation through the web tool will be made highly user-driven by specific 'entrances' for different groups of users (e.g. students, researchers/academic staff, institutional administrators, employers) offering specific information
In accordance with EU policies on e-accessibility32, barriers to access to the U multirank results and data will be removed as much as possible.
i.e. users from within higher education will be able to use an English version of U multirank. In particular for 'lay users' (e.g. prospective students) the existence of various language versions of U multirank would increase usability.
However, translation of the web tool and the underlying data is a substantial cost factor.
But at least an explanation of how to use U multirank and the glossary and definition of indicators and key concepts should be available in as many European languages as possible. 32 See http://europa.eu/legislation_summaries/information_society/l24226h_en.htm (retrieved on 10 May 2011).
and a feasible business model to finance U multirank (see chapter 8). Another important aspect of user-friendliness is the transparency about the methodology used in rankings.
For U multirank this includes a description of the basic methodological elements (institutional and field-based rankings,
the future 8.1 Introduction An important aspect in terms of the feasibility of U multirank is the question of implementing the system on a widespread and regular basis
It is clear that the implementation of U multirank is a dynamic and only partially predictable process
and we must differentiate between a two-year pilot phase and a longer-term implementation/institutionalisation of U multirank.
One of our basic suggestions regarding transparency in higher education and research is the integration of U-Map and U multirank.
Therefore, many of the conclusions regarding the operational implementation in the final U-Map report (see www. u-map. eu) are also valid for U multirank.
global or European? The pilot test showed some problems with the inclusion into U multirank of institutions from specific countries outside Europe.
Clearly, with participation in U multirank on a voluntary basis higher education institutions will have to be convinced of the benefits of participation.
This leads to the question of the scale of international scope that U multirank could and should attain.
We would argue that U multirank should aim to achieve a relatively wide coverage of European higher education institutions as quickly as possible during the next project Phase in Europe the feasibility
in order to be able to address the diversity of European higher education. But U multirank should remain a global tool.
There are institutions all over the world interested in benchmarking with European universities; the markets and peer institutions for European universities are increasingly becoming global;
and the impression that the U multirank instrument is only there to serve European interests should be avoided.
The pilot study proves that U multirank can be applied globally. Based on the pilot results we suggest that the extension beyond Europe could best be organized systematically
and should not represent just a random sample. From outside Europe, the necessary institutions should be recruited to guarantee a sufficient sample of comparable institutions of different profiles.
For instance it could be an option to try and integrate the research-oriented, international universities scoring high in traditional rankings.
When this strategy leads to a substantial database within the next two years recruitment could be reinforced, at
which point the inclusion of these important peer institutions will hopefully motivate more institutions to join U multirank.
There is no definitive answer to the question of how many fields there are in international higher education. ISCED (1997) includes nine broad groups, such as humanities and arts, science, and agriculture.
Based on our pilot project we believe that it is feasible to add five new fields in each of the first three years of continued implementation of U multirank.
which fields to add it would make sense to focus on fields with significant numbers of students enrolled
the 'heart' of U multirank is the idea of creating a user-driven, flexible tool to obtain subjective rankings that are relevant from the perspective of the individual user.
with U multirank it is also possible to create so-called 'authoritative' ranking lists from the database.
An authoritative ranking could be produced by a specific association of higher education institutions. For instance international associations or consortia of universities (such as CESAER, LERU or ELIA) might be interested in benchmarking
or ranking their 'members'. An authoritative ranking could be produced from the perspective of a specific stakeholder or client organization.
For instance, an international public organization might be interested in using the database to promote a ranking of the international, research-intensive universities in order to compare a sample of comparable universities worldwide.
On the other hand, in the first phase of implementation, U multirank should be perceived by all potential users as relevant for their individual needs.
or two international associations of higher education institutions and by conceptualizing one or more authoritative rankings with interested public and/or private partners.
however, is on establishing the flexible web tool. 8.4 The need for international data systems U multirank is not an isolated system,
The development of the European database resulting from EUMIDA should take into account the basic data needs of U multirank.
such as staff data (the proper and unified definition of full-time equivalents and the specification of staff categories such as 'professor' is an important issue for the comparability of data),
or data related to students and graduates. EUMIDA could contribute to improving the data situation regarding employment-oriented outcome indicators.
A second aspect of integrated international data systems is the link between U multirank and national ranking systems.
U multirank implies a need for an international database of ranking data consisting of indicators which could be used as a flexible online tool
and rank comparable universities. Developing a European data system and connecting it to similar systems worldwide will strongly increase the potential for multidimensional global mapping and ranking.
Despite this clear need for cross-national/European/global data there will be a continued demand for information about national/regional higher education systems, in particular with regard to undergraduate higher education.
Although mobility of students is increasing, the majority of students, in particular undergraduates, will continue to start higher education in their home country.
Hence field-based national rankings and cross-national regional rankings (such as the CHE ranking of German
The national rankings could refer to specific national higher education systems and at the same time provide a core set of joint indicators that can be used for European and global rankings.
In Spain we have the example of Fundacion CYD planning to implement a field-based ranking system based on U multirank standards.
In its operational phase the U multirank unit should develop standards and a set of basic indicators that national initiatives would have to fulfil
the U multirank unit will be able to pre-fill the data collection instruments and has to fill the gaps to attain European or worldwide coverage.
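A minimal sketch of such pre-filling is given below, under the assumption of a simple field mapping between a national data system and the U multirank questionnaire; all field names and values are invented, and the real mapping would follow the U multirank data definitions (and, where possible, EUMIDA categories).

# A minimal sketch of pre-filling an institutional questionnaire from a national data system.
def prefill(questionnaire: dict, national_record: dict, mapping: dict) -> dict:
    """Copy values from the national dataset into empty questionnaire items where a
    mapping exists; everything else is left for the institution to complete and verify."""
    filled = dict(questionnaire)
    for national_field, item in mapping.items():
        if filled.get(item) is None and national_field in national_record:
            filled[item] = national_record[national_field]
    return filled

questionnaire = {"students_total": None, "academic_staff_fte": None, "startup_firms": None}
national_record = {"enrolment_head_count": 21500, "staff_fte_academic": 1340.0}
mapping = {"enrolment_head_count": "students_total", "staff_fte_academic": "academic_staff_fte"}

print(prefill(questionnaire, national_record, mapping))
# "startup_firms" stays empty: not available nationally, so the institution supplies it.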
Finalisation of the various U multirank instruments 1. Full development of the database and web tool.
The prototypes of the instrument will demonstrate the outcomes and benefits of U multirank. 2. Setting of standards and norms and further development of underdeveloped dimensions and indicators.
These parts of the ranking model should be developed further. 3. Update of data collection tools/questionnaires according to the revision and further development of indicators and the experiences from the U multirank project.
In the first round of U multirank pre-filling proved difficult. The testing of national data systems for their pre-filling potential and the development of suggestions for the promotion of pre-filling are important steps to lower the costs of the system for the institutions.
A link to the development of a European higher education data system (EUMIDA) should be explored; coordination of all relevant EC projects should be part of the next phase. In addition,
and the international U multirank database should be realized. Roll out of U multirank across EU+ countries 5. Invitation of EU+ higher education institutions and data collection.
Within the next two years all identifiable European higher education institutions should be invited to participate in the institutional as well as in the three selected field-based rankings.
The objective would be to achieve full coverage of institutional profiles and have a sufficient number of comparable institutions.
If we take into account the response rate of institutions in the pilot phase the inclusion of 700 institutions in the institutional and 500 in each field-based ranking appears realistic. 6. Targeted recruitment of higher education institutions outside Europe.
The combined U-Map/U multirank approach should be tested further by developing the means to produce the first authoritative ranking lists for universities with selected profiles.
for instance the profile of research orientation and a high degree of internationalization (international research intensive universities) and the profile of a strong focus on teaching
and alliances of higher education institutions willing to establish internal benchmarking processes and publish rankings of their membership.
If the objective is to establish U multirank as largely self-sustainable, a business plan is required. It could be a good idea to involve organizations with professional business expertise in the next project phase in order to work out a business plan,
the user-driven approach imbues U multirank with strong democratic characteristics and a role far from commercial interests,
if complete funding from nonprofit sources is unrealistic. 9. Formal institutionalization of the U multirank unit.
During the next project phase an operational organization to implement U multirank will need to be created and a governance and funding structure established,
The features of and opportunities offered by U multirank need to be communicated continuously. Since the success of U multirank requires institutions'voluntary participation a comprehensive promotion
and recruitment strategy will be needed, requiring the involvement of many key players (governments, European commission, higher education associations, employer organizations, student organizations).
11. User-friendliness of the instrument. A crucial issue related to communication is the user-friendliness of U multirank.
This could be guaranteed by the smoothness of data collection and the services delivered to participants in the ranking process.
and knowledge about higher education of specific user groups (for instance secondary school leavers versus higher education decision-makers). A user-friendly tool needs various levels of information provision, understandable language, clarity of symbols and explanations, assisted navigation through the web tool and feedback loops providing information
The 11 elements form the potential content of the next U multirank project phase, transforming U multirank from a feasible concept to a fully developed instrument already rolled out and ready for continuous operation. 8.6 Criteria
and models of implementation An assessment of the various options for the organizational implementation of U multirank requires a set of analytical criteria.
The following criteria represent notions of good practice for this type of implementation process, such as governance
The ranking must be recognized as open to higher education institutions of all types and from all participating countries, irrespective of their membership in associations, networks or conferences.
The ranking tool must be administered independently of the interests of higher education institutions or representative organizations in the higher education and research sector.
The implementation has to separate ranking from higher education policy issues such as higher education funding or accreditation.
A key element of U multirank is the flexible, stakeholder-oriented, user-driven approach. The implementation has to ensure this approach,
In general, the involvement of relevant actors in both the implementation of U multirank and its governance structure is a crucial success factor.
those parties taking responsibility for the governance of U multirank should be accepted broadly by stakeholders. Those who will be involved in the implementation should allow their names to be affiliated with the new instrument
and take responsibility in the governing bodies of the organizational structure. We identified four basic options for responsibility structures for U multirank:
e.g. media companies (interested in publishing rankings), consulting companies in the higher education context and data providers (such as the producers of bibliometric databases).
In this model, governments would use their authority over higher education to organize the rankings of higher education institutions.
i e. student organizations and associations of higher education institutions, would be responsible for the operation of the transparency instrument.
because if HEIs experience high workloads with data collection they expect free products in return and are not willing to pay for basic data analysis.
Doubts about commitment to the social values of the European higher education area (e.g. no free access for student users?).
A nonprofit organization can be linked with commitment to the social values of the European higher education area. The idea of international alliances ensures international orientation.
[Table: Assessment of the four models for implementing U multirank (commercial, government, stakeholder, independent non-profit) against the criteria of inclusiveness, international orientation, independence, professionalism, sustainability, efficiency, service orientation and credibility.]
It is independent both from higher education institutions (and their associations) and from higher education funding bodies/politics.
and funding instruments in higher education. It can offer a noncommercial character to the instrument, and it can guarantee external supervision of the implementation and broad and open access to the results.
Products for universities could be created, but the pricing policy must not undermine the willingness to participate.
A suggestion would be to organize the implementation of U multirank in such a way that basic ranking results can be provided for free to the participating institutions,
We believe that it is not reasonable in the initial phase of implementing U multirank to establish a new professional organization for running the system.
Once the extent of participation of higher education institutions is known, this option could be considered. The assumption is
There should be a next U multirank project phase before a ranking unit is established. Figure 8-2:
Organizational structure for phase 1 (short term). We suggest that during the next two years of the project phase the current project structure of U multirank should be continued.
U multirank The following analysis of cost factors and scenarios is based on the situation of running U multirank as an established system.
Costs have been estimated based on this projection but will not become part of the final report. The cost estimations showed that U multirank is an ambitious project also in financial terms,
but in general it seems to be financially feasible. A general assumption based on the EU policy is that U multirank should become self-sustainable without long-term basic funding by the European commission.
EC contributions will decline over time and new funding sources will have to be found. However, from our calculations it became clear that there is no single financial source from
which we could expect to cover the whole costs of U multirank; the only option is a diversified funding base with a mix of financial sources.
If U multirank is not dependent on one major source a further advantage lies in the distribution of financial risks.
It is difficult to calculate the exact running costs associated with U multirank because these depend on many variables.
The major variable cost drivers of U multirank are: The number of countries and institutions involved. This determines the volume of data that has to be processed and the communication efforts.
The surveys that are needed to cover all indicators outlined in the data models of U multirank.
Major cost factors are for instance the realisation of student and graduate surveys or the use of databases charged with license fees, e.g. bibliometric and patent data.
for instance a student survey is much more expensive if universities have no e-mail addresses of their students,
requiring students to be addressed by letters. The frequency of the updating of ranking data. A multidimensional ranking with data from the institutions will not be updated every year;
the best timespan for rankings has to take into account the trade-off between obtaining up-to-date information and the costs this implies.
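A minimal sketch of how these cost drivers could be combined into a simple cost model is given below; every unit cost is an invented placeholder, not a figure from the U multirank cost estimations.

# A minimal sketch of the variable-cost structure described above (placeholder unit costs).
def annual_running_cost(n_institutions: int,
                        n_countries: int,
                        survey_cost_per_institution: float,
                        licence_fees: float,
                        update_cycle_years: int) -> float:
    data_processing = 400.0 * n_institutions         # grows with the volume of data
    communication = 2_000.0 * n_countries             # national coordination and promotion
    surveys = survey_cost_per_institution * n_institutions
    cost_per_cycle = data_processing + communication + surveys + licence_fees
    return cost_per_cycle / update_cycle_years        # longer update cycles spread the cost

# e.g. 700 institutions in 35 countries, surveys at 1,500 per institution,
# 120,000 in bibliometric/patent licence fees, data updated every two years:
print(annual_running_cost(700, 35, 1_500.0, 120_000.0, 2))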
The intention of the European commission is to develop U multirank into a self-sustaining instrument, requiring no EU funding after its implementation phase.
Nevertheless the European commission should consider the option of continued support of part of U multirank's basic funding in the long run
and ensure a formal role for the EC as a partner in U multirank. To promote transparency
and performance in European higher education by establishing a transparency tool could be a long-term task of the EC.
For instance the EC could take on the role of promoter of students'interests and could see the delivery of a web tool free of charge to students as its responsibility.
To ensure students' free access to U multirank data the EC could also provide, in the long run, direct funding of user charges that would
otherwise have to be imposed upon the students. There are a number of opportunities to find funding sources for U-Multirank:
a) Basic funding by the governing institutions in the form of a lump sum. This is realistic for the government model (e.g. basic funding by the EU) and for the independent,
b) Funding/sponsorship from other national and international partners interested in the system. c) Charges from the ranking users (students, employers etc.).
Discussions with potential funders so far have shown that the funding of U multirank has to rely on a mix of income streams.
We assume that a major part of the revenue comes from participation fees of higher education institutions, probably paid for them by national institutions (foundations, national governments, associations of institutions such as rectors'conferences).
To keep the web tool free of charges, especially for students, an equivalent to the charges could be paid by the EC.
since U multirank with its related surveys is an expensive form of ranking and the commercial sources are limited.
Charges to the users of the U multirank web tool would seriously undermine the aim of creating more transparency in European higher education by excluding students for example;
The EC could pay for student user charges. Project-based funding for special projects, for instance new methodological developments or rankings of a particular 'type' of institution, offers an interesting possibility with chances of cross-subsidization.
if it would work for U multirank. The questions are: would paying for rankings produce a 'value for money' attitude on the part of institutions (or countries)?
EC, foundations, other sponsors) with a combination of a variety of market sources contributing cost coverage plus some cost reductions through efficiency gains. 8.9 A concluding perspective U multirank
after a next project phase of two years institutionalisation of a U multirank unit could be organized for the longer term.
After two more years, the roll out of the system should include about 700 European higher education institutions and about 500 institutions in the field-based ranking for each of three fields.
either for the public or for associations of higher education institutions, should be developed because of their market potential.
But the organizational basis of U multirank should be the nonprofit model with elements of the other options included.
In particular, the aim of financial self-sustainability for U multirank makes the combination of some nonprofit basic funding with the offer of commercial products inevitable.
and governance structure of U multirank. The analysis of the fixed and flexible cost determinants could lead to a calculation of the cost,
Linking student satisfaction and service quality perceptions: the case of university education. European Journal of Marketing, 31 (7), 528-540.
AUBR Expert Group (2009). Assessing Europe's University-Based Research - Draft. Brussels: European Commission DG Research.
Azagra-Caro, J. M., de Lucio, I. F. & Gutierrez, G. A. (2003), 'University patents:
Output and input indicators of what?', Research Evaluation, 12 (2): 5-16. Balconi, M., Breschi, S.,
Process and structure in higher education. London: Heinemann. Brandenburg, U. & Federkeil, G. (2007). How to measure internationality and internationalisation of higher education institutions!
Indicators and key figures. CHE Working Paper No. 92. Brown, R. M. & Mazzarol, T. W. (2009).
The importance of institutional image to student satisfaction and loyalty within higher education. Higher education, 58 (1), 81-95.
Bucciarelli, L. L. (1994), Designing Engineers, Cambridge, MA: MIT Press. CHERPA-Network (2010). Interim progress report:
Design phase of the project 'Design and testing the feasibility of a multidimensional global university ranking'. Enschede:
CHEPS, University of Twente. Clark, B. R. (1983). The higher education system: academic organization in cross-national perspective.
Berkeley: University of California Press. Cremonini, L., Benneworth, P. & Westerheijden, D. F. (2011). In the shadow of celebrity:
The impact of world class universities policies on national higher education systems. Paper presented at the CHER 23rd annual conference.
Cremonini, L., Westerheijden, D. F. & Enders, J. (2008). Disseminating the Right Information to the Right Audience:
Higher Education, 55, 373-385. Debackere, K. & Veugelers, R. (2005), The role of academic technology transfer organizations in improving industry-science links, Research Policy, 34, pp. 321-342.
Academic Quality, League Tables, and Public Policy: A Cross-National Analysis of University Ranking Systems. Higher Education, 49, 495-533. Dulleck, U. and R. Kerschbamer (2006).
'On Doctors, Mechanics, and Computer Specialists: The Economics of Credence Goods', Journal of Economic Literature, 44 (1): 5-42.
Enquist, G. (2005) The internationalisation of higher education in Sweden, the National Agency for Higher Education, Högskoleverkets rapportserie 2005: 27 R, Stockholm. Espeland, W. N.
& Sauder, M. (2007). Rankings and Reactivity: How Public Measures Recreate Social Worlds. American Journal of Sociology, 113 (1), 1-40.
Reputation indicators in rankings of higher education institutions. In B. Kehm & B. Stensaker (Eds.), University Rankings, Diversity,
and the New Landscape of Higher Education (pp. 19-34). Rotterdam; Taipei: Sense Publishers. Furco, A. & Miller, W. (2009), Issues in Benchmarking
and Assessing Institutional Engagement, New Directions for Higher Education, No. 147, Fall 2009, pp. 47-54.
Hazelkorn, E. (2011). Rankings and the Reshaping of Higher Education: The Battle for World-Class Excellence.
London: Palgrave Macmillan. Heine, C. & Willich, J. (2006). Informationsverhalten und Entscheidungsfindung bei der Studien- und Ausbildungswahl. Studienberechtigte 2005 ein halbes Jahr vor dem Erwerb der Hochschulreife.
Forum Hochschule (3). Holi, M. T., Wickramasinghe, R. and van Leeuwen, M. (2008), Metrics for the evaluation of knowledge transfer activities at universities.
Promoting Civil Society Through Service-Learning. Norwell, Mass.: Kluwer. IAU, International Association of Universities (2005).
Global Survey Report, Internationalization of Higher Education: New Directions, New Challenges, Paris: IAU. International Ranking Expert Group
(2006). Berlin Principles on Ranking of Higher Education Institutions. Retrieved 24.6.2006, from http://www.che.de/downloads/Berlin_principles_ireg_534.pdf Ischinger, B. and Puukka, J. (2009), Universities for Cities and Regions:
Lessons from the OECD Reviews, Change: The Magazine of Higher Learning, Vol. 41, No. 3, p. 8-13.
Iversen, E. J., Gulbrandsen, M. & Klitkou, A. (2007), A baseline for the impact of academic patenting legislation in Norway.
Scientometrics, 70 (2), 393-414. Jaschik, S. (2007, March 19). Should U.S. News Make Presidents Rich?
Inside Higher Ed. Retrieved from http://www.insidehighered.com/news/2007/03/19/usnews King, Gary et al. (2004):
Global university rankings: private and public goods. Paper presented at the 19th Annual CHER conference, Kassel. McDonough, P. M., Antonio, A. L.,
College rankings: democratized college knowledge for whom? Research in Higher Education, 39 (5), 513-537. Meyer, M., Sinilainen, T. and Utecht, J. T. (2003), 'Toward hybrid triple helix indicators:
A study of university-related patents and a survey of academic inventors', Scientometrics, 58: 321-350.
Montesinos, P., Carot, J. M., Martinez, J. M. & Mora, F. (2008), Third Mission Ranking for World Class Universities:
Beyond Teaching and Research, Higher Education in Europe, Vol. 33, No. 2, pp. 259-271. Nelson, P. (1970).
'Information and consumer behavior.' The Journal of Political Economy, 78 (2): 311-329. Nuffic (2010) Mapping internationalization, http://www.nuffic.nl/international-organizations/services/quality-assurance-and-internationalization/mapping-internationalization-mint OECD (2003), Turning Science into Business:
Patenting and licensing at public research organizations. Paris: OECD. Rojas-Méndez, J. I., Vasquez-Parraga, A. Z., Kara, A.,
Determinants of Student Loyalty in Higher Education: A Tested Relationship Approach in Latin America. Latin American Business Review, 10 (1), 21-39.
Global university rankings and their Impact. Brussels: European University Association. Sadlak, J. & Liu, N. C. (Eds.) (2007).
The world-class university and ranking: Aiming beyond status. Bucharest; Shanghai; Cluj-Napoca. Salmi, J. (2009).
The Challenge of Establishing World-Class Universities. Washington, D.C.: World Bank. Saragossi, S. & van Pottelsberghe, B. (2003),
'What patent data reveal about universities: The case of Belgium', Journal of Technology Transfer, 28: 47-51.
Schmiemann, M. and Durvy, J.-N. (2003), New approaches to technology transfer from publicly funded research, in:
Teichler, U. (2004), The changing debate on internationalisation of higher education, Higher Education, 48 (1), 5-26.
The CHE Ranking of European Universities: A Pilot Study in Flanders and The Netherlands (2008). Gütersloh, Enschede, Leiden, Brussels:
CWTS Leiden University; Vlaamse Overheid. Thibaud, A. (2009). Vers quel classement de Shanghai et des autres classements internationaux (No.
& Thursby, M. (2007), US faculty patenting: Inside and outside the university. NBER Working Paper 13256.
Cambridge, MA: National Bureau of Economic Research. Tijssen, R. J. W. (2003). Scoreboards of research excellence.
and E. van Wijk (2009) Benchmarking university-industry research cooperation worldwide: performance measurements and indicators based on co-authorship data for the world's largest universities, Research Evaluation, 18, pp. 13-24.
Tijssen, R. J. W., Waltman, L. and N. J. van Eck (2011) Collaborations span 1,553 kilometres, Nature, 473, p. 154.
A global survey of university league tables. Toronto: Educational Policy Institute. Van Dyke, N. (2005). Twenty Years of University Report Cards.
Higher Education in Europe, 30 (2), 103-125. Van Raan, Anthony (2003): Challenges in the Ranking of Universities.
In: Jan Sadlak, Liu Nian Cai (eds.): The World-Class University and Ranking: Aiming Beyond Status. Bucharest:
UNESCO-CEPES. van Vught, F. A. (2008). Mission diversity and reputation in higher education. Higher Education Policy, 21 (2), 151-174. van Vught, F. A., Kaiser, F., File, J. M., Gaethgens, C., Peter, R.
& Westerheijden, D. F. (2010). U-Map: The European Classification of Higher Education Institutions. Enschede: CHEPS. Waltman, L., R. J. W. Tijssen
and N. J. van Eck (2011) Globalisation of science in kilometres, Journal of Informetrics. Westerheijden, D. F., Beerkens, E., Cremonini, L., Huisman, J., Kehm,
B., Kovac, A., et al. (2010). The first decade of working on the European Higher Education Area:
The Bologna Process Independent Assessment - Volume 1 - Detailed assessment report. Brussels: European Commission, Directorate-General for Education and Culture.