, University of Eastern Finland (Kuopio Campus), Kuopio, Finland. Abstract. Purpose: The purpose of this paper is to examine the information sourcing practices of small- to medium-sized enterprises (SMEs) associated with the development of different types of innovation (product/process/market/organizational).
(1 = Insignificant to 5 = Very important). REGKNOWA: sum variable measuring the importance of regional knowledge organizations for innovation (University of Kuopio, Savonia University of Applied Sciences, organizations of vocational education).
Within SIS, the creation, selection and transformation of knowledge takes place within a complex matrix of interactions between different actors (firms, universities and other research organizations, educational organizations, financial organizations, public support
Over 58 percent of the entrepreneurs participating in this study had not been educated beyond elementary school. However, about 57 percent of the entrepreneurs had been educated in vocational school.
About the authors. Miika Varis, after graduating from the University of Kuopio, acted as a research and teaching assistant in SME management (2001-2003) and in entrepreneurship and local economic development (2003-2005),
and lecturer in entrepreneurship (2005-2009) at the Department of Business and Management, University of Kuopio, Finland,
and from 2009 as a lecturer in entrepreneurship at the Department of Health Policy and Management, University of Kuopio, Finland (from 1.1.2010 the Department of Health and Social Management,
University of Eastern Finland, Kuopio Campus). He is currently finishing his doctoral dissertation on regional systems of innovation.
Varis@uef.fi. Hannu Littunen, after graduating from the University of Jyväskylä, was a researcher at the University of Jyväskylä, School of Business and Economics, Centre for Economic Research, Finland,
and a professor of entrepreneurship and regional development at the Department of Business and Management, University of Kuopio, Finland (2003-2009), and from 2009 a professor of entrepreneurship and regional development at the Department of Health Policy and Management,
University of Kuopio, Finland (from 1.1.2010 the Department of Health and Social Management, University of Eastern Finland, Kuopio Campus). He completed his doctoral thesis in leadership
and management, entitled 'The birth and success of new firms in a changing environment', in 2001.
Prior to starting work at the university, he worked in various organizations in both the public and private sectors in Finland.
Design and Testing the Feasibility of a Multidimensional Global University Ranking. Final Report. Frans van Vught & Frank Ziegele (eds.)
Consortium for Higher Education and Research Performance Assessment (CHERPA-Network), June 2011. CONTRACT-2009-1225/001-001. This report was commissioned by the Directorate General for Education
and Culture. The CHERPA Network, in cooperation with U-Multirank. Project team. Project leaders: Frans van Vught (CHEPS)*, Frank Ziegele (CHE)*, Jon File (CHEPS)*. Project co…
… Jiao Tong University), Simon Marginson (Melbourne University), Jamil Salmi (World Bank), Alex Usher (IREG), Marijk van der Wende (OECD/AHELO), Cun-Mei Zhao (…
U-Multirank Final Report
Table of contents
Tables
Figures
Executive Summary
1 Reviewing current rankings
1.1 Introduction
1.2 User-driven rankings as an epistemic necessity
1.3 Transparency, quality and accountability in higher education
1.4 Impacts of current rankings
1.5 Indications for better practice
2 Designing U-Multirank
2.4.1 Methodological standards
2.4.2 User-driven approach
2.4.3 U-Map and U-Multirank
2.4.4 Grouping
2.4.5 Design context
3 Constructing U-Multirank: Selecting indicators
3.3.5 …
4 Constructing U-Multirank: databases and data collection tools
4.1 Introduction
4.2 Databases
4.2.1 Existing databases
4.2.2 Bibliometric databases
… perspective
5 Testing U-Multirank: pilot sample and data collection
5.1 Introduction
5.2 The global sample
5.3 Data collection
Institutional self-reported data
6 Testing U-Multirank: results
6.1 Introduction
6.2 Feasibility of indicators
6.2.1 Teaching & Learning
6.2.2 Research
6.3.3 … and patent data
6.4 Feasibility of up-scaling
7 Applying U-Multirank
Combining U-Map and U-Multirank
7.3 The presentation modes
7.3.1 Interactive tables
Personalized ranking tables
8 Implementing U-Multirank: the future
8.1 Introduction
8.2 Scope: global or European
8.3 Personalized and authoritative rankings
8.4 The need for international data systems
8.5 Content and organization of the next …
8.6 … and models of implementation
8.7 Towards a mixed implementation model
8.8 Funding U-Multirank
8.9 A concluding perspective
9 List …
Tables:
Table 1-1: Classifications and rankings considered in U-Multirank
Table 1-2: Indicators and weights in global university rankings
Table 2-1: Conceptual grid U-Multirank
Table 3-1: Indicators for the dimension Teaching & Learning in the Focused Institutional and Field-based Rankings
Table 3-2: Primary form of written communications by discipline group
Table 3-3: …
Table 4-1: Data elements shared between EUMIDA and U-Multirank: their coverage in national databases
Table 4-2: Availability of U-Multirank data elements in countries' national databases according to experts in 6 countries (Argentina/AR, Australia/AU, Canada/CA, Saudi Arabia/SA, South Africa/ZA …)
Figures:
Figure 5-1: U-Multirank data collection process
Figure 5-2: Follow-up survey: assessment of data procedures and communication
Figure 7-1: Combining U-Map and U-Multirank
Figure 7-2: User selection of indicators for personalized ranking tables
Figure 8-1: Assessment of the four models for implementing U-Multirank
Figure 8-2: Organizational structure for phase 1 (short term…
Preface. On 2 June 2009 the European Commission announced the launching of a feasibility study to develop a multidimensional global university ranking.
Its aims were to look into the feasibility of making a multidimensional ranking of universities in Europe,
but also parents and other stakeholders, to make informed choices between different higher education institutions and their programmes.
"and ignore the performance of universities in areas like humanities and social sciences, teaching quality and community outreach.
While drawing on the experience of existing university rankings and of EU-funded projects on transparency in higher education, the new ranking system should be:
In a first phase running until the end of 2009 the consortium would design a multidimensional ranking system for higher education institutions in consultation with stakeholders.
In a second phase ending in June 2011 the consortium would test the feasibility of the multidimensional ranking system on a sample of no less than 150 higher education and research institutions.
Education and Culture but other experts drawn from student organisations, employer organisations, the OECD, the Bologna Follow-up Group and a number of Associations of Universities.
ranking and transparency instruments in higher education and research. The international panel was consulted at key decision-making moments in the project.
Stakeholder workshops were held four times during the project, with an average attendance of 35 representatives drawn from a wide range of organisations including student bodies, employer organisations, rectors' conferences, national university associations and national representatives.
The consortium members benefitted from a strong network of national higher education experts in over 50 countries who were invaluable in suggesting a diverse group of institutions from their countries to be invited to participate in the pilot study.
This is the Final Report of the multidimensional global university ranking project. Readers interested in a fuller treatment of many of the topics covered in this report are referred to the project website (www.u-multirank.eu) where the project's three Interim Reports can be found.
The website also includes a 30-page Overview of the major outcomes of the project.
Executive Summary
The need for a new transparency tool in higher education and research
The project encompassed the design and testing of a new transparency tool for higher education and research.
More specifically, the focus was on a transparency tool that will enhance our understanding of the multiple performances of different higher education
and research institutions across the diverse range of activities they are involved in: higher education and research institutions are multipurpose organisations
and different institutions focus on different blends of purposes and associated activities. Transparency is of major importance for higher education and research worldwide,
which is increasingly expected to make a crucial contribution to the innovation and growth strategies of nations around the globe.
Obtaining valid information on higher education within and across national borders is critical in this regard, yet higher education and research systems are becoming more complex and at first sight less intelligible for many stakeholders.
The more complex higher education systems become, the more sophisticated our transparency tools need to be.
Sophisticated tools can be designed in such a way that they are user-friendly and can cater to the different needs of a wide variety of stakeholders.
An enhanced understanding of the diversity in the profiles and performances of higher education and research institutions at a national, European and global level requires a new ranking tool.
Existing international transparency instruments do not reflect this diversity adequately and tend to focus on a single dimension of university performance: research.
The new tool will promote the development of diverse institutional profiles. It will also address most of the major shortcomings of existing ranking instruments, such as language and field biases, the exaggeration of small differences in performance and the arbitrary effects of indicator weightings on ranking outcomes.
We have called this new tool U-Multirank as this stresses three fundamental points of departure: it is multidimensional,
recognising that higher education institutions serve multiple purposes and perform a range of different activities; it is a ranking of university performances
(although not in the sense of an aggregated league table like other global rankings); and it is user-driven (as a stakeholder with particular interests,
you are enabled to rank institutions with comparable profiles according to the criteria important to you).
The design and key characteristics of U-Multirank
On the basis of a carefully selected set of design principles we have developed a new international ranking instrument that is user-driven,
multidimensional and methodologically robust. This new on-line instrument enables its users first to identify institutions that are sufficiently comparable to be ranked and, second,
to design a personalised ranking by selecting the indicators of particular relevance to them. U-Multirank enables such comparisons to be made both at the level of institutions as a whole and in the broad disciplinary fields in
which they are active. The integration of the already designed and tested U-Map classification tool into U-Multirank enables the creation of user-selected groups of sufficiently comparable institutions.
This two-step approach is completely new in international and national rankings. On the basis of an extensive stakeholder consultation process (focusing on relevance) and a thorough methodological analysis (focusing on validity, reliability and feasibility),
U-Multirank includes a range of indicators that will enable users to compare the performance of institutions across five dimensions of higher education and research activities:
Teaching and learning
Research
Knowledge transfer
International orientation
Regional engagement
On the basis of data gathered on these indicators across the five performance dimensions, U-Multirank could provide its users with the on-line functionality to create two general types of rankings:
Focused institutional rankings: rankings on the indicators of the five performance dimensions at the level of institutions as a whole
Field-based rankings: rankings on the indicators of the five performance dimensions in a specific field in which institutions are active
U-Multirank would also include the facility for users to create institutional and field performance profiles by including (not aggregating) the indicators within the five dimensions (or a selection of them) into a multidimensional performance chart.
At the institutional level these take the form of 'sunburst charts', while at the field level these are structured as 'field-tables'.
In the sunburst charts, the performance on all indicators at the institutional level is represented by the size of the rays of the 'sun':
a larger ray means a higher performance on that indicator. The colour of a ray reflects the dimension to which it belongs.
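To illustrate this encoding, the following minimal sketch draws a sunburst-style chart as a polar bar plot with Python and matplotlib; the indicator names, scores and dimension colours are invented for illustration and are not actual U-Multirank data:

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical indicator scores (0-1), each tagged with its dimension.
indicators = [
    ("graduation rate", 0.8, "Teaching & Learning"),
    ("time to degree", 0.6, "Teaching & Learning"),
    ("publication output", 0.9, "Research"),
    ("citation impact", 0.7, "Research"),
    ("patents awarded", 0.4, "Knowledge Transfer"),
    ("international students", 0.5, "International Orientation"),
    ("regional joint projects", 0.3, "Regional Engagement"),
]
colors = {
    "Teaching & Learning": "tab:blue",
    "Research": "tab:red",
    "Knowledge Transfer": "tab:green",
    "International Orientation": "tab:orange",
    "Regional Engagement": "tab:purple",
}

angles = np.linspace(0, 2 * np.pi, len(indicators), endpoint=False)
ax = plt.subplot(projection="polar")
# One 'ray' per indicator: length encodes the score, colour the dimension.
ax.bar(
    angles,
    [score for _, score, _ in indicators],
    width=2 * np.pi / len(indicators) * 0.9,
    color=[colors[dim] for _, _, dim in indicators],
    bottom=0.1,
)
ax.set_xticks(angles)
ax.set_xticklabels([name for name, _, _ in indicators], fontsize=7)
ax.set_yticklabels([])
plt.show()
```

Each bar is one ray of the 'sun': a longer bar signals a higher performance on that indicator, and its colour shows which of the five dimensions it belongs to, mirroring the description above.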
This personalised interactive ranking table reflects the user-driven nature of U-Multirank.
In order to be able to apply the principle of comparability we have integrated the existing transparency tool, the U-Map classification, into U-Multirank.
It is a user-driven higher education mapping tool that allows users to select comparable institutions on the basis of 'activity profiles' generated by the U-Map tool.
These activity profiles reflect the diverse activities of different higher education and research organisations using a set of dimensions similar to those developed in U-Multirank.
The underlying indicators differ: U-Map is concerned with understanding the mix of activities an institution is engaged in,
while U-Multirank is concerned with an institution's performance in these activities (how well it does them).
Integrating U-Map into U-Multirank enables the creation of user-selected groups of sufficiently comparable institutions that can then be compared in focused institutional and field-based rankings.
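This two-step logic (first select sufficiently comparable institutions via U-Map-style activity profiles, then rank the resulting group on user-selected performance indicators) can be sketched in a few lines; the record fields, profile categories and indicator names below are hypothetical, not the actual U-Map/U-Multirank schema:

```python
from dataclasses import dataclass

@dataclass
class Institution:
    name: str
    profile: dict  # U-Map-style activity profile (illustrative categories)
    scores: dict   # U-Multirank-style performance indicators (illustrative)

institutions = [
    Institution("Alpha U", {"research_intensity": "high"}, {"citation_impact": 1.4}),
    Institution("Beta College", {"research_intensity": "low"}, {"citation_impact": 0.6}),
    Institution("Gamma Tech", {"research_intensity": "high"}, {"citation_impact": 1.1}),
]

# Step 1 (U-Map): keep only institutions whose activity profile matches the
# user's selection, so only sufficiently comparable institutions are ranked.
def comparable(insts, **wanted):
    return [i for i in insts if all(i.profile.get(k) == v for k, v in wanted.items())]

# Step 2 (U-Multirank): order that group on a user-selected performance indicator.
def rank(insts, indicator):
    return sorted(insts, key=lambda i: i.scores[indicator], reverse=True)

for inst in rank(comparable(institutions, research_intensity="high"), "citation_impact"):
    print(inst.name, inst.scores["citation_impact"])
```

Only the two research-intensive institutions are ranked against each other here; an institution with a fundamentally different mission never enters the comparison.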
The findings of the U-Multirank pilot study
U-Multirank was tested in a pilot study involving 159 higher education institutions drawn from 57 countries:
and faculties performing very differently across the five dimensions and their underlying indicators. The multidimensional approach makes these diverse performances transparent.
and, given that U-Multirank is a Europe-based project, it is clear that this represents a strong expression of interest.
organisational and financial challenges, there are no inherent features of U-Multirank that rule out the possibility of such future growth.
and operational feasibility we have developed a U-Multirank 'Version 1.0' that is ready to be implemented in European higher education
The further development and implementation of U-Multirank
The outcomes of the pilot study suggest some clear next steps in the further development of U-Multirank and its implementation.
The refinement of U-Multirank instruments: Some modifications need to be made to a number of indicators and to the data gathering instruments based on the experience of the pilot study.
Roll out of U-Multirank across European countries: Given the need for more transparent information in the emerging European higher education area, all European higher education
and research institutions should be invited to participate in U-Multirank in the next phase. Many European stakeholders are interested in assessing
and comparing European higher education and research institutions and programmes globally. Targeted recruitment of relevant peer institutions from outside Europe should be continued in the next phase of the development of U-Multirank.
Developing linkages with national and international databases. The design of specific authoritative rankings: Although U-Multirank has been designed to be user-driven,
this does not preclude the use of the tool and underlying database to produce authoritative expert institutional and field based rankings for particular groups of comparable institutions on dimensions particularly relevant to their activity profiles.
In terms of the organisational arrangements for these activities we favour a further two-year project phase for U-Multirank.
In the longer term, on the basis of a detailed analysis of different organisational models for an institutionalised U-Multirank, our strong preference is for an independent non-profit organisation operating with multiple sources of funding.
This organisation would be independent both from higher education institutions (and their associations) and from higher education governance and funding bodies.
Its non-commercial character will add legitimacy, as will external supervision via a Board of Trustees.
1 Reviewing current rankings
classifications, and rankings, from the point of view of the information they could deliver to assist different stakeholders in their different decisions regarding higher education and research institutions.
changing the tactics in the game (more attacks late in a drawn match),
and sparking off debates among commentators of the sport for and against the new rule.1 In university rankings,
what is 'the best university'? But different to sports, there are no officially recognised bodies that are accepted as authorities that may define the rules of the game.
that, e.g., the Shanghai ranking is simply a game that is as different from the Times Higher game as rugby is from football.
The issue with some of the current university rankings is that they tend to be presented
Our alternative to assuming an unwarranted position of authority is to reflect critically on the different roles that higher education
to define explicitly our conceptual framework regarding the different functions of higher education institutions, and in turn to derive sets of indicators from this framework.
In this sense, we want to democratise rankings in higher education and research. Based on the epistemological position that any choice of sets of indicators is driven by their makers' conceptual frameworks,
1.3 Transparency, quality and accountability in higher education
It is widely recognized that although the current transparency tools, especially university league tables, are controversial,
they seem to be here to stay, and that especially global university league tables have a great impact on decision-makers at all levels in all countries,
including in universities (Hazelkorn, 2011). They reflect a growing international competition among universities for talent and resources;
at the same time they reinforce competition by their very results. (1 http://en.wikipedia.org/wiki/Three_points_for_a_win) On the positive side they urge decision-makers to think bigger and set the bar higher,
especially in the research universities that are the main subjects of the current global league tables.
Yet major concerns remain as to league tables' methodological underpinnings and to their policy impact on stratification rather than on diversification of mission.
Under vertical stratification we understand distinguishing higher education and research institutions as 'better' or 'worse' in prestige or performance;
it denotes all manners of providing insight into the diversity of higher education. Transparency tools are instruments that aim to provide information to stakeholders about the efforts and performance of higher education and research institutions.
A classification is a systematic, nominal distribution among a number of classes or characteristics without any (intended) order of preference.
Classifications give descriptive categorizations of characteristics intending to focus on the efforts and activities of higher education and research institutions, according to the criterion of similarity.
Rankings are intended hierarchical categorizations to render the outputs of the higher education and research institutions according to the criterion of best performance.
Most existing rankings in higher education take the form of a league table. A league table is a single-dimensional,
of which information they could deliver to assist users in their different decisions regarding higher education and research institutions.
Classifications and rankings considered in U-Multirank (Type / Name):
Classifications: Carnegie Classification (USA); U-Map (Europe).
Global league tables and rankings: Shanghai Jiao Tong University's (SJTU) Academic Ranking of World Universities (ARWU); Times Higher Education (Supplement) (THE); QS (Quacquarelli Symonds Ltd) Top Universities; Leiden Ranking.
National league tables and rankings: US News & World Report (USN&WR); … University Ranking (CHE; Germany); Studychoice123 (SK123; The Netherlands).
Specialized league tables and rankings: Financial Times ranking of business schools and programmes (FT;
especially the most influential ones, the global university rankings are all league tables. The relationship of indicators collected
2010) and even already anticipating the current U-Multirank project, the situation has begun to change: ranking producers are becoming more explicit and reflective about their methodologies and underlying conceptual frameworks.
while most rankings give only a single ranking.
The problem of ignoring diversity within higher education and research institutions: ignoring education and other functions of higher education and research institutions (practice-oriented research, innovation, 'third mission').
The problem of composite overall indicators:
while practically all informed the design of U-Multirank. We already mentioned some of them. The full list includes:
The Berlin Principles on Ranking of Higher Education Institutions (International Ranking Expert Group, 2006), which define sixteen standards
user-oriented manner enabling custom-made rankings rather than dictating a single one.
Focused institutional rankings, in particular the Leiden ranking of university research, also with a clear focus,
thus strengthening the potential information base for other dimensions than fundamental research.
Comparative assessment of higher education students' learning outcomes (AHELO):
much like PISA does for secondary school pupils. Recent reports on rankings such as the report of the Assessment of University-Based Research Expert Group (AUBR Expert Group, 2009) which defined a number of principles for sustainable collection of research data,
such as purposeful definition of the units or clusters of research, attention to the use of non-obtrusive measurement, e.g. through digital repositories of publications,
to ensure that in the development of the set of indicators for U multirank we would not overlook any dimensions,
The global rankings that we studied limit their interest to several hundred preselected universities, estimated to be no more than 1% of the total number of higher education institutions worldwide.
The criteria used to establish a threshold generally concern the research output of the institution;
Although it could be argued that world-class universities may act as role models (Salmi, 2009), the evidence that strong institutions inspire better performance across whole higher education systems is so far mainly found in the area of research rather than that of teaching (Sadlak & Liu,
2007), if there are positive system-wide spillovers at all (Cremonini, Benneworth & Westerheijden, 2011). From our overview of the indicators used in the main global university rankings (summarised in Table 1-2) we concluded that they indeed focus heavily on research aspects of the higher education institutions (research output,
impact as measured through citations, and reputation in the eyes of academic peers) and that efforts to include the education dimension remain weak
as they focus mainly on indicators related to the research function of universities (Rauhvargers, 2011). Table 1-2:
Indicators and weights in global university rankings (HEEACT 2010, ARWU 2010, THE 2010, QS 2011, Leiden Rankings 2010).
Research output: articles past 11 years (10% …);
… and alternative calculation MNCS2); size-dependent 'brute force' impact indicator (multiplication of P with the university's field-normalized average impact): P × CPP/FCSm; citations-per-publication indicator (CPP).
Quality of education: alumni of an institution winning Nobel Prizes and Fields Medals (10%); PhDs awarded per staff (6%); undergraduates admitted per staff (4.5%); income per staff (2.25%); ratio of PhD awards to bachelor awards (2.25%); faculty-student ratio (20%).
… staff and students (5%); industry income per staff (2.5%); international faculty (5%); international students (5%).
Websites: http://ranking.heeact.edu.tw/en-us/2010/Page/Indicators; http://www.arwu.org/ARWUMethodology2010.jsp; http://www.timeshighereducation.co.uk/world-university-rankings/2010-2011/analysis-methodology.html; http://www.topuniversities.com/university-rankings/world-university-rankings; http://www.socialsciences.leiden.edu/cwts/products-services/leiden-ranking-2010
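Written out, the Leiden indicators listed above follow the standard CWTS definitions; as a sketch, with P the number of publications, C the total citations received, and FCSm the mean field citation score (the worldwide average citation rate of the fields the university publishes in):

```latex
\[
\mathrm{CPP} = \frac{C}{P} \quad \text{(citations per publication)}
\]
\[
\frac{\mathrm{CPP}}{\mathrm{FCSm}} \quad \text{(field-normalized average impact)}
\]
\[
P \cdot \frac{\mathrm{CPP}}{\mathrm{FCSm}} \quad \text{(size-dependent `brute force' indicator)}
\]
```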
National databases on higher education and research institutions cover different information, based on differing national definitions of items, and are
Self-reported data collected by higher education and research institutions participating in a ranking. This source is used regularly, though not in all global rankings, due to the lack of externally available and verified statistics (Thibaud, 2009).
The drawback is high expense for the ranking organisation and for the participating higher education and research institutions.
but not in existing global university rankings. Reputation surveys are used globally, but have been proven to be very weak cross-nationally (Federkeil,
which is not often the case in the current global university rankings. Manipulation of opinion-type data has surfaced in surveys for ranking
U-Map, has tested 'pre-filling' higher education institutions' questionnaires, i.e. data available in national public sources are entered into the questionnaires sent to higher education institutions for data
gathering.3 This should reduce the effort required from higher education institutions and give them the opportunity to verify the 'pre-filled' data as well.
(3 The beginnings of European data collection as in the EUMIDA project may help to overcome this problem for the European region in years to come.)
The U-Map test with 'pre-filling' from national data sources in Norway appeared to be successful
and resulted in a substantial decrease of the burden of gathering data at the level of higher education institutions. 1.4 Impacts of current rankings. According to many commentators,
encouraging higher education and research institutions to improve their performance. Impacts may affect amongst other things:
less by domestic undergraduate entrants, more at the graduate and postgraduate levels. Especially at the undergraduate level, rankings appear to be used particularly by students of high achievement and by those coming from highly educated families (Cremonini, Westerheijden & Enders, 2008;
Heine & Willich, 2006; McDonough, Antonio & Perez, 1998). Institutional management. Rankings strongly impact on the management of higher education institutions.
The majority of higher education leaders report that they use potential improvement in rank to justify claims on resources (Espeland & Sauder, 2007;
Hazelkorn, 2011). In institutional actions to improve ranking positions, they tend to focus on targeting the indicators in league tables that are influenced most easily, e.g. the institution's branding,
In nations across the globe, global rankings have prompted the desire for 'world-class universities' both as symbols of national achievement and prestige and supposedly as engines of the knowledge economy (Marginson, 2006).
if redirecting funds to a small set of higher education and research institutions to make them 'world class' benefits the whole higher education system;
research on this question is lacking until now. The higher education 'reputation race'. The reputation race (van Vught, 2008) implies the existence of an ever-increasing search by higher education and research institutions and their funders for higher positions in the league tables.
In Hazelkorn's survey of higher education institutions, 3% were ranked first in their country, but 19% wanted to get to that position (Hazelkorn, 2011).
The reputation race has costly implications. The problem of the reputation race is that the investments do not always lead to better education and research,
and that the resources spent might be used more efficiently elsewhere. Besides, the link between quality in research and quality in teaching is not particularly strong (see Dill & Soo, 2005).
This standardization process is likely to reduce the horizontal diversity in higher education systems. 'Matthew effect'.
Most of the effects discussed above are rather negative to students, institutions and the higher education sector.
and to policy-makers to consider where in the higher education system investment should be directed for the system to fulfil its social functions optimally.
for others, e g. university leaders and national policy-makers, information about the higher education institution as a whole has priority (related to the strategic orientation of institutions;
In rankings comparisons should be made between higher education and research institutions of similar characteristics, leading to the need for a pre-selection of a set of more or less homogeneous institutions.
Rankings that include very different profiles of higher education and research institutions are non-informative and misleading.
The various functions of higher education and research institutions for a heterogeneity of stakeholders and target groups can only be addressed adequately in a multidimensional approach.
These general conclusions have been an important source of inspiration for how we designed U-Multirank, a new, global, multidimensional ranking instrument.
multidimensional global ranking tool that we have called 'U-Multirank'. First, we present the general design principles that to a large extent have guided the design process.
Finally, we outline a number of methodological choices that have a major impact on the operational design of U-Multirank. 2.2 Design Principles. U-Multirank aims to address the challenges identified as arising from the various currently existing ranking tools.
when designing and constructing U-Multirank. Our fundamental epistemological argument is that as all observations of reality are theory-driven (formed by conceptual systems) an 'objective ranking' cannot be developed (see chapter 1). Every ranking will reflect the normative design and selection criteria of its constructors.
Higher education and research institutions are predominantly multipurpose, multiple-mission organizations undertaking different mixes of activities (teaching and learning, research, knowledge transfer, regional engagement,
It makes no sense to compare the research performance of a major metropolitan research university with that of a remotely located university of applied sciences;
or the internationalization achievements of a national humanities college whose major purpose is to develop
and preserve its unique national language with an internationally orientated European university with branch campuses in Asia.
The fourth principle is that higher education rankings should reflect the multilevel nature of higher education. With very few exceptions, higher education institutions are combinations of faculties, departments and programs of varying strength.
Producing only aggregated institutional rankings disguises this reality and does not produce the information most valued by major groups of stakeholders:
These principles underpin the design of U-Multirank, resulting in a user-driven, multidimensional and methodologically robust ranking instrument.
In addition, U-Multirank aims to enable its users to identify institutions and programs that are sufficiently comparable to be ranked,
For the design of U-Multirank we specify our own conceptual framework in the following section. 2.3 Conceptual framework. A meaningful ranking requires a conceptual framework
We found a number of points of departure for a general framework for studying higher education and research institutions in the higher education literature.
First, a common point of departure is that processing knowledge is the general characteristic of higher education and research institutions (Clark 1983;
or its transfer to stakeholders outside the higher education and research institutions (knowledge transfer) or to various groups of 'learners' (education).
Of course, a focus on the overall objectives of higher education and research institutions in the three well-known primary processes of 'education, research
and knowledge transfer' is a simplification of the complex world of higher education and research institutions.
which higher education and research institutions are involved. The three functions are a useful way to describe conceptually the general purposes of these institutions
The second conceptual assumption is that the performance of higher education and research institutions may be directed at different 'audiences'.
In the current higher education and research policy area, two main general audiences have been prioritised, the first through the international orientation of higher education and research institutions.
which a higher education institution operates. In reality these 'audiences' are of course often combined in the various activities of higher education
and research institutions. It is understood that the functions higher education and research institutions fulfil for international and regional audiences are manifestations of their primary processes,
i.e. the three functions of education, research and knowledge transfer mentioned before. What we mean by this is that there may be educational elements
A major issue in higher education and research institutions, as in many social systems, has been that the transformation from inputs to performances is not self-evident.
different higher education and research institutions may reach quite different types and levels of performance. We make a general distinction between the 'enabling' stages of the overall creation process on the one hand
Ranking information is produced to inform users about the value of higher education and research, which is necessary as it is not obvious that they are easily able to take effective decisions without such information.
Higher education is not an ordinary 'good' for which the users themselves may assess the value a priori (using, e.g.,
Higher education is to be seen as an experience good (Nelson 1970): the users may assess the quality of the good only
while or after 'experiencing' it (i.e. the higher education program), but such 'experience' is ex post knowledge.
Some even say that higher education is a credence good (Dulleck and Kerschbamer 2006): the value of the good cannot be assessed
They need information on how the competences acquired during higher education will improve their position on the career or social ladder.
Some users are interested in the overall performance of higher education and research institutions (e.g. policy-makers) and for them the internal processes contributing to performance are of less interest. The institution may well remain a 'black box' for these users.
as they may consider this as an important aspect of their learning experience and their time in higher education (consumption motives).
Students might also be interested in the long-term impact of taking the program as they may see higher education as an investment
Users engage with higher education for a variety of reasons and therefore will be interested in different dimensions
and performance indicators of higher education institutions and the programs they offer. Rankings must be designed in a balanced way
Filtering higher education and research institutions into homogeneous groups requires contextual information rather than only the input
Contextual information for higher education and research institutions relates to their positioning in society and specific institutional appearances.
A substantial part of the relevant context is captured by applying another multidimensional transparency tool (U-Map) in preselecting higher education
Conceptual grid U-Multirank. Rows (Functions & Audiences): Teaching & Learning, Research, Knowledge Transfer (functions); International Orientation, Regional Engagement (audiences). Columns (Stages): Enabling (context, input, process) and Performance (output, impact).
Using this conceptual framework we have selected the following five dimensions as the major content categories of U-Multirank:
This often leads to an emphasis on indicators of the enabling stages of the higher education production process, rather than on the area of performance, largely because governance of higher education and research institutions has concentrated traditionally on the bureaucratic (in Weber's neutral sense of the word) control
Then too, inputs and processes can be influenced by managers of higher education and research institutions.
Similarly, higher education and research institution managers may make facilities and resources available for research, but they cannot guarantee that 'scientific breakthroughs are created'.
Inputs and processes are the parts of a higher education and research institution's system that are documented best.
In our design of U-Multirank we focused on the selection of output and impact indicators.
U-Multirank intends to be a multidimensional performance assessment tool and thus needs to include indicators that relate to the performances of higher education
and research institutions. 2.4 Methodological aspects. There are a number of methodological aspects that have a clear impact on the way a new,
multidimensional ranking tool like U-Multirank can be developed. In this section we explain the various methodological choices made when designing U-Multirank.
2.4.1 Methodological standards. In addition to the content-related conceptual framework, the new ranking tool and its underlying indicators must also be based on methodological standards of empirical research, validity and reliability
In addition, because U multirank is an international comparative transparency tool, it must deal with the issue of comparability across cultures and countries and finally,
U-Multirank has to address the issue of feasibility. Validity. (Construct) validity refers to the evidence about
When characterizing, e.g., the internationality of a higher education institution, the percentage of international students is a valid indicator
National higher education systems are based on national legislation setting specific legal frameworks, including legal definitions (e.g.
Feasibility. The objective of U-Multirank is to design a multidimensional global ranking tool that is feasible in practice.
can U-Multirank be applied in reality and can it be applied with a favourable relation between benefits and costs in terms of financial and human resources?
We report on the empirical assessment of the feasibility of U-Multirank in chapter 6 of this report.
2.4.2 User-driven approach. To guide the readers' understanding of U-Multirank, we now briefly describe the way we have methodologically worked out the principle of being user-driven (see section 2.2). We propose an interactive web-based approach,
2. choose whether to focus the ranking on higher education and research institutions as a whole (focused institutional rankings) or on fields within these institutions (field-based rankings;
2.4.3 U-Map and U-Multirank. The principle of comparability (see section 2.2) calls for a method that helps us in finding institutions the purposes
can be found in the connection of U-Multirank with U-Map (see www.u-map.eu). U-Map
describes ('maps') higher education institutions on a number of dimensions, each representing an aspect of their activities.
U-Map can prepare the ground for U-Multirank in the sense that it helps identify those higher education institutions that are comparable and for which,
therefore, performance can be compared by means of the U-Multirank ranking tool. A detailed description of the methodology used in this classification can be found on the U-Map website (http://www.u-map.eu/methodology_doc/) and in the final report of the U-Map project,
U-Multirank focuses on the performance aspects of higher education and research institutions. U-Multirank shows how well the higher education institutions are performing in the context of their institutional profile.
Thus, the emphasis is on indicators of performance, whereas in U-Map it lies on the enablers of that performance: the inputs and activities.
U-Map and U-Multirank share the same conceptual model. The conceptual model provides the rationale for the selection of the indicators in both U-Map and U-Multirank, both
of which are complementary instruments for mapping diversity: horizontal diversity in the classification and vertical diversity in the ranking.
As an alternative, U-Multirank uses a grouping method. Instead of calculating exact league table positions we will assign institutions to a limited number of groups.
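As an illustration of what such a grouping method could look like, here is a minimal sketch that places institutions into three groups around the group mean rather than at exact rank positions; the scores and the half-standard-deviation cut-off are invented for illustration and do not reproduce U-Multirank's actual grouping procedure:

```python
import statistics

# Hypothetical indicator scores for a set of comparable institutions.
scores = {"Alpha U": 1.4, "Beta U": 1.1, "Gamma U": 0.9, "Delta U": 0.6, "Epsilon U": 0.5}

mean = statistics.mean(scores.values())
sd = statistics.stdev(scores.values())

def group(value, k=0.5):
    # Assign to 'top', 'middle' or 'bottom' relative to the group's spread.
    if value >= mean + k * sd:
        return "top group"
    if value <= mean - k * sd:
        return "bottom group"
    return "middle group"

for name, value in scores.items():
    print(f"{name}: {group(value)}")
```

Small score differences then no longer produce spurious rank distinctions: institutions whose scores lie close together end up in the same group.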
2.4.5 Design context. In this chapter we have described the general aspects of the design process regarding U-Multirank:
we have described the conceptual framework from which the five dimensions of U-Multirank are deduced, and we have outlined a number of methodological approaches to be applied in U-Multirank.
Together these elements form the design context from which we have constructed U-Multirank. The design choices made here are in accordance with both the Berlin Principles and the recommendations by the Expert Group on the Assessment of University-based Research.
The Berlin Principles4 emphasize (a.o.) the importance of being clear about the purpose of rankings and their target groups,
of recognising the diversity of institutional profiles, (4 http://www.ireg-observatory.org/index.php?
Based on our design context, in the following chapters we report on the construction of U-Multirank.
(5 Expert Group on Assessment of University-Based Research (2010), Assessing Europe's University-Based Research, European Commission, DG Research, EUR 24187 EN, Brussels)
3 Constructing U-Multirank: Selecting indicators
3.1 Introduction. Having set out the design context for U-Multirank in the previous chapter,
we now turn to a major part of the process of constructing U-Multirank: the selection and definition of the indicators.
These indicators are assumed to enable us to measure the performances of higher education and research institutions both at the institutional and at the field level,
The other important components of the construction process for U-Multirank are the databases and the data collection tools that allow us to actually 'fill' the indicators.
These will be discussed further in chapter 4 as we explain the design of U-Multirank in more detail.
In chapters 5 and 6 we report on the U-Multirank pilot study during which we analysed the data quality
Various categories of stakeholders (student organizations, employer organizations, associations and consortia of higher education institutions, government representatives, international organizations) have been involved in an iterative process of consultation to come to a stakeholder-based assessment of the relevance
presented to them as potential items in the five dimensions of U-Multirank (see 3.3). In addition,
we invited feedback from international experts in higher education and research and from the Advisory Board of the U-Multirank project.
the indicator focuses on the performance of (programs in) higher education and research institutions and is defined in such a way that it measures 'relative' characteristics (e.g. controlling for size of the institution; a sketch follows below).
The required data to construct the indicator is either available in existing databases and/or in higher education and research institutions,
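Returning to the 'relative characteristics' criterion above, a minimal sketch of such a size-controlled indicator (the records and field names are hypothetical):

```python
# Normalize a raw count by institutional size so that large and small
# institutions become comparable; illustrative data, not U-Multirank's schema.
institutions = [
    {"name": "Alpha U", "patents_awarded": 120, "academic_staff_fte": 3000},
    {"name": "Beta College", "patents_awarded": 15, "academic_staff_fte": 250},
]

for inst in institutions:
    relative = inst["patents_awarded"] / inst["academic_staff_fte"]
    print(f"{inst['name']}: {relative:.3f} patents per FTE academic staff")
```

On the relative measure the smaller institution (0.060 per FTE) outperforms the larger one (0.040 per FTE), a difference that the absolute counts would hide.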
the indicators selected for the pre-test phase in U-Multirank (see 6.2) were then grouped into three categories:
The outcome of the pre-test was then used as further input for the wider pilot where the actual data was collected to quantify the indicators for U-Multirank at both the institutional and the field level.
3.3.1 Teaching and learning. Education is the core activity in most higher education and research institutions.
Moreover, the pace of change of higher education and research institutions means that long-term performance is of low predictive value for judgments on the future of those institutions.
All we could aspire to in a ranking is to assess 'early warning indicators' of higher education's contribution,
As one of the main objectives of our U-Multirank project is to inform stakeholders such as students,
even if they have become almost ubiquitous in this world's higher education, are too diverse to lead to comparable indicators (see chapter 1:
some quality assurance procedures focus on programs, others on entire higher education institutions; they have different foci, use different data,
Nevertheless we should remain vigilant to uncover signs of university efforts to manipulate their students' responses;
& Learning indicators that were selected for the pilot test of U multirank. The column on the right-hand side includes some of the comments
Data availability poses problem. 55 5 Time to degree Average time to degree as a percentage of the official length of the program (bachelor and master) Reflects effectiveness of teaching process.
Overall judgment of program Overall satisfaction of students with their program and the situation at their higher education institution Refers to single question to give anoverall'assessment;
The attractiveness of the university's exchange programs and the partner universities; availability of exchange places;
transfer of credits from the exchange university; integration of the stay abroad into studies (no time loss caused by the stay abroad) and support in finding internships abroad. 26. Student satisfaction:
University webpage: quality of information for students on the website. Index of several items including general information on institution and admissions, information about the program, information about classes/lectures;
and recommends that 'graduates of programs accredited by any of the signatory bodies be recognized by the other bodies as having met the academic requirements for entry to the practice of engineering' (www.washingtonaccord.org).
In engineering, adherence to the Washington Accord depends on national-level agencies, not on individual higher education institutions' strategies.
3.3.2 Research. Selecting indicators for capturing the research performance of a higher education and research institution or a disciplinary unit (e.g. department,
faculty) within that institution has to start with the definition of research. We take the definition set out in the OECD's Frascati Manual:
Given the increasing complexity of the research function of higher education institutions and its extension beyond PhD-awarding institutions, U-Multirank adopts a broad definition of research,
There is a growing diversity of research missions across the classical research universities and the more vocationally oriented institutions (university colleges, institutes of technology, universities of applied sciences, Fachhochschulen, etc.).
Given that in most disciplines publications are often seen as the single most important research output of higher education institutions,
The Expert Group on Assessment of University Based Research12 defines research output as referring to individual journal articles, conference publications, book chapters, artistic performances, films, etc.
http://www.kowi.de/Portaldata/2/Resources/fp/assessing-europe-university-based-research.pdf Table 3-2:
Expert Group on Assessment of University-Based Research (2010). Apart from using existing bibliometric databases,
Recommended by Expert Group on University-based Research. Difficult to separate teaching and research expenditure in a uniform way. 2. Research income from competitive sources: income from European research programs + income from other international competitive research programs + income from research councils + income
an indicator reflecting arts-related output is included in U-Multirank as well. However, data availability poses some challenges here.
Therefore it was decided to keep them in the list of indicators for U-Multirank's institutional ranking.
3.3.3 Knowledge transfer. Knowledge transfer has become increasingly relevant for higher education and research institutions as many nations and regions strive to make more science output readily available for economic, social and cultural development.
The process by which the knowledge, expertise and intellectually linked assets of Higher Education Institutions are applied constructively beyond Higher Education for the wider benefit of the economy and society, through two-way engagement with business, the public sector, cultural and community partners.
the knowledge exchange) process in higher education and research institutions and ultimately on users, i e. business and the economy, has now become a preoccupation of many governing and funding bodies, as well as policy-makers.
Traditionally TT is concerned primarily with the management of intellectual property (IP) produced by universities and other higher education and research institutions.
Higher education and research institutions often have technology transfer offices (TTOs) (Debackere & Veugelers, 2005), which are units that liaise with industry and assist higher education and research institutions' personnel in the commercialisation of research results.
TTOs provide services in terms of assessing inventions, patenting, licensing IP, developing and funding spin-offs and other start-ups, and approaching firms for contract-based arrangements.
A typical classification of mechanisms and channels for knowledge transfer between higher education and research institutions and other actors would include four main interaction channels for communication between higher education and research institutions and their environment:
however, already under the research dimension in U-Multirank. In the case of texts, it is customary to distinguish between two forms:
While publications are part of the research dimension in U-Multirank, patents will be included under the Knowledge Transfer dimension.
& Learning and Regional Orientation dimensions included in U-Multirank. Knowledge transfer through people also takes place through networks
such as the one carried out by the US-based Association of University Technology Managers (AUTM) for its Annual Licensing Survey.
films and exhibition catalogues have been included in the scholarly outputs covered in the Research dimension of U-Multirank.
such as the Higher Education-Business and Community Interaction (HE-BCI) Survey in the UK.14 This UK survey began in 2001
U-Multirank particularly wants to capture aspects of knowledge transfer performance. However, given the state of the art in measuring knowledge transfer (Holi et al.,
2008)
(14 http://ec.europa.eu/invest-in-research/pdf/download_en/knowledge_transfer_web.pdf. The HE-BCI survey is managed by the Higher Education Funding Council for England (HEFCE)
and used as a source of information to inform the funding allocations to reward the UK universities' third-stream activities.)
2008) aims to create a ranking methodology for measuring university third mission activities along three subdimensions:
is regarded as a relevant indicator by EGKTM. 3. University-industry joint publications: relative number of research publications that list an author affiliate address referring to a business enterprise or a private-sector R&D unit;
Less relevant for HEIs oriented to humanities and social sciences. ISI databases available. Used in the CWTS University-Industry Research Cooperation Scoreboard. (16 See also the brief section on the EUMIDA project,
included in this report. One of EUMIDA's findings is that data on technology transfer activity …)
which the university acts as an applicant, related to the number of academic staff. Widely used in KT surveys.
Depends on disciplinary mix of HEI. Data are available from secondary (identical) data sources. 5 Size of Technology Transfer Office Number of employees (FTE) at Technology Transfer Office related to the number of FTE
KT function may be dispersed across the HEI. Not regarded as core indicator by EGKTM. 6 CPD courses offered Number of CPD courses offered per academic staff (fte) Captures outreach to professions Relatively new indicator.
CPD difficult to describe uniformly. 7 Co-patents Percentage of university patents for which at least one co-applicant is a firm,
as a proportion of all patents Reflects extent to which HEI shares its IP with external partners.
Depends on disciplinary mix of HEI. Data available from secondary sources (PATSTAT). 8 Number of spin-offs: The number of spin-offs created over the last three years per academic staff (fte). EGKTM regards spin-offs as a core indicator.
Field-based Ranking Definition Comments. 9 Academic staff with work experience outside higher education: Percentage of academic staff with work experience outside higher education within the last 10
years. Signals that the HEI's staff is well placed to bring work experience into their academic work.
HEIs not doing research in natural sciences/engineering/medical sciences hardly covered. 11 Co-patents: Percentage of university patents for
HEIs not doing research in natural sciences/engineering/medical sciences hardly covered. Number of licences more robust than licensing income. 14 Patents awarded: The number of patents awarded to the university related to the number of academic staff. Widely used KT indicator.
Data available from secondary (identical) data sources. Patents with an academic inventor but with (an)other institutional applicant(s) are not taken into account.
Not relevant for all fields. 15 University-industry joint publications: Number of research publications that list an author affiliate address referring to a business enterprise or a private-sector R&D unit,
The number of collaborative research projects (university-industry) is another example of a knowledge transfer indicator that was not selected for the Focused Institutional Ranking.
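To make the knowledge transfer measures above concrete, here is a minimal sketch of how two of them, the co-patent share and patents per fte academic staff, could be computed; the record layout and figures are illustrative assumptions, not U-Multirank's actual data or code.

```python
# Hypothetical patent records; field names are assumptions for illustration.
def co_patent_share(patents):
    """Percentage of university patents with at least one firm co-applicant."""
    if not patents:
        return 0.0
    with_firm = sum(1 for p in patents if p["has_firm_co_applicant"])
    return 100.0 * with_firm / len(patents)

def patents_per_staff(patents, academic_staff_fte):
    """Patent applications related to the number of academic staff (fte)."""
    return len(patents) / academic_staff_fte

patents = [
    {"id": "P1", "has_firm_co_applicant": True},
    {"id": "P2", "has_firm_co_applicant": False},
    {"id": "P3", "has_firm_co_applicant": True},
]
print(co_patent_share(patents))           # about 66.7: two of three are co-patents
print(patents_per_staff(patents, 150.0))  # 0.02 applications per staff fte
```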
3.3.4 International orientation
Internationalization is widely discussed as a complex phenomenon in higher education. The rise of globalization and Europeanization has put growing pressure on higher education
and research institutions to respond to these trends and develop an international orientation in their activities.
the increasing internationalization of curricula; and the wish to increase the international position and reputation of higher education and research institutions (Enquist, 2005).
but bias towards certain disciplines and languages. 5 Number of joint degree programs: The number of students in joint degree programs with a foreign university (including an integrated period at the foreign university) as a percentage of the total
number of students enrolled. An important indicator of the 'international atmosphere' of a faculty/department. Addresses student mobility and curriculum quality.
Internationalization of programs: Index including the attractiveness of the university's exchange programs, the attractiveness of the partner universities, the sufficiency of the number of exchange places;
the transfer of credits from the exchange university; the integration of [...]. Addresses quality of the curriculum.
Data available but sensitive to location (distance to border) of HEI. Stakeholders consider the indicator important. 13 Student satisfaction:
While this indicates the commitment of the higher education and research institution to internationalization, and data is available,
'International partnerships', that is, the number of international academic networks a higher education and research institution participates in,
Higher education and research institutions can play an important role in the process of creating the conditions for a region to prosper.
How well a higher education and research institution is engaged in the region is considered increasingly to be an important part of the mission of higher education institutions.
The latter two dimensions are covered in the U-Multirank dimension 'Knowledge Transfer'. Indicators for the social dimension of the third mission comprise indicators on international mobility (covered in the U-Multirank dimension 'International Orientation') and a very limited number of indicators on regional engagement.
Activities and indicators on regional and community engagement can be categorized in three groups: outreach, partnerships and curricular engagement.18
and provision of institutional resources for regional and community use, benefiting both the university and the regional community.
learning and scholarship that engage faculty, students and region/community in mutually beneficial and respectful collaboration.
funding) and how much does the region draw on the resources provided by the higher education and research institution (graduates and facilities)?
U-Multirank has suggested starting with the existing list of regions in the Nomenclature of Territorial Units for Statistics (NUTS) classification developed
In our feasibility study, we have allowed higher education and research institutions to specify their own delimitation of their region.
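As a small aside on how such a delineation could be operationalised, NUTS codes are hierarchical, so region membership can be checked by a simple prefix test; the sketch below rests on that assumption and uses made-up codes, and is not part of the actual U-Multirank tooling.

```python
# Minimal sketch of NUTS-based regional delineation; codes are illustrative.

def in_region(address_nuts: str, hei_nuts: str) -> bool:
    """True if an address lies in the HEI's region: NUTS codes are
    hierarchical, so a NUTS3 code starts with its parent NUTS2 code."""
    return address_nuts.startswith(hei_nuts)

# A NUTS2 delimitation also covers its NUTS3 sub-regions.
print(in_region("FI1D2", "FI1D"))  # True: NUTS3 region inside the NUTS2 region
print(in_region("SE110", "FI1D"))  # False: a different region entirely
```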
Sensitive to the way public funding for HEIs is organized (national versus regional/federal systems). Availability of data problematic. 3 Regional joint research publications: Number of research publications that list one or more author-affiliate addresses in the same NUTS2 or NUTS3 region,
and/or credits) Internships open up communication channels between HEI and regional/local enterprises. Stakeholders see this as important indicator.
as a percentage of all graduates employed See above institutional ranking. 8 Regional participation in continuing education Number of regional participants (coming from NUTS3 region where HEI is located) as percentage of total number
of the population in the NUTS3 region aged 25+. Indicates how much the HEI draws on the region and vice versa.
but disciplinary bias not problematic at field level. 10 Summer school/courses for secondary education students Number of participants in schools/courses for secondary school students as a percentage of total enrolment
'Co-patents with regional firms' reflect cooperative research activities between higher education institutions and regional firms. While data may be found in international patent databases,
The same holds for measures of the regional economic impact of a higher education institution, such as the number of jobs generated by the university.
Assessing what the higher education and research institution 'delivers' to the region (in economic terms) is seen as most relevant
but data constraints prevent us from the use of such an indicator. Public lectures that are open to an external
A high percentage of new entrants from the region may be seen as the result of the high visibility of regionally active higher education and research institutions.
It may also be a result of the engagement with regional secondary schools. This indicator, however, was not included in our list,
and 7 the pilot study on the empirical feasibility assessment of the U-Multirank tool and its various indicators will be discussed.
As a result of this pilot assessment the final list of indicators will be presented. 4 Constructing U-Multirank:
and data collection instruments used in constructing U multirank. The first part is an overview of existing databases mainly on bibliometrics and patents.
and from students.
4.2 Databases
4.2.1 Existing databases
One of the activities in the U-Multirank project was to review existing rankings
If existing databases can be relied on for quantifying the U multirank indicators this would be helpful in reducing the overall burden for institutions in handling the U-Multirank data requests.
For other aspects and dimensions, U multirank will have to rely on self-reported data. Regarding research output and impact, there are worldwide databases on journal publications and citations.
To further assess the availability of data covering individual higher education and research institutions, the results of the EUMIDA project were also taken into account.21 The EUMIDA project (see:
www.eumida.org) seeks to develop the foundations of a coherent data infrastructure (and database) at the level of individual higher education institutions.
Our analysis of data availability was completed with a brief online consultation with the group of international experts connected to U-Multirank (see section 4.2.5). The international experts were asked to give their assessment of the situation with respect to data availability in some of the non-EU countries included in U-Multirank. 21 The U-Multirank project was granted access to the preliminary EUMIDA data in order to learn about data availability in the countries covered by EUMIDA.
4.2.2 Bibliometric databases
There are a number of international databases
which can serve as a source of information on the research output of a higher education and research institution (or one of its departments).
The production of publications by a higher education and research institute not only reflects research activities in the sense of original scientific research,
but usually also the presence of underlying capacity and capabilities for engaging in sustainable levels of scientific research. 22 The research profile of a higher education
The bibliometric methodologies applied in international comparative settings such as U multirank usually draw their information from publications that are released in scientific and technical journals.
U multirank therefore makes use of international bibliometric databases to compile some of its research performance indicators
To compile the publications-related indicators in the U multirank pilot study, bibliometric data was derived from the October 2010 edition of the Web of Science bibliographical database.
and operated by the CWTS (one of the CHERPA Network partners) under a full license from Thomson Reuters. This dedicated version includes the 'standardized institutional names' of higher education
This data processing of address information is done at the aggregate level of the entire 'main' organization (not for sub-units such as departments or faculties).
All the selected institutions in the U multirank pilot study produced at least one Web of Science-indexed research publication during the years 1980-2010.
which mainly refer to discovery-oriented 'basic' research of the kind that is conducted at universities and research institutes.
For the following six indicators selected for inclusion in the U multirank pilot test (see chapter 6) one can derive data from the CWTS/Thomson Reuters Web of Science database:
1. total publication output
2. university-industry joint publications
3. international joint publications
4. field-normalized citation rate
5. share of the world
#6) that were constructed specially for U multirank and that have never been used before in any international classification or ranking.
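To illustrate the logic of a field-normalized citation rate, the sketch below divides each publication's citation count by an assumed world average for its field and year and averages the ratios; this is only a schematic rendering in the spirit of such indicators, with invented baselines, and the exact CWTS normalization may differ in detail.

```python
from statistics import mean

# Assumed world-average citation baselines per (field, year); invented numbers.
world_avg = {("chemistry", 2009): 8.0, ("economics", 2009): 3.0}

# Hypothetical publications of one institution.
pubs = [
    {"field": "chemistry", "year": 2009, "cites": 12},
    {"field": "economics", "year": 2009, "cites": 3},
]

def field_normalized_citation_rate(pubs):
    """Average of citations relative to the world average of each
    publication's field and publication year (1.0 = world average)."""
    return mean(p["cites"] / world_avg[(p["field"], p["year"])] for p in pubs)

print(field_normalized_citation_rate(pubs))  # (12/8 + 3/3) / 2 = 1.25
```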
4.2.3 Patent databases
As part of the indicators in the Knowledge Transfer dimension, U-Multirank selected the number of patent applications for
which a particular higher education and research institution acts as an applicant and (as part of that) the number of co-patents applied for by the institution together with a private organization.
For U-Multirank, patent data were retrieved from the European Patent Office (EPO). Its Worldwide Patent Statistical Database (version October 2009),25 also known as PATSTAT, is designed
and by DG Research. 25 This version is held by the K.U. Leuven (Catholic University Leuven)
4.2.4 Data availability according to EUMIDA
Like the U-Multirank project, the EUMIDA project (see http://www.eumida.org) collects data on individual higher education and research institutions.
The EUMIDA and U multirank project teams agreed to share information on issues such as definitions of data elements
The overlap lies mainly in the area of data related to the inputs (or activities) of higher education and research institutions.
since U-Map aims to build activity profiles for individual institutions whereas U multirank constructs performance profiles.
The findings of EUMIDA point to the fact that for the more research-intensive higher education institutions, data for the dimensions of Education and Research are covered relatively well.
Table 4-1 below shows the U-Multirank data elements that are covered in EUMIDA and whether information on these data elements may be found in national databases (statistical offices, ministries, rectors' associations, etc.).
The table illustrates that information on only a few U multirank data elements is available from national databases and,
Table 4-1: Data elements shared between EUMIDA and U-Multirank: their coverage in national databases. Columns: dimension; EUMIDA and U-Multirank data element; European countries where the data element is available in national databases.
Teaching & Learning: relative rate of graduate unemployment: CZ, FI, NO, SK, ES. Research: expenditure on research: AT*, BE, CY, CZ*, DK, EE, FI, GR*, HU, IT, LV*, LT*, LU, MT*, NO, PL*, RO*, SI*, ES, SE, CH,
IE*, IT, LU, MT*, NO, NL (p), PL*, SI, ES, UK. International Orientation: (no overlap between U-Multirank and EUMIDA). Regional Engagement: (no overlap between U-Multirank and EUMIDA). Source:
There are confidentiality issues (e.g. national statistical offices may not be prepared to make data public without consulting individual HEIs). (p) indicates:
data are only partially available (e.g. only for public HEIs, or only for (some) research universities). The list of EUMIDA countries with abbreviations:
Austria (AT), Belgium (BE), Belgium-Flanders community (BE-FL), Bulgaria (BG), Cyprus (CY), Czech Republic (CZ), Denmark (DK), Estonia (EE), Finland (FI), France (FR), Germany (DE)
4.2.5 Expert view on data availability in non-European countries
The Expert Board of the U-Multirank project was consulted to assess, for their six countries, all from outside Europe, the availability of data
related to the U-Multirank indicators.27 They gave their judgement on the question of whether data was available in national databases and/or in the institutions themselves.
Availability of U-Multirank data elements in countries' national databases according to experts in 6 countries (Argentina/AR, Australia/AU, Canada/CA, Saudi Arabia/SA, South Africa/ZA,
United States/US). Dimension; U-Multirank data element; countries where the data element is available in national databases; countries where the data element is available in institutional databases. Teaching & Learning
AU, CA, SA, ZA incentives for knowledge exchange AR AR, AU, CA, SA CPD courses offered AU, CA, SA, ZA university-industry
Based on the U-Multirank expert survey. If we look at the outcomes, it appears that for the Teaching
or definitions that differ from the ones used for the questionnaires applied in U multirank (see next section).
the U multirank project had to rely largely on self-reported data (both at the institutional
collected directly from the higher education and research institutions. The main instruments to collect data from the institutions were four online questionnaires:
the U-Map questionnaire is an instrument for identifying similar subsets of higher education institutions within the U multirank sample.
Respondents from the institutions were advised to complete the U-Map questionnaire first before completing the other questionnaires.
4.3.1.2 Institutional questionnaire
By means of U-Multirank's institutional questionnaire,28
university hospital; students: enrolments; programme information: bachelor/master programmes offered; CPD courses; graduates: graduation rates;
Data elements from U-Map are transferred automatically to the U-Multirank questionnaire using a 'transfer tool'.
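The idea behind such a transfer tool can be sketched as a simple mapping of overlapping data elements; the element names below are assumptions for illustration and do not reproduce the actual questionnaires.

```python
# Mapping of data elements shared by the two questionnaires (assumed names).
UMAP_TO_UMR = {
    "students_enrolled": "students_enrolled",
    "academic_staff_fte": "staff_fte_academic",
}

def prefill(umap_answers: dict, umr_answers: dict) -> dict:
    """Copy overlapping U-Map answers into the U-Multirank institutional
    questionnaire, leaving answers the respondent already gave untouched."""
    for src, dst in UMAP_TO_UMR.items():
        if src in umap_answers and dst not in umr_answers:
            umr_answers[dst] = umap_answers[src]
    return umr_answers

print(prefill({"students_enrolled": 12000, "academic_staff_fte": 800}, {}))
```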
The academic year 2008/2009 was selected as the default reference year.
4.3.1.3 Field-based questionnaire
The field-based questionnaire includes information on individual faculties/departments
summer schools/courses for secondary education students; description: accreditation of department; profile with regard to teaching & learning, profile with regard to research.
The U multirank questionnaires were tested in terms of cultural/linguistic understanding, clarity of definitions of data elements and feasibility of data collection.
In selecting the institutions for the pre-test the U multirank team considered the geographical distribution and the type of institutions.
Based on approved instruments from other fields (e.g. surveys on health services) we have used 'anchoring vignettes' to test sociocultural differences in assessing specific constellations of services/conditions in higher education with respect to teaching and learning.
to cover all relevant issues on the five dimensions of U multirank and to limit the questionnaire in terms of length.
In order to come to a meaningful and comprehensive set of indicators at the conclusion of the U multirank pilot study we had to aim for a broad data collection to cover a broad range of indicators.
a number of supporting instruments were prepared for the four U multirank surveys. These instruments ensure that respondents will have a common understanding of definitions and concepts.
A glossary of indicators for the four surveys was published on the U multirank website. Throughout the data collection process the glossary was updated regularly.
This allowed questions to be asked concerning the questionnaires and for contact with the U multirank team on other matters.
A technical specifications protocol for U multirank was developed introducing additional functions in the questionnaire to ensure that a smooth data collection could take place:
the option to transfer data from the U-Map to the U multirank institutional questionnaire, and the option to have multiple users access the questionnaire at the same time.
We updated the U multirank website regularly and provided information about the steps/time schedules for data collection.
All institutions had clear communication partners from the U-Multirank team.
4.4 A concluding perspective
This chapter, providing a quick survey of existing databases,
and include employers and other clients of higher education and research institutions, but that would make the task even bigger.
The U multirank questionnaires therefore were accompanied by a glossary of definitions and an FAQ facility to improve the reliability of the answers.
However, as a result of differences in national higher education systems, different accounting systems, as well as different national customs and definitions of indicators, there are limits to the comparability of data.
5 Testing U-Multirank: pilot sample and data collection
and construction process for U multirank, we will describe the feasibility testing of this multidimensional ranking tool.
This test took place in a pilot study specifically undertaken to analyse the actual feasibility of U multirank on a global scale.
one of the basic ideas of U multirank is the link to U-Map. U-Map is an effective tool to identify institutional activity profiles of institutions similar enough to compare them in rankings.
which makes it insufficiently applicable for the selection of the sample of pilot institutions for the U multirank feasibility test.
We do not (and cannot) claim that we have designed a sample that is representative of the full diversity of higher education in the world (particularly as there is no adequate description of this diversity).
The existing set of higher education institutions in the U-Map database was included. This offered a clear indication of a broad variety of institutional profiles. Some universities applied through the U-Multirank website to participate in the feasibility study.
Their broad profiles were checked as far as is possible against the U-Map dimensions in order to be able to describe their profiles.
In most countries, 'national correspondents' (a network created by the research team) were asked to suggest institutions that would reflect the diversity of higher education institutions in their country.
an Institute for Water and Environment, an agricultural university, a School of Petroleum and Minerals, a military academy, several music academies and art schools, universities of applied sciences and a number of technical universities.
The 159 institutions that agreed to take part in the U multirank pilot are spread over 57 countries.
Our national correspondents explained that Chinese universities are reluctant to participate in rankings when they cannot predict the outcomes of participation
In the US the U multirank project is perceived as strongly European-focused, which kept some institutions from participating.
The problems with some countries are an important aspect regarding the feasibility of a global implementation of U multirank.
All in all the intention to attain a sufficient international scope in the U multirank pilot study by means of a global sample can be seen as successful.
Table: Regional distribution of participating institutions. Columns: region and country; initial proposal for number of institutions (July 2010); institutions in the final pilot selection (February 2011); institutions that confirmed participation (April 2011); institutions which delivered U-Multirank institutional data (April 2011); institutions which delivered U-Multirank institutional data and U-Map data.
I. EU 27 (population in millions): Austria (8m): ...
Netherlands (16m): 3, 7, 3, 3, 3. Poland (38m): 6, 12, 7, 7, 6.
Other Asia: 5, 2. The Philippines: 1, 1, 1. Taiwan: 1, 1, 0. Vietnam: 2, ...
19 institutions are in the top 200 of the Times Higher Education ranking, 47 in the top 500 of the ARWU ranking and 47 in the top 500 of the QS ranking.
Since the exact number of higher education institutions in the world is not known, we use a rough estimate of 15,000 institutions worldwide.
In that case the top 500 comprises only 3% of all higher education institutions. In our sample 29% of the participating institutions are in the top 500,
and master's students to take part in a survey. 106 departments agreed to do so. Some institutions decided to submit the information requested in the departmental questionnaire
o U-Multirank institutional questionnaire
Field-based ranking:
o U-Multirank field-based questionnaires
o U-Multirank student survey
Figure 5-1: U-Multirank data collection process
The institutions were given seven weeks to collect the data, with deadlines set according to the dates the institutions confirmed their participation.
The 'grouping' criterion for this was the successful submission of the contact form. The next step to ensure a high response rate was to review
We advised the institutions to start working with the questionnaires in a certain order beginning with the U-Map and then the U multirank questionnaires
since a tool had been developed to facilitate the transfer of overlapping information from the U-Map questionnaire to the U multirank institutional questionnaire.
Organising a survey among students on a global scale was one of the major challenges in U multirank.
critical comments indicated some confusion about the relationship between the U-Map and U multirank institutional questionnaires.
regional engagement) (see Figure 5-4). [Figure 5-4: institutions' ratings, from 'very good' to 'very poor', of the general procedures and of communication with U-Multirank.]
and the U multirank technical specification email (see appendices 10 and 11) with the institutions to ensure that a smooth data collection could take place.
all universities had clear communication partners in the U multirank team. The main part of the verification process consisted of the data cleaning procedures after receiving the data.
and master/long national first degree programs. Students who had spent little time on the questionnaire and had not responded adequately.
Note that this set includes four new performance indicators that have never been used before in any international ranking of higher education institutions.
The following four indicators have been especially designed for U-Multirank: International joint research publications; University-industry joint research publications;
Regional joint research publications; Highly cited research publications. Further information on each of the six bibliometric indicators used in the pilot study is presented below. 1) Total publication output: Frequency count of research publications with at least one author address referring to the selected main organization.
This is an indicator of research collaboration with partners located in other countries. 3) University-industry joint research publications: Frequency count of publications with at least one author address referring to the selected main organization
Statistical information on 500 universities worldwide is freely available at the CWTS website: www.socialsciences.leiden.edu/cwts/products-services/scoreboard.html. 4) Regional joint research publications: Frequency count of publications with at least one author address referring to the selected main organization
In a possible next stage of U multirank we expect to apply a different, and more flexible, way of delineating regions
The bibliometric data in the pilot version of the U-Multirank database refer to one measurement per indicator.
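The frequency-count logic behind these indicators can be illustrated with a minimal sketch: a publication is counted once per indicator if at least one of its author addresses satisfies the criterion. The address metadata, sector labels and codes below are assumptions for the example, not the WoS data model.

```python
# Hypothetical address-level metadata per publication.
pubs = [
    {"addresses": [{"org": "univ_x", "country": "FI", "sector": "higher_ed", "nuts3": "FI1D2"},
                   {"org": "firm_y", "country": "DE", "sector": "business", "nuts3": "DE212"}]},
    {"addresses": [{"org": "univ_x", "country": "FI", "sector": "higher_ed", "nuts3": "FI1D2"},
                   {"org": "univ_z", "country": "FI", "sector": "higher_ed", "nuts3": "FI1D2"}]},
]

def counts(pubs, org="univ_x", home_country="FI", home_nuts3="FI1D2"):
    """Count publications of the selected main organization and, among them,
    international, university-industry and regional joint publications."""
    total = intl = industry = regional = 0
    for p in pubs:
        addrs = p["addresses"]
        if not any(a["org"] == org for a in addrs):
            continue  # not a publication of the selected main organization
        total += 1
        intl += any(a["country"] != home_country for a in addrs)
        industry += any(a["sector"] == "business" for a in addrs)
        regional += any(a["org"] != org and a["nuts3"] == home_nuts3 for a in addrs)
    return {"total": total, "international": intl,
            "university_industry": industry, "regional": regional}

print(counts(pubs))
# {'total': 2, 'international': 1, 'university_industry': 1, 'regional': 1}
```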
Although all the HEIs that participated in the U-Multirank pilot study produced at least one WoS-indexed research publication during the years 1980-2010
In follow-up stages of U-Multirank we plan to lower the threshold values for WoS-indexed publication output
Depending on the severity of the problem within a HEI, we can then either: remove the institution from all indicators that involve bibliometric data;
The development of patent indicators on the micro-level of specific entities such as universities is complicated by the heterogeneity of patentee names that appear in patent documents within and across patent systems.
was developed by ECOOM (Centre for R&D Monitoring, Leuven University; a partner in CHERPA), in partnership with Sogeti,29 in the framework of the EUROSTAT work on Harmonized Patent Statistics.
Second, and specifically for the U multirank pilot, keyword searches were designed and tailored for each institute individually,
Several, mostly European, studies have compared the volumes of such 'university-invented' patents (invented by an academic scientist)
versus 'university-owned' patents (with the university registered as applicant). [Figure: percentage of institutes from the pilot (N=165) by annual average patent volume, 2000-2007.] Evidence from studies in France
suggests that about 60% of university-invented patents are not university-owned. The available evidence from some US studies indicates much smaller percentages (approximately 20%) of university-invented patents that are not university-owned (Thursby et al.
2007). Moreover, national and institutional differences in culture and legislation regarding intellectual property rights on university-created knowledge will cause the size of the consequential 'bias' to vary between countries.
Institutional and national differences may concern the autonomy of institutions, the control they exercise over their academic staff,
academic patents in Europe (i.e. patents invented by academic scientists) are much less likely to be 'owned' by universities
(i.e. the university is registered as applicant) than in the USA, as European universities have lower incentives to patent
or generally have less control over their scientists' activities (Lissoni et al., 2008). This does not mean that European academic scientists do not effectively contribute to the inventive activity taking place in their countries,
as one might presume from considering only the statistics on university-owned patents. On the contrary, the data provided
where universities own the majority of academic patents, Europe witnesses the dominance of business companies,
one should at all times bear in mind the relatively sizable volume of university-invented patents that is not retrieved by the institution-level search strategy and institutional and national variations in the size of the consequential limitation bias.
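To give a flavour of what an institution-level, name-based search strategy involves, the sketch below normalizes applicant names and matches them against a few name variants tailored to one institute; the variants are illustrative assumptions, and real harmonization (as in the ECOOM/Sogeti work) is considerably more elaborate.

```python
import re

def normalize(name: str) -> str:
    """Crude normalization: lower-case, strip punctuation, collapse spaces."""
    name = name.lower()
    name = re.sub(r"[^a-z ]", " ", name)
    return re.sub(r"\s+", " ", name).strip()

# Tailored keyword patterns for one (hypothetical) institute's name variants.
VARIANTS = [
    r"\bkatholieke universiteit leuven\b",
    r"\bk u leuven\b",
]

def is_applicant(applicant_name: str) -> bool:
    """True if a patentee name matches one of the institute's variants."""
    n = normalize(applicant_name)
    return any(re.search(v, n) for v in VARIANTS)

print(is_applicant("K.U. Leuven R&D"))                 # True
print(is_applicant("Katholieke Universiteit Leuven"))  # True
print(is_applicant("Leuven Instruments NV"))           # False
```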
We have argued that the field-based rankings of indicators in each dimension contribute significantly to the value and the usability of U multirank.
At present, however, the breakdown of patent indicators by the fields defined in the U multirank pilot study (business studies,
The overview of higher education fields is based on educational programs, research fields and other academically-oriented criteria.
Due to the consequential large difference in the notions that underlie 'higher education field' versus 'technology field', a concordance between the two is not meaningful.
Therefore we were unable to produce patent analyses at the field-based level of U-Multirank.
6 Testing U-Multirank: results
6.1 Introduction
The main objective of the pilot study was to empirically test the feasibility of the U-Multirank instrument.
and the potential upscaling of U-Multirank to a globally applicable multidimensional ranking tool.
6.2 Feasibility of indicators
In the pilot study we analyzed the feasibility of the various indicators that were selected after the multi-stage process of stakeholder
the indicator focuses on the performance of (programs in) higher education and research institutions and is defined in such a way that it measures 'relative' characteristics (e.g. controlling for the size of the institution). o Face validity:
and the Advisory Group.
6.2.1 Teaching & Learning
The first dimension of U-Multirank is Teaching & Learning.
efforts should be made to enhance the data situation on cultural research outputs of higher education institutions. This cannot be done by producers of rankings alone;
In general, the data delivered by faculties/departments revealed some problems in clarity of definition of staff data.
transfer: A, a. Patents awarded**: A, b. University-industry joint research publications*: A, a. CPD courses offered per fte academic staff: B, b. Start-ups per fte academic staff
Robustness; Availability; Preliminary rating; Feasibility score; Data availability; Conceptual clarity; Data consistency; Recommendation. University-industry joint research publications*: A, a. Academic
and institutions do not record numbers of students with self-organized stays at foreign universities.
and could assess the support provided by their university. The indicator 'international orientation of programs' is a composite indicator referring to several data elements;
Some universities had difficulties identifying their international staff based on this definition.
6.2.5 Regional engagement
Up to now the regional engagement role of universities has not been included in rankings.
There are a number of studies on the regional economic impact of higher education and research institutions,
either for individual institutions and their regions or on higher education in general. Those studies do not offer comparable institutional indicators or indicators disaggregated by fields.
Table 6-10: Focused institutional ranking indicators: Regional Engagement. Columns: rating of indicators (pre-pilot); feasibility score (post-pilot); focused institutional ranking; relevance; concept/construct validity; face validity
which caused some problems in non-European higher education institutions. But even within Europe NUTS regions are seen as problematic by some institutions,
and the relevance of higher education and research to the regional economy and the regional society at large,
and knowledge of local higher education institutions to be utilized in a regional context, in particular in small-and medium-sized enterprises.
and in many non-metropolitan regions they play an important role in the recruitment of higher education graduates.
6.3 Feasibility of data collection
As explained in section 5.3, data collection during the pilot
the U-Map questionnaire to identify institutional profiles; the U-Multirank institutional questionnaire; the U-Multirank field-based questionnaire. We supported this data collection with extensive data cleaning processes,
The parallel institutional data collection for U-Map and U multirank caused some confusion. Although a tool was implemented to pre-fill data from U-Map into U multirank,
some confusion remained concerning the link between the two instruments. In order to test some varieties, institutional and field-based questionnaires were implemented with different features (e g. definition of international staff).
In some countries the U multirank student survey conflicted with existing national surveys, which in some cases are highly relevant for institutions.
It should be evaluated how far U multirank and national surveys could be harmonized in terms of questionnaires and, at least, in terms of timing.
up to now they have not been used in comparative higher education research. Hence we had to develop our own approach to this research technique.
To assess the feasibility of our bibliometric data collection we studied the potential effects of a bottom-up verification process via a special case study of six French universities.
For example, even a seemingly universal name such as 'university' may not describe the same institutional reality in different systems. In England or in the US
some so-called 'universities' may in fact be umbrella organizations covering several autonomous universities, while in France many universities are thematic and derive from one comprehensive 'root' university.
Second, most of the national research systems tend to become more complex under the pressure of 'funding on project' policies that induce the setup of various network-like institutions such as consortia, platforms and 'poles'.
For communication purposes, authors may prefer to replace the name of the university with the name of a network.
In addition, in some countries like France, universities, schools and research institutions can be interwoven by many joint labs,
Several studies have shown that the volume of such university-invented patents is sizable (Azagra Caro et al.
is it possible to extend U multirank to a comprehensive global coverage and how easy would it be to add additional fields?
In terms of the feasibility of U multirank as a potential new global ranking tool, the results of the pilot study are positive,
and taking into account that U-Multirank is clearly a Europe-based initiative, this represents a strong expression of worldwide interest.
Our single caveat concerns an immediate global-level introduction of U multirank. The pilot study suggests that a global multidimensional ranking is unlikely to prove feasible in the sense of achieving extensive coverage levels across the globe in the short term.
Higher education and research institutions in the USA showed very limited interest in the study, while in China formal conditions appeared to hamper the participation of institutions.
From their participation in the various stakeholder meetings, we can conclude that there is broad interest in the further development and implementation of U multirank.
And we believe that there are opportunities for the targeted recruitment of groups of institutions from outside Europe of particular interest to European higher education.
and 2) the conceptual interpretation of the 'user-driven approach' applied in U-Multirank. It would be interesting to involve LERU again during a follow-up project
The other aspect of the potential up-scaling of U multirank is the extension to other fields.
Any extension of U multirank to new fields must deal with two questions: the relevance and meaningfulness of existing indicators for those fields,
While the U multirank feasibility study focused on the pilot fields of business studies and engineering, some issues of up-scaling to other fields have been discussed in the course of the stakeholder consultation.
Following the user-and stakeholder-driven approach of U multirank, we suggest that field-specific indicators for international rankings should be developed together with stakeholders from these fields.
when additional fields are addressed in U multirank, some specific field indicators will have to be developed. Based on the experience of the CHE ranking this will vary by field with some fields requiring no additional indicators
we conclude that up-scaling in terms of addressing a larger number of fields in U-Multirank is certainly feasible.
7 Applying U-Multirank:
A few rankings (e.g. the Taiwanese College Navigator published by HEEACT30 and the CHE ranking) implemented tools to produce a personalised ranking, based on user preferences and priorities with regard to the set of indicators.
This approach implies the user-driven notion of ranking, which is also a basic feature of U-Multirank.
The presentation of U multirank results outlined in this chapter strictly follows this user-driven approach. But by relating institutional profiles (created in U-Map) with multidimensional rankings
U multirank introduces a second level of interactive ranking beyond the user-driven selection of indicators:
internationally-oriented research universities. U multirank has a much broader scope and intends to include a wider variety of institutional profiles.
We argue that it does not make much sense to compare institutions across diverse institutional profiles.
Hence U-Multirank offers a tool to identify and select institutions that are truly comparable in terms of their institutional profiles.
7.2 Mapping diversity: combining U-Map and U-Multirank
From the beginning of the U-Multirank project one of the basic aims was that U-Multirank should be, in contrast to existing global rankings,
which brought about a dysfunctional short-sightedness on 'world-class research universities', a tool to create transparency regarding the diversity of higher education institutions.
and decreasing diversity in higher education systems (see chapter 1). Our pilot sample includes institutions with quite diverse missions, structures and institutional profiles.
We have applied the U-Map profiling tool to specify these profiles. 30 College Navigator: http://cnt.heeact.edu.tw/site1/index2.asp?
The combination of U-Map and U multirank offers a new approach to user-driven rankings.
and hence the sample of institutions to be compared in U-Multirank. Figure 7-1: Combining U-Map and U-Multirank. Our user-driven interactive web tool will imply both steps, too.
Users will be offered the option to decide if they want to produce a focused institutional ranking or a field-based ranking,
U-Multirank includes different ways of presenting the results.
7.3 The presentation modes
Presenting ranking results requires a general model for accessing the results,
In U multirank the presentation of data allows for both: a comparative overview on indicators across institutions,
U multirank produces indicators and results on different levels of aggregation leading to a hierarchical data model:
In U multirank we present the results alphabetically or by rank groups (see chapter 2). In the first layer of the table (field-based ranking),
7.3.2 Personalized ranking tables
The development of an interactive user-driven approach is a central feature of U-Multirank.
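A minimal sketch of such a user-driven, grouped ranking table follows: the user selects the indicators, and for each indicator institutions are assigned to a top, middle or bottom group rather than a single league-table position. The quartile thresholds and scores are assumptions for illustration, not U-Multirank's exact grouping statistics.

```python
from statistics import quantiles

# Hypothetical indicator scores per institution.
data = {
    "HEI A": {"graduation_rate": 82, "intl_joint_publications": 45},
    "HEI B": {"graduation_rate": 64, "intl_joint_publications": 61},
    "HEI C": {"graduation_rate": 91, "intl_joint_publications": 12},
    "HEI D": {"graduation_rate": 71, "intl_joint_publications": 38},
}

def grouped_ranking(data, selected_indicators):
    """Assign each institution to a top/middle/bottom group per selected
    indicator, using the quartiles of the score distribution as cut-offs."""
    table = {}
    for ind in selected_indicators:
        scores = [v[ind] for v in data.values()]
        q1, _, q3 = quantiles(scores, n=4)
        table[ind] = {hei: ("top" if v[ind] >= q3
                            else "bottom" if v[ind] <= q1
                            else "middle")
                      for hei, v in data.items()}
    return table

print(grouped_ranking(data, ["graduation_rate"]))
# HEI C lands in the top group, HEI B in the bottom group, the rest in the middle.
```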
when applying U multirank. An intuitive, appealing visual presentation of the main results will introduce users to the performance ranking of higher education institutions.
Results at a glance presented in this way may encourage users to drill down to more detailed information.
so that there is a recognizable U-Multirank presentation style and users are not confused by multiple visual styles.
and discussed at a U-Multirank stakeholder workshop; there was a clear preference for the 'sunburst' chart, similar to the one used in U-Map.
The colours symbolize the five U multirank dimensions, with the rays representing the individual indicators. In this chart the grouped performance scores of institutions on each indicator are represented by the length of the corresponding rays:
An example is a detailed view on the results of a department (the following screenshot shows a sample business administration study program at bachelor and master's level).
faculty/department (field) and program. Figure 7-4: Text format presentation of detailed results (example).
7.4 Contextuality
Rankings do not
Context variables affecting the performance of higher education institutions. Context factors that may affect the decision-making processes of users of rankings (e.g. students, researchers) although not linked to the performance of institutions.
for prospective students intending to choose a university or a study program, low student satisfaction scores regarding the support by teaching staff in a specific university or program are relevant information,
although the indicator itself cannot explain the reasons behind this judgment. Rankings also have to be sensitive to context variables that may lead to methodological biases.
size and field structure of the institution. The (national) higher education system as a general context for institutions:
this includes legal regulations (e.g. concerning access) as well as the existence of legal/official 'classifications' of institutions (e.g. in binary systems, the distinction between universities and other forms of non-university higher education institutions).
The structure of national higher education and research: the organization of research in different higher education systems is an example.
While in most countries research is integrated largely in universities, in some countries like France or Germany non-university research institutions undertake a major part of the national research effort.
A particular issue with regard to the context of higher education refers to the definition of the unit of analysis. The vast majority of rankings in higher education are comparing higher education institutions.
A few rankings explicitly compare higher education systems, either based on genuine data on higher education systems, e.g. the University Systems Ranking published by the Lisbon Council,31
or by simply aggregating institutional data to the system level (e.g. the QS National System Strength Ranking).
In this latter case global institutional rankings are used more or less implicitly to produce rankings of national higher education systems,
thereby creating various contextual problems. Both the Shanghai ranking and the QS rankings, for instance, include only universities.
The fact that they do not include non-university research institutions, which are particularly important in some countries (e g. in France,
Germany), produces a bias when their results are interpreted as a comparative assessment of the performance or quality of national higher education and research systems.
U multirank addresses the issues of contextuality by applying the design principle of comparability (see chapter 2). In U multirank rankings are created only among institutions that have sufficiently similar institutional profiles.
Combining U-Map and U multirank produces an approach in which comparable institutions are identified before they are compared in one or more rankings.
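The 'identify before you compare' step can be sketched as a simple similarity filter on activity profiles; the profile categories below are coarse illustrative assumptions, not the actual U-Map dimensions.

```python
# Coarse, hypothetical activity profiles per institution.
profiles = {
    "HEI A": {"research_intensity": "high", "intl_orientation": "high"},
    "HEI B": {"research_intensity": "high", "intl_orientation": "high"},
    "HEI C": {"research_intensity": "low",  "intl_orientation": "medium"},
}

def comparable(profiles, reference, min_shared=2):
    """Institutions sharing at least min_shared profile categories with the
    reference institution; only these would then be ranked against it."""
    ref = profiles[reference]
    return [hei for hei, p in profiles.items()
            if sum(p[k] == ref[k] for k in ref) >= min_shared]

print(comparable(profiles, "HEI A"))  # ['HEI A', 'HEI B']; HEI C is filtered out
```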
By identifying comparable institutions, the impact of contextual factors may be assumed to be reduced. In addition, U multirank intends to offer relevant contextual information on institutions and fields.
Contextual information does not allow for causal analyses but it offers users the opportunity to create informed judgments of the importance of specific contexts
while assessing performances. During the further development of U-Multirank the production of contextual information will be an important topic. 31 See www.lisboncouncil.net
7.5 User-friendliness
U-Multirank is conceived as a user-driven and stakeholder-oriented instrument. The development of the concept, the definition of the indicators, the processes of data collection and the discussion on modes of presentation have been based on intensive stakeholder consultation.
In U multirank a number of features are included to increase the user-friendliness. In the same way as there is no one-size-fits-all-approach to rankings in terms of indicators,
U multirank, as any ranking, will have to find a balance between the need to reduce the complexity of information on the one hand and, at the same time,
U multirank wants to offer a tailor-made approach to presenting results, serving the information needs of different groups of users and taking into account their level of knowledge about higher education and higher education institutions.
Basic access is provided by the various modes of presentation described above (overview tables, personalised rankings and institutional profiles).
In accordance with EU policies on eAccessibility,32 barriers to accessing the U-Multirank results and data will be removed as much as possible.
i.e. users from within higher education will be able to use an English version of U-Multirank. In particular for 'lay users' (e.g. prospective students) the existence of various language versions of U-Multirank would increase usability.
However, translation of the web tool and the underlying data is a substantial cost factor.
But at least an explanation of how to use U-Multirank and the glossary and definition of indicators and key concepts should be available in as many European languages as possible. 32 See http://europa.eu/legislation_summaries/information_society/l24226h_en.htm (retrieved on 10 May 2011
and a feasible business model to finance U multirank (see chapter 8). Another important aspect of user-friendliness is the transparency about the methodology used in rankings.
For U multirank this includes a description of the basic methodological elements (institutional and field-based rankings,
the future
8.1 Introduction
An important aspect in terms of the feasibility of U-Multirank is the question of implementing the system on a widespread and regular basis
It is clear that the implementation of U multirank is a dynamic and only partially predictable process
and we must differentiate between a two-year pilot phase and a longer-term implementation/institutionalisation of U multirank.
One of our basic suggestions regarding transparency in higher education and research is the integration of U-Map and U multirank.
Therefore, many of the conclusions regarding the operational implementation in the final U-Map report (see www. u-map. eu) are also valid for U multirank.
global or European?
The pilot test showed some problems with the inclusion in U-Multirank of institutions from specific countries outside Europe.
Clearly, with participation in U multirank on a voluntary basis higher education institutions will have to be convinced of the benefits of participation.
This leads to the question of the scale of international scope that U multirank could and should attain.
We would argue that U-Multirank should aim to achieve a relatively wide coverage of European higher education institutions as quickly as possible during the next project phase; in Europe the feasibility
in order to be able to address the diversity of European higher education. But U multirank should remain a global tool.
There are institutions all over the world interested in benchmarking with European universities; the markets and peer institutions for European universities are increasingly becoming global;
and the impression that the U-Multirank instrument is only there to serve European interests should be avoided.
The pilot study proves that U multirank can be applied globally. Based on the pilot results we suggest that the extension beyond Europe could best be organized systematically
and should not represent just a random sample. From outside Europe, the necessary institutions should be recruited to guarantee a sufficient sample of comparable institutions of different profiles.
For instance it could be an option to try and integrate the research-oriented, international universities scoring high in traditional rankings.
When this strategy leads to a substantial database within the next two years recruitment could be reinforced, at
which point the inclusion of these important peer institutions will hopefully motivate more institutions to join U multirank.
There is no definitive answer to the question of how many fields there are in international higher education. ISCED (1997) includes nine broad groups, such as humanities and arts, science, and agriculture.
Based on our pilot project we believe that it is feasible to add five new fields in each of the first three years of continued implementation of U multirank.
the 'heart' of U-Multirank is the idea of creating a user-driven, flexible tool to obtain subjective rankings that are relevant from the perspective of the individual user.
with U-Multirank it is also possible to create so-called 'authoritative' ranking lists from the database.
An authoritative ranking could be produced by a specific association of higher education institutions. For instance, international associations or consortia of universities (such as CESAER, LERU or ELIA) might be interested in benchmarking
or ranking their members. An authoritative ranking could also be produced from the perspective of a specific stakeholder or client organization.
For instance, an international public organization might be interested in using the database to promote a ranking of the international, research-intensive universities in order to compare a sample of comparable universities worldwide.
On the other hand, in the first phase of implementation, U multirank should be perceived by all potential users as relevant for their individual needs.
or two international associations of higher education institutions and by conceptualizing one or more authoritative rankings with interested public and/or private partners.
however, is on establishing the flexible web tool.
8.4 The need for international data systems
U-Multirank is not an isolated system,
The development of the European database resulting from EUMIDA should take into account the basic data needs of U multirank.
A second aspect of integrated international data systems is the link between U multirank and national ranking systems.
U multirank implies a need for an international database of ranking data consisting of indicators which could be used as a flexible online tool
and rank comparable universities. Developing a European data system and connecting it to similar systems worldwide will strongly increase the potential for multidimensional global mapping and ranking.
Despite this clear need for cross-national/European/global data there will be a continued demand for information about national/regional higher education systems, in particular with regard to undergraduate higher education.
the majority of students, in particular undergraduate students, will continue to start higher education in their home country. Hence field-based national rankings and cross-national regional rankings (such as the CHE ranking of German
The national rankings could refer to specific national higher education systems and at the same time provide a core set of joint indicators that can be used for European and global rankings.
In Spain we have the example of Fundacion CYD planning to implement a field-based ranking system based on U multirank standards.
In its operational phase the U multirank unit should develop standards and a set of basic indicators that national initiatives would have to fulfil
the U multirank unit will be able to pre-fill the data collection instruments and has to fill the gaps to attain European or worldwide coverage.
Finalisation of the various U multirank instruments 1. Full development of the database and web tool.
The prototypes of the instrument will demonstrate the outcomes and benefits of U multirank. 2. Setting of standards and norms and further development of underdeveloped dimensions and indicators.
These parts of the ranking model should be developed further. 3. Update of data collection tools/questionnaires according to the revision and further development of indicators and the experiences from the U multirank project.
In the first round of U multirank pre-filling proved difficult. The testing of national data systems for their pre-filling potential and the development of suggestions for the promotion of pre-filling are important steps to lower the costs of the system for the institutions.
A link to the development of a European higher education data system (EUMIDA) should be explored; coordination of all relevant EC projects should be part of the next phase. In addition,
and the international U-Multirank database should be realized. Roll-out of U-Multirank across EU+ countries. 5. Invitation of EU+ higher education institutions and data collection.
Within the next two years all identifiable European higher education institutions should be invited to participate in the institutional as well as in the three selected field-based rankings.
The objective would be to achieve full coverage of institutional profiles and have a sufficient number of comparable institutions.
If we take into account the response rate of institutions in the pilot phase the inclusion of 700 institutions in the institutional and 500 in each field-based ranking appears realistic. 6. Targeted recruitment of higher education institutions outside Europe.
The combined U-Map/U multirank approach should be tested further by developing the means to produce the first authoritative ranking lists for universities with selected profiles.
for instance the profile of research orientation and a high degree of internationalization (international research intensive universities) and the profile of a strong focus on teaching
and alliances of higher education institutions willing to establish internal benchmarking processes and publish rankings of their membership.
If the objective is to establish U multirank as largely self-sustainable, a business plan is required. It could be a good idea to involve organizations with professional business expertise in the next project phase in order to work out a business plan,
the user-driven approach imbues U multirank with strong democratic characteristics and a role far from commercial interests,
if complete funding from nonprofit sources is unrealistic. 9. Formal institutionalization of the U multirank unit.
During the next project phase an operational organization to implement U multirank will need to be created and a governance and funding structure established,
The features of and opportunities offered by U-Multirank need to be communicated continuously. Since the success of U-Multirank requires institutions' voluntary participation, a comprehensive promotion
and recruitment strategy will be needed, requiring the involvement of many key players (governments, European Commission, higher education associations, employer organizations, student organizations).
11. User-friendliness of the instrument. A crucial issue related to communication is the user-friendliness of U multirank.
This could be guaranteed by the smoothness of data collection and the services delivered to participants in the ranking process.
and knowledge about higher education of specific user groups (for instance secondary school leavers versus higher education decision-makers). A user-friendly tool needs various levels of information provision, understandable language, clarity of symbols and explanations, assisted navigation through the web tool and feedback loops providing information
The 11 elements form the potential content of the next U-Multirank project phase, transforming U-Multirank from a feasible concept to a fully developed instrument already rolled out and ready for continuous operation.
8.6 Criteria and models of implementation
An assessment of the various options for the organizational implementation of U-Multirank requires a set of analytical criteria.
The following criteria represent notions of good practice for this type of an implementation process such as governance
The ranking must be open to higher education institutions of all types and from all participating countries, irrespective of their membership in associations, networks or conferences.
The ranking tool must be administered independently of the interests of higher education institutions or representative organizations in the higher education and research sector.
The implementation has to separate ranking from higher education policy issues such as higher education funding or accreditation.
A key element of U multirank is its flexible, stakeholder-oriented, user-driven approach; the implementation has to safeguard this approach.
In general, the involvement of relevant actors in both the implementation of U multirank and its governance structure is a crucial success factor.
The parties taking responsibility for the governance of U multirank should be broadly accepted by stakeholders. Those who will be involved in the implementation should allow their names to be affiliated with the new instrument and take responsibility in the governing bodies of the organizational structure. We identified four basic options for responsibility structures for U multirank:
A commercial model, in which the ranking would be run by private companies, e.g. media companies (interested in publishing rankings), consulting companies in the higher education context and data providers (such as the producers of bibliometric databases).
A government model, in which governments would use their authority over higher education to organize the rankings of higher education institutions.
A stakeholder model, in which stakeholder organizations, i.e. student organizations and associations of higher education institutions, would be responsible for the operation of the transparency instrument.
An independent model, in which a nonprofit organization independent of both sectors would run the instrument.
The assessment of these models showed clear differences. If higher education institutions experience high workloads with data collection, they expect free products in return and are not willing to pay for basic data analysis. A commercial model also raises doubts about commitment to the social values of the European Higher Education Area (e.g. would student users keep free access?). A nonprofit organization, by contrast, can be linked with commitment to these social values, and the idea of international alliances ensures international orientation.
Table: Assessment of the four models for implementing U multirank (commercial, government, stakeholder, independent) against the criteria of inclusiveness, international orientation, independence, professionalism, sustainability, efficiency, service orientation and credibility.
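To illustrate how such a criteria-by-model assessment can be tallied, the following minimal Python sketch scores each model against the eight criteria; all ratings in it are invented placeholders for illustration only and do not reproduce the report's assessment.

# Minimal sketch of a criteria-by-model assessment matrix.
# All ratings are hypothetical placeholders, not the report's scores:
# +1 = strength, 0 = neutral, -1 = weakness.
criteria = ["inclusiveness", "international orientation", "independence",
            "professionalism", "sustainability", "efficiency",
            "service orientation", "credibility"]

models = {
    "commercial":  [0, 1, -1, 1, 1, 1, -1, -1],   # illustrative values
    "government":  [1, 0, -1, 0, 1, 0, -1, 0],
    "stakeholder": [-1, 1, -1, 0, 0, 0, 1, 1],
    "independent": [1, 1, 1, 1, 0, 0, 1, 1],
}

# Rank the models by their total score across all criteria.
for name, ratings in sorted(models.items(),
                            key=lambda kv: sum(kv[1]), reverse=True):
    print(f"{name:12s} total = {sum(ratings):+d}")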
The independent model is the most promising option. It is independent both from higher education institutions (and their associations) and from higher education funding bodies/politics, and it is not tied to policy and funding instruments in higher education. It can offer a noncommercial character to the instrument, and it can guarantee external supervision of the implementation and broad and open access to the results.
Products for universities could be created, but the pricing policy must not undermine the willingness to participate. A suggestion would be to organize the implementation of U multirank in such a way that basic ranking results are provided free of charge to the participating institutions, while more advanced analyses and services are offered as priced products.
We believe that it is not reasonable in the initial phase of implementing U multirank to establish a new professional organization for running the system.
Once the extent of participation of higher education institutions is known, this option could be considered. The assumption is that there should be a next U multirank project phase before a ranking unit is established.
Figure 8-2: Organizational structure for phase 1 (short term)
We suggest that during the next two years of the project phase the current project structure of U multirank should be continued.
The following analysis of cost factors and scenarios is based on the situation of running U multirank as an established system.
Costs have been estimated on the basis of this projection but are not part of the final report. The cost estimations showed that U multirank is an ambitious project in financial terms as well, but in general it seems to be financially feasible. A general assumption, based on EU policy, is that U multirank should become self-sustaining without long-term basic funding by the European Commission.
EC contributions will decline over time and new funding sources will have to be found. However, from our calculations it became clear that there is no single financial source that could be expected to cover the full costs of U multirank; the only option is a diversified funding base with a mix of financial sources.
If U multirank is not dependent on one major source, a further advantage lies in the distribution of financial risks.
It is difficult to calculate the exact running costs associated with U multirank because these depend on many variables.
The major variable cost drivers of U multirank are:
The number of countries and institutions involved: this determines the volume of data that has to be processed and the communication effort required.
The surveys needed to cover all indicators outlined in the data models of U multirank: survey costs rise considerably if universities have no e-mail addresses of their students, requiring students to be addressed by letter.
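As a minimal sketch of how these drivers translate into a variable-cost estimate, the following Python fragment combines the two factors named above; every unit cost and volume in it is a hypothetical placeholder, since the report does not publish its cost figures.

# Illustrative variable-cost model; all unit costs and volumes are
# hypothetical placeholders, not figures from the report.
def variable_costs(n_institutions, n_field_rankings,
                   students_surveyed_per_institution=200,
                   postal_share=0.2,            # students reachable only by letter
                   data_processing_cost=150.0,  # EUR per institution and ranking
                   email_survey_cost=1.5,       # EUR per student contacted
                   postal_survey_cost=8.0):     # EUR per student contacted
    # Driver 1: data processing and communication scale with institutions.
    processing = n_institutions * (1 + n_field_rankings) * data_processing_cost
    # Driver 2: survey costs rise when students must be addressed by letter.
    contacted = n_institutions * students_surveyed_per_institution
    surveys = contacted * ((1 - postal_share) * email_survey_cost
                           + postal_share * postal_survey_cost)
    return processing + surveys

# Roll-out target: 700 institutions, 3 field-based rankings.
print(f"estimated variable costs: {variable_costs(700, 3):,.0f} EUR")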
The intention of the European Commission is to develop U multirank into a self-sustaining instrument, requiring no EU funding after its implementation phase. Nevertheless, the European Commission should consider the option of continuing to support part of U multirank's basic funding in the long run, and of ensuring a formal role for the EC as a partner in U multirank. Promoting transparency and performance in European higher education by establishing a transparency tool could be a long-term task of the EC. To ensure students' free access to U multirank data, the EC could also provide, in the long run, direct funding to cover user charges that would otherwise have to be levied.
Discussions with potential funders so far have shown that the funding of U multirank has to rely on a mix of income streams: nonprofit contributions will remain necessary, since U multirank with its related surveys is an expensive form of ranking and the commercial sources are limited.
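A minimal sketch of such a diversified funding base is given below; the cost figure and the shares are invented for illustration and are not taken from the report's calculations.

# Hypothetical diversified funding mix; amounts and shares are
# illustrative placeholders, not the report's figures.
annual_costs = 2_000_000  # assumed running costs in EUR

funding_mix = {            # share of annual costs per income stream
    "EC basic funding (declining over time)": 0.35,
    "foundations and other sponsors":         0.25,
    "priced products for institutions":       0.25,
    "licensing and media partnerships":       0.15,
}

for source, share in funding_mix.items():
    print(f"{source:40s} {share * annual_costs:>12,.0f} EUR")

# No single stream covers the full costs; spreading the mix also
# distributes the financial risk across several sources.
assert abs(sum(funding_mix.values()) - 1.0) < 1e-9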
Charges to the users of the U multirank web tool would seriously undermine the aim of creating more transparency in European higher education, by excluding students for example. Charging participating institutions is another option, although it is unclear if it would work for U multirank. The questions are: would paying for rankings produce a 'value for money' attitude on the part of institutions (or countries)? The most likely scenario is therefore basic funding from nonprofit sources (EC, foundations, other sponsors) with a combination of a variety of market sources contributing to cost coverage, plus some cost reductions through efficiency gains.
8.9 A concluding perspective
U multirank can be realized step by step: after a next project phase of two years, the institutionalisation of a U multirank unit could be organized for the longer term.
After two more years, the roll-out of the system should include about 700 European higher education institutions and about 500 institutions in each of the three field-based rankings.
Commercial products, either for the public or for associations of higher education institutions, should be developed because of their market potential.
But the organizational basis of U multirank should be the nonprofit model with elements of the other options included.
In particular, the aim of financial self-sustainability for U multirank makes the combination of some nonprofit basic funding with the offer of commercial products inevitable.
Decisions are also needed on the funding and governance structure of U multirank. The analysis of the fixed and flexible cost determinants could then lead to a calculation of the costs.