University of Eastern Finland (Kuopio Campus), Kuopio, Finland
Abstract
Purpose: The purpose of this paper is to examine the information sourcing practices of small- to medium-sized enterprises (SMEs) associated with the development of different types of innovation (product/process/market/organizational).
REGKNOWA: sum-variable measuring the importance of regional knowledge organizations for innovation (University of Kuopio, Savonia University of Applied Sciences, organizations of vocational education), rated on a scale from 1 (insignificant) to 5 (very important).
Within SIS, the creation, selection and transformation of knowledge takes place within a complex matrix of interactions between different actors (firms, universities and other research organizations, educational organizations, financial organizations, public support
About the authors
Miika Varis, after graduating from the University of Kuopio, acted as a research and teaching assistant in SME management (2001-2003) and in entrepreneurship and local economic development (2003-2005), and as a lecturer in entrepreneurship (2005-2009) at the Department of Business and Management, University of Kuopio, Finland, and from 2009 as a lecturer in entrepreneurship at the Department of Health Policy and Management, University of Kuopio, Finland (from 1.1.2010 the Department of Health and Social Management, University of Eastern Finland, Kuopio Campus). He is currently finishing his doctoral dissertation on regional systems of innovation. Varis@uef.fi
Hannu Littunen, after graduating from the University of Jyväskylä, was a researcher at the University of Jyväskylä School of Business and Economics, Centre for Economic Research, Finland, and a professor of entrepreneurship and regional development at the Department of Business and Management, University of Kuopio, Finland (2003-2009), and from 2009 a professor of entrepreneurship and regional development at the Department of Health Policy and Management, University of Kuopio, Finland (from 1.1.2010 the Department of Health and Social Management, University of Eastern Finland, Kuopio Campus). He completed his doctoral thesis in leadership and management, entitled The Birth and Success of New Firms in a Changing Environment, in 2001. Prior to starting work at the university, he worked in various organizations in both the public and private sectors in Finland.
Design and Testing the Feasibility of a Multidimensional Global University Ranking: Final Report
Frans van Vught & Frank Ziegele (eds.)
The CHERPA Network, in cooperation with the U-Multirank project team. Project leaders: Frans van Vught (CHEPS)*, Frank Ziegele (CHE)*, Jon File (CHEPS)*. Project co
Jiao Tong University), Simon Marginson (Melbourne University), Jamil Salmi (World Bank), Alex Usher (IREG), Marijk van der Wende (OECD/AHELO), Cun-Mei Zhao (
U-Multirank final report
Table of contents
Tables
Figures
Executive Summary
1 Reviewing current rankings
1.4 Impacts of current rankings
1.5 Indications for better practice
2 Designing U-Multirank
2.4.1 Methodological standards
2.4.2 User-driven approach
2.4.3 U-Map and U-Multirank
2.4.4 Grouping
2.4.5 Design context
3 Constructing U-Multirank: Selecting indicators
3.3.5 …
4 Constructing U-Multirank: databases and data collection tools
4.1 Introduction
4.2 Databases
4.2.1 Existing databases
4.2.2 Bibliometric databases
4.4 A concluding perspective
5 Testing U-Multirank: pilot sample and data collection
5.1 Introduction
5.2 The global sample
5.3 Data collection
Institutional self-reported data
6 Testing U-Multirank: results
6.1 Introduction
6.2 Feasibility of indicators
6.2.1 Teaching & Learning
6.2.2 Research
6.3.3 … and patent data
6.4 Feasibility of up-scaling
7 Applying U-Multirank:
Combining U-Map and U-Multirank
7.3 The presentation modes
7.3.1 Interactive tables
7.3.2 Personalized ranking tables
8 Implementing U-Multirank: the future
8.1 Introduction
8.2 Scope: global or European
8.3 Personalized and authoritative rankings
8.4 The need for international data systems
8.5 Content and organization of the next …
8.6 … and models of implementation
8.7 Towards a mixed implementation model
8.8 Funding U-Multirank
8.9 A concluding perspective
9 List …
Table 1-1: Classifications and rankings considered in U-Multirank
Table 1-2: Indicators and weights in global university rankings
Table 2-1: Conceptual grid U-Multirank
Table 3-1: Indicators for the dimension Teaching & Learning in the Focused Institutional and Field-based Rankings
Table 3-2: Primary form of written communications by discipline group
Table 3-3: …
Table 4-1: Data elements shared between EUMIDA and U-Multirank: their coverage in national databases
Table 4-2: Availability of U-Multirank data elements in countries' national databases according to experts in 6 countries (Argentina/AR, Australia/AU, Canada/CA, Saudi Arabia/SA, South Africa/ZA, United States/US)
Figure 5-1: U-Multirank data collection process
Figure 5-2: Follow-up survey: assessment of data procedures and communication
Figure 7-1: Combining U-Map and U-Multirank
Figure 7-2: User selection of indicators for personalized ranking tables
Figure 8-1: Assessment of the four models for implementing U-Multirank
Figure 8-2: Organizational structure for phase 1 (short term)
Preface
On 2 June 2009 the European Commission announced the launching of a feasibility study to develop a multidimensional global university ranking.
Its aims were to look into the feasibility of making a multidimensional ranking of universities in Europe, enabling not only students but also parents and other stakeholders to make informed choices between different higher education institutions and their programmes.
Existing global rankings, it was argued, focus largely on research performance and ignore the performance of universities in areas like humanities and social sciences, teaching quality and community outreach.
While drawing on the experience of existing university rankings and of EU-funded projects on transparency in higher education, the new ranking system should be:
In a first phase running until the end of 2009 the consortium would design a multidimensional ranking system for higher education institutions in consultation with stakeholders.
Education and Culture but other experts drawn from student organisations, employer organisations, the OECD, the Bologna Follow-up Group and a number of Associations of Universities.
Stakeholder workshops were held four times during the project with an average attendance of 35 representatives drawn from a wide range of organisations including student bodies, employer organisations, rectors' conferences, national university associations and national representatives.
This is the Final Report of the multidimensional global university ranking project. Readers interested in a fuller treatment of many of the topics covered in this report are referred to the project web-site (www.u-multirank.eu) where the project's three Interim Reports can be found.
The web-site also includes a 30-page Overview of the major outcomes of the project.
Executive Summary
and tend to focus on a single dimension of university performance: research. The new tool will promote the development of diverse institutional profiles.
We have called this new tool U-Multirank as this stresses three fundamental points of departure: it is multidimensional,
recognising that higher education institutions serve multiple purposes and perform a range of different activities; it is a ranking of university performances
(although not in the sense of an aggregated league table like other global rankings); and it is user-driven (the user, as a stakeholder with particular interests,
and key characteristics of U-Multirank
U-Multirank enables such comparisons to be made both at the level of institutions as a whole and in the broad disciplinary fields in
The integration of the already designed and tested U-Map classification tool into U-Multirank enables the creation of user-selected groups of sufficiently comparable institutions.
U-Multirank includes a range of indicators that will enable users to compare the performance of institutions across five dimensions of higher education and research activities:
U-Multirank could provide its users with the online functionality to create two general types of rankings:
which institutions are active. U-Multirank would also include the facility for users to create institutional
This personalised interactive ranking table reflects the user-driven nature of U-Multirank. Table 1:
In order to be able to apply the principle of comparability we have integrated the existing transparency tool, the U-Map classification, into U-Multirank.
and research organisations using a set of dimensions similar to those developed in U-Multirank. The underlying indicators differ as U-Map is concerned with understanding the mix of activities an institution is engaged in
while U-Multirank is concerned with an institution's performance in these activities (how well it does
Integrating U-Map into U-Multirank enables the creation of user-selected groups of sufficiently comparable institutions that can then be compared in focused institutional
The findings of the U-Multirank pilot study
U-Multirank was tested in a pilot study involving 159 higher education institutions drawn from 57 countries:
and faculties performing very differently across the five dimensions and their underlying indicators. The multidimensional approach makes these diverse performances transparent.
and given that U-Multirank is clearly a Europe-based project, this represents a strong expression of interest.
organisational and financial challenges, there are no inherent features of U-Multirank that rule out the possibility of such future growth.
and operational feasibility we have developed a U-Multirank 'Version 1.0' that is ready to be implemented in European higher education
The further development and implementation of U-Multirank
The outcomes of the pilot study suggest some clear next steps in the further development of U-Multirank and its implementation
The refinement of U-Multirank instruments: Some modifications need to be made to a number of indicators and to the data gathering instruments based on the experience of the pilot study.
Roll out of U-Multirank across European countries: Given the need for more transparent information in the emerging European higher education area, all European higher education
and research institutions should be invited to participate in U-Multirank in the next phase. Many European stakeholders are interested in assessing
Targeted recruitment of relevant peer institutions from outside Europe should be continued in the next phase of the development of U-Multirank.
Although U-Multirank has been designed to be user-driven, this does not preclude the use of the tool
In terms of the organisational arrangements for these activities we favour a further two-year project phase for U-Multirank.
In the longer term, on the basis of a detailed analysis of different organisational models for an institutionalised U-Multirank, our strong preference is for an independent non-profit organisation operating with multiple sources of funding.
This organisation would be independent both from higher education institutions (and their associations) and from higher education governance and funding bodies.
changing the tactics in the game (more attacks late in a drawn match),
and sparking off debates among commentators of the sport for and against the new rule (see http://en.wikipedia.org/wiki/Three_points_for_a_win). In university rankings,
what is 'the best university'. But different from sports, there are no officially recognised bodies that are accepted as authorities that may define the rules of the game.
that, e.g., the Shanghai ranking is simply a game that is as different from the Times Higher game as rugby is from football.
The issue with some of the current university rankings is that they tend to be presented
to define explicitly our conceptual framework regarding the different functions of higher education institutions, and in turn to derive sets of indicators from this framework.
although the current transparency tools, especially university league tables, are controversial, they seem to be here to stay,
and that especially global university league tables have a great impact on decision-makers at all levels in all countries,
including in universities (Hazelkorn, 2011). They reflect a growing international competition among universities for talent and resources;
at the same time they reinforce competition by their very results. On the positive side they urge decision-makers to think bigger and set the bar higher,
especially in the research universities that are the main subjects of the current global league tables.
Table 1-1: Classifications and rankings considered in U-Multirank
Classifications: Carnegie classification (USA); U-Map (Europe).
Global league tables and rankings: Shanghai Jiao Tong University's (SJTU) Academic Ranking of World Universities (ARWU); Times Higher Education (Supplement) (THE); QS (Quacquarelli Symonds Ltd) Top Universities; Leiden Ranking.
National league tables and rankings: US News & World Report (USN&WR); University Ranking (CHE; Germany); Studychoice123 (SK123; The Netherlands).
Specialized league tables and rankings: Financial Times ranking of business schools and programmes (FT;
especially the most influential ones, the global university rankings are all league tables. The relationship of indicators collected
2010) and even already anticipating the current U-Multirank project, the situation has begun to change: ranking producers are becoming more explicit and reflective about their methodologies and underlying conceptual frameworks.
while practically all informed the design of U-Multirank. We already mentioned some of them. The full list includes:
The Berlin Principles on Ranking of Higher Education Institutions (International Ranking Expert Group, 2006), which define sixteen standards
user-oriented manner enabling custom-made rankings rather than dictating a single one. Focused institutional rankings, in particular the Leiden ranking of university research, also with a clear focus,
Recent reports on rankings such as the report of the Assessment of University-Based Research Expert Group (AUBR Expert Group, 2009) which defined a number of principles for sustainable collection of research data,
to ensure that in the development of the set of indicators for U multirank we would not overlook any dimensions,
The global rankings that we studied limit their interest to several hundred preselected universities, estimated to be no more than 1% of the total number of higher education institutions worldwide.
The criteria used to establish a threshold generally concern the research output of the institution;
Although it could be argued that world-class universities may act as role models (Salmi, 2009), the evidence that strong institutions inspire better performance across whole higher education systems is so far mainly found in the area of research rather than that of teaching (Sadlak & Liu,
From our overview of the indicators used in the main global university rankings (summarised in Table 1-2) we concluded that they indeed focus heavily on research aspects of the higher education institutions (research output,
as they focus mainly on indicators related to the research function of universities (Rauhvargers, 2011).
Table 1-2: Indicators and weights in global university rankings, covering HEEACT 2010, ARWU 2010, THE 2010, QS 2011 and the Leiden Ranking 2010. Fragments recoverable from this extract include: research output, articles over the past 11 years (10%); field-normalized citation impact (MNCS, and the alternative calculation MNCS2) together with the size-dependent 'brute force' impact indicator (multiplication of P with the university's field-normalized average impact); income per staff (2.25%); ratio of PhD awards to bachelor awards (2.25%); faculty-student ratio (20%); international staff and students (5%); industry income per staff (2.5%); international faculty (5%); international students (5%). Websites: http://ranking.heeact.edu.tw/en-us/2010/Page/Indicators; http://www.arwu.org/ARWUMETHODOLOGY2010.jsp; http://www.timeshighereducation.co.uk/world-university-rankings/2010-2011/analysis-methodology.html; http://www.topuniversities.com/university-rankings/world-university-rankings; http://www.socialsciences.leiden.edu/cwts/products-services/leiden-ranking-2010
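To make the Leiden indicators in Table 1-2 concrete, the 'brute force' indicator can be written out as follows. This is a sketch consistent with the CWTS definitions referenced above; the exact normalization (by field, publication year and document type) follows the Leiden methodology cited in the table sources:

\[
\mathrm{MNCS} \;=\; \frac{1}{P}\sum_{i=1}^{P}\frac{c_i}{e_i},
\qquad
I_{\text{brute force}} \;=\; P \times \mathrm{MNCS},
\]

where \(P\) is the university's number of publications, \(c_i\) the number of citations received by publication \(i\), and \(e_i\) the expected (world-average) number of citations for publications of the same field, year and document type.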
but not in existing global university rankings. Reputation surveys are used globally, but have been proven to be very weak cross-nationally (Federkeil,
which is not often the case in the current global university rankings. Manipulation of opinion-type data has surfaced in surveys for ranking
U-Map, has tested 'pre-filling' higher education institutions' questionnaires, i.e. data available in national public sources are entered into the questionnaires sent to higher education institutions for data gathering.3 (3 The beginnings of European data collection as in the EUMIDA project may help to overcome this problem for the European region in years to come.) This should reduce the effort required from higher education institutions and give them the opportunity to verify the 'pre-filled' data as well.
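To make the 'pre-filling' mechanism concrete, the following is a minimal sketch (our illustration, not the project's actual tool; the field names and mapping are hypothetical) of how data from a national register could be mapped onto questionnaire fields and flagged for verification by the institution:

```python
# Hypothetical sketch of questionnaire pre-filling from a national data source.
# Field names and the mapping are illustrative, not U-Multirank's actual schema.

NATIONAL_TO_QUESTIONNAIRE = {
    "students_total": "enrolment_total",
    "staff_fte": "academic_staff_fte",
    "degrees_awarded": "graduates_total",
}

def prefill(national_record: dict) -> dict:
    """Map national-register values onto questionnaire fields; each answer
    is flagged so the institution only needs to verify (or correct) it."""
    questionnaire = {}
    for source_field, target_field in NATIONAL_TO_QUESTIONNAIRE.items():
        if source_field in national_record:
            questionnaire[target_field] = {
                "value": national_record[source_field],
                "prefilled": True,
                "verified": False,  # set to True once the institution confirms
            }
    return questionnaire

# Example: a fictional institution's register entry
print(prefill({"students_total": 12500, "staff_fte": 810.5}))
```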
The U-Map test with 'pre-filling' from national data sources in Norway appeared to be successful
and resulted in a substantial decrease of the burden of gathering data at the level of higher education institutions.
1.4 Impacts of current rankings
According to many commentators,
Rankings strongly impact on the management of higher education institutions. The majority of higher education leaders report that they use potential improvement in rank to justify claims on resources (Espeland & Sauder, 2007;
In nations across the globe, global rankings have prompted the desire for 'world-class universities' both as symbols of national achievement and prestige and supposedly as engines of the knowledge economy (Marginson, 2006).
In Hazelkorn's survey of higher education institutions, 3% were ranked first in their country, but 19% wanted to get to that position (Hazelkorn, 2011).
for others, e.g. university leaders and national policy-makers, information about the higher education institution as a whole has priority (related to the strategic orientation of institutions;
These general conclusions have been an important source of inspiration for how we designed U-Multirank, a new, global, multidimensional ranking instrument.
multidimensional global ranking tool that we have called 'U-Multirank'. First, we present the general design principles that to a large extent have guided the design process.
Finally, we outline a number of methodological choices that have a major impact on the operational design of U-Multirank.
2.2 Design Principles
U-Multirank aims to address the challenges identified as arising from the various currently existing ranking tools.
when designing and constructing U-Multirank. Our fundamental epistemological argument is that as all observations of reality are theory-driven (formed by conceptual systems), an 'objective ranking' cannot be developed (see chapter 1). Every ranking will reflect the normative design and selection criteria of its constructors.
It makes no sense to compare the research performance of a major metropolitan research university with that of a remotely located university of applied sciences;
and preserve its unique national language with an internationally orientated European university with branch campuses in Asia.
With very few exceptions, higher education institutions are combinations of faculties, departments and programs of varying strength.
These principles underpin the design of U-Multirank, resulting in a user-driven, multidimensional and methodologically robust ranking instrument.
In addition, U-Multirank aims to enable its users to identify institutions and programs that are sufficiently comparable to be ranked,
For the design of U-Multirank we specify our own conceptual framework in the following section.
2.3 Conceptual framework
A meaningful ranking requires a conceptual framework
which a higher education institution operates. In reality these 'audiences' are of course often combined in the various activities of higher education
and performance indicators of higher education institutions and the programs they offer. Rankings must be designed in a balanced way
Table 2-1: Conceptual grid U-Multirank. Columns (stages): enabling (input, process) and performance (output, impact), set within the institutional context. Rows: functions (Teaching & Learning, Research, Knowledge Transfer) and audiences (International Orientation, Regional Engagement).
Using this conceptual framework we have selected the following five dimensions as the major content categories of U-Multirank:
In our design of U-Multirank we focused on the selection of output and impact indicators.
U-Multirank intends to be a multidimensional performance assessment tool and thus needs to employ indicators that relate to the performances of higher education
multidimensional ranking tool like U-Multirank can be developed. In this section we explain the various methodological choices made when designing U-Multirank.
2.4.1 Methodological standards
In addition to the content-related conceptual framework, the new ranking tool and its underlying indicators must also be based on methodological standards of empirical research, validity and reliability.
In addition, because U-Multirank is an international comparative transparency tool, it must deal with the issue of comparability across cultures and countries and finally,
U-Multirank has to address the issue of feasibility.
Validity
(Construct) validity refers to the evidence about
When characterizing, e.g., the internationality of a higher education institution, the percentage of international students is a valid indicator
Feasibility
The objective of U-Multirank is to design a multidimensional global ranking tool that is feasible in practice.
can U-Multirank be applied in reality and can it be applied with a favourable relation between benefits and costs in terms of financial and human resources?
We report on the empirical assessment of the feasibility of U-Multirank in chapter 6 of this report.
2.4.2 User-driven approach
To guide the reader's understanding of U-Multirank, we now briefly describe the way we have methodologically worked out the principle of being user-driven (see section 2.2). We propose an interactive web-based approach,
2.4.3 U-Map and U-Multirank
The principle of comparability (see section 2.2) calls for a method that helps us in finding comparable institutions. For this purpose the answer
can be found in the connection of U-Multirank with U-Map (see www.u-map.eu). U-Map,
describes ('maps') higher education institutions on a number of dimensions, each representing an aspect of their activities.
U-Map can prepare the ground for U-Multirank in the sense that it helps identify those higher education institutions that are comparable and for which,
therefore, performance can be compared by means of the U-Multirank ranking tool. A detailed description of the methodology used in this classification can be found on the U-Map website (http://www.u-map.eu) and in the final report of the U-Map project,
U-Multirank focuses on the performance aspects of higher education and research institutions. U-Multirank shows how well the higher education institutions are performing in the context of their institutional profile.
Thus, the emphasis is on indicators of performance, whereas in U-Map it lies on the enablers of that performance: the inputs and activities.
U-Map and U-Multirank share the same conceptual model. The conceptual model provides the rationale for the selection of the indicators in both U-Map and U-Multirank, both
of which are complementary instruments for mapping diversity: horizontal diversity in classification and vertical diversity in ranking.
As an alternative, U-Multirank uses a grouping method. Instead of calculating exact league table positions we will assign institutions to a limited number of groups.
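As an illustration of such a grouping method, the sketch below (our own simplification; U-Multirank's actual grouping procedure and cut-offs may differ) assigns institutions to a small number of rank groups according to their position in the score distribution, rather than to exact positions:

```python
# Simplified sketch of a grouping method: institutions are assigned to a
# small number of groups (here: fifths of the score distribution) instead
# of exact league-table positions. The cut-offs are illustrative only.

def assign_groups(scores: dict[str, float], n_groups: int = 5) -> dict[str, int]:
    """Map each institution to a group: 1 = top group, n_groups = bottom."""
    ranked = sorted(scores, key=scores.get, reverse=True)
    group_size = max(1, round(len(ranked) / n_groups))
    return {inst: min(i // group_size + 1, n_groups)
            for i, inst in enumerate(ranked)}

scores = {"A": 2.1, "B": 1.4, "C": 0.9, "D": 0.8, "E": 0.3}
print(assign_groups(scores))  # {'A': 1, 'B': 2, 'C': 3, 'D': 4, 'E': 5}
```

A grouping of this kind deliberately discards spurious precision: small score differences no longer translate into different 'positions'.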
2.4.5 Design context
In this chapter we have described the general aspects of the design process regarding U-Multirank.
we have described the conceptual framework from which the five dimensions of U-Multirank are deduced, and we have outlined a number of methodological approaches to be applied in U-Multirank.
Together these elements form the design context from which we have constructed U-Multirank. The design choices made here are in accordance with both the Berlin Principles and the recommendations by the Expert Group on the Assessment of University-based Research.
The Berlin Principles4 emphasize (among others) the importance of being clear about the purpose of rankings and their target groups,
of recognising the diversity of institutional profiles, 4 http://www.ireg-observatory.org/index.php?
Based on our design context, in the following chapters we report on the construction of U-Multirank. 5 Expert Group on Assessment of University-Based Research (2010),
Assessing Europe's University-Based Research, European Commission, DG Research, EUR 24187 EN, Brussels.
3 Constructing U-Multirank: Selecting indicators
3.1 Introduction
Having set out the design context for U-Multirank in the previous chapter,
we now turn to a major part of the process of constructing U-Multirank: the selection and definition of the indicators.
The other important components of the construction process for U-Multirank are the databases and the data collection tools that allow us to actually 'fill' the indicators.
These will be discussed further in chapter 4 as we explain the design of U-Multirank in more detail.
In chapters 5 and 6 we report on the U-Multirank pilot study during which we analysed the data quality
Various categories of stakeholders (student organizations, employer organizations, associations and consortia of higher education institutions, government representatives, international organizations) have been involved in an iterative process of consultation to come to a stakeholder-based assessment of the relevance
presented to them as potential items in the five dimensions of U-Multirank (see 3.3). In addition,
we invited feedback from international experts in higher education and research and from the Advisory Board of the U-Multirank project.
the indicators selected for the pre-test phase in U-Multirank (see 6.2) were then grouped into three categories:
The outcome of the pre-test was then used as further input for the wider pilot where the actual data was collected to quantify the indicators for U-Multirank at both the institutional and the field level.
As one of the main objectives of our U-Multirank project is to inform stakeholders such as students,
some quality assurance procedures focus on programs, others on entire higher education institutions; they have different foci, use different data,
Nevertheless we should remain vigilant to uncover signs of university efforts to manipulate their students' responses;
& Learning indicators that were selected for the pilot test of U-Multirank. The column on the right-hand side includes some of the comments
Overall judgment of program: overall satisfaction of students with their program and the situation at their higher education institution. Refers to a single question giving an 'overall' assessment;
The attractiveness of the university's exchange programs and the partner universities; availability of exchange places;
transfer of credits from the exchange university; integration of the stay abroad into studies (no time loss caused by the stay abroad) and support in finding internships abroad. Student satisfaction:
University webpage: quality of information for students on the website. Index of several items including general information on institution and admissions, information about the program, information about classes/lectures;
In engineering, adherence to the Washington Accord depends on national-level agencies, not on individual higher education institutions' strategies.
faculty) within that institution has to start with the definition of research. We take the definition set out in OECD's Frascati Manual:
Given the increasing complexity of the research function of higher education institutions and its extension beyond PhD-awarding institutions, U-Multirank adopts a broad definition of research,
There is a growing diversity of research missions across the classical research universities and the more vocationally oriented institutions (university colleges, institutes of technology, universities of applied sciences, Fachhochschulen, etc.
Given that in most disciplines publications are often seen as the single most important research output of higher education institutions,
The Expert Group on Assessment of University-Based Research12 defines research output as referring to individual journal articles, conference publications, book chapters, artistic performances, films, etc.
http://www.kowi.de/Portaldata/2/Resources/fp/assessing-europe-university-based-research.pdf
Table 3-2:
Expert Group on Assessment of University-Based Research (2010) Apart from using existing bibliometric databases,
Recommended by the Expert Group on University-based Research. Difficult to separate teaching and research expenditure in a uniform way. 2. Research income from competitive sources: income from European research programs + income from other international competitive research programs + income from research councils + income
an indicator reflecting arts-related output is included in U-Multirank as well. However, data availability is posing some challenges here.
Therefore it was decided to keep them in the list of indicators for U-Multirank's institutional ranking.
The process by which the knowledge, expertise and intellectually linked assets of higher education institutions are applied constructively beyond higher education for the wider benefit of the economy and society, through two-way engagement with business, the public sector, cultural and community partners.
Traditionally TT is concerned primarily with the management of intellectual property (IP) produced by universities and other higher education and research institutions.
however, already under the research dimension in U-Multirank. In the case of texts, it is customary to distinguish between two forms:
While publications are part of the research dimension in U-Multirank, patents will be included under the Knowledge Transfer dimension.
& Learning and Regional Orientation dimensions included in U-Multirank. Knowledge transfer through people also takes place through networks
such as the one carried out by the US-based Association of University Technology Managers (AUTM) for its Annual Licensing Survey.
films and exhibition catalogues have been included in the scholarly outputs covered in the Research dimension of U-Multirank.
U-Multirank particularly wants to capture aspects of knowledge transfer performance. However, given the state of the art in measuring knowledge transfer (Holi et al.
and used as a source of information to inform the funding allocations to reward the UK universities'third stream activities.
2008) aims to create a ranking methodology for measuring university third mission activities along three subdimensions:
is regarded as a relevant indicator by EGKTM. 3. University-industry joint publications: relative number of research publications that list an author affiliate address referring to a business enterprise or a private sector R&D unit;
Used in the CWTS University-Industry Research Cooperation Scoreboard. 16 See also the brief section on the EUMIDA project,
which the university acts as an applicant, related to the number of academic staff. Widely used in KT surveys.
CPD difficult to describe uniformly. 7. Co-patents: percentage of university patents for which at least one co-applicant is a firm,
HEIs not doing research in natural sciences/engineering/medical sciences hardly covered. 11. Co-patents: percentage of university patents for
Number of licences is more robust than licensing income. 14. Patents awarded: the number of patents awarded to the university, related to the number of academic staff. Widely used KT indicator.
Not relevant for all fields. 15. University-industry joint publications: number of research publications that list an author affiliate address referring to a business enterprise or a private sector R&D unit,
The number of collaborative research projects (university-industry) is another example of a knowledge transfer indicator that was not selected for the Focused Institutional Ranking.
but biased towards certain disciplines and languages. 5. Number of joint degree programs: the number of students in joint degree programs with a foreign university (including an integrated period at the foreign university) as a percentage of the total
number of students enrolled. An important indicator of the international 'atmosphere' of a faculty/department. Addresses student mobility and curriculum quality.
Internationalization of programs: index including the attractiveness of the university's exchange programs, the attractiveness of the partner universities, the sufficiency of the number of exchange places;
the transfer of credits from the exchange university; the integration of the stay abroad into studies. Addresses quality of the curriculum.
How well a higher education and research institution is engaged in the region is increasingly considered to be an important part of the mission of higher education institutions.
The latter two dimensions are covered in the U-Multirank dimension 'Knowledge Transfer'. Indicators for the social dimension of the third mission comprise indicators on international mobility (that are covered in the U-Multirank dimension International Orientation) and a very limited number of indicators on regional engagement.
Activities and indicators on regional and community engagement can be categorized in three groups: outreach, partnerships and curricular engagement18.
and provision of institutional resources for regional and community use, benefitting both university and the regional community.
learning and scholarship that engage faculty, students and region/community in mutually beneficial and respectful collaboration.
U-Multirank has suggested starting with the existing list of regions in the Nomenclature of Territorial Units for Statistics (NUTS) classification developed by Eurostat.
'Co-patents with regional firms' reflect cooperative research activities between higher education institutions and regional firms. While data may be found in international patent databases,
The same holds for measures of the regional economic impact of a higher education institution, such as the number of jobs generated by the university.
Assessing what the higher education and research institution 'delivers' to the region (in economic terms) is seen as most relevant
and 7 the pilot study on the empirical feasibility assessment of the U-Multirank tool and its various indicators will be discussed.
As a result of this pilot assessment the final list of indicators will be presented.
4 Constructing U-Multirank: databases and data collection tools
and data collection instruments used in constructing U-Multirank. The first part is an overview of existing databases, mainly on bibliometrics and patents.
and from students.
4.2 Databases
4.2.1 Existing databases
One of the activities in the U-Multirank project was to review existing rankings
If existing databases can be relied on for quantifying the U-Multirank indicators, this would be helpful in reducing the overall burden for institutions in handling the U-Multirank data requests.
For other aspects and dimensions, U-Multirank will have to rely on self-reported data. Regarding research output and impact, there are worldwide databases on journal publications and citations.
www.eumida.org) seeks to develop the foundations of a coherent data infrastructure (and database) at the level of individual higher education institutions.
Our analysis on data availability was completed with a brief online consultation with the group of international experts connected to U-Multirank (see section 4.2.5). The international experts were asked to give their assessment of the situation with respect to data availability in some of the non-EU countries included in U-Multirank.21 (21 The U-Multirank project was granted access to the preliminary EUMIDA data in order to learn about data availability in the countries covered by EUMIDA.)
4.2.2 Bibliometric databases
There are a number of international databases
The bibliometric methodologies applied in international comparative settings such as U-Multirank usually draw their information from publications that are released in scientific and technical journals.
U-Multirank therefore makes use of international bibliometric databases to compile some of its research performance indicators
To compile the publications-related indicators in the U-Multirank pilot study, bibliometric data was derived from the October 2010 edition of the Web of Science bibliographical database.
This data processing of address information is done at the aggregate level of the entire 'main' organization (not for sub-units such as departments or faculties).
All the selected institutions in the U-Multirank pilot study produced at least one Web of Science-indexed research publication during the years 1980-2010.
which mainly refer to discovery-oriented 'basic' research of the kind that is conducted at universities and research institutes.
For the following six indicators selected for inclusion in the U-Multirank pilot test (see chapter 6) one can derive data from the CWTS/Thomson Reuters Web of Science database:
1. total publication output; 2. university-industry joint publications; 3. international joint publications; 4. field-normalized citation rate; 5. share of the world's most highly cited publications; 6. regional joint publications.
Note that this set includes four indicators (#2, #3, #5 and #6) that were constructed specially for U-Multirank and that have never been used before in any international classification or ranking.
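To illustrate how the field-normalized citation rate (indicator 4) works, here is a minimal sketch under our own simplifying assumptions: each publication's citation count is divided by the world-average citation count for its field and publication year, and the ratios are averaged. The reference values below are invented for illustration; the real computation uses the full Web of Science citation distributions:

```python
# Minimal sketch of a field-normalized citation rate (MNCS-style).
# World reference values per (field, year) are invented for illustration.

WORLD_AVG = {("chemistry", 2008): 9.1, ("economics", 2008): 3.2}

def field_normalized_citation_rate(pubs: list[dict]) -> float:
    """Average of citations / world-average citations per publication."""
    ratios = [p["citations"] / WORLD_AVG[(p["field"], p["year"])]
              for p in pubs]
    return sum(ratios) / len(ratios)

pubs = [{"field": "chemistry", "year": 2008, "citations": 18},
        {"field": "economics", "year": 2008, "citations": 2}]
print(round(field_normalized_citation_rate(pubs), 2))  # 1.3, above world average
```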
4.2.3 Patent databases
As part of the indicators in the Knowledge Transfer dimension, U-Multirank selected the number of patent applications for
For U-Multirank, patent data were retrieved from the European Patent Office (EPO). Its Worldwide Patent Statistical Database (version October 2009)25, also known as PATSTAT, is designed
and by DG Research. (25 This version is held by the K.U. Leuven (Catholic University Leuven).)
4.2.4 Data availability according to EUMIDA
Like the U-Multirank project, the EUMIDA project (see http://www.eumida.org) collects data on individual higher education and research institutions.
The EUMIDA and U-Multirank project teams agreed to share information on issues such as definitions of data elements
since U-Map aims to build activity profiles for individual institutions whereas U-Multirank constructs performance profiles.
The findings of EUMIDA point to the fact that for the more research-intensive higher education institutions, data for the dimensions of Education and Research are covered relatively well
Table 4-1 below shows the U-Multirank data elements that are covered in EUMIDA and whether information on these data elements may be found in national databases (statistical offices, ministries, rectors' associations, etc.).
The table illustrates that information on only a few U-Multirank data elements is available from national databases and,
Table 4-1: Data elements shared between EUMIDA and U-Multirank: their coverage in national databases. Columns: dimension; EUMIDA and U-Multirank data element; European countries where the data element is available in national databases. Teaching & Learning: relative rate of graduate unemployment —
CZ, FI, NO, SK, ES. Research: expenditure on research — AT*, BE, CY, CZ*, DK, EE, FI, GR*, HU, IT, LV*, LT*, LU, MT*, NO, PL*, RO*, SI*, ES, SE, CH,
IE*, IT, LU, MT*, NO, NL (p), PL*, SI, ES, UK. International Orientation: (no overlap between U-Multirank and EUMIDA). Regional Engagement: (no overlap between U-Multirank and EUMIDA). Source:
or only for (some) research universities) The list of EUMIDA countries with abbreviations: Austria (AT), Belgium (BE),
4.2.5 Expert view on data availability in non-European countries
The Expert Board of the U-Multirank project was consulted to assess, for their six countries (all from outside Europe), the availability of data
related to the U-Multirank indicators.27 They gave their judgement on the question whether data was available in national databases and/or in the institutions themselves.
Table 4-2: Availability of U-Multirank data elements in countries' national databases according to experts in 6 countries (Argentina/AR, Australia/AU, Canada/CA, Saudi Arabia/SA, South Africa/ZA, United States/US). Columns: dimension; U-Multirank data element; countries where the data element is available in national databases; countries where the data element is available in institutional databases. Teaching & Learning:
AU, CA, SA, ZA; incentives for knowledge exchange: AR; AR, AU, CA, SA; CPD courses offered: AU, CA, SA, ZA; university-industry
Source: based on the U-Multirank expert survey.
If we look at the outcomes, it appears that for the Teaching
or definitions that differ from the ones used for the questionnaires applied in U-Multirank (see next section).
the U-Multirank project had to rely largely on self-reported data (both at the institutional
the U-Map questionnaire is an instrument for identifying similar subsets of higher education institutions within the U-Multirank sample.
Respondents from the institutions were advised to complete the U-Map questionnaire first before completing the other questionnaires.
4.3.1.2 Institutional questionnaire
By means of U-Multirank's institutional questionnaire28,
university hospital; students: enrolments; programme information: bachelor/master programmes offered; CPD courses; graduates: graduation rates;
Data elements from U-Map are transferred automatically to the U-Multirank questionnaire using a 'transfer tool'.
The academic year 2008/2009 was selected as the default reference year.
4.3.1.3 Field-based questionnaire
The field-based questionnaire includes information on individual faculties/departments
The U-Multirank questionnaires were tested in terms of cultural/linguistic understanding, clarity of definitions of data elements and feasibility of data collection.
In selecting the institutions for the pre-test the U-Multirank team considered the geographical distribution and the type of institutions.
to cover all relevant issues on the five dimensions of U-Multirank and to limit the questionnaire in terms of length.
In order to come to a meaningful and comprehensive set of indicators at the conclusion of the U-Multirank pilot study we had to aim for a broad data collection to cover a broad range of indicators.
a number of supporting instruments were prepared for the four U-Multirank surveys. These instruments ensure that respondents will have a common understanding of definitions and concepts.
A glossary of indicators for the four surveys was published on the U-Multirank website. Throughout the data collection process the glossary was updated regularly.
This allowed questions to be asked concerning the questionnaires and for contact with the U-Multirank team on other matters.
A technical specifications protocol for U-Multirank was developed introducing additional functions in the questionnaire to ensure that a smooth data collection could take place:
the option to transfer data from the U-Map to the U-Multirank institutional questionnaire, and the option to have multiple users access the questionnaire at the same time.
We updated the U-Multirank website regularly and provided information about the steps/time schedules for data collection.
All institutions had clear communication partners from the U-Multirank team.
4.4 A concluding perspective
This chapter, providing a quick survey of existing databases,
The U-Multirank questionnaires therefore were accompanied by a glossary of definitions and an FAQ facility to improve the reliability of the answers.
5 Testing U-Multirank: pilot sample and data collection
and construction process for U-Multirank, we will describe the feasibility testing of this multidimensional ranking tool.
This test took place in a pilot study specifically undertaken to analyse the actual feasibility of U-Multirank on a global scale.
one of the basic ideas of U-Multirank is the link to U-Map. U-Map is an effective tool to identify institutional activity profiles of institutions similar enough to compare them in rankings.
which makes it insufficiently applicable for the selection of the sample of pilot institutions for the U-Multirank feasibility test.
The existing set of higher education institutions in the U-Map database was included. This offered a clear indication of a broad variety of institutional profiles. Some universities applied through the U-Multirank website to participate in the feasibility study.
Their broad profiles were checked as far as possible against the U-Map dimensions in order to be able to describe their profiles.
In most countries 'national correspondents' (a network created by the research team) were asked to suggest institutions that would reflect the diversity of higher education institutions in their country.
an Institute for Water and Environment, an agricultural university, a School of Petroleum and Minerals, a military academy, several music academies and art schools, universities of applied sciences and a number of technical universities.
The 159 institutions that agreed to take part in the U-Multirank pilot are spread over 57 countries.
Our national correspondents explained that Chinese universities are reluctant to participate in rankings when they cannot predict the outcomes of participation.
In the US the U-Multirank project is perceived as strongly European-focused, which kept some institutions from participating.
The problems with some countries are an important aspect regarding the feasibility of a global implementation of U-Multirank.
All in all the intention to attain a sufficient international scope in the U-Multirank pilot study by means of a global sample can be seen as successful.
Regional distribution of participating institutions. Columns: region and country; initial proposal for number of institutions (July 2010); institutions in the final pilot selection (February 2011); institutions that confirmed participation (April 2011); institutions which delivered U-Multirank institutional data (April 2011); institutions which delivered U-Multirank institutional data and U-Map data.
I. EU 27 (population in millions): Austria (8m) …
Netherlands (16m): 3; 7; 3; 3; 3. Poland (38m): 6; 12; 7; 7; 6.
Other Asia: 5; 2. The Philippines: 1; 1; 1. Taiwan: 1; 1; 0. Vietnam: 2 …
Since the exact number of higher education institutions in the world is not known, we use a rough estimate of 15,000 institutions worldwide.
In that case the top 500 comprises only 3% of all higher education institutions. In our sample 29% of the participating institutions are in the top 500,
o U-Multirank institutional questionnaire
Field-based ranking:
o U-Multirank field-based questionnaires
o U-Multirank student survey
Figure 5-1: U-Multirank data collection process
The institutions were given seven weeks to collect the data, with deadlines set according to the dates the institution confirmed their participation.
The 'grouping' criterion for this was the successful submission of the contact form. The next step to ensure a high response rate was to review
We advised the institutions to start working with the questionnaires in a certain order, beginning with the U-Map and then the U-Multirank questionnaires,
since a tool had been developed to facilitate the transfer of overlapping information from the U-Map questionnaire to the U-Multirank institutional questionnaire.
Organising a survey among students on a global scale was one of the major challenges in U-Multirank.
critical comments indicated some confusion about the relationship between the U-Map and U-Multirank institutional questionnaires.
regional engagement) (see Figure 5-4). Figure: ratings from 'very good' to 'very poor' of general procedures and of communication with U-Multirank.
and the U-Multirank technical specification email (see appendices 10 and 11) with the institutions to ensure that a smooth data collection could take place:
all universities had clear communication partners in the U-Multirank team. The main part of the verification process consisted of the data cleaning procedures after receiving the data.
Note that this set includes four new performance indicators that have never been used before in any international ranking of higher education institutions.
The following four indicators have especially been designed for U-Multirank: international joint research publications; university-industry joint research publications;
regional joint research publications; highly cited research publications. Further information on each of the six bibliometric indicators used in the pilot study is presented below.
1) Total publication output: frequency count of research publications with at least one author address referring to the selected main organization.
2) International joint research publications. This is an indicator of research collaboration with partners located in other countries.
3) University-industry joint research publications: frequency count of publications with at least one author address referring to the selected main organization
Statistical information on 500 universities worldwide is freely available at the CWTS website: www.socialsciences.leiden.edu/cwts/products-services/scoreboard.html
4) Regional joint research publications: frequency count of publications with at least one author address referring to the selected main organization
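The frequency counts behind these joint-publication indicators can be illustrated with a small sketch (ours; the address fields and matching rules are simplified stand-ins for the actual CWTS address-cleaning and assignment procedure):

```python
# Simplified sketch of joint-publication frequency counts. Each record lists
# its author addresses as (organization, country, sector); the matching
# rules are illustrative only.

pubs = [
    {"addresses": [("Univ X", "NL", "higher_ed"), ("Firm Y R&D", "NL", "industry")]},
    {"addresses": [("Univ X", "NL", "higher_ed"), ("Univ Z", "DE", "higher_ed")]},
]

def count_joint(pubs, org, partner_test):
    """Count publications with at least one address of `org` and at least
    one other address satisfying `partner_test`."""
    n = 0
    for p in pubs:
        addrs = p["addresses"]
        has_org = any(a[0] == org for a in addrs)
        has_partner = any(partner_test(a) for a in addrs if a[0] != org)
        if has_org and has_partner:
            n += 1
    return n

# University-industry joint publications vs. international joint publications
print(count_joint(pubs, "Univ X", lambda a: a[2] == "industry"))  # 1
print(count_joint(pubs, "Univ X", lambda a: a[1] != "NL"))        # 1
```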
In a possible next stage of U-Multirank we expect to apply a different, and more flexible, way of delineating regions.
The bibliometric data in the pilot version of the U-Multirank database refer to one measurement per indicator.
Although all the HEIs that participated in the U-Multirank pilot study produced at least one WoS-indexed research publication during the years 1980-2010,
In follow-up stages of U-Multirank we plan to lower the threshold values for WoS-indexed publication output
The development of patent indicators on the micro-level of specific entities such as universities is complicated by the heterogeneity of patentee names that appear in patent documents within and across patent systems.
was developed by ECOOM (Centre for R&D Monitoring, Leuven University; partner in CHERPA), in partnership with Sogeti29, in the framework of the EUROSTAT work on Harmonized Patent Statistics.
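As an illustration of why name harmonization matters, the sketch below (our simplification; real harmonization pipelines such as the ECOOM/EUROSTAT work rely on curated name dictionaries and manual checking) normalizes patentee-name variants before matching applicants to an institution. The variant list is hypothetical:

```python
# Simplified sketch of patentee-name harmonization: normalize spelling
# variants before matching patent applicants to one institution.
import re

VARIANTS = {"katholieke universiteit leuven", "k u leuven",
            "catholic university leuven", "ku leuven"}

def normalize(name: str) -> str:
    """Lowercase, replace punctuation with spaces, collapse whitespace."""
    name = re.sub(r"[^\w\s]", " ", name.lower())
    return re.sub(r"\s+", " ", name).strip()

def is_univ_patent(applicants: list[str]) -> bool:
    return any(normalize(a) in VARIANTS for a in applicants)

print(is_univ_patent(["K.U. Leuven", "Some Firm NV"]))  # True
print(is_univ_patent(["Université de Liège"]))          # False
```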
Second, and specifically for the U-Multirank pilot, keyword searches were designed and tailored for each institute individually,
Several mostly European studies have compared the volumes of such 'university-invented' patents (invented by an academic scientist)
versus 'university-owned' patents (with the university registered as applicant). Figure: share of institutes from the pilot (N=165) by annual average patent volume (2000-2007). Evidence from studies in France
suggests that about 60% of university-invented patents are not university-owned. The available evidence from some US studies indicates much smaller percentages (approximately 20%) of university-invented patents that are not university-owned (Thursby et al.,
2007). Moreover, national and institutional differences in culture and legislation regarding intellectual property rights on university-created knowledge will cause the size of the consequential 'bias' to vary between countries.
Institutional and national differences may concern the autonomy of institutions, the control they exercise over their academic staff,
academic patents in Europe (i.e. patents invented by academic scientists) are much less likely to be 'owned' by universities
(i.e. the university is registered as applicant) than in the USA, as European universities have lower incentives to patent
or generally have less control over their scientists' activities (Lissoni et al., 2008). This does not mean that European academic scientists do not effectively contribute to the inventive activity taking place in their countries,
as one might presume from considering only the statistics on university-owned patents. On the contrary, the data provided
where universities own the majority of academic patents, Europe witnesses the dominance of business companies,
one should at all times bear in mind the relatively sizable volume of university-invented patents that is not retrieved by the institution-level search strategy and institutional and national variations in the size of the consequential limitation bias.
We have argued that the field-based rankings of indicators in each dimension contribute significantly to the value and the usability of U-Multirank.
At present, however, the breakdown of patent indicators by the fields defined in the U-Multirank pilot study (business studies,
Therefore we were unable to produce patent analyses at the field-based level of U-Multirank.
6 Testing U-Multirank: results
6.1 Introduction
The main objective of the pilot study was to empirically test the feasibility of the U-Multirank instrument.
and the potential up-scaling of U-Multirank to a globally applicable multidimensional ranking tool.
6.2 Feasibility of indicators
In the pilot study we analyzed the feasibility of the various indicators that were selected after the multi-stage process of stakeholder
and the Advisory Group.
6.2.1 Teaching & Learning
The first dimension of U-Multirank is Teaching & Learning.
efforts should be made to enhance the data situation on cultural research outputs of higher education institutions. This cannot be done by producers of rankings alone;
In general, the data delivered by faculties/departments revealed some problems in clarity of definition of staff data.
transfer: A a. Patents awarded**: A b. University-industry joint research publications*: A a. CPD courses offered per FTE academic staff: B b. Start-ups per FTE academic staff
Column headings: robustness; availability; preliminary rating; feasibility score; data availability; conceptual clarity; data consistency; recommendation. University-industry joint research publications*: A a. Academic
and institutions do not record numbers of students with self-organized stays at foreign universities.
and could assess the support provided by their university. The indicator 'international orientation of programs' is a composite indicator referring to several data elements;
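A composite indicator of this kind can be computed, for example, as an equal-weight average of its normalized component items. The sketch below is our illustration only; the component names, scales and equal weighting are hypothetical, not U-Multirank's actual formula:

```python
# Illustrative sketch of a composite indicator: equal-weight average of
# component items, each min-max normalized to [0, 1]. Components invented.

COMPONENTS = {  # item -> (scale minimum, scale maximum)
    "exchange_attractiveness": (1, 5),
    "credit_transfer": (1, 5),
    "exchange_places": (1, 5),
}

def composite(raw: dict[str, float]) -> float:
    vals = [(raw[item] - lo) / (hi - lo) for item, (lo, hi) in COMPONENTS.items()]
    return sum(vals) / len(vals)

print(round(composite({"exchange_attractiveness": 4.2,
                       "credit_transfer": 3.8,
                       "exchange_places": 2.9}), 2))  # 0.66
```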
Some universities had difficulties identifying their international staff based on this definition.
6.2.5 Regional engagement
Up to now the regional engagement role of universities has not been included in rankings.
There are a number of studies on the regional economic impact of higher education and research institutions,
which caused some problems in non-European higher education institutions. But even within Europe NUTS regions are seen as problematic by some institutions,
and knowledge of local higher education institutions to be utilized in a regional context, in particular in small- and medium-sized enterprises.
Data were collected through three instruments: the U-Map questionnaire to identify institutional profiles, the U multirank institutional questionnaire, and the U multirank field-based questionnaire. We supported this data collection with extensive data cleaning processes.
The parallel institutional data collection for U-Map and U multirank caused some confusion. Although a tool was implemented to pre-fill data from U-Map into U multirank, some confusion remained concerning the link between the two instruments. In order to test some variants, institutional and field-based questionnaires were implemented with different features (e.g. the definition of international staff).
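As a minimal sketch of how such pre-filling can work, assuming both questionnaires are represented as plain dictionaries with hypothetical shared field names:

```python
# U-Map answers already supplied by the institution (hypothetical fields).
umap_answers = {"students_total": 24000, "academic_staff_fte": 1800}

# U multirank institutional questionnaire, initially empty.
umultirank_form = {"students_total": None, "academic_staff_fte": None,
                   "startups_per_year": None}

# Pre-fill every field for which U-Map already holds an answer; the
# institution then only verifies these values and completes the rest.
prefilled = {field: umap_answers.get(field, value)
             for field, value in umultirank_form.items()}

print(prefilled)  # 'startups_per_year' stays None and must be supplied
```

Making this link explicit in the web tool would reduce both the reporting burden and the confusion about how the two instruments relate.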
In some countries the U multirank student survey conflicted with existing national surveys, which in some cases are highly relevant for institutions.
It should be evaluated how far U multirank and national surveys could be harmonized in terms of questionnaires and, at least, in terms of timing.
To assess the feasibility of our bibliometric data collection we studied the potential effects of a bottom-up verification process via a special case study of six French universities. For example, even a seemingly universal name such as 'university' may not describe the same institutional reality in different systems: in England or in the US some so-called 'universities' may in fact be umbrella organizations covering several autonomous universities, while in France many universities are thematic and have issued from one comprehensive 'root' university. Second, most national research systems tend to become more complex under the pressure of 'funding on project' policies that induce the setup of various network-like institutions such as consortia, platforms and 'poles'. For communication purposes, authors may prefer to replace the name of the university with the name of a network. In addition, in some countries like France, universities, schools and research institutions can be interwoven through many joint labs, which complicates the attribution of publications to a single institution.
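A hedged sketch of the affiliation cleaning this implies: variant names, joint labs and network labels are mapped back to a parent institution before publications are counted (the mapping table below is invented for illustration):

```python
# Invented mapping from raw affiliation strings to a canonical institution.
AFFILIATION_MAP = {
    "univ lyon 1": "Universite Lyon 1",
    "umr 5558, cnrs / univ lyon 1": "Universite Lyon 1",  # joint lab
    "rhone-alpes research network": "Universite Lyon 1",  # network label
}

def unify(raw: str) -> str:
    """Return the canonical institution for a raw affiliation string."""
    key = " ".join(raw.lower().split())  # normalize case and whitespace
    return AFFILIATION_MAP.get(key, raw)

print(unify("Univ Lyon 1"))  # -> Universite Lyon 1
```

A bottom-up verification process, as in the French case study, essentially validates and extends such mapping tables.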
Several studies have shown that the volume of such university-invented patents is sizable (Azagra-Caro et al., 2003). Two questions arise for up-scaling: is it possible to extend U multirank to comprehensive global coverage, and how easy would it be to add additional fields? In terms of the feasibility of U multirank as a potential new global ranking tool, the results of the pilot study are positive, and, taking into account that U multirank is clearly a Europe-based initiative, this represents a strong expression of worldwide interest.
Our single caveat concerns an immediate global-level introduction of U multirank. The pilot study suggests that a global multidimensional ranking is unlikely to prove feasible in the sense of achieving extensive coverage levels across the globe in the short term.
From their participation in the various stakeholder meetings, we can conclude that there is broad interest in the further development and implementation of U multirank.
A second issue concerns the conceptual interpretation of the 'user-driven approach' applied in U multirank. It would be interesting to involve LERU again during a follow-up project.
The other aspect of the potential up-scaling of U multirank is the extension to other fields.
Any extension of U multirank to new fields must deal with two questions: the relevance and meaningfulness of existing indicators for those fields, and the need for additional field-specific indicators.
While the U multirank feasibility study focused on the pilot fields of business studies and engineering, some issues of up-scaling to other fields have been discussed in the course of the stakeholder consultation.
Following the user-and stakeholder-driven approach of U multirank, we suggest that field-specific indicators for international rankings should be developed together with stakeholders from these fields.
When additional fields are addressed in U multirank, some specific field indicators will have to be developed. Based on the experience of the CHE ranking this will vary by field, with some fields requiring no additional indicators and others requiring several.
We conclude that up-scaling in terms of addressing a larger number of fields in U multirank is certainly feasible.
7 Applying U multirank
This approach implies the user-driven notion of ranking, which is also a basic feature of U multirank. The presentation of U multirank results outlined in this chapter strictly follows this user-driven approach. But by relating institutional profiles (created in U-Map) with multidimensional rankings, U multirank introduces a second level of interactive ranking beyond the user-driven selection of indicators: the selection of comparable institutions. Existing global rankings focus almost exclusively on internationally-oriented research universities; U multirank has a much broader scope and intends to include a wider variety of institutional profiles.
We argue that it does not make much sense to compare institutions across diverse institutional profiles.
Hence U multirank offers a tool to identify and select institutions that are truly comparable in terms of their institutional profiles.
7.2 Mapping diversity: combining U-Map and U multirank
From the beginning of the U multirank project one of the basic aims was that U multirank should be, in contrast to existing global rankings, which brought about a dysfunctional short-sightedness on 'world-class research universities', a tool to create transparency regarding the diversity of higher education institutions.
The combination of U-Map and U multirank offers a new approach to user-driven rankings.
U-Map profiles define the set of comparable institutions and hence the sample of institutions to be compared in U multirank (Figure 7-1: Combining U-Map and U multirank). Our user-driven interactive web tool will comprise both steps, too. Users will be offered the option to decide if they want to produce a focused institutional ranking or a field-based ranking. U multirank includes different ways of presenting the results.
7.3 The presentation modes
Presenting ranking results requires a general model for accessing the results.
In U multirank the presentation of data allows for both a comparative overview of indicators across institutions and a detailed view of individual institutions. U multirank produces indicators and results on different levels of aggregation (from institution via faculty/department to program), leading to a hierarchical data model.
In U multirank we present the results alphabetically or by rank groups (see chapter 2); in the first layer of the table (the field-based ranking), institutions are assigned to a group per indicator.
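A minimal sketch of such grouping, assuming simple quartile breakpoints (the actual U multirank grouping procedure, described in chapter 2, may use different breakpoints):

```python
def rank_groups(scores: dict) -> dict:
    """Bucket institutions into top/middle/bottom groups for one indicator."""
    values = sorted(scores.values())
    q1 = values[len(values) // 4]        # approximate lower quartile
    q3 = values[(3 * len(values)) // 4]  # approximate upper quartile
    return {name: ("top" if v >= q3 else "bottom" if v <= q1 else "middle")
            for name, v in scores.items()}

groups = rank_groups({"Uni A": 0.82, "Uni B": 0.44,
                      "Uni C": 0.91, "Uni D": 0.37})
# Within each group, institutions can then be listed alphabetically.
print(sorted(groups.items()))
```

Grouping avoids the spurious precision of exact league-table positions while still ordering institutions by performance.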
7.3.2 Personalized ranking tables
The development of an interactive user-driven approach is a central feature of U multirank: users set their own priorities when applying U multirank. An intuitive, appealing visual presentation of the main results will introduce users to the performance ranking of higher education institutions.
Results at a glance presented in this way may encourage users to drill down to more detailed information.
A consistent visual language should be used, so that there is a recognizable U multirank presentation style and users are not confused by multiple visual styles. Different visualization options were presented and discussed at a U multirank stakeholder workshop, and there was a clear preference for the 'sunburst' chart similar to the one used in U-Map. The colours symbolize the five U multirank dimensions, with the rays representing the individual indicators. In this chart the grouped performance scores of institutions on each indicator are represented by the length of the corresponding rays.
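As an illustration, a rough approximation of such a sunburst can be drawn as a polar bar plot; the dimension names are U multirank's five dimensions, but the indicator counts and grouped scores below are invented:

```python
import numpy as np
import matplotlib.pyplot as plt

# Grouped scores per indicator (1 = bottom group ... 3 = top group),
# organized by the five U multirank dimensions; values are invented.
dimensions = {
    "Teaching & Learning":       [3, 2],
    "Research":                  [3, 3, 2],
    "Knowledge Transfer":        [1, 2],
    "International Orientation": [2, 3],
    "Regional Engagement":       [1, 1],
}
palette = ["tab:blue", "tab:orange", "tab:green", "tab:red", "tab:purple"]

scores, colors = [], []
for color, indicator_scores in zip(palette, dimensions.values()):
    scores.extend(indicator_scores)               # one ray per indicator
    colors.extend([color] * len(indicator_scores))

angles = np.linspace(0, 2 * np.pi, len(scores), endpoint=False)
ax = plt.subplot(projection="polar")
# Ray length encodes the grouped score; colour encodes the dimension.
ax.bar(angles, scores, width=2 * np.pi / len(scores) * 0.9,
       color=colors, bottom=0.3)
ax.set_xticks([])
ax.set_yticks([])
plt.show()
```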
Detailed results can be presented at three levels: institution, faculty/department (field) and program.
Figure 7-4: Text format presentation of detailed results (example)
7.4 Contextuality
Rankings do not exist in a contextual vacuum. Two kinds of context factors can be distinguished: context variables affecting the performance of higher education institutions, and context factors that may affect the decision-making processes of users of rankings (e.g. students, researchers) although they are not linked to the performance of institutions.
For example, for prospective students intending to choose a university or a study program, low student satisfaction scores regarding the support by teaching staff in a specific university or program are relevant information, although the indicator itself cannot explain the reasons behind this judgment. Rankings also have to be sensitive to context variables that may lead to methodological biases. This includes legal regulations (e.g. concerning access) as well as the existence of legal/official 'classifications' of institutions (e.g., in binary systems, the distinction between universities and other forms of non-university higher education institutions).
While in most countries research is integrated largely in universities, in some countries like France or Germany non-university research institutions undertake a major part of the national research effort.
A particular issue with regard to the context of higher education refers to the definition of the unit of analysis. The vast majority of rankings in higher education are comparing higher education institutions.
A few rankings explicitly compare higher education systems, either based on genuine data on higher education systems (e.g. the University Systems Ranking published by the Lisbon Council31) or by simply aggregating institutional data to the system level (e.g. the QS National System Strength Ranking). Both the Shanghai ranking and the QS rankings, for instance, include universities only. The fact that they do not include non-university research institutions, which are particularly important in some countries (e.g. in France and Germany), produces a bias when their results are interpreted as a comparative assessment of the performance or quality of national higher education and research systems.
U multirank addresses the issues of contextuality by applying the design principle of comparability (see chapter 2). In U multirank rankings are created only among institutions that have sufficiently similar institutional profiles.
Combining U-Map and U multirank produces an approach in which comparable institutions are identified before they are compared in one or more rankings.
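A hedged sketch of this two-step logic, with invented profile fields and an invented similarity rule (U multirank's actual comparability criteria are based on U-Map profiles):

```python
from dataclasses import dataclass

@dataclass
class Institution:
    name: str
    profile: dict   # U-Map-style profile, e.g. {"research": "major", ...}
    score: float    # one U multirank indicator

def comparable(a: dict, b: dict, min_matches: int = 2) -> bool:
    """Treat two profiles as comparable when enough dimensions coincide."""
    return sum(a[k] == b[k] for k in a.keys() & b.keys()) >= min_matches

def ranking_pool(reference: Institution, candidates: list) -> list:
    """Step 1: select comparable institutions; step 2: rank only those."""
    pool = [c for c in candidates if comparable(reference.profile, c.profile)]
    return sorted(pool, key=lambda i: i.score, reverse=True)

ref = Institution("Uni A", {"research": "major", "regional": "minor"}, 0.8)
others = [
    Institution("Uni B", {"research": "major", "regional": "minor"}, 0.9),
    Institution("Uni C", {"research": "none", "regional": "major"}, 0.7),
]
print([i.name for i in ranking_pool(ref, others)])  # Uni C is excluded
```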
In addition, U multirank intends to offer relevant contextual information on institutions and fields. Contextual information does not allow for causal analyses, but it helps users to interpret ranking results. During the further development of U multirank the production of contextual information will be an important topic.
31 See www.lisboncouncil.net
7.5 User-friendliness
U multirank is conceived as a user-driven and stakeholder-oriented instrument.
In U multirank a number of features are included to increase user-friendliness. In the same way as there is no one-size-fits-all approach to rankings in terms of indicators, there is none in terms of presentation. U multirank, as any ranking, will have to find a balance between the need to reduce the complexity of information on the one hand and the need to do justice to the richness of the underlying information on the other.
U multirank wants to offer a tailor-made approach to presenting results, serving the information needs of different groups of users and taking into account their level of knowledge about higher education and higher education institutions.
Basic access is provided by the various modes of presentation described above (overview tables, personalised rankings and institutional profiles). In accordance with EU policies on eAccessibility32, barriers to accessing the U multirank results and data will be removed as much as possible. Expert users, i.e. users from within higher education, will be able to use an English version of U multirank. In particular for 'lay users' (e.g. prospective students) the existence of various language versions of U multirank would increase usability.
However, translation of the web tool and the underlying data is a substantial cost factor.
But at least an explanation of how to use U multirank and the glossary and definitions of indicators and key concepts should be available in as many European languages as possible. This will depend on the available funding and a feasible business model to finance U multirank (see chapter 8). Another important aspect of user-friendliness is transparency about the methodology used in rankings. For U multirank this includes a description of the basic methodological elements (institutional and field-based rankings, indicators, data sources and grouping procedures).
32 See http://europa.eu/legislation_summaries/information_society/l24226h_en.htm (retrieved on 10 May 2011).
8 Implementing U multirank: the future
8.1 Introduction
An important aspect in terms of the feasibility of U multirank is the question of implementing the system on a widespread and regular basis.
It is clear that the implementation of U multirank is a dynamic and only partially predictable process,
and we must differentiate between a two-year pilot phase and a longer-term implementation/institutionalisation of U multirank.
One of our basic suggestions regarding transparency in higher education and research is the integration of U-Map and U multirank.
Therefore, many of the conclusions regarding the operational implementation in the final U-Map report (see www. u-map. eu) are also valid for U multirank.
8.2 Global or European?
The pilot test showed some problems with the inclusion into U multirank of institutions from specific countries outside Europe.
Clearly, with participation in U multirank on a voluntary basis higher education institutions will have to be convinced of the benefits of participation.
This leads to the question of the scale of international scope that U multirank could and should attain.
We would argue that U multirank should aim to achieve a relatively wide coverage of European higher education institutions as quickly as possible during the next project phase, as it is in Europe that the feasibility of the instrument has been demonstrated.
But U multirank should remain a global tool. There are institutions all over the world interested in benchmarking with European universities;
the markets and peer institutions for European universities are increasingly becoming global; and the impression that the U multirank instrument is only there to serve European interests should be avoided.
The pilot study proves that U multirank can be applied globally. Based on the pilot results we suggest that the extension beyond Europe could best be organized systematically
and should not represent just a random sample. From outside Europe, the necessary institutions should be recruited to guarantee a sufficient sample of comparable institutions of different profiles.
For instance it could be an option to try and integrate the research-oriented, international universities scoring high in traditional rankings.
When this strategy leads to a substantial database within the next two years recruitment could be reinforced, at
which point the inclusion of these important peer institutions will hopefully motivate more institutions to join U multirank.
Based on our pilot project we believe that it is feasible to add five new fields in each of the first three years of continued implementation of U multirank.
The 'heart' of U multirank is the idea of creating a user-driven, flexible tool to obtain subjective rankings that are relevant from the perspective of the individual user. But with U multirank it is also possible to create so-called 'authoritative' ranking lists from the database.
An authoritative ranking could be produced by a specific association of higher education institutions. For instance international associations or consortia of universities (such as CESEAR, LERU or ELIA) might be interested in benchmarking
or ranking their members. An authoritative ranking could also be produced from the perspective of a specific stakeholder or client organization.
For instance, an international public organization might be interested in using the database to promote a ranking of the international, research-intensive universities in order to compare a sample of comparable universities worldwide.
On the other hand, in the first phase of implementation, U multirank should be perceived by all potential users as relevant for their individual needs. This could be supported by setting up benchmarking exercises with one or two international associations of higher education institutions and by conceptualizing one or more authoritative rankings with interested public and/or private partners. The focus, however, is on establishing the flexible web tool.
8.4 The need for international data systems
U multirank is not an isolated system.
The development of the European database resulting from EUMIDA should take into account the basic data needs of U multirank.
A second aspect of integrated international data systems is the link between U multirank and national ranking systems. U multirank implies the need for an international database of ranking data, consisting of indicators which could be used in a flexible online tool to map and rank comparable universities. Developing a European data system and connecting it to similar systems worldwide will strongly increase the potential for multidimensional global mapping and ranking.
In Spain we have the example of Fundacion CYD planning to implement a field-based ranking system based on U multirank standards.
In its operational phase the U multirank unit should develop standards and a set of basic indicators that national initiatives would have to fulfil. Where national data meet these standards, the U multirank unit will be able to pre-fill the data collection instruments, and it will have to fill the remaining gaps to attain European or worldwide coverage.
Finalisation of the various U multirank instruments:
1. Full development of the database and web tool. The prototypes of the instrument will demonstrate the outcomes and benefits of U multirank.
2. Setting of standards and norms and further development of underdeveloped dimensions and indicators. These parts of the ranking model should be developed further.
3. Update of data collection tools/questionnaires according to the revision and further development of indicators and the experiences from the U multirank project.
4. Testing of national data systems for pre-filling. In the first round of U multirank pre-filling proved difficult. The testing of national data systems for their pre-filling potential and the development of suggestions for the promotion of pre-filling are important steps to lower the costs of the system for the institutions. A link between national data systems and the international U multirank database should be realized.
Roll out of U multirank across EU+ countries:
5. Invitation of EU+ higher education institutions and data collection. Within the next two years all identifiable European higher education institutions should be invited to participate in the institutional as well as in the three selected field-based rankings. The objective would be to achieve full coverage of institutional profiles and have a sufficient number of comparable institutions. If we take into account the response rate of institutions in the pilot phase, the inclusion of 700 institutions in the institutional ranking and 500 in each field-based ranking appears realistic.
6. Targeted recruitment of higher education institutions outside Europe.
The combined U-Map/U multirank approach should be tested further by developing the means to produce the first authoritative ranking lists for universities with selected profiles, for instance the profile of research orientation combined with a high degree of internationalization (international research-intensive universities) and the profile of a strong focus on teaching. This could also involve associations and alliances of higher education institutions willing to establish internal benchmarking processes and publish rankings of their membership.
If the objective is to establish U multirank as largely self-sustainable, a business plan is required. It could be a good idea to involve organizations with professional business expertise in the next project phase in order to work out such a plan. Although the user-driven approach imbues U multirank with strong democratic characteristics and a role far from commercial interests, commercial income will be needed if complete funding from nonprofit sources is unrealistic.
9. Formal institutionalisation of the U multirank unit. During the next project phase an operational organization to implement U multirank will need to be created and a governance and funding structure established.
The features of and opportunities offered by U multirank need to be communicated continuously. Since the success of U multirank requires institutions' voluntary participation, a comprehensive promotion and recruitment strategy will be needed, requiring the involvement of many key players (governments, European Commission, higher education associations, employer organizations, student organizations). A crucial issue related to communication is the user-friendliness of U multirank. This could be guaranteed by the smoothness of data collection. The 11 elements form the potential content of the next U multirank project phase, transforming U multirank from a feasible concept into a fully developed instrument rolled out and ready for continuous operation.
8.6 Criteria and models of implementation
An assessment of the various options for the organizational implementation of U multirank requires a set of analytical criteria.
The following criteria represent notions of good practice for this type of implementation process and its governance. Inclusiveness: the ranking must be open to higher education institutions of all types and from all participating countries, irrespective of their membership in associations, networks or conferences.
Independence: the ranking tool must be administered independently of the interests of higher education institutions or representative organizations in the higher education and research sector. A key element of U multirank is the flexible, stakeholder-oriented, user-driven approach; the implementation has to ensure this approach.
In general, the involvement of relevant actors in both the implementation of U multirank and its governance structure is a crucial success factor.
Those parties taking responsibility for the governance of U multirank should be accepted broadly by stakeholders. Those who will be involved in the implementation should allow their names to be affiliated with the new instrument and take responsibility in the governing bodies of the organizational structure. We identified four basic options for responsibility structures for U multirank: a commercial model, a government model, a stakeholder model and an independent non-profit model. In the government model, governments would use their authority over higher education to organize the rankings of higher education institutions. In the stakeholder model, stakeholders, i.e. student organizations and associations of higher education institutions, would be responsible for the operation of the transparency instrument.
[Table: Assessment of the four models for implementing U multirank (commercial, government, stakeholder, independent) against the criteria inclusiveness, international orientation, independence, professionalism, sustainability, efficiency, service orientation and credibility, rated from + to -.]
The independent non-profit model is independent both from higher education institutions (and their associations) and from higher education funding bodies/politics.
Products for universities could be created, but the pricing policy must not undermine the willingness to participate. A suggestion would be to organize the implementation of U multirank in such a way that basic ranking results can be provided free of charge to the participating institutions.
We believe that it is not reasonable in the initial phase of implementing U multirank to establish a new professional organization for running the system.
Once the extent of participation of higher education institutions is known, this option could be considered. The assumption is that there should first be a next U multirank project phase before a ranking unit is established.
Figure 8-2: Organizational structure for phase 1 (short term)
We suggest that during the next two years of the project phase the current project structure of U multirank should be continued.
8.7 The costs of U multirank
The following analysis of cost factors and scenarios is based on the situation of running U multirank as an established system. Costs have been estimated based on this projection but are not part of the final report. The cost estimations showed that U multirank is an ambitious project also in financial terms, but in general it seems to be financially feasible. A general assumption based on EU policy is that U multirank should become self-sustainable without long-term basic funding by the European Commission.
EC contributions will decline over time and new funding sources will have to be found. However, from our calculations it became clear that there is no single financial source from which we could expect to cover the whole costs of U multirank; the only option is a diversified funding base with a mix of financial sources. If U multirank is not dependent on one major source, a further advantage lies in the distribution of financial risks.
It is difficult to calculate the exact running costs associated with U multirank because these depend on many variables. The major variable cost drivers of U multirank are the number of countries and institutions involved, which determines the volume of data that has to be processed and the communication efforts, and the surveys that are needed to cover all indicators outlined in the data models of U multirank. Survey costs rise further if universities have no e-mail addresses of their students, requiring students to be addressed by letters.
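A toy cost model (all unit costs invented) shows how these drivers scale the variable costs:

```python
def variable_cost(n_institutions: int, surveys_per_institution: int,
                  postal_share: float, cost_per_record: float = 15.0,
                  online_survey_cost: float = 2.0,
                  postal_survey_cost: float = 6.0) -> float:
    """Variable cost driven by the institutions covered, the surveys run
    and the share of students reached by letter instead of e-mail."""
    per_survey = (postal_share * postal_survey_cost
                  + (1 - postal_share) * online_survey_cost)
    return n_institutions * (cost_per_record
                             + surveys_per_institution * per_survey)

# Costs grow roughly linearly in both the number of institutions and the
# postal share, so e-mail coverage is a significant cost lever.
print(variable_cost(700, 200, 0.1), variable_cost(700, 200, 0.5))
```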
The intention of the European Commission is to develop U multirank into a self-sustaining instrument, requiring no EU funding after its implementation phase. Nevertheless the European Commission should consider the option of continued support of part of U multirank's basic funding in the long run and ensure a formal role for the EC as a partner in U multirank, in order to promote transparency. To ensure students' free access to U multirank data, the EC could also in the long run provide direct funding to cover user charges that would otherwise have to be levied.
Discussions with potential funders so far have shown that the funding of U multirank has to rely on a mix of income streams, since U multirank with its related surveys is an expensive form of ranking and the commercial sources are limited. Charges to the users of the U multirank web tool would seriously undermine the aim of creating more transparency in European higher education, by excluding students for example. Charging participating institutions is another option, but it is unclear if it would work for U multirank. The questions are: would paying for rankings produce a 'value for money' attitude on the part of institutions (or countries)?
The most likely scenario is a mix of basic non-profit funding (EC, foundations, other sponsors) with a combination of a variety of market sources contributing cost coverage, plus some cost reductions through efficiency gains.
8.9 A concluding perspective
U multirank has proved feasible: after a next project phase of two years the institutionalisation of a U multirank unit could be organized for the longer term.
After two more years, the roll out of the system should include about 700 European higher education institutions and about 500 institutions in the field-based ranking for each of three fields.
Authoritative rankings, either for the public or for associations of higher education institutions, should be developed because of their market potential.
But the organizational basis of U multirank should be the nonprofit model with elements of the other options included.
In particular, the aim of financial self-sustainability for U multirank makes the combination of some nonprofit basic funding with the offer of commercial products inevitable.
Decisions are also needed on the funding and governance structure of U multirank. The analysis of the fixed and flexible cost determinants could lead to a calculation of the costs.
References
... the case of university education. European Journal of Marketing, 31(7), 528-540.
AUBR Expert Group. Assessing Europe's University-Based Research (draft). Brussels: European Commission DG Research.
Azagra-Caro, J. M., de Lucio & Gutierrez, G. A. (2003), 'University patents: Output and input indicators... of what?', Research Evaluation, 12(2), 5-16.
Brandenburg, U. & Federkeil, G. (2007), How to measure internationality and internationalisation of higher education institutions: Indicators and key figures. CHE Working Paper No. 92.
Brown, R. M., ...
Design phase of the project 'Design and testing the feasibility of a multidimensional global university ranking'. Enschede: CHEPS, University of Twente.
Clark, B. R. (1983). The higher education system: Academic organization in cross-national perspective. Berkeley: University of California Press.
Cremonini, L., Benneworth, P. & Westerheijden, D. F. (2011). In the shadow of celebrity: The impact of world-class universities policies on national higher education systems. Paper presented at the CHER 23rd annual conference.
... A cross-national analysis of university ranking systems. Higher Education, 49, 495-533.
Dulleck, U. & Kerschbamer, R. (2006). ...
... Reputation indicators in rankings of higher education institutions. In B. Kehm & B. Stensaker (Eds.), University Rankings, Diversity, and the New Landscape of Higher Education (pp. 19-34). Rotterdam/Taipei: Sense Publishers.
Furco, A. & Miller, W. (2009), Issues in benchmarking...
... Forum Hochschule (3).
Holi, M. T., Wickramasinghe, R. & van Leeuwen, M. (2008), Metrics for the evaluation of knowledge transfer activities at universities.
IAU, International Association of Universities (2005). Global Survey Report, Internationalization of Higher Education: New Directions, New Challenges. Paris.
... Berlin Principles on Ranking of Higher Education Institutions. Retrieved 24.6.2006, from http://www.che.de/downloads/Berlin_principles_ireg_534.pdf
Ischinger, B. & Puukka, J. (2009), 'Universities for cities and regions: Lessons from the OECD reviews', Change: The Magazine of Higher Learning, 41(3), 8-13.
... Global university rankings: Private and public goods. Paper presented at the 19th Annual CHER conference, Kassel.
McDonough, P. M., Antonio, A. L., ...
... 'A study of university-related patents and a survey of academic inventors', Scientometrics, 58, 321-350.
... J. M. & Mora, F. (2008), 'Third mission ranking for world class universities: Beyond teaching and research', Higher Education in Europe, 33(2), 259-271.
... Global university rankings and their impact. Brussels: European University Association.
Sadlak, J. & Liu, N. C. (Eds.) (2007). The world-class university and ranking: Aiming beyond status. Bucharest/Shanghai/Cluj-Napoca.
Salmi, J. (2009). The challenge of establishing world-class universities. Washington, DC: World Bank.
Saragossi, S. & van Pottelsberghe, B. (2003), 'What patent data reveal about universities: The case of Belgium', Journal of Technology Transfer, 28, 47-51.
Schmiemann, M. & Durvy, J.-N. (2003), 'New approaches to technology transfer from publicly funded research', in: ...
The CHE Ranking of European Universities: A pilot study in Flanders and the Netherlands (2008). Gütersloh/Enschede/Leiden/Brussels: CWTS Leiden University; Vlaamse Overheid.
Thibaud, A. (2009). Vers quel... classement de Shanghai et des autres classements internationaux (No. ...).
... & Thursby, M. (2007), US faculty patenting: Inside and outside the university. NBER Working Paper 13256. Cambridge, MA: National Bureau of Economic Research.
Tijssen, R. J. W. (2003). Scoreboards of research excellence. ...
Tijssen, R. J. W., ... & van Wijk, E. (2009), 'Benchmarking university-industry research cooperation worldwide: Performance measurements and indicators based on co-authorship data for the world's largest universities', Research Evaluation, 18, 13-24.
Tijssen, R. J. W., Waltman, L. & van Eck, N. J. (2011), 'Collaborations span 1,553 kilometres', Nature, 473, 154.
... A global survey of university league tables. Toronto: Educational Policy Institute.
Van Dyke, N. (2005). Twenty years of university report cards. Higher Education in Europe, 30(2), 103-125.
Van Raan, A. (2003). Challenges in the ranking of universities. In J. Sadlak & N. C. Liu (Eds.), The World-Class University and Ranking: Aiming Beyond Status. Bucharest: UNESCO-CEPES.
van Vught, F. A. (2008). Mission diversity and reputation in higher education. Higher Education Policy, 21(2), 151-174.
van Vught, F. A., Kaiser, F., File, J. M., Gaethgens, C., Peter, R., ... The European Classification of Higher Education Institutions. Enschede: CHEPS.
Waltman, L., Tijssen, R. J. W. & van Eck, N. J. (2011), 'Globalisation of science in kilometres', Journal of Informetrics.
Westerheijden, D. F., Beerkens, E., Cremonini, L., Huisman, J., Kehm, ...