The CHERPA Network, in cooperation with the U-Multirank project team. Project leaders: Frans van Vught (CHEPS)*, Frank Ziegele (CHE)*, Jon File (CHEPS)*. Project co
U-Multirank final report

Table of contents
Tables ... 13
Figures ... 14
Executive Summary ... 17
1 Reviewing current rankings ...
1.4 Impacts of current rankings ... 33
1.5 Indications for better practice ... 35
2 Designing U-Multirank ...
2.4.1 Methodological standards ... 43
2.4.2 User-driven approach ... 44
2.4.3 U-Map and U-Multirank ... 45
2.4.4 Grouping ... 46
2.4.5 Design context ... 46
3 Constructing U-Multirank: Selecting indicators ...
3.3.5 ...
4 Constructing U-Multirank: databases and data collection tools ... 79
4.1 Introduction ... 79
4.2 Databases ... 79
4.2.1 Existing databases ... 79
4.2.2 Bibliometric databases ... 80
4.4 A concluding perspective ... 94
5 Testing U-Multirank: pilot sample and data collection ... 97
5.1 Introduction ... 97
5.2 The global sample ... 97
5.3 Data collection ... 102
Institutional self-reported data ... 103
... 115
6 Testing U-Multirank: results ... 119
6.1 Introduction ... 119
6.2 Feasibility of indicators ... 119
6.2.1 Teaching & Learning ... 122
6.2.2 Research ... 124
6.3.3 ... and patent data ... 135
6.4 Feasibility of up-scaling ... 137
7 Applying U-Multirank: ...
7.2 ... combining U-Map and U-Multirank ... 141
7.3 The presentation modes ... 143
7.3.1 Interactive tables ... 143
7.3.2 Personalized ranking tables ... 146
... 151
8 Implementing U-Multirank: the future ... 153
8.1 Introduction ... 153
8.2 Scope: global or European ... 153
8.3 Personalized and authoritative rankings ... 154
8.4 The need for international data systems ... 156
8.5 Content and organization of the next ...
8.6 ... and models of implementation ... 161
8.7 Towards a mixed implementation model ... 167
8.8 Funding U-Multirank ... 169
8.9 A concluding perspective ... 176
9 List ...
Tables
Table 1-1: Classifications and rankings considered in U-Multirank ... 26
Table 1-2: Indicators and weights in global university rankings ...
Conceptual grid U-Multirank ... 42
Table 3-1: Indicators for the dimension Teaching & Learning in the Focused Institutional and Field-based Rankings ...
Table 4-1: Data elements shared between EUMIDA and U-Multirank: their coverage in national databases ... 84
Table 4-2: Availability of U-Multirank data elements in countries' national databases according to experts in 6 countries (Argentina/AR, Australia/AU, Canada/CA, Saudi Arabia/SA, South Africa/ZA ...

Figures
Figure 5-1: U-Multirank data collection process ... 104
Figure 5-2: Follow-up survey: assessment of data procedures and communication ...
Figure 7-1: Combining U-Map and U-Multirank ... 142
Figure 7-2: User selection of indicators for personalized ranking tables ...
Figure 8-1: Assessment of the four models for implementing U-Multirank ... 166
Figure 8-2: Organizational structure for phase 1 (short term ...
Readers interested in a fuller treatment of many of the topics covered in this report are referred to the project website (www.u-multirank.eu) where the project's three Interim Reports can be found.
We have called this new tool U-Multirank, as this stresses three fundamental points of departure: it is multidimensional,
and key characteristics of U-Multirank
U-Multirank enables such comparisons to be made both at the level of institutions as a whole and in the broad disciplinary fields in
The integration of the already designed and tested U-Map classification tool into U-Multirank enables the creation of the user-selected groups of sufficiently comparable institutions.
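The role U-Map plays here can be sketched as a simple profile filter applied before any comparison is made. This is an illustrative sketch only: the institutions, profile attributes and values below are invented, not actual U-Map dimensions.

```python
from dataclasses import dataclass

@dataclass
class Institution:
    name: str
    profile: dict  # U-Map-style activity profile; attribute names are hypothetical

institutions = [
    Institution("Uni A", {"phd_intensity": "high", "orientation": "international"}),
    Institution("Uni B", {"phd_intensity": "high", "orientation": "international"}),
    Institution("Uni C", {"phd_intensity": "low",  "orientation": "regional"}),
]

def comparable_group(pool, wanted):
    """Keep only institutions whose activity profile matches the
    user-selected characteristics; only these are then compared."""
    return [inst for inst in pool
            if all(inst.profile.get(k) == v for k, v in wanted.items())]

group = comparable_group(institutions, {"phd_intensity": "high"})
print([inst.name for inst in group])  # Uni C is excluded before any ranking
```

The point of the design is that the comparison step never sees institutions with dissimilar profiles, so "apples are compared with apples".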
U-Multirank includes a range of indicators that will enable users to compare the performance of institutions across five dimensions of higher education and research activities:
U-Multirank could provide its users with the on-line functionality to create two general types of rankings:
which institutions are active. U-Multirank would also include the facility for users to create institutional
This personalised interactive ranking table reflects the user-driven nature of U-Multirank. Table 1:
In order to be able to apply the principle of comparability we have integrated the existing transparency tool, the U-Map classification, into U-Multirank.
and research organisations using a set of dimensions similar to those developed in U-Multirank. The underlying indicators differ, as U-Map is concerned with understanding the mix of activities an institution is engaged in,
while U-Multirank is concerned with an institution's performance in these activities (how well it does
Integrating U-Map into U-Multirank enables the creation of user-selected groups of sufficiently comparable institutions that can then be compared in focused institutional
The findings of the U-Multirank pilot study

U-Multirank was tested in a pilot study involving 159 higher education institutions drawn from 57 countries:
and given that U-Multirank is clearly a Europe-based project, this represents a strong expression of interest.
organisational and financial challenges, there are no inherent features of U-Multirank that rule out the possibility of such future growth.
and operational feasibility we have developed a U-Multirank 'Version 1.0' that is ready to be implemented in European higher education
and implementation of U-Multirank

The outcomes of the pilot study suggest some clear next steps in the further development of U-Multirank and its implementation
The refinement of U-Multirank instruments: Some modifications need to be made to a number of indicators and to the data-gathering instruments, based on the experience of the pilot study.
Roll-out of U-Multirank across European countries: Given the need for more transparent information in the emerging European higher education area, all European higher education
and research institutions should be invited to participate in U-Multirank in the next phase. Many European stakeholders are interested in assessing
Targeted recruitment of relevant peer institutions from outside Europe should be continued in the next phase of the development of U-Multirank.
Although U-Multirank has been designed to be user-driven, this does not preclude the use of the tool
In terms of the organisational arrangements for these activities we favour a further two-year project phase for U-Multirank.
In the longer term, on the basis of a detailed analysis of different organisational models for an institutionalised U-Multirank, our strong preference is for an independent non-profit organisation operating with multiple sources of funding.
changing the tactics in the game (more attacks late in a drawn match),1
1 See www.u-multirank.eu
Table 1-1: Classifications and rankings considered in U-Multirank

Type | Name
Classifications | Carnegie classification (USA); U-Map (Europe)
Global league tables and rankings | Shanghai Jiao Tong University's (SJTU
2010), and even already anticipating the current U-Multirank project, the situation has begun to change: ranking producers are becoming more explicit and reflective about their methodologies and underlying conceptual frameworks.
while practically all informed the design of U-Multirank. We have already mentioned some of them. The full list includes:
to ensure that in the development of the set of indicators for U-Multirank we would not overlook any dimensions,
These general conclusions have been an important source of inspiration for how we designed U-Multirank, a new, global, multidimensional ranking instrument.
multidimensional global ranking tool that we have called 'U-Multirank'. First, we present the general design principles that to a large extent have guided the design process.
Finally, we outline a number of methodological choices that have a major impact on the operational design of U-Multirank.

2.2 Design Principles

U-Multirank aims to address the challenges identified as arising from the various currently existing ranking tools.
when designing and constructing U-Multirank. Our fundamental epistemological argument is that, as all observations of reality are theory-driven (formed by conceptual systems), an 'objective ranking' cannot be developed (see chapter 1). Every ranking will reflect the normative design and selection criteria of its constructors.
These principles underpin the design of U-Multirank, resulting in a user-driven, multidimensional and methodologically robust ranking instrument.
In addition, U-Multirank aims to enable its users to identify institutions and programs that are sufficiently comparable to be ranked,
For the design of U-Multirank we specify our own conceptual framework in the following section.

2.3 Conceptual framework

A meaningful ranking requires a conceptual framework
Conceptual grid U-Multirank

Stages (columns): Enabling (Input, Process) and Performance (Output, Impact), set against the functions' context.
Functions & Audiences (rows): Teaching & Learning; Research; Knowledge Transfer; International Orientation; Regional Engagement.

Using this conceptual framework we have selected the following five dimensions as the major content categories of U-Multirank:
In our design of U-Multirank we focused on the selection of output and impact indicators.
U-Multirank intends to be a multidimensional performance assessment tool and thus needs to include indicators that relate to the performance of higher education
multidimensional ranking tool like U-Multirank can be developed. In this section we explain the various methodological choices made when designing U-Multirank.
2.4.1 Methodological standards

In addition to the content-related conceptual framework, the new ranking tool and its underlying indicators must also be based on methodological standards of empirical research, validity and reliability
In addition, because U-Multirank is an international comparative transparency tool, it must deal with the issue of comparability across cultures and countries; and finally,
U-Multirank has to address the issue of feasibility.

Validity

(Construct) validity refers to the evidence about
Feasibility

The objective of U-Multirank is to design a multidimensional global ranking tool that is feasible in practice.
can U-Multirank be applied in reality, and can it be applied with a favourable relation between benefits and costs in terms of financial and human resources?
We report on the empirical assessment of the feasibility of U-Multirank in chapter 6 of this report.
2.4.2 User-driven approach

To guide the reader's understanding of U-Multirank, we now briefly describe the way we have methodologically worked out the principle of being user-driven (see section 2.2). We propose an interactive web-based approach,
2.4.3 U-Map and U-Multirank

The principle of comparability (see section 2.2) calls for a method that helps us find institutions that are comparable for the purposes of ranking. Such a method
can be found in the connection of U-Multirank with U-Map (see www.u-map.eu). U-Map,
U-Map can prepare the ground for U-Multirank in the sense that it helps identify those higher education institutions that are comparable and for which,
therefore, performance can be compared by means of the U-Multirank ranking tool. A detailed description of the methodology used in this classification can be found on the U-Map website (http://www.u-map.eu/methodology doc/) and in the final report of the U-Map project,
U-Multirank focuses on the performance aspects of higher education and research institutions. U-Multirank shows how well higher education institutions are performing in the context of their institutional profile.
Thus, the emphasis is on indicators of performance, whereas in U-Map it lies on the enablers of that performance: the inputs and activities.
U-Map and U-Multirank share the same conceptual model. The conceptual model provides the rationale for the selection of the indicators in both U-Map and U-Multirank, both
of which are complementary instruments for mapping diversity: horizontal diversity in the classification and vertical diversity in the ranking.
As an alternative, U-Multirank uses a grouping method: instead of calculating exact league-table positions, we assign institutions to a limited number of groups.
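A minimal sketch of such a grouping method follows. It is illustrative only: the scores are invented, and the simple equal-sized quantile cut-offs are an assumption, not the project's actual grouping rules.

```python
def assign_groups(scores, n_groups=3):
    """Assign each institution to a rank group (1 = top group) based on
    quantile cut-offs, instead of an exact league-table position."""
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    size = len(ranked)
    groups = {}
    for position, (institution, _score) in enumerate(ranked):
        # integer quantile: first third -> group 1, second third -> group 2, ...
        groups[institution] = min(n_groups, position * n_groups // size + 1)
    return groups

# Hypothetical normalized scores on a single indicator
scores = {"A": 0.91, "B": 0.88, "C": 0.52, "D": 0.50, "E": 0.11, "F": 0.09}
print(assign_groups(scores))  # {'A': 1, 'B': 1, 'C': 2, 'D': 2, 'E': 3, 'F': 3}
```

Within a group, small and statistically meaningless score differences no longer produce different rank positions, which is the motivation for grouping.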
2.4.5 Design context

In this chapter we have described the general aspects of the design process for U-Multirank.
we have described the conceptual framework from which the five dimensions of U-Multirank are deduced, and we have outlined a number of methodological approaches to be applied in U-Multirank.
Together these elements form the design context from which we have constructed U-Multirank. The design choices made here are in accordance with both the Berlin Principles and the recommendations of the Expert Group on the Assessment of University-based Research.
The Berlin Principles4 emphasize, among other things, the importance of being clear about the purpose of rankings and their target groups,
Based on our design context, in the following chapters we report on the construction of U-Multirank.

5 Expert Group on Assessment of University-Based Research (2010),
Brussels.

3 Constructing U-Multirank: Selecting indicators

3.1 Introduction

Having set out the design context for U-Multirank in the previous chapter,
we now turn to a major part of the process of constructing U-Multirank: the selection and definition of the indicators.
The other important components of the construction process for U-Multirank are the databases and the data collection tools that allow us to actually 'fill' the indicators.
These will be discussed further in chapter 4 as we explain the design of U-Multirank in more detail.
In chapters 5 and 6 we report on the U-Multirank pilot study, during which we analysed the data quality
presented to them as potential items in the five dimensions of U-Multirank (see 3.3). In addition,
we invited feedback from international experts in higher education and research and from the Advisory Board of the U-Multirank project.
the indicators selected for the pre-test phase in U-Multirank (see 6.2) were then grouped into three categories:
The outcome of the pre-test was then used as further input for the wider pilot, in which the actual data was collected to quantify the indicators for U-Multirank at both the institutional and the field level.
As one of the main objectives of our U-Multirank project is to inform stakeholders such as students,
& Learning indicators that were selected for the pilot test of U-Multirank. The column on the right-hand side includes some of the comments
Given the increasing complexity of the research function of higher education institutions and its extension beyond PhD-awarding institutions, U-Multirank adopts a broad definition of research,
an indicator reflecting arts-related output is included in U-Multirank as well. However, data availability poses some challenges here.
Therefore it was decided to keep them in the list of indicators for U-Multirank's institutional ranking.
however, already under the Research dimension in U-Multirank. In the case of texts, it is customary to distinguish between two forms:
While publications are part of the Research dimension in U-Multirank, patents will be included under the Knowledge Transfer dimension.
& Learning and Regional Orientation dimensions included in U-Multirank. Knowledge transfer through people also takes place through networks
films and exhibition catalogues have been included in the scholarly outputs covered in the Research dimension of U-Multirank.
U-Multirank particularly wants to capture aspects of knowledge transfer performance. However, given the state of the art in measuring knowledge transfer (Holi et al.
The latter two dimensions are covered in the U-Multirank dimension 'Knowledge Transfer'. Indicators for the social dimension of the third mission comprise indicators on international mobility (covered in the U-Multirank dimension International Orientation) and a very limited number of indicators on regional engagement.
Activities and indicators on regional and community engagement can be categorized in three groups: outreach, partnerships and curricular engagement.18
U-Multirank has suggested starting with the existing list of regions in the Nomenclature of Territorial Units for Statistics (NUTS) classification developed
and 7 the pilot study on the empirical feasibility assessment of the U-Multirank tool and its various indicators will be discussed.
As a result of this pilot assessment the final list of indicators will be presented.

4 Constructing U-Multirank: databases and data collection tools
and data collection instruments used in constructing U-Multirank. The first part is an overview of existing databases, mainly on bibliometrics and patents.
and from students.

4.2 Databases

4.2.1 Existing databases

One of the activities in the U-Multirank project was to review existing rankings
If existing databases can be relied on for quantifying the U-Multirank indicators, this would be helpful in reducing the overall burden on institutions in handling the U-Multirank data requests.
For other aspects and dimensions, U-Multirank will have to rely on self-reported data. Regarding research output and impact, there are worldwide databases on journal publications and citations.
Our analysis of data availability was completed with a brief online consultation with the group of international experts connected to U-Multirank (see section 4.2.5). The international experts were asked to give their assessment of the situation with respect to data availability in some of the non-EU countries included in U-Multirank.21

21 The U-Multirank project was granted access to the preliminary EUMIDA data in order to learn about data availability in the countries covered by EUMIDA.

4.2.2 Bibliometric databases

There are a number of international databases
The bibliometric methodologies applied in international comparative settings such as U-Multirank usually draw their information from publications that are released in scientific and technical journals.
U-Multirank therefore makes use of international bibliometric databases to compile some of its research performance indicators
To compile the publication-related indicators in the U-Multirank pilot study, bibliometric data was derived from the October 2010 edition of the Web of Science bibliographical database.
All the selected institutions in the U-Multirank pilot study produced at least one Web of Science-indexed research publication during the years 1980-2010.
For the following six indicators selected for inclusion in the U-Multirank pilot test (see chapter 6), one can derive data from the CWTS/Thomson Reuters Web of Science database:
#6) that were constructed specially for U-Multirank and that have never been used before in any international classification or ranking.
4.2.3 Patent databases

As part of the indicators in the Knowledge Transfer dimension, U-Multirank selected the number of patent applications for
For U-Multirank, patent data were retrieved from the European Patent Office (EPO). Its Worldwide Patent Statistical Database (version October 2009),25 also known as PATSTAT, is designed
4.2.4 Data availability according to EUMIDA

Like the U-Multirank project, the EUMIDA project (see http://www.eumida.org) collects data on individual higher education and research institutions.
The EUMIDA and U-Multirank project teams agreed to share information on issues such as definitions of data elements
since U-Map aims to build activity profiles for individual institutions whereas U-Multirank constructs performance profiles.
Table 4-1 below shows the U-Multirank data elements that are covered in EUMIDA and whether information on these data elements may be found in national databases (statistical offices, ministries, rectors' associations, etc.).
The table illustrates that information on only a few U-Multirank data elements is available from national databases and,
Table 4-1: Data elements shared between EUMIDA and U-Multirank: their coverage in national databases

Dimension | EUMIDA and U-Multirank data element | European countries where data element is available in national databases
Teaching & Learning | relative rate of graduate unemployment | CZ, FI, NO, SK, ES
Research | expenditure on research | AT*, BE, CY, CZ*, DK, EE, FI, GR*, HU, IT, LV*, LT*, LU, MT*, NO, PL*, RO*, SI*, ES, SE, CH,
| ... | IE*, IT, LU, MT*, NO, NL (p), PL*, SI, ES, UK
International Orientation | (no overlap between U-Multirank and EUMIDA) |
Regional Engagement | (no overlap between U-Multirank and EUMIDA) |
Source:
4.2.5 Expert view on data availability in non-European countries

The Expert Board of the U-Multirank project was consulted to assess, for their six countries (all from outside Europe), the availability of data
related to the U-Multirank indicators.27 They gave their judgement on whether data was available in national databases and/or in the institutions themselves.
Table 4-2: Availability of U-Multirank data elements in countries' national databases according to experts in 6 countries (Argentina/AR, Australia/AU, Canada/CA, Saudi Arabia/SA, South Africa/ZA, United States/US)

Dimension | U-Multirank data element | Countries where data element is available in national databases | Countries where data element is available in institutional databases
Teaching & Learning | ...

Source: U-Multirank expert survey.

If we look at the outcomes, it appears that for the Teaching
or definitions that differ from the ones used for the questionnaires applied in U-Multirank (see next section).
the U-Multirank project had to rely largely on self-reported data (both at the institutional
the U-Map questionnaire is an instrument for identifying similar subsets of higher education institutions within the U-Multirank sample.
Respondents from the institutions were advised to complete the U-Map questionnaire first, before completing the other questionnaires.

4.3.1.2 Institutional questionnaire

By means of U-Multirank's institutional questionnaire,28
Data elements from U-Map are transferred automatically to the U-Multirank questionnaire using a 'transfer tool'.
The U-Multirank questionnaires were tested in terms of cultural/linguistic understanding, clarity of definitions of data elements and feasibility of data collection.
In selecting the institutions for the pre-test, the U-Multirank team considered the geographical distribution and the type of institutions.
to cover all relevant issues on the five dimensions of U-Multirank and to limit the questionnaire in terms of length.
In order to arrive at a meaningful and comprehensive set of indicators at the conclusion of the U-Multirank pilot study, we had to aim for a broad data collection covering a broad range of indicators.
a number of supporting instruments were prepared for the four U-Multirank surveys. These instruments ensure that respondents have a common understanding of definitions and concepts.
A glossary of indicators for the four surveys was published on the U-Multirank website. Throughout the data collection process the glossary was updated regularly.
This allowed questions to be asked concerning the questionnaires and enabled contact with the U-Multirank team on other matters.
A technical specifications protocol for U-Multirank was developed, introducing additional functions in the questionnaire to ensure that a smooth data collection could take place:
the option to transfer data from the U-Map to the U-Multirank institutional questionnaire, and the option to have multiple users access the questionnaire at the same time.
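Such a pre-fill transfer between questionnaires can be sketched as a mapping over shared data elements. The field names below are hypothetical, not the questionnaires' actual element names.

```python
# Shared data elements: U-Map questionnaire key -> U-Multirank questionnaire key
SHARED_FIELDS = {
    "total_students": "tl_total_students",
    "research_expenditure": "res_expenditure",
}

def prefill(umap_answers, umr_answers):
    """Copy overlapping answers from the U-Map questionnaire into the
    U-Multirank institutional questionnaire, without overwriting anything
    the respondent has already entered there."""
    filled = dict(umr_answers)
    for src, dst in SHARED_FIELDS.items():
        if src in umap_answers and dst not in filled:
            filled[dst] = umap_answers[src]
    return filled

umap = {"total_students": 21000, "research_expenditure": 48.5}
# res_expenditure was already answered, so only the student count is copied
print(prefill(umap, {"res_expenditure": 50.0}))
```

The design goal is simply to avoid asking institutions for the same data element twice.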
We updated the U-Multirank website regularly and provided information about the steps/time schedules for data collection.
All institutions had clear communication partners from the U-Multirank team.

4.4 A concluding perspective

This chapter, providing a quick survey of existing databases,
The U-Multirank questionnaires were therefore accompanied by a glossary of definitions and an FAQ facility to improve the reliability of the answers.
5 Testing U-Multirank: pilot sample and data collection
and construction process for U-Multirank, we will describe the feasibility testing of this multidimensional ranking tool.
This test took place in a pilot study specifically undertaken to analyse the actual feasibility of U-Multirank on a global scale.
one of the basic ideas of U-Multirank is the link to U-Map. U-Map is an effective tool for identifying the institutional activity profiles of institutions similar enough to be compared in rankings.
which makes it insufficiently applicable for the selection of the sample of pilot institutions for the U-Multirank feasibility test.
This offered a clear indication of a broad variety of institutional profiles. Some universities applied through the U-Multirank website to participate in the feasibility study.
The 159 institutions that agreed to take part in the U-Multirank pilot are spread over 57 countries.
In the US the U-Multirank project is perceived as strongly European-focused, which kept some institutions from participating.
The problems with some countries are an important aspect regarding the feasibility of a global implementation of U-Multirank.
All in all, the intention to attain a sufficient international scope in the U-Multirank pilot study by means of a global sample can be seen as successful.
Regional distribution of participating institutions

Region and Country | Initial proposal for number of institutions (July 2010) | Institutions in the final pilot selection (February 2011) | Institutions that confirmed participation (April 2011) | Institutions which delivered U-Multirank institutional data (April 2011) | Institutions which delivered U-Multirank institutional data and U-Map data

I. EU 27 (population in millions)
Austria (8m)
Netherlands (16m) | 3 | 7 | 3 | 3 | 3
Poland (38m) | 6 | 12 | 7 | 7 | 6
Other Asia | 5 | 2
The Philippines | 1 | 1 | 1
Taiwan | 1 | 1 | 0
Vietnam | 2
o U-Multirank institutional questionnaire
Field-based ranking:
o U-Multirank field-based questionnaires
o U-Multirank student survey

Figure 5-1: U-Multirank data collection process

The institutions were given seven weeks to collect the data, with deadlines set according to the dates the institutions confirmed their participation.
The 'grouping' criterion for this was the successful submission of the contact form. The next step to ensure a high response rate was to review
We advised the institutions to start working with the questionnaires in a certain order, beginning with the U-Map and then the U-Multirank questionnaires,
since a tool had been developed to facilitate the transfer of overlapping information from the U-Map questionnaire to the U-Multirank institutional questionnaire.
Organising a survey among students on a global scale was one of the major challenges in U-Multirank.
critical comments indicated some confusion about the relationship between the U-Map and U-Multirank institutional questionnaires.
regional engagement) (see Figure 5-4).

[Figure: bar chart of ratings from 'very good' to 'very poor' for general procedures and for communication with U-Multirank]
and the U-Multirank technical specification email (see appendices 10 and 11) with the institutions to ensure that a smooth data collection could take place.
all universities had clear communication partners in the U-Multirank team. The main part of the verification process consisted of the data cleaning procedures after receiving the data.
The following four indicators have been especially designed for U-Multirank: international joint research publications; university-industry joint research publications;
In a possible next stage of U-Multirank we expect to apply a different, and more flexible, way of delineating regions
The bibliometric data in the pilot version of the U-Multirank database refer to one measurement per indicator.
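As an illustration of how a bibliometric indicator such as the share of international joint research publications can be derived from affiliation data, consider the sketch below. The record format is an assumption made for the example; the real computation runs on Web of Science records.

```python
def international_copub_share(publications):
    """Share of publications whose co-author affiliations span at least
    two different countries (an international joint research publication)."""
    if not publications:
        return 0.0
    joint = sum(1 for pub in publications if len(set(pub["countries"])) >= 2)
    return joint / len(publications)

# Hypothetical publication records with affiliation country codes
pubs = [
    {"title": "p1", "countries": ["NL", "DE"]},        # international co-publication
    {"title": "p2", "countries": ["NL", "NL"]},        # domestic only
    {"title": "p3", "countries": ["NL", "US", "NL"]},  # international co-publication
    {"title": "p4", "countries": ["NL"]},
]
print(international_copub_share(pubs))  # 0.5
```

A university-industry joint publication indicator would follow the same counting pattern, with a sector attribute replacing the country attribute.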
Although all the HEIs that participated in the U-Multirank pilot study produced at least one WoS-indexed research publication during the years 1980-2010
In follow-up stages of U-Multirank we plan to lower the threshold values for WoS-indexed publication output
Second, and specifically for the U-Multirank pilot, keyword searches were designed and tailored for each institute individually,
We have argued that the field-based rankings of indicators in each dimension contribute significantly to the value and the usability of U-Multirank.
At present, however, the breakdown of patent indicators by the fields defined in the U-Multirank pilot study (business studies,
Therefore we were unable to produce patent analyses at the field-based level of U-Multirank.

6 Testing U-Multirank: results

6.1 Introduction

The main objective of the pilot study was to empirically test the feasibility of the U-Multirank instrument.
and the potential up-scaling of U-Multirank to a globally applicable multidimensional ranking tool.

6.2 Feasibility of indicators

In the pilot study we analyzed the feasibility of the various indicators that were selected after the multi-stage process of stakeholder
and the Advisory Group.

6.2.1 Teaching & Learning

The first dimension of U-Multirank is Teaching & Learning.
o the U-Map questionnaire, to identify institutional profiles
o the U-Multirank institutional questionnaire
o the U-Multirank field-based questionnaire
We supported this data collection with extensive data cleaning processes,
The parallel institutional data collection for U-Map and U-Multirank caused some confusion. Although a tool was implemented to pre-fill data from U-Map into U-Multirank,
some confusion remained concerning the link between the two instruments. To test some variants, the institutional and field-based questionnaires were implemented with different features (e.g. the definition of international staff).
In some countries the U-Multirank student survey conflicted with existing national surveys, which in some cases are highly relevant for institutions.
It should be evaluated how far U-Multirank and national surveys could be harmonized in terms of questionnaires and, at least, in terms of timing.
is it possible to extend U multirank to a comprehensive global coverage and how easy would it be to add additional fields?
In terms of the feasibility of U multirank as a potential new global ranking tool, the results of the pilot study are positive,
and taking into account that it is clear that U multirank is based a Europe initiative, this represents a strong expression of worldwide interest.
Our single caveat concerns an immediate global-level introduction of U multirank. The pilot study suggests that a global multidimensional ranking is unlikely to prove feasible in the sense of achieving extensive coverage levels across the globe in the short term.
From their participation in the various stakeholder meetings, we can conclude that there is broad interest in the further development and implementation of U multirank.
and 2) the conceptual interpretation of the 'user-driven approach' applied in U multirank. It would be interesting to involve LERU again during a follow-up project
The other aspect of the potential up-scaling of U multirank is the extension to other fields.
Any extension of U multirank to new fields must deal with two questions: the relevance and meaningfulness of existing indicators for those fields,
While the U multirank feasibility study focused on the pilot fields of business studies and engineering, some issues of up-scaling to other fields have been discussed in the course of the stakeholder consultation.
Following the user-and stakeholder-driven approach of U multirank, we suggest that field-specific indicators for international rankings should be developed together with stakeholders from these fields.
When additional fields are addressed in U multirank, some field-specific indicators will have to be developed. Based on the experience of the CHE ranking, this will vary by field, with some fields requiring no additional indicators
we conclude that up-scaling in terms of addressing a larger number of fields in U multirank is certainly feasible.

7 Applying U multirank:
This approach implies the user-driven notion of ranking, which is also a basic feature of U multirank.
The presentation of U multirank results outlined in this chapter strictly follows this user-driven approach. But by relating institutional profiles (created in U-Map) with multidimensional rankings
U multirank introduces a second level of interactive ranking beyond the user-driven selection of indicators:
U multirank has a much broader scope and intends to include a wider variety of institutional profiles.
Hence U multirank offers a tool to identify and select institutions that are truly comparable in terms of their institutional profiles.

7.2 Mapping diversity: combining U-Map and U multirank

From the beginning of the U multirank project one of the basic aims was that U multirank should, in contrast to existing global rankings,
The combination of U-Map and U multirank offers a new approach to user-driven rankings.
and hence the sample of institutions to be compared in U multirank.

Figure 7-1: Combining U-Map and U multirank

Our user-driven interactive web tool will include both steps, too.
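The two steps can be illustrated with a minimal sketch: first select institutions with a comparable U-Map profile, then rank only that subset on a chosen indicator. All institution names, profiles and scores below are invented for illustration; they are not data from the project.

```python
# Hypothetical sketch of the two-step U-Map / U multirank logic.
# Step 1 (U-Map): keep only institutions matching the chosen activity profile.
# Step 2 (U multirank): order that comparable subset on a selected indicator.

def select_comparable(institutions, profile):
    """Step 1: filter institutions by their (invented) U-Map profile label."""
    return [i for i in institutions if i["profile"] == profile]

def rank(institutions, indicator):
    """Step 2: sort the comparable subset on one indicator, best first."""
    return sorted(institutions, key=lambda i: i[indicator], reverse=True)

# Invented example data.
institutions = [
    {"name": "A", "profile": "research-intensive", "citation_rate": 1.4},
    {"name": "B", "profile": "teaching-focused", "citation_rate": 0.7},
    {"name": "C", "profile": "research-intensive", "citation_rate": 1.1},
]

comparable = select_comparable(institutions, "research-intensive")
ranking = rank(comparable, "citation_rate")
print([i["name"] for i in ranking])  # -> ['A', 'C']
```

The point of the two-step design is that institution B never enters the ranking at all: it has a different profile, so comparing it on a research indicator would be misleading.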
Users will be offered the option to decide if they want to produce a focused institutional ranking or a field-based ranking,
U multirank includes different ways of presenting the results.

7.3 The presentation modes

Presenting ranking results requires a general model for accessing the results,
In U multirank the presentation of data allows for both: a comparative overview of indicators across institutions,
U multirank produces indicators and results on different levels of aggregation leading to a hierarchical data model:
In U multirank we present the results alphabetically or by rank groups (see chapter 2). In the first layer of the table (field-based ranking),
7.3.2 Personalized ranking tables

The development of an interactive user-driven approach is a central feature of U multirank.
when applying U multirank. An intuitive, appealing visual presentation of the main results will introduce users to the performance ranking of higher education institutions.
so that there is a recognizable U multirank presentation style and users are not confused by multiple visual styles.
and discussed at a U multirank stakeholder workshop, and there was a clear preference for the 'sunburst' chart similar to the one used in U-Map.
The colours symbolize the five U multirank dimensions, with the rays representing the individual indicators. In this chart the grouped performance scores of institutions on each indicator are represented by the length of the corresponding rays:
U multirank addresses the issue of contextuality by applying the design principle of comparability (see chapter 2). In U multirank, rankings are created only among institutions that have sufficiently similar institutional profiles.
Combining U-Map and U multirank produces an approach in which comparable institutions are identified before they are compared in one or more rankings.
In addition, U multirank intends to offer relevant contextual information on institutions and fields. Contextual information does not allow for causal analyses
During the further development of U multirank the production of contextual information will be an important topic.

31 See www.lisboncouncil.net

7.5 User-friendliness

U multirank is conceived as a user-driven and stakeholder
In U multirank a number of features are included to increase user-friendliness. In the same way as there is no one-size-fits-all approach to rankings in terms of indicators,
U multirank, as any ranking, will have to find a balance between the need to reduce the complexity of information on the one hand and, at the same time,
U multirank wants to offer a tailor-made approach to presenting results, serving the information needs of different groups of users and taking into account their level of knowledge about higher education and higher education institutions.
In accordance with EU policies on e-accessibility,32 barriers to access to the U multirank results and data will be removed as much as possible.
i.e. users from within higher education will be able to use an English version of U multirank. In particular for 'lay users' (e.g. prospective students), the existence of various language versions of U multirank would increase usability.
However, translation of the web tool and the underlying data is a substantial cost factor.
But at least an explanation of how to use U multirank and the glossary and definitions of indicators and key concepts should be available in as many European languages as possible.

32 See http://europa.eu/legislation_summaries/information_society/l24226h_en.htm (retrieved on 10 May 2011)
and a feasible business model to finance U multirank (see chapter 8). Another important aspect of user-friendliness is the transparency about the methodology used in rankings.
For U multirank this includes a description of the basic methodological elements (institutional and field-based rankings,
8 Implementing U multirank: the future

8.1 Introduction

An important aspect in terms of the feasibility of U multirank is the question of implementing the system on a widespread and regular basis
It is clear that the implementation of U multirank is a dynamic and only partially predictable process
and we must differentiate between a two-year pilot phase and a longer-term implementation/institutionalisation of U multirank.
One of our basic suggestions regarding transparency in higher education and research is the integration of U-Map and U multirank.
Therefore, many of the conclusions regarding the operational implementation in the final U-Map report (see www. u-map. eu) are also valid for U multirank.
8.2 Scope: global or European

The pilot test showed some problems with the inclusion into U multirank of institutions from specific countries outside Europe.
Clearly, with participation in U multirank on a voluntary basis higher education institutions will have to be convinced of the benefits of participation.
This leads to the question of the scale of international scope that U multirank could and should attain.
We would argue that U multirank should aim to achieve a relatively wide coverage of European higher education institutions as quickly as possible during the next project phase. In Europe the feasibility
But U multirank should remain a global tool. There are institutions all over the world interested in benchmarking with European universities;
and the impression that the U multirank instrument is 154 only there to serve European interest should be avoided.
The pilot study proves that U multirank can be applied globally. Based on the pilot results we suggest that the extension beyond Europe could best be organized systematically
which point the inclusion of these important peer institutions will hopefully motivate more institutions to join U multirank.
Based on our pilot project we believe that it is feasible to add five new fields in each of the first three years of continued implementation of U multirank.
The 'heart' of U multirank is the idea of creating a user-driven, flexible tool to obtain subjective rankings that are relevant from the perspective of the individual user.
with U multirank it is also possible to create so-called 'authoritative' ranking lists from the database.
On the other hand, in the first phase of implementation, U multirank should be perceived by all potential users as relevant for their individual needs.
however, is on establishing the flexible web tool.

8.4 The need for international data systems

U multirank is not an isolated system,
The development of the European database resulting from EUMIDA should take into account the basic data needs of U multirank.
A second aspect of integrated international data systems is the link between U multirank and national ranking systems.
U multirank implies a need for an international database of ranking data consisting of indicators which could be used as a flexible online tool
In Spain we have the example of Fundacion CYD planning to implement a field-based ranking system based on U multirank standards.
In its operational phase the U multirank unit should develop standards and a set of basic indicators that national initiatives would have to fulfil
the U multirank unit will be able to pre-fill the data collection instruments and has to fill the gaps to attain European or worldwide coverage.
Finalisation of the various U multirank instruments

1. Full development of the database and web tool.
The prototypes of the instrument will demonstrate the outcomes and benefits of U multirank.

2. Setting of standards and norms and further development of underdeveloped dimensions and indicators.
These parts of the ranking model should be developed further.

3. Update of data collection tools/questionnaires according to the revision and further development of indicators and the experiences from the U multirank project.
In the first round of U multirank pre-filling proved difficult. The testing of national data systems for their pre-filling potential and the development of suggestions for the promotion of pre-filling are important steps to lower the costs of the system for the institutions.
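Pre-filling can be pictured as a simple merge: data already held in a national system are copied into the questionnaire, and only the remaining gaps are flagged for the institution to complete. The field names and figures below are invented for illustration; they are not the project's actual questionnaire schema.

```python
# Hedged sketch of questionnaire pre-filling from a national data system.
# Field names and values are hypothetical, not the real U multirank schema.

QUESTIONNAIRE_FIELDS = ["students_total", "staff_fte", "graduation_rate"]

def prefill(national_record):
    """Copy available national data into the questionnaire;
    report which fields the institution still has to supply."""
    answers = {
        f: national_record[f]
        for f in QUESTIONNAIRE_FIELDS
        if f in national_record
    }
    gaps = [f for f in QUESTIONNAIRE_FIELDS if f not in national_record]
    return answers, gaps

# Invented national record: two of the three fields are already available.
answers, gaps = prefill({"students_total": 21000, "staff_fte": 1500})
print(gaps)  # -> ['graduation_rate']
```

The more fields a national system can supply, the shorter the gap list, which is exactly the cost-lowering effect pre-filling is meant to achieve.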
and the international U multirank database should be realized.

Roll-out of U multirank across EU+ countries

5. Invitation of EU+ higher education institutions and data collection.
Within the next two years all identifiable European higher education institutions should be invited to 159 participate in the institutional as well as in the three selected field-based rankings.
The combined U-Map/U multirank approach should be tested further by developing the means to produce the first authoritative ranking lists for universities with selected profiles.
If the objective is to establish U multirank as largely self-sustainable, a business plan is required. It could be a good idea to involve organizations with professional business expertise in the next project phase in order to work out a business plan,
the user-driven approach imbues U multirank with strong democratic characteristics and a role far from commercial interests,
if complete funding from nonprofit sources is unrealistic.

9. Formal institutionalization of the U multirank unit.
During the next project phase an operational organization to implement U multirank will need to be created and a governance and funding structure established,
The features of and opportunities offered by U multirank need to be communicated continuously. Since the success of U multirank requires institutions' voluntary participation, a comprehensive promotion and recruitment strategy will be needed, requiring the involvement of many key players (governments, European commission, higher education associations, employer organizations, student organizations).
A crucial issue related to communication is the user-friendliness of U multirank. This could be guaranteed by the smoothness of data collection
The 11 elements form the potential content of the next U multirank project phase, transforming U multirank from a feasible concept into a fully developed instrument, rolled out and ready for continuous operation.

8.6 Criteria and models of implementation

An assessment of the various options for the organizational implementation of U multirank requires a set of analytical criteria.
The following criteria represent notions of good practice for this type of implementation process, such as governance
A key element of U multirank is the flexible, stakeholder-oriented, user-driven approach. The implementation has to ensure this approach,
In general, the involvement of relevant actors in both the implementation of U multirank and its governance structure is a crucial success factor.
those parties taking responsibility for the governance of U multirank should be accepted broadly by stakeholders. Those who will be involved in the implementation should allow their names to be affiliated with the new instrument
and take responsibility in the governing bodies of the organizational structure. We identified four basic options for responsibility structures for U multirank:
Table: Assessment of the four models for implementing U multirank (Commercial, Government, Stakeholder, Independent) against the criteria Inclusiveness, International orientation, Independence, Professionalism, Sustainability, Efficiency, Service orientation and Credibility.
A suggestion would be to organize the implementation of U multirank in such a way that basic ranking results can be provided for free to the participating institutions,
We believe that it is not reasonable in the initial phase of implementing U multirank to establish a new professional organization for running the system.
There should be a next U multirank project phase before a ranking unit is established.

Figure 8-2: Organizational structure for phase 1 (short term)

We suggest that during the next two years of the project phase the current project structure of U multirank should be continued.
U multirank The following analysis of cost factors and scenarios is based on the situation of running U multirank as an established system.
Costs have been estimated based on this projection but will not become part of the final report. The cost estimations showed that U multirank is an ambitious project also in financial terms,
but in general it seems to be financially feasible. A general assumption, based on EU policy, is that U multirank should become self-sustainable without long-term basic funding by the European commission.
EC contributions will decline over time and new funding sources will have to be found. However, from our calculations it became clear that there is no single financial source from
which we could expect to cover the whole costs of U multirank; the only option is a diversified funding base with a mix of financial sources.
If U multirank is not dependent on one major source, a further advantage lies in the distribution of financial risks.
It is difficult to calculate the exact running costs associated with U multirank because these depend on many variables.
The major variable cost drivers of U multirank are:

The number of countries and institutions involved. This determines the volume of data that has to be processed and the communication efforts.
The surveys that are needed to cover all indicators outlined in the data models of U multirank.
The intention of the European commission is to develop U multirank into a self-sustaining instrument, requiring no EU funding after its implementation phase.
Nevertheless the European commission should consider the option of continued support of part of U multirank's basic funding in the long run
and ensure a formal role for the EC as a partner in U multirank. To promote transparency
To ensure students' free access to U multirank data, the EC could also, in the long run, provide direct funding of user charges that would
Discussions with potential funders so far have shown that the funding of U multirank has to rely on a mix of income streams.
since U multirank with its related surveys is an expensive form of ranking and the commercial sources are limited.
Charges to the users of the U multirank web tool would seriously undermine the aim of creating more transparency in European higher education by excluding students for example;
if it would work for U multirank. The questions are: would paying for rankings produce a 'value for money' attitude on the part of institutions (or countries?
EC, foundations, other sponsors) with a combination of a variety of market sources contributing cost coverage plus some cost reductions through efficiency gains.

8.9 A concluding perspective

U multirank
after a next project phase of two years, institutionalisation of a U multirank unit could be organized for the longer term.
But the organizational basis of U multirank should be the nonprofit model with elements of the other options included.
In particular, the aim of financial self-sustainability for U multirank makes the combination of some nonprofit basic funding with the offer of commercial products inevitable.
and governance structure of U multirank. The analysis of the fixed and flexible cost determinants could lead to a calculation of the cost,