Secondly, the university's continuous capacity to provide students with new ideas, skills and entrepreneurial talent has become a major asset in the Knowledge Society.
Students are not only the new generations of professionals in various scientific disciplines, business, culture etc., but they can also be trained and encouraged to become entrepreneurs and firm founders, contributing to economic growth and job creation (see, for example, StartX, Stanford's student start-up accelerator, which in less than a year trained 90 founders and 27 companies,
or the Team Academy, the Entrepreneurship Centre of Excellence of JAMK University of Applied Sciences in Jyväskylä, Finland, where students run their own cooperative businesses based on real-life projects).
albeit not always harmonious, coexistence of tacit and codified knowledge, which is translated into different modes of learning and innovation, e.g. the Science, Technology and Innovation (STI) mode, based on the production and use of codified scientific and technical knowledge,
The fashion department of the Antwerp Academy in Belgium encourages students to create and explore innovative forms,
and decided to create a university campus with advanced academic research groups in order to revive the paper industry, one of the local traditional industries (Svensson, Klofsten and Etzkowitz, 2011).
In 2010, Kista Science City counted over 1,000 ICT companies and over 5,000 ICT students and scientists, a high concentration of expertise, innovation and business opportunities within ICT
6. RELEVANCE OF TRIPLE HELIX SYSTEMS FOR KNOWLEDGE-BASED REGIONAL INNOVATION STRATEGIES
Regional innovation policies have focused traditionally on the promotion of localized learning processes
and economic growth in evolutionary systems where institutions and learning processes are of central importance (Freeman, 1987, 1988;
and localised learning (Lundvall, 1992), but became increasingly blurred due to business and technology internationalisation extending technological capabilities beyond national borders,
a set of regional actors aiming to reinforce regional innovation capability and competitiveness through technological learning (Doloreux and Parto, 2005),
Maskell, P. and Malmberg, A. (1999), 'Localised Learning and Industrial Competitiveness', Cambridge Journal of Economics, Vol. 23, pp. 167-185. Mason, C. and Harrison, R. (1992),
which is a typical starting point in many of the related approaches, such as learning regions or innovative milieus.
technical and vocational qualifications are often more important in this respect (Gray, 2006). Over 58 percent of the entrepreneurs participating in this study had not been educated beyond elementary school.
Antonelli, C. and Quéré, M. (2002), 'The governance of interactive learning within innovation systems', Urban Studies, Vol. 39 Nos 5-6, pp. 1051-63.
Cohen, W.M. and Levinthal, D.A. (1990), 'Absorptive capacity: a new perspective on learning and innovation', Administrative Science Quarterly, Vol. 35 No. 1, pp. 128-52.
Lundvall, B.-Å. (Ed.) (1992), National Systems of Innovation: Towards a Theory of Innovation and Interactive Learning, Pinter, London. Macpherson, A. and Holt, R. (2007), 'Knowledge, learning and small firm growth:
a systematic review of the evidence', Research Policy, Vol. 36 No. 2, pp. 172-92.
Malmberg, A. and Maskell, P. (2006), 'Localized learning revisited', Growth and Change, Vol. 37 No. 1, pp. 1-18.
Appropriateness of knowledge accumulation across growth studies, Entrepreneurship Theory & Practice, Vol. 33 No. 1, pp. 105-23.
Tidd, J., Bessant, J. and Pavitt, K. (2002), 'Learning through alliances', in Henry, J. and Mayle, D. (Eds), Managing Innovation and Change, 2nd ed., Sage
and a professor of entrepreneurship and regional development at the Department of Business and Management, University of Kuopio, Finland (2003-2009), and from 2009 a professor of entrepreneurship and regional development at the Department of Health Policy and Management
transparent and comparable information would make it easier for students and teaching staff, but also parents and other stakeholders, to make informed choices between different higher education institutions and their programmes.
Education and Culture, but also experts drawn from student organisations, employer organisations, the OECD, the Bologna Follow-up Group and a number of associations of universities.
Stakeholder workshops were held four times during the project with an average attendance of 35 representatives drawn from a wide range of organisations including student bodies, employer organisations, rectors' conferences, national university associations and national representatives.
or field-based rankings. [Figure: illustrative multidimensional ranking table, scoring a set of anonymised institutions (Institution 1, Institution 3, Institution 4, Institution 5, Institution 8, ...) on indicators such as student-staff ratio, graduation rate, qualification of academic staff, research publication output, external research income, citation index, percentage of income from third-party funding, CPD courses offered, start-up firms, percentage of international academic staff, percentage of international students, joint international publications, graduates working in the region, student internships in regional enterprises and regional co-publications.]
While indicators on teaching and learning, research, and internationalisation proved largely unproblematic, in some dimensions (particularly knowledge transfer
The conceptual frameworks behind sports league tables are usually well accepted: the rules of the game define who the winner is
of being guided by a (nonexistent) theory of the quality of higher education. We do not accept that position.
although the current transparency tools, especially university league tables, are controversial, they seem to be here to stay,
and that especially global university league tables have a great impact on decision-makers at all levels in all countries,
especially in the research universities that are the main subjects of the current global league tables.
Yet major concerns remain as to league tables' methodological underpinnings and their policy impact on stratification rather than on diversification of mission.
Quality assurance (evaluation or accreditation) also produces information for stakeholders (review reports, accreditation status) and in that sense helps to achieve transparency.
As the information function of quality assurance is not very elaborate (usually only informing whether basic quality,
e.g. the accreditation threshold, has been reached) and as quality assurance is too ubiquitous to allow for an overview on a global scale in this report,
Classifications and rankings considered in U-Multirank:
Classifications: Carnegie Classification (USA); U-Map (Europe).
Global league tables and rankings: Shanghai Jiao Tong University's (SJTU) Academic Ranking of World Universities (ARWU); Times Higher Education (Supplement) (THE); QS (Quacquarelli Symonds Ltd) Top Universities; Leiden Ranking.
National league tables and rankings: US News & World Report (USN&WR; USA); Studychoice123 (the Netherlands).
Specialized league tables and rankings: Financial Times ranking of business schools and programmes (FT; global); BusinessWeek (business schools; USA and global); The Economist (business schools; global).
The major dimensions along which we analysed the classifications, rankings and league tables included: level (e.g. institutional vs. field-based); scope (e.g. students vs. institutional leaders vs. policy-makers); methodology and producers: which methodological principles are applied and
implying but often not explicating different conceptions of quality of higher education and research. Most are presented as league tables;
especially the most influential ones, the global university rankings, are all league tables. The selection of the indicators collected
and their weights in calculating the league-table rank of an institution are not based on explicit, let alone scientifically justifiable, conceptual frameworks.
ignoring that they are about different dimensions and sometimes use different scales. The problem of league tables:
most rankings are presented as league tables, assigning each institution (at least those in the top 50) a unique place, suggesting that all differences in indicators are valid and of equal weight (equidistant positions).
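To make the weighting problem concrete, here is a small illustrative sketch (the institutions, scores and weights are invented for this example and are not taken from any actual ranking): when there is no theoretical basis for choosing weights, two equally plausible weightings produce opposite league-table orders.

```python
# Toy example: composite league-table positions under two plausible weightings.
scores = {"Univ A": {"teaching": 90, "research": 60},
          "Univ B": {"teaching": 60, "research": 90}}

def league_table(w_teaching: float, w_research: float) -> list[str]:
    """Rank institutions by a weighted composite of two indicator scores."""
    return sorted(scores, key=lambda u: -(w_teaching * scores[u]["teaching"]
                                          + w_research * scores[u]["research"]))

print(league_table(0.7, 0.3))  # ['Univ A', 'Univ B']
print(league_table(0.3, 0.7))  # ['Univ B', 'Univ A']
```

Neither order is more 'correct' than the other; the choice of weights, not the underlying performance, decides who tops the table.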
and focusing on performance; rankings for students, such as those of CHE and Studychoice123, which have a clear focus on a single target group and a transparent methodology; qualifications frameworks and Tuning Educational Structures, which show that, at least qualitatively, it is possible to define performances regarding student learning, thus strengthening the potential information base for dimensions other than fundamental research; the comparative assessment of higher education students' learning outcomes (AHELO), a feasibility project of the OECD to develop a methodology that extends the focus on student learning introduced by Tuning and by national qualifications frameworks into an international comparative assessment of undergraduate students, much like PISA does for secondary school pupils; and recent reports on rankings, such as the report of the Assessment of University-Based Research Expert Group (AUBR Expert Group, 2009), which defined a number of principles for the sustainable collection of research data,
[Table: indicators and weights used by the global rankings reviewed here (HEEACT 2010 among them), including: per faculty member (20%); two versions of the size-independent, field-normalized average impact ('crown indicator' CPP/FCSm, P*CPP/FCSm); a citations-per-publication indicator (CPP); quality of education: alumni of an institution winning Nobel Prizes and Fields Medals (10%); PhDs awarded per staff (6%); undergraduates admitted per staff (4.5%); income per staff (2.25%); ratio of PhD awards to bachelor awards (2.25%); faculty-student ratio (20%); staff winning Nobel Prizes and Fields Medals (20%); highly cited researchers in 21 broad subject categories (20%); reputation: peer review survey (19.5 + 15 = 34.5%); international staff score (5%); international students score (5%); international staff and students (5%); industry income per staff (2.5%); international faculty (5%); international students (5%). Website: http://ranking.heeact.edu.tw]
Surveys among stakeholders such as staff members, students, alumni or employers. Surveys are strong methods to elicit opinions such as reputation or satisfaction,
Student satisfaction and to a lesser extent satisfaction of other stakeholders is used in national rankings, but not in existing global university rankings.
Student demand. There is evidence that student demand and enrolment in study programmes rises after positive statements in national, student-oriented rankings.
Both in the US and in Europe, rankings are not used equally by all types of students (Hazelkorn, 2011):
less by domestic undergraduate entrants, more at the graduate and postgraduate levels. Especially at the undergraduate level, rankings appear to be used particularly by students of high achievement and by those coming from highly educated families (Cremonini, Westerheijden & Enders, 2008;
Heine & Willich, 2006; McDonough, Antonio & Perez, 1998). Institutional management. Rankings strongly impact management in higher education institutions.
The majority of higher education leaders report that they use potential improvement in rank to justify claims on resources (Espeland & Sauder, 2007;
The 'reputation race' (van Vught, 2008) implies the existence of an ever-increasing search by higher education and research institutions and their funders for higher positions in the league tables.
Quality of higher education and research institutions. Rankings' incomplete conceptual and indicator frameworks tend to get rooted as definitions of quality (Tijssen, 2003).
i.e. a situation where already strong institutions are able to attract more resources from students (e.g. by increasing tuition fees),
Institutional leaders are under great pressure to improve their institution's position in the league tables.
Most of the effects discussed above are rather negative for students, institutions and the higher education sector.
Similarly, rankings may provide useful stimuli to students to search for the best-fitting study programmes
but that the current rankings and league tables seem to invite overreactions on too few dimensions
For some target groups, in particular students and researchers, information has to be field-based; for others, e.g. university leaders and national policy-makers, information about the higher education institution as a whole has priority (related to the strategic orientation of institutions;
Rankings should not use league tables from 1 to n but should differentiate between clear and robust differences in levels of performance.
Higher education and research institutions are predominantly multipurpose, multiple-mission organizations undertaking different mixes of activities (teaching and learning, research, knowledge transfer, regional engagement,
students, potential students, their families, academic staff and professional organizations. These stakeholders are interested mainly in information about a particular field.
the production of league tables and the denial of contextuality. In addition it should minimise the incentives for strategic behaviour on the part of institutions to 'game' the results.
or its transfer to stakeholders outside the higher education and research institutions (knowledge transfer) or to various groups of 'learners' (education).
or functions of 'teaching and learning, research, and knowledge transfer' is a simplification of the complex world of higher education and research institutions.
One of the reasons why there is so much criticism of league tables is exactly the point that from similar sets of inputs,
If users are interested in the value added of a degree program on the labor market, information on how well a class is taught is not relevant.
Other stakeholders (students and institutional leaders are prime examples) are interested precisely in what happens inside the box.
For instance, students may want to know the quality of teaching in the field in which they are interested.
as they may consider this as an important aspect of their learning experience and their time in higher education (consumption motives).
Students might also be interested in the long-term impact of taking the program as they may see higher education as an investment
For different dimensions (research, teaching & learning, knowledge transfer) and different stakeholders/users the relevance of information about different aspects of performance may vary.
[Figure: the U-Multirank conceptual grid, crossing stages (enabling: context, input, process; performance: output, impact) with the functions teaching & learning, research and knowledge transfer, and with audiences.]
Teaching & Learning Research Knowledge Transfer International Orientation Regional Engagement In chapter 3 we will discuss the various indicators to be used in these five dimensions.
An important factor in the argument against rankings and league tables is the fact that often their selection of indicators is guided primarily by the (easy) availability of data rather than by relevance.
budgets, personnel, students, facilities, etc. Then too, inputs and processes can be influenced by managers of higher education and research institutions.
but in the end it rests with the students to learn and, after graduation, work successfully with the competencies they have acquired.
Similarly, higher education and research institution managers may make facilities and resources available for research, but they cannot guarantee that scientific breakthroughs are created.
and processes, and the dissatisfaction among users of most current league tables and rankings arises because they are often more interested in institutional performance
the percentage of international students is a valid indicator only if scores are not heavily influenced by citizenship laws.
Using the nationality of the qualifying diploma on entry has therefore a higher validity than using citizenship of the student.
Reliability
Reliability refers to the consistency of a set of measurements or of a measuring instrument. A measure is considered reliable if
This is particularly an issue with survey data (e g. among students, alumni, staff) used in rankings.
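As an illustration of how the reliability of such multi-item survey scales can be checked, the sketch below computes Cronbach's alpha, a standard measure of internal consistency; this is a common operationalisation rather than a method prescribed in this report, and the ratings are invented.

```python
# Cronbach's alpha for a multi-item satisfaction index.
# items[i][j] is respondent j's rating on item i (e.g. a 1-5 scale).
def cronbach_alpha(items: list[list[float]]) -> float:
    k, n = len(items), len(items[0])
    def var(xs: list[float]) -> float:  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    totals = [sum(items[i][j] for i in range(k)) for j in range(n)]
    return k / (k - 1) * (1 - sum(var(item) for item in items) / var(totals))

# Three satisfaction items rated by five students; alpha around 0.86 suggests
# the items consistently measure the same underlying construct.
print(round(cronbach_alpha([[4, 5, 3, 4, 2], [4, 4, 3, 5, 2], [5, 4, 2, 4, 3]]), 2))
```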
what/who is a professor). Additional problems arise from differing national academic cultures. Indicators, data elements and underlying questions have to be defined
if we know that doctoral students are counted as academic staff in some countries and as students in others,
we need to ask for the number of doctoral students counted as academic staff in order to harmonise data on academic staff (excluding doctoral students).
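A minimal sketch of this harmonisation step, assuming the questionnaire delivers both the nationally reported staff count and the number of doctoral students included in it (the function name and figures are illustrative):

```python
# Harmonising academic staff data: where national rules count doctoral students
# as academic staff, subtract them so that every system reports academic staff
# excluding doctoral students.
def harmonised_academic_staff(reported_staff_fte: float,
                              doctoral_students_counted_as_staff_fte: float) -> float:
    return reported_staff_fte - doctoral_students_counted_as_staff_fte

# Example: 1,200 fte reported, of which 300 fte are doctoral students counted
# as staff under national rules -> 900 fte on the harmonised definition.
print(harmonised_academic_staff(1200.0, 300.0))
```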
Feasibility
The objective of U-Multirank is to design a multidimensional global ranking tool that is feasible in practice.
Compared to existing league tables we see this as one of the advantages of our approach.
2.4.4 Grouping
U-Multirank does not calculate league tables. As has been argued in chapter 1, league table rankings have severe flaws
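For illustration only, here is a minimal sketch of grouping on a single indicator; the median-plus-band rule is an assumption made for this example, not U-Multirank's actual grouping procedure.

```python
# Grouping instead of league-table ranking: institutions are assigned to broad
# performance groups, so near-identical scores are not separated by spurious
# one-place rank differences.
from statistics import median

def group_scores(scores: dict[str, float], band: float = 10.0) -> dict[str, str]:
    """Assign each institution to a top/middle/bottom group on one indicator:
    'top' if well above the median, 'bottom' if well below, else 'middle'."""
    m = median(scores.values())
    return {name: ("top" if s > m + band else "bottom" if s < m - band else "middle")
            for name, s in scores.items()}

# A and B differ by 0.3 points and land in the same group, rather than being
# presented as rank 1 vs. rank 2.
print(group_scores({"A": 71.8, "B": 71.5, "C": 54.0, "D": 32.1, "E": 30.9}))
```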
teaching & learning, research, knowledge transfer, international orientation, regional engagement. This chapter provides an overview of the sets of indicators selected for the five dimensions,
Various categories of stakeholders (student organizations, employer organizations, associations and consortia of higher education institutions, government representatives, international organizations) have been involved in an iterative process of consultation to come to a stakeholder-based assessment of the relevance
the five subsections that follow present the indicators for the five dimensions (teaching & learning, research, knowledge transfer, international orientation, regional engagement).
Education comprises all processes to transmit knowledge, skills and values to learners (colloquially: students). Education can be conceived as a process subdivided into enablers (inputs, process) and performance (outputs and outcomes).
Teaching and learning ideally lead to the impacts or benefits that graduates will need for a successful career in the area studied and a successful, happy life as an involved citizen of a civil society.
Career and quality of life are complex concepts, involving lifelong impacts. Moreover, the pace of change of higher education and research institutions means that long-term performance is of low predictive value for judgments on the future of those institutions.
Students' learning outcomes after graduation would be a good measure of outcomes. However, measures of learning outcomes that are internationally comparable are only now being developed in the AHELO project (see chapter 1). At this moment such measures do not exist,
but if the AHELO project succeeds they would be a perfect complementary element in our indicator set.
in order to reflect performance in the teaching and learning dimension. Teaching & learning can be looked at from different levels and different perspectives.
As one of the main objectives of our U-Multirank project is to inform stakeholders such as students,
their perspective is important too. From their point of view the output to be judged is the educational process,
and student quality and quantity. [Footnote 7: The process of education includes design and implementation of curricula, with formal teaching, self-study, peer learning, counselling services, etc. Footnote 8: Outputs are direct products of a process; outcomes relate to achievements due to the outputs. Footnote 9: http://www.oecd.org/document/22/0,3343,en_2649_35961291_40624662_1_1_1_1,00.html]
Another approach to get close to learning outcomes lies in assessing the quality of study programs.
The qualifications frameworks currently being developed in the Bologna process and in the EU may come to play a harmonising role with regard to educational standards in Europe,
but they are not yet effective (Westerheijden et al., 2010) and of course they do not apply in the rest of the world.
Besides, measures of students' progression through their programs can be seen as indicators of the quality of their learning.
Indicators for quality can be sought in student and graduate assessments of their learning experience. The student/graduate experience of education is conceptually closer to
what those same students learn than judgments by external agents could be. Students'opinions may derive from investment or from consumption motives
but it is an axiom of economic theories as well as of civil society that persons know their own interest (and experience) best.
Therefore we have chosen indicators reflecting both. An issue might be whether student satisfaction surveys are prone to manipulation:
do students voice their loyalty to the institution rather than their genuine (dis)satisfaction? This is not seen as a major problem, as studies show that loyalty depends on satisfaction (Athiyaman, 1997;
Brown & Mazzarol, 2009; OECD, 2003). Nevertheless we should remain vigilant to uncover signs of university efforts to manipulate their students' responses;
in our experience, including control questions in the survey on how and with which additional information students were approached to participate gives a good indication.
Non-plausible student responses (for instance an extremely short time to complete the online questionnaire) could be eliminated.
Another issue about using surveys in international comparative studies concerns differences in culture that affect tendencies to respond in certain ways.
however, that student surveys can give valid and reliable information in a European context. One of the questions that we will return to later in this report is whether a student survey about their own program/institution can produce valid and reliable information on a global scale. [Footnote 10: http://www.eurostudent.eu:8080/index.html]
& Learning indicators that were selected for the pilot test of U-Multirank. The column on the right-hand side includes some of the comments
Table 3-1: Indicators for the dimension Teaching & Learning in the Focused Institutional and Field-based Rankings

Focused Institutional Ranking
1. Expenditure on teaching. Definition: expenditure on teaching activities. Comments: stakeholders questioned its relevance.
2. Graduation rate. Definition: the percentage of a cohort that graduated x years after entering the program (x is stipulated as 1.5 times the 'normal' time expected for completing all requirements for the degree). Comments: regarded by stakeholders as the most relevant indicator; shows the effectiveness of the schooling process. More selective institutions score better compared to (institutions in) open-access settings, and the indicator is sensitive to economic circumstances.
3. Interdisciplinarity of programs. Definition: the number of degree programs involving at least two traditional disciplines as a percentage of the total number of degree programs. Comments: based on objective statistics; shows whether teaching leads to broadly educated graduates, but sensitive to the regulatory (accreditation) and disciplinary context. Data collection and availability problematic.
4. Relative rate of graduate (un)employment. Definition: the rate of unemployment of graduates 18 months after graduation as a percentage of the national rate of unemployment of graduates (18 months after graduation), for bachelor and master graduates. Comments: reflects the extent to which the institution is 'in sync' with its environment; sensitive to the discipline mix in the institution and to (regional) economic circumstances. Data availability poses problems.
5. Time to degree. Definition: average time to degree as a percentage of the official length of the program (bachelor and master). Comments: reflects the effectiveness of the teaching process.
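To make these definitions concrete, here is a small sketch with invented numbers (not data from the pilot) computing the graduation rate, the relative rate of graduate unemployment and the time to degree as defined above.

```python
# Illustrative computation of three Teaching & Learning indicators.

def graduation_rate(graduated_within_window: int, cohort_size: int) -> float:
    """Share of an entry cohort graduating within 1.5x the normal program length."""
    return graduated_within_window / cohort_size * 100

def relative_graduate_unemployment(institutional_rate: float, national_rate: float) -> float:
    """Graduate unemployment 18 months after graduation, relative to the national
    graduate unemployment rate (100 = on par with the national average)."""
    return institutional_rate / national_rate * 100

def time_to_degree_pct(average_years: float, official_years: float) -> float:
    """Average time to degree as a percentage of the official program length."""
    return average_years / official_years * 100

print(graduation_rate(640, 800))                 # 80.0: 640 of a cohort of 800 graduate in time
print(relative_graduate_unemployment(4.2, 6.0))  # 70.0: graduates fare better than the national average
print(time_to_degree_pct(3.6, 3.0))              # 120.0: students take 20% longer than the official length
```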
Field-based Ranking
6. Student-staff ratio. Definition: the number of students per fte academic staff. Comments: fairly generally available; sensitive to definitions of 'staff' and to the discipline mix in the institution.
7. Graduation rate. Definition: the percentage of a cohort that graduated x years after entering the program (x is stipulated as 1.5 times the 'normal' time expected for completing all requirements for the degree). Comments: see above, institutional ranking.
8. Investment in laboratories (Engineering FBR). Definition: investment in laboratories (average over the last five years, in millions in national currencies) per student. Comments: high [...]
9. [...] and definitions of 'staff'.
10. Relative rate of graduate (un)employment. Definition: the rate of unemployment of graduates 18 months after graduation as a percentage of the national rate of unemployment of graduates (18 months after graduation), for bachelor and master graduates. Comments: see above, institutional ranking.
11. Interdisciplinarity of programs. Definition: the number of degree programs involving at least two traditional disciplines as a percentage of the total number of degree programs. Comments: see above, institutional ranking.
12. [...] project-based learning; joint courses/projects with business students (engineering); business knowledge (engineering); project management; presentation skills; existence of an external advisory board (including employers). Comments: problems with regard to availability of data.
13. Inclusion of work experience into the program. Definition: rating based on duration (weeks/credits) and modality [...]
14. [...] access to computer support. Comments: data easily available.
15. Student gender balance. Definition: the number of female students as a percentage of total enrolment. Comments: indicates social equity (a balanced situation is considered preferable), but it is an indicator of social context, not of educational quality.

Student satisfaction indicators: indicators reflecting students' appreciation of several items related to the teaching & learning process.
Student satisfaction is of high conceptual validity. It can be made available in a comparative manner through a survey. An issue might be whether student satisfaction surveys are prone to manipulation: do students voice their loyalty to the institution rather than their genuine (dis)satisfaction? Global comparability is also problematic: cross-cultural differences may affect the students' answers to the questions.
16. Student satisfaction: overall judgment of the program. Overall satisfaction of students with their program and the situation at their higher education institution. Refers to a single question giving an 'overall' assessment; not a composite indicator.
17. Student satisfaction: research orientation of the educational program. Index of four items: research orientation of the courses, teaching of relevant research methods, opportunities for early participation in research and stimulation to give conference papers.
18. Student satisfaction: evaluation of teaching. Satisfaction with regard to the students' role in the evaluation of teaching, including the prevalence of course evaluation by students, the relevance of issues included in course evaluation, information about evaluation outcomes and the impact of evaluations.
19. Student satisfaction: facilities. The satisfaction of students with respect to facilities, including classrooms/lecture halls (index including availability/access for students, number of places, technical facilities/devices), laboratories (index including availability/access for students, number of places, technical facilities/devices) and libraries (index including availability of the literature needed, access to electronic journals, support services/e-services).
20. Student satisfaction: organization of the program. The satisfaction of students with the organization of a program, including the possibility to graduate in time, access to classes/courses, class size and the relation of examination requirements to teaching.
21. Student satisfaction: promotion of employability (inclusion of work experience). Index of several items: students assess the support during their internships; the organization, preparation and evaluation of internships; and the links with the theoretical phases.
22. Student satisfaction: quality of courses. Index including the range of courses offered, coherence of modules/courses, didactic competencies of staff, stimulation by teaching, quality of learning materials and quality of laboratory courses (engineering).
23. Student satisfaction: social climate. Index including interaction with other students, interaction with teachers, attitude towards students in the city and security.
24. Student satisfaction: support by teachers. Included items: availability of teachers/professors (e.g. during office hours, via email); informal advice and coaching; feedback on homework, assignments and examinations; coaching during laboratory/IT tutorials (engineering only); support during individual study time (e.g. through learning platforms); suitability of handouts.
25. Student satisfaction: opportunities for a stay abroad. Index made up of several items: the attractiveness of the university's exchange programs and the partner universities; the availability of exchange places; support and guidance in preparing for the stay abroad; financial support (scholarships, exemption from study fees); transfer of credits from the exchange university; integration of the stay abroad into studies (no time loss caused by the stay abroad); and support in finding internships abroad.
26. Student satisfaction: student services. Quality of a range of student services including general student information, accommodation services, financial services, career service, international office and student organizations/associations.
27. Student satisfaction: university webpage. Quality of information for students on the website. Index of several items including general information on the institution and admissions, information about the program, information about classes/lectures and English-language information (for international students in non-English-speaking countries).
One indicator dropped from the list during the stakeholder consultation is graduate earnings.
Although the indicator may reflect the extent to which employers value the institution's graduates,
it was felt that this indicator is very sensitive to economic circumstances and institutions have little influence on labor markets.
In addition, data availability proved unsatisfactory for this indicator and comparability issues negatively affect its reliability.
For our field-based rankings, subject-level approaches to quality and educational standards do exist. In business studies, the 'triple crown' of specialized
voluntary accreditation by AACSB (USA), AMBA (UK) and EQUIS (Europe) creates a build-up of expectations on study programs in the field.
In the field of engineering, the Washington Accord is an 'international agreement among bodies responsible for accrediting engineering degree programs.
It recognizes the substantial equivalency of programs accredited by those bodies and recommends that graduates of programs accredited by any of the signatory bodies be recognized by the other bodies as having met the academic requirements for entry to the practice of engineering' (www.washingtonaccord.org).
In general, information on whether programs have acquired one or more of these international accreditations presents an overall, distant proxy to their educational quality.
However, the freedom to opt for international accreditation in business studies may differ across countries, which makes an accreditation indicator less suitable for international comparative ranking.
In engineering, adherence to the Washington Accord depends on national-level agencies, not on individual higher education institutions'59 strategies.
These considerations have contributed to our decision not to include accreditation-related indicators in our list of Teaching & Learning performance indicators.
Instead, the quality of the learning experience is reflected in the student satisfaction indicators included in Table 3-1. These indicators can be based on a student survey carried out among a sample of students from Business studies and Engineering.
As shown in the bottom half of Table 3-1, this survey focuses on provision of courses, organization of programs and examinations, interaction with teachers, facilities, etc.
Stakeholders'feedback on the student satisfaction indicators revealed that they have a positive view overall of the relevance of the indicators on student satisfaction.
However, it was also felt that the total number of indicators is quite high and should be reduced in the final indicator set. In the field-based rankings,
objective indicators are used in addition to the student satisfaction indicators. Most are similar to the indicators in the focused institutional rankings.
Some additional indicators are included to pay attention to the facilities and services provided by the institution to enhance the learning experience (e g. laboratories, curriculum).
3.3.2 Research
Selecting indicators for capturing the research performance of a higher education and research institution or a disciplinary unit (e.g. department,
works within academic standards. [Footnote 12: See http://www.kowi.de/Portaldata/2/Resources/fp/assessing-europe-university-based-research.pdf]
Table 3-2:
awards and scholarships won by employees for research work and in (inter)national cultural competitions, including awards granted by academies of science.
However, research findings are published not just in journals.
12. Doctorate productivity. Definition: the number of completed PhDs per number of professors (head count) × 100 (three-year average). Comments: indicates aspects of the quantity and quality of a unit's research.
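A short illustrative computation of this indicator, reading the 'three-year average' as the average of the yearly ratios (the definition could equally be read as the ratio of three-year totals; the figures are invented):

```python
# Doctorate productivity: completed PhDs per professor (head count) * 100,
# averaged over three years (here: the average of the yearly ratios).
def doctorate_productivity(phds: list[int], professors: list[int]) -> float:
    yearly = [p / prof * 100 for p, prof in zip(phds, professors)]
    return sum(yearly) / len(yearly)

# Example: 42, 38 and 46 completed PhDs against 120, 118 and 125 professors.
print(round(doctorate_productivity([42, 38, 46], [120, 118, 125]), 1))  # 34.7
```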
People, including students and researchers; artefacts, including equipment, protocols, rules and regulations; and money. Texts are an obvious knowledge transfer channel.
& Learning and Regional Orientation dimensions included in U-Multirank. Knowledge transfer through people also takes place through networks
of which have an option of accreditation. Money flows are an important interaction channel, next to texts and people.
and promote international mobility of students and staff, Activities to develop and enhance international cooperation,
The increasing emphasis on the need to prepare students for international labor markets and to increase their international cultural awareness,
as a percentage of the total number of programs offered. Comments: signals the commitment to international orientation in teaching and learning.
2. International academic staff. Definition: academic staff with foreign nationality, employed by the institution or working on an exchange basis. Comments: nationality is not the most precise way of measuring international orientation.
3. International doctorate graduation rate. Definition: the number of doctorate degrees awarded to students [...]
4. [...] Comments: [...] but biased towards certain disciplines and languages.
5. Number of joint degree programs. Definition: the number of students in joint degree programs with a foreign university (including an integrated period at the foreign university) as a percentage of total enrolment. Comments: integration of international learning experiences is a central element of internationalization. Data available. Indicator not often used.

Field-based Ranking
6. Incoming and outgoing students. Definition: incoming exchange students as a percentage of the total number of students, and the number of students going abroad as a percentage of the total number of students enrolled. Comments: an important indicator of the international 'atmosphere' of a faculty/department. Addresses student mobility and curriculum quality. Data available.
7. International graduate employment rate. Definition: the number of graduates employed abroad or in an international organization as a percentage of the total number of graduates employed. Comments: indicates the students' preparedness for the international labor market. Data not readily available. No clear international standards for measuring.
8. International academic staff. Definition: percentage of international academic staff in the total number of (regular) academic staff. Comments: see above, institutional ranking.
9. International research grants. Definition: research grants attained from foreign and international funding bodies as a percentage of total income. Comments: a proxy of the international reputation and quality of research activities. Stakeholders question its relevance.
10. Student satisfaction: internationalization of programs. Index including the attractiveness of the university's exchange programs, the attractiveness of the partner universities, the sufficiency of the number of exchange places; [...]
11. [...] Comments: [...] but no problems of disciplinary distortion because the comparison is made within the field.
12. Percentage of international students. Definition: the number of degree-seeking students with a foreign diploma on entrance as a percentage of total enrolment in degree programs. Comments: reflects attractiveness to international students. Data available but sensitive to the location (distance to border) of the HEI. Stakeholders consider the indicator important.
13. Student satisfaction: international orientation of programs. Rating including several issues: existence of joint degree programs, inclusion of mandatory stays abroad, international students (degree and exchange), international background of staff
It should be pointed out here that one of the indicators is a student satisfaction indicator: 'Student satisfaction:
internationalisation of programs'. This describes the opportunities for students to go abroad. Students' judgments about the opportunities to arrange a semester
or an internship abroad are an aspect of the internationalization of programs. This indicator is relevant for the field level.
An indicator that was considered but dropped during the stakeholders' consultation process is 'Size of international office'.
Partnerships focus on collaborative interactions with the region/community and related scholarship for the mutually beneficial exchange, exploration, discovery and application of knowledge, information and resources.
learning and scholarship that engage faculty, students and region/community in mutually beneficial and respectful collaboration.
Are there visible structures that function to assist with region-based teaching and learning? Is there adequate funding available for establishing and deepening region-based activities?
Are there courses that have a regional component (such as service-learning courses)? Are there sustained, mutually beneficial
How much does the institution draw on regional resources (students, staff, funding) and how much does the region draw on the resources provided by the higher education and research institution (graduates and facilities)?
Clarification is required as to what constitutes a region. U-Multirank has suggested starting with the existing list of regions in the Nomenclature of Territorial Units for Statistics (NUTS) classification developed
Indicators for the dimension Regional Engagement in the Focused Institutional and Field-based Rankings

Focused Institutional Ranking
1. Graduates working in the region. Definition: the number of graduates working in the region, as a percentage of all graduates employed. Comments: frequently used in benchmarking exercises. Stakeholders like the indicator.
[...] New type of indicator.
5. Student internships in local/regional enterprises. Definition: the number of student internships in regional enterprises as a percentage of total enrolment (with a defined minimum of weeks). [...] Indicator hardly ever used.

Field-based Ranking
7. Graduates working in the region. Definition: the number of graduates working in the region, as a percentage of all graduates employed. Comments: see above, institutional ranking.
8. Regional participation in continuing education. Definition: the number of regional participants (coming from the NUTS3 region where the HEI is located) as a percentage of the total number of participants. Comments: indicator hardly ever used.
9. Student internships in local/regional enterprises. Definition: the number of internships of students in regional enterprises as a percentage of total students. Comments: see above, institutional ranking, but disciplinary bias is not problematic at field level.
10. Summer school/courses for secondary education students. Definition: the number of participants in schools/courses for secondary school students as a percentage of total enrolment.
and from students.
4.2 Databases
4.2.1 Existing databases
One of the activities in the U-Multirank project was to review existing rankings
The table shows that EUMIDA primarily focuses on the Teaching & Learning and Research dimensions,
their coverage in national databases. [Table: EUMIDA and U-Multirank data elements by dimension and the European countries where each data element is available in national databases; e.g. Teaching & Learning: relative rate of graduate unemployment ...]
Table 4-2 shows that the Teaching and Learning dimension scores best in terms of data availability.
[Table: availability of U-Multirank data elements in national and institutional databases of selected non-European countries (AR, AU, CA, SA, US, ZA; e.g. United States/US). Examples: Teaching & Learning: graduation rate (national: AR, CA, US, ZA; institutional: AR, AU, SA, ZA); relative rate of graduate unemployment (AU, CA, ...); international staff (national: ZA, US; institutional: AR, AU, CA, SA, US, ZA); joint degree programmes (national: AR; institutional: AR, AU, CA, US); international doctorate graduation rate (US; AR, CA, SA, US); Regional Engagement: income from regional sources (AU, CA, SA, ZA); student internships in local/regional enterprises (AU, SA, US, ZA); graduates working in the region (US); research contracts with regional business (AR, CA, ZA); co-patents with regional firms (national: ZA; institutional: CA, ZA).]
& Learning indicators the situation is rather promising (graduation rate, time to degree). In the Research dimension, expenditure on research and research publication output data are best represented in national databases.
three for the institutions and one for students. The four surveys are: the U-Map questionnaire, the institutional questionnaire, the field-based questionnaire and the student survey.
In designing the questionnaires, emphasis was placed on the way in which questions were formulated. It is important that they can only be interpreted in one way
students: numbers, modes of study and age, international students, students from the region; graduates: by level of program, subjects, orientation of degrees, graduates working in the region; staff data: fte and headcount, international staff; income: total income, income by type of activity, by source of income; expenditure: total expenditure, by cost centre, use of full cost accounting; research & knowledge exchange: publications, patents, concerts and exhibitions, start-ups. The academic year 2008/2009 was selected as the default reference year.
Respondents from the institutions were advised to complete the U-Map questionnaire first before completing the other questionnaires.
4.3.1.2 Institutional questionnaire
By means of U-Multirank's institutional questionnaire,
students: enrolments; programme information: bachelor/master programmes offered, CPD courses; graduates: graduation rates, graduate employment; staff: fte and headcount, international staff, technology transfer office staff; income: total, income from teaching, income from research, income from other activities; expenditure: total expenditure, by cost centre, coverage; research & knowledge transfer: number of professors, international visiting/guest professors, professors offering lectures abroad, professors with work experience abroad, number of PhDs, number of post-docs; funding: external research funds, license agreements/income, joint R&D projects with local enterprises; students: total number (female/international; degree and exchange students), internships made, degree theses in cooperation with local enterprises; regional engagement: continuing education programmes/professional development programmes, summer schools/courses for secondary education students; description: accreditation of department, profile with regard to teaching & learning, profile with regard to research. A second part of the questionnaire asks for details of the individual study programmes to be included in the ranking.
In particular the following information was collected: basic information about the programme (e.g. degree, length), interdisciplinary characteristics, full-time/part-time, number of students enrolled in the programme, number of study places and level of tuition fees, periods of work experience integrated in the programme, international orientation, joint study programme, credits earned for achievements abroad, number of exchange students from abroad, courses held in a foreign language, special features, number of graduates, and information about labor market entry.
4.3.2 Student survey
For measuring student satisfaction (see section 3.3.1),
the main instrument is an online student survey. In order to ensure that students are not pressured by their institution/teachers to rate their own institution favorably,
the institutions were asked to invite their students individually to participate in the survey either by mail or email rather than having them complete the survey in the classroom.
Access to the questionnaire was controlled by individual passwords. The student questionnaire uses a combination of open questions
and predefined answers and asks for the students'basic demographic data and information on their programme.
The main focus of the survey is on the assessment of the teaching and learning experience
and on the facilities of the institution. 91 In order to control for possible manipulation by institutions,
a number of control questions were included in the questionnaire. Students were asked for information on how they received the invitation
and whether there were any attempts by teachers, deans or others to influence their ratings. In relation to the student survey, the delimitation of the sample is important.
As students were asked to rate their own institution and programme, students who had just started their degree programme were excluded from the sample.
Hence students from the second year onwards in bachelor and master programmes and from the third year onwards in long (pre-Bologna) programmes were meant to be included.
In order to have a sample size that allows for analysis, the survey aimed to include up to 500 students per institution and field.
4.3.3 Pretesting the instruments
A first version of the three new data collection instruments (the institutional questionnaire,
departmental questionnaire and student questionnaire) was tested between June and September 2010. The U-Map questionnaire had already been tested.
The U-Multirank questionnaires were tested in terms of cultural/linguistic understanding, clarity of definitions of data elements and feasibility of data collection.
Ten institutions were invited to complete and comment on the institutional and departmental questionnaire and to distribute 20 student questionnaires.
The selection was based on the list of institutions that had expressed their interest in participating in the project.
In selecting the institutions for the pre-test the U-Multirank team considered the geographical distribution and the types of institutions.
Since not all institutions responded fully to the pre-test, a 'light version' was sent to an additional 18 institutions.
Instead of asking them to provide all the data at relatively short notice, these institutions were contacted to offer their feedback on the clarity of the questions and on the availability of data.
According to the pre-test results, the general format and structure of the institutional questionnaire seemed to be clear and user-friendly.
The pre-test showed, however, two types of problems for some indicators. Several indicators require a more precise specification
Teaching and learning. Questions about student numbers and study programmes seem to be unproblematic in most cases.
Problems emerge, however, with some output-related data elements such as graduate employment, where data is often not collected at the institutional level.
This was the case for 'graduates working in the region' and 'student internships in regional enterprises'.
Information on international students and staff, as well as on programmes in a foreign language was largely available.
As expected, the question of how to define an 'international student' came up occasionally. In sum, the institutional questionnaire worked well in terms of its structure and usability.
Problems with regard to the availability of data were reported mainly on issues of academic staff (e.g. fte data, international staff), links to business (in education/internships and research) and the use of credits (ECTS).
The definition of the categories of academic staff ('professors', 'other academic staff') clearly depends on national legislation
The student survey was pretested on a sample of over 80 students. In general, their comments were very positive.
and captured relevant issues of the students'teaching and learning experience/environment. Some students would have preferred more questions about the social climate at the institution
and about the city or town in which it was situated; a number of reactions (also from pre-test institutions) indicated that the questionnaire should not be any lengthier, however.
A major challenge deduced from these comments is how to compare students' assessments of their institutions across cultures.
Based on approved instruments from other fields (e.g. surveys on health services) we have used 'anchoring vignettes' to test sociocultural differences in assessing specific constellations of services/conditions in higher education with respect to teaching and learning.
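For readers unfamiliar with the technique, the sketch below shows the simplest nonparametric form of anchoring-vignette rescaling; it illustrates the general idea only, is not the instrument used in our pre-test, and it ignores ties between self-ratings and vignette ratings.

```python
# Anchoring vignettes: every respondent rates the same fixed hypothetical
# scenarios (vignettes) plus their own situation. Re-expressing the self-rating
# relative to the respondent's own vignette ratings corrects for cultural
# differences in how generously rating scales are used.
def rescale_with_vignettes(self_rating: int, vignette_ratings: list[int]) -> int:
    """Rank of the self-rating among the respondent's own vignette ratings
    (1 = below all vignettes, ..., len(vignettes) + 1 = above all)."""
    position = 1
    for v in sorted(vignette_ratings):
        if self_rating > v:
            position += 1
    return position

# Two students give the same raw score of 4 (1-5 scale) but anchor differently:
print(rescale_with_vignettes(4, [4, 5, 5]))  # lenient rater -> position 1
print(rescale_with_vignettes(4, [1, 2, 3]))  # harsh rater   -> position 4
```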
For the student questionnaire the conclusion was that there is no need for changes in the design.
one will have to rely to a large extent on data collected by means of questionnaires sent to representatives of institutions, their students and possibly their graduates.
institutions, representatives of departments in the institution and students. Sampling techniques (selecting/identifying institutions, departments/programmes,
their representatives and their students) are crucial, as is the intelligent use of technology (internet, visualisation techniques, supporting tools).
As has been explained, the field pilot study included a student satisfaction survey. Participating institutions were asked to send invitations to their bachelor
and master's students to take part in a survey. 106 departments agreed to do so. Some institutions decided to submit the information requested in the departmental questionnaire
but not to participate in the student survey as they did not want it to compete with their own surveys or affect participation in national surveys.
students were on holiday or taking examinations during the pilot study survey window. In some cases the response rate was very low
In total, 6,770 students provided data via the online questionnaire. After data cleaning we were able to include 5,901 student responses in the analysis:
45% in business studies, 23% in mechanical engineering and 32% in electrical engineering.
5.3 Data collection
The data collection for the pilot study took place via two different processes:
the collection of self-reported data from the institutions involved in the study (including the student survey) and the collection of data on these same institutions from existing international databases on publications/citations and patents.
the U-Multirank field-based questionnaires and the U-Multirank student survey. [Figure 5-1: U-Multirank data collection process.] The institutions were given seven weeks to collect the data,
Organising a survey among students on a global scale was one of the major challenges in U-Multirank.
There are some international student surveys (such as Eurostudent), but these usually focus on general aspects of student life and students' socioeconomic situation.
To the best of our knowledge there is no global survey asking students to assess aspects of their own institutions and programmes.
So we had no way of knowing whether students from different countries and cultures would assess their institutions in comparable ways.
In Chapter 8 (8.2) we will discuss the flexibility of our approach to a global-scale student survey.
The data collection through the student survey was organized by the participating institutions. They were asked to send invitation letters to their students,
either by regular mail or by email. We prepared a standard letter to students explaining the purpose of the survey/project
and detailing the URL and personal password they needed to access the online questionnaire. Institutions were able to download a package including the letter and a list of passwords (for email invitations) and a form letter (for printed mail invitations).
If the letters were sent by post, institutions covered the costs of postage. No institution indicated that it did not participate in the student survey because of the cost of inviting the students.
In some countries (e g. Australia) the students were taking examinations or were on vacation at the time the survey started.
As a consequence some institutions decided not to participate in the survey; others decided to postpone the survey.
In total, 6,770 students participated in the survey; of this total, 5,901 could be included in the analysis.
5.3.1.2 Follow-up survey
After the completion of the data collection process we asked those institutions that had submitted data to share their experience of the process
[Table: reported effort and time needed for completing the field questionnaire (mechanical engineering) and for organizing the student survey.] The analysis also showed that European institutions
Most pilot institutions reported no major problems with regard to student, graduate and staff data. If they had problems these were mostly with research and third mission data (knowledge transfer,
The student survey
For the student survey, after data checks we omitted the following cases from the gross student sample (a sketch of these checks follows the list):
missing data on the student's institution;
missing data on their field of study (business studies, mechanical engineering, electrical engineering);
enrolment in programs other than bachelor/short national first degree programs and master/long national first degree programs;
too little time spent on the questionnaire and inadequate responses (students had to answer at least parts of the questions that are used to calculate indicators and give the necessary information about their institution);
students who reported themselves as formally enrolled but not studying actively;
students reporting that they had just moved to their current institution;
students who obviously did not answer the questionnaire seriously.
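A minimal sketch of such cleaning rules; the field names, plausibility threshold and records are hypothetical and do not reflect U-Multirank's actual survey schema.

```python
# Applying the exclusion rules to the gross student sample. In the actual pilot,
# fields reported as 'other' were first recoded manually where possible.
VALID_FIELDS = {"business studies", "mechanical engineering", "electrical engineering"}
VALID_LEVELS = {"bachelor", "short first degree", "master", "long first degree"}

def keep_response(r: dict) -> bool:
    """Return True if a survey response survives all data-cleaning checks."""
    return (bool(r.get("institution"))
            and r.get("field") in VALID_FIELDS
            and r.get("level") in VALID_LEVELS
            and r.get("minutes_spent", 0) >= 5       # assumed plausibility threshold
            and r.get("actively_studying", False)
            and not r.get("just_moved", False))

raw = [
    {"institution": "X", "field": "business studies", "level": "bachelor",
     "minutes_spent": 12, "actively_studying": True, "just_moved": False},
    {"institution": "X", "field": "other", "level": "bachelor",
     "minutes_spent": 2, "actively_studying": True, "just_moved": False},
]
print(len([r for r in raw if keep_response(r)]))  # 1
```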
In addition we performed a recoding exercise for those students who reported their field of study as 'other'.
Based on their explanation and on the name of the programme they reported, the field was recoded manually in all cases where a clear attribution was possible.
As a result of these checks the data of about 800 student questionnaires were omitted from the sample.
5.3.2 International databases
The data collection regarding the bibliometric and patent indicators took place by studying the relevant international databases
and the Advisory Group.
6.2.1 Teaching & Learning
The first dimension of U-Multirank is Teaching & Learning.
[Table 6-1: Focused institutional ranking indicators: Teaching & Learning. For each indicator a pre-pilot rating (relevance, concept/construct validity, face validity, robustness, availability; preliminary rating) and a post-pilot feasibility score (data availability, conceptual clarity, data consistency; recommendation) are given; e.g. graduation rate A/b, time to degree B/b, relative rate of graduate (un)employment ...]
Much to our surprise there were few comments on the indicators on graduation rate and time to degree.
The fact that in many countries/institutions different measurement periods (other than 18 months after graduation) are used seriously hampers the interpretation of the results on this indicator.
the indicators that have been built using the information from departmental questionnaires and the indicators related to student satisfaction data.
[Table 6-2: Field-based ranking indicators: Teaching & Learning (departmental questionnaires), with pre-pilot ratings and post-pilot feasibility scores; e.g. student/staff ratio A/a, graduation rate A/b, qualification of academic staff ...]
use different time periods in measuring employment status (e.g. six, 12 or 18 months after graduation).
As the rate of employment normally increases continuously over time, particularly during the first year after graduation,
[Table 6-3: Field-based ranking indicators: Teaching & Learning (student satisfaction scores), with pre-pilot ratings and post-pilot feasibility scores; e.g. organization of programme A/a, inclusion of work experience A/a, evaluation of teaching A/a.]
…problems with regard to the feasibility of individual indicators from the student survey. General aspects of the feasibility of a global student survey are discussed in section 6.3.

6.2.2 Research
Indicators on research include bibliometric indicators (institutional and field-based) as well as indicators derived from institutional and field-based surveys. In general the feasibility of the research indicators …
Indicator                                                      Preliminary rating   Feasibility score
External research income                                       A                    a
Total publication output*                                      A                    a
Student satisfaction: … publications*                          A                    a
Percentage of international staff                              B                    A
Percentage of students in international joint degree programs  A                    b
International doctorate graduation rate                        B                    A
Percentage of foreign degree-seeking students                  New indicator        B
Percentage of students coming in on exchanges                  New indicator        A
Percentage of students sent out on exchanges                   New indicator        A
*Data source: …
There were some problems reported with the availability of information on the nationality of qualifying diplomas and on students in international joint degree programs.
Table: International orientation
Indicator                                             Preliminary rating   Feasibility score
Percentage of international students                  A                    a
Incoming and outgoing students                        A                    a-B
Opportunities to study abroad (student satisfaction)  A                    b
International orientation of programs                 A                    b
International academic staff                          B                    A-B
International joint research publications*            B                    A
International research grants                         B                    b
International doctorate graduation rate               B                    A
*Data source: bibliometric analysis

Observations from the pilot test:
Not all institutions have clear data on outgoing students. In some cases only students participating in institutional or broader formal programs (e.g. ERASMUS) are registered, and institutions do not record the numbers of students with self-organized stays at foreign universities. Availability of data was relatively low for the student satisfaction indicator, as only a few students had already completed a stay abroad and could assess the support provided by their university. The indicator 'international orientation of programs' is a composite indicator referring to several data elements.
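As the underlying data elements and weights are not detailed here, the following Python sketch illustrates only the general pattern of such a composite indicator; the component names, weights and the handling of missing components are illustrative assumptions, not U-Multirank's actual definition.

```python
# Generic composite-indicator sketch; component names and weights are
# illustrative assumptions, not U-Multirank's actual definition.
WEIGHTS = {
    "share_foreign_language_teaching": 0.4,
    "share_international_joint_programs": 0.3,
    "share_students_with_stay_abroad": 0.3,
}

def international_orientation(components: dict[str, float | None]) -> float:
    """Weighted average of normalized (0..1) components. Components an
    institution could not deliver are left out and the remaining weights
    are rescaled, so partial data still yields a comparable score."""
    available = {k: v for k, v in components.items() if v is not None}
    total_weight = sum(WEIGHTS[k] for k in available)
    if total_weight == 0:
        raise ValueError("no component data available")
    return sum(WEIGHTS[k] * v for k, v in available.items()) / total_weight

# Example: one component is missing; its weight is redistributed.
score = international_orientation({
    "share_foreign_language_teaching": 0.5,
    "share_international_joint_programs": None,  # data not delivered
    "share_students_with_stay_abroad": 0.2,
})
print(f"composite score: {score:.2f}")  # (0.4*0.5 + 0.3*0.2) / 0.7 = 0.37
```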
Table: Regional engagement
Indicator                                                   Preliminary rating   Feasibility score   Recommendation
Percentage of income from regional sources                  A                    c                   In
Percentage of graduates working in the region               B                    c                   In
Research contracts with regional partners                   B                    b
Regional joint research publications*                       B                    A
Percentage of students in internships in local enterprises  B                    c                   In
*Data source: …
Both in the institutional and in the field-based data collection, information on graduates' regional labor market entry could not be delivered by most institutions.
Table: Regional engagement (field-based ranking)
Indicator                                               Preliminary rating   Feasibility score   Recommendation
Graduates working in the region                         B                    c                   In
Regional participation in continuing education          B                    c                   Out
Student internships in local enterprises                B                    b-C                 In
Degree theses in cooperation with regional enterprises  B                    b-C                 In
Summer schools                                          C                    C
While far from good, the data situation on student internships in local enterprises and on degree theses in cooperation with local enterprises turned out to be less problematic in business studies than in engineering.
and in many non-metropolitan regions they play an important role in the recruitment of higher education graduates.

6.3 Feasibility of data collection
As explained in section 5.3, data collection during the pilot …
In some countries the U-Multirank student survey conflicted with existing national surveys, which in some cases are highly relevant for institutions.
While a field period of four to six weeks after sending out invitations to students seems appropriate at individual institutions, the time window needed to organize a student survey across all institutions has to be at least six months.
…and data quality, this problem will be mitigated.

6.3.2 Student survey data
One of the major challenges regarding the feasibility of our global student survey is
whether the subjective evaluation of their own institution by students can be compared globally or whether there are differences in the levels of expectations or respondent behavior.
In our student questionnaire we used 'anchoring vignettes' to control for such effects. Anchoring vignettes are a technique designed to ameliorate problems that occur when respondents interpret rating scales differently. (For a detailed description see appendix 9.) Our general conclusion from the anchoring vignettes analysis was that no correlation could be found between the students' evaluation of the situation in their own institutions and their assessment of the vignettes. This implies that the student assessments were not systematically influenced by differences in levels of expectation (related to different national backgrounds or cultures), and thus that data collection through a global-level student survey is sufficiently feasible.
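To illustrate how anchoring vignettes work (a generic sketch of the standard nonparametric rescaling, not the analysis actually run in the pilot; all ratings below are invented):

```python
# Nonparametric anchoring-vignette rescaling (generic sketch).
# Every respondent rates the same hypothetical scenarios (vignettes) on the
# same scale as their self-assessment. Recoding the self-assessment by its
# position among the respondent's own vignette ratings removes individual
# differences in scale use.

def rescale(self_rating: int, vignette_ratings: list[int]) -> int:
    """Return the rank position of the self-rating relative to the
    respondent's ordered vignette ratings (1 = below all vignettes,
    2 = between vignettes 1 and 2, ..., k+1 = above all vignettes)."""
    ordered = sorted(vignette_ratings)
    return 1 + sum(self_rating > v for v in ordered)

# Two respondents give the same raw satisfaction score of 3 (on a 1-5 scale)
# but use the scale very differently, as their vignette ratings reveal:
lenient = rescale(3, [1, 2, 2])   # rates everything high -> position 4
strict  = rescale(3, [3, 4, 5])   # rates everything low  -> position 1
print(lenient, strict)            # 4 1: same raw score, different meaning
```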
While students can be asked about their learning experience in the same way across different fields, … and clinical education are relevant indicators in the teaching and learning dimension. Following the user- and stakeholder-driven approach of U-Multirank, …
[Table: sample field-based ranking results for nine institutions, sorted by indicator, across the five dimensions (Teaching & Learning, Research, Knowledge Transfer, International Orientation, Regional Engagement), with indicators including student/staff ratio, graduation rate, qualification of academic staff, research publication output, external research income, citation index, % income from third-party funding, CPD courses offered, start-up firms, international academic staff, % international students, joint international publications, graduates working in the region, student internships in local enterprises, and regional co-publications]

In chapter 1 we discussed the necessity of multidimensional and user-driven rankings for epistemological reasons.
An example is a detailed view of the results of a department (the following screenshot shows a sample business administration study program at bachelor and master's level).
Context factors may affect the decision-making processes of users of rankings (e.g. students, researchers), although they are not linked to the performance of institutions.
For prospective students intending to choose a university or a study program, low student satisfaction scores regarding the support by teaching staff in a specific university or program are relevant information, although the indicator itself cannot explain the reasons behind this judgment. Rankings also have to be sensitive to context variables that may lead to methodological biases.
In addition, access to and navigation through the web tool will be made highly user-driven by specific 'entrances' for different groups of users (e.g. students, researchers/academic staff, institutional administrators, employers), offering specific information. In particular for 'lay users' (e.g. prospective students), the availability of various language versions of U-Multirank would increase usability.
In deciding which fields to add, it would make sense to focus on fields with significant numbers of students enrolled, … such as staff data (the proper and unified definition of full-time equivalents and the specification of staff categories such as 'professor' is an important issue for the comparability of data) or data related to students and graduates. EUMIDA could contribute to improving the data situation regarding employment-oriented outcome indicators.
Although the mobility of students is increasing, the majority of students, in particular undergraduates, will continue to start higher education in their home country. Hence field-based national rankings and cross-national regional rankings (such as the CHE ranking of German …
requiring the involvement of many key players (governments, European Commission, higher education associations, employer organizations, student organizations).
The implementation has to separate ranking from higher education policy issues such as higher education funding or accreditation.
…i.e. student organizations and associations of higher education institutions, would be responsible for the operation of the transparency instrument.
Doubts about the commitment to social values of the European Higher Education Area (e.g. no free access for student users?).
Major cost factors are, for instance, the realisation of student and graduate surveys and the use of databases charged with license fees, e.g. bibliometric and patent data.
For instance, a student survey is much more expensive if universities have no e-mail addresses of their students, requiring students to be addressed by letters. Another cost driver is the frequency of updating the ranking data. A multidimensional ranking with data from the institutions will not be updated every year; the best timespan for rankings has to take into account the trade-off between obtaining up-to-date information …
For instance, the EC could take on the role of promoter of students' interests and could see the delivery of a web tool free of charge to students as its responsibility. To ensure students' free access to U-Multirank data, the EC could also provide, in the long run, direct funding of the user charges that would otherwise have to be imposed upon the students. There are a number of opportunities to find funding sources for U-Multirank:
a) Basic funding by the governing institutions in the form of a lump sum. This is realistic for the government model (e.g. basic funding by the EU) and for the independent …
b) Funding/sponsorship from other national and international partners interested in the system.
c) Charges to the ranking users (students, employers etc.). To keep the web tool free of charge, especially for students, an equivalent to the charges could be paid by the EC.
Charges to the users of the U-Multirank web tool would seriously undermine the aim of creating more transparency in European higher education, for example by excluding students; the EC could pay for student user charges. Project-based funding for special projects, for instance new methodological developments or rankings of a particular 'type' of institution, offers an interesting possibility with chances of cross-subsidization.