Synopsis: Data:


ART72.pdf

Information from the Delphi method and scenario is converged using text mining to position scientific and technological areas in a big picture.

and base or general-purpose technologies tend to have little chance of being mentioned. 3.2 Procedure of combination. Text mining is employed to combine information from two sources, i.e.

Scores are assigned to all the keywords by the term frequency-inverse document frequency (TF-IDF) method shown below, which is generally used in text mining;
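As a minimal illustration of the TF-IDF scoring described above, the Python sketch below computes term frequency times inverse document frequency over a toy corpus; the documents, tokenisation and exact weighting variant are assumptions for illustration, not the authors' implementation.

```python
import math
from collections import Counter

# Toy corpus: each "document" stands in for a Delphi topic or scenario text.
docs = [
    "hydrogen fuel cell storage technology",
    "hydrogen production technology for fuel cell vehicles",
    "text mining of scenario and Delphi documents",
]
tokenised = [d.split() for d in docs]
n_docs = len(tokenised)

# Document frequency: number of documents in which each term occurs.
df = Counter()
for tokens in tokenised:
    df.update(set(tokens))

def tf_idf(term, tokens):
    """Term frequency times inverse document frequency for one document."""
    tf = tokens.count(term) / len(tokens)
    idf = math.log(n_docs / df[term])
    return tf * idf

for i, tokens in enumerate(tokenised):
    scores = {t: round(tf_idf(t, tokens), 3) for t in sorted(set(tokens))}
    print(f"doc {i}: {scores}")
```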

Correspondence analysis is a widely used method to grasp the relations between two different categories of data.
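The standard correspondence-analysis computation can be sketched directly from a contingency table; the keyword-by-category counts below are invented, and the formulation follows the usual SVD of standardised residuals rather than the authors' specific software.

```python
import numpy as np

# Hypothetical contingency table: rows = keywords, columns = categories of data
# (e.g. Delphi topics vs. scenario themes); all counts are invented.
N = np.array([[20.0,  5.0,  2.0],
              [ 4.0, 18.0,  6.0],
              [ 1.0,  7.0, 15.0]])

P = N / N.sum()            # correspondence matrix
r = P.sum(axis=1)          # row masses
c = P.sum(axis=0)          # column masses

# SVD of the standardised residuals yields the principal axes of the map.
S = np.diag(r ** -0.5) @ (P - np.outer(r, c)) @ np.diag(c ** -0.5)
U, sv, Vt = np.linalg.svd(S, full_matrices=False)

# Principal coordinates place rows and columns in the same low-dimensional map.
row_coords = (np.diag(r ** -0.5) @ U) * sv
col_coords = (np.diag(c ** -0.5) @ Vt.T) * sv

print(np.round(row_coords[:, :2], 3))   # keyword positions (first two axes)
print(np.round(col_coords[:, :2], 3))   # category positions (first two axes)
```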


ART74.pdf

supported by data and literature. The fiches draw out potential disruptive factors that will constrain or accelerate the phenomenon described.


ART75.pdf

data analysis; and discussion and dissemination of results. More than 2,000 experts from 40 Russian regions took part in the Delphi survey,

As the result of this foresight exercise a large database of promising S&T areas was created, with integrated scores for all selected criteria.

'Technologies for environmentally safe processing and recycling of consumer and industrial waste' and 'Geoinformation database of forest fires in Russia, allowing monitoring of fire situations in real time'.

and wastes. 3. Geoinformation database of forest fires in Russia, allowing monitoring of fire situations in real time (number of fires

natural and anthropogenic disasters and their consequences based on monitoring data and advanced understanding of their origins and development 57.1 1.71 Techniques for prospecting natural resources,

in the FS1 framework a large database of promising S&T areas was created; this allowed policy makers to derive a wide range of information, for example:

and others. A large database of promising S&T areas, assessed against nine criteria by 2,000 experts.

Table IV. The influence of the foresight studies on policy decision-making (columns: influence on policy-making; evaluation of influence on policy-making). FS1: The foresight data were used as an information source for many political purposes:


ART76.pdf

The objective of this paper is to develop an integrative method for systematically clustering, analyzing and visualizing the path for technology development and transformation.

competitor profiling, early warning assessment, scientometrics, science mapping, scenarios, network analysis and so forth (Calof and Smith, 2010).

There is a long history in economics of the use of patent data to understand the process of invention and innovation (Griliches, 1990;

While some scientific literature databases have been reclassified by using IPC codes, this kind of capability gap identification becomes easier.

For example, The Inspec Database, produced by the Institution of Engineering and Technology (IET), contains records from the world's technical

and confirmed further by other complementary data or sources. Note 1. NodeXL is a free online tool for social network analysis,

He works on data processing and text mining, and adopts these mechanisms to conduct research into science and technology development trends.

His research interests include foresight, data mining, and learning technologies. Cheng-Hua Ien received an MS degree in Food Science and Technology from Taiwan University in 1983.


ART78.pdf

We define data as quantitative when consisting of numerical information and a methodology as quantitative when applying statistical/mathematical tools.

In contrast, we define data as qualitative when consisting of non-numerical information (such as text, images,

A participatory method, regardless of the qualitative or quantitative data it uses, is one in which the outcome requires the active interaction of different types of stakeholders.

1. Foresight practitioners have concentrated traditionally on participatory methods based on qualitative data, on the grounds that quantitative extrapolation from past data is not sufficient to address the uncertainties of the future

and that emerging changes in the socioeconomic and technological landscapes need to be taken into account.

Tuomi 14 suggests that the ontological unpredictability of the innovation process cannot be removed by more accurate data or incremental improvements in existing predictive models.

and Cahill 1 for further details on the origin and definition of the acronym FTA. 2. Quantitative participatory methods could for instance relate to the online sharing of large amounts of data,

or the engagement of a wider group of participants in data analysis (for the latter, see Cooke and Buckley 7). 3 In this regard,

who see FTA exercises as attempts to collect knowledge about 'posits' or possible futures, their plausibility and limits, their internal consistency and conformity with models and data, their consistency with expert judgement,

Qualitative data can provide additional evidence to quantitative models by inclusion of new indicators created from quantified expert judgments.

as there are sources of bias in the EFMN database (see Keenan and Popper 26 for a critical assessment of the exercise).

Järvanpää et al. 34 analyse the use of bibliometric data for distinguishing between science-based and conventional technologies,

Visualisation of quantitative data 36 can be a useful way of bringing these data to a workshop or another qualitative process.

For example, Web 2.0 tools allow for the collection of both quantitative and qualitative data or for the quantitative analysis of qualitative data (such as statistical analysis of stakeholder opinions or networking behaviour).

Such exercises push experts in quantitative and qualitative techniques closer to each other, therefore enhancing cross-disciplinary learning.

Examples of current and upcoming FTA practices. Internet-based tools allowing for integration of data of various sorts. Online sharing of perspectives on different data types:

new technologies such as Web 2.0 can be used by FTA to streamline operations by increasing interactive participation of stakeholders, speeding up the provision of information and feedback and integrating data of different sorts (pictures

, documents, numerical data, free text, videos). 6. The Risk Assessment and Horizon Scanning Initiative (Singapore) developed a Service Oriented Based Horizon Scanning Architecture (SOSA) allowing the sharing of perspectives on data sets

in order to amplify data outliers and help users avoid getting blind-sided through premature convergence 40.

It consists of an intranet-based network of people, tools and data (from unstructured text from the internet to reports uploaded by experts),

and the sharing of perspectives across the network is supported by a set of perspective visualisation tools.

Online analysis of data and creation of knowledge repositories: Cooke and Buckley 7 believe that Web 2.0 tools can be used to make data of all sorts accessible to respondents and researchers:

Respondents no longer merely respond to signals: they generate the data, they edit it, via their communal participation, revising it in response to others,

irrespective of whether the others are researchers, clients or respondents (p. 289). However, to date no concrete examples of this approach could be identified,

Other tools and disciplines that can serve as an interface to facilitate the use of qualitative and quantitative approaches and data. Social network analysis:

When used in combination with foresight data collected online, network analysis can be used to enable robust analysis of foresight data,

which are often complex to present and codify. Nugroho and Saritas 42 propose a framework for this, building on online foresight survey data,

and by pointing at benefits in the various phases of a foresight process: it reveals the structural features of the data

and can inform the foresight process on emerging links or relationships, groups or clusters. The implication for foresight methods is that network analysis can introduce a 'systemic' perspective emphasising relationships between actors,

key issues and trends. Visualisation techniques and strategic design: During the 2011 International Seville Conference on FTA, the use of images and visualisation techniques was suggested as a tool,

even in projects combining quantitative and qualitative methods and data. 7. At first sight, this method is more suitable for FTA for businesses.

or qualitative, depending on the type of data they rely on. They may generate, as an output, informed estimates about the future.

the data (advantages and limitations) that have been used, and the alternatives (or lack thereof) amongst which the analyst had to choose.

Interestingly, our reflection on the epistemology-skills-trust triangle is in line with Bryman 62, who focuses on the integration of qualitative and quantitative data and methods in social sciences.

Based on interviews with social scientists he identifies eight barriers to integration of qualitative and quantitative data and methods,

perceptions on the expectations of different audiences, methodological preferences of the (mixed methods) researcher, structure of the research project, different timelines for different method types, skill specialisms, the nature of the data, ontological differences,

In this way it is possible to make more intelligible how data are collected, processed and analysed in the process.

I. Sakata, K. Matsushima, Detecting emerging research fronts in regeneration medicine by the citation network analysis of scientific publications, Technol.

Nonreactive Research in the Social Sciences, Rand McNally, Chicago, 1966. 50 S. Sarantakos, Social Research, Macmillan, Basingstoke, 1993. 51 D. Silverman, Interpreting Qualitative Data:

methods research and data analyses, J. Mixed Methods Res. 4 (4) (2010) 342-360. 60 R. B. Johnson, A. J. Onwuegbuzie, Mixed methods research:


ART79.pdf

, fitting a curve to the historical data under the assumption that whatever forces are collectively driving the trend will continue into the future unabated.
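The sketch below shows this kind of trend extrapolation with a logistic (S-shaped) growth curve fitted by SciPy; the yearly counts and the choice of functional form are assumptions for illustration, not the authors' actual data or model.

```python
import numpy as np
from scipy.optimize import curve_fit

# Invented historical series, e.g. annual patent counts for one technology.
years = np.arange(1990, 2011)
counts = np.array([3, 4, 6, 9, 14, 20, 30, 44, 62, 85, 110,
                   138, 165, 190, 210, 226, 238, 246, 252, 256, 258], dtype=float)

def logistic(t, K, r, t0):
    """S-shaped growth: K = saturation level, r = growth rate, t0 = inflection year."""
    return K / (1.0 + np.exp(-r * (t - t0)))

params, _ = curve_fit(logistic, years, counts, p0=[300.0, 0.3, 2000.0])

# Extrapolate on the assumption that the forces driving the trend continue unabated.
future_years = np.arange(2011, 2021)
print(np.round(logistic(future_years, *params), 1))
```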

and data source. The most fundamental and challenging task is to select suitable indicators and data sources.

Thirteen indicators are selected for TLC assessment (Table 2). All the data of the indicators are extracted by priority year (the first filing date year for a patent application

In this research, we choose the Derwent Innovation Index (DII) as the data source and VantagePoint (VP) for data cleaning and extraction.

three kinds of dates are included in the DII database: application year, priority year, and basic year.

Author and indicators (table fragment): Robert J. Watts, Alan L. Porter 14: number of items in databases such as Science Citation Index; number of items in databases such as Engineering Index; number of items in databases such as U.S. patents; number of items in databases such as Newspaper Abstracts Daily; issues raised in the Business

But the patent information in the early years is unavailable (patent data in DII covers 1963 to the present).

and analyse indicator data. 2.4. Data process. First, we develop a map for 13 indicators of each training technology.

It is common to process multidimensional data as a matrix. The original data are extracted by VantagePoint

and imported into MS Excel: 13 rows of indicators, 30 columns (years) for TFT-LCD (from 1978 to 2007), 36 columns (years) for CRT (from 1972 to 2008),

We propose a normalisation method with two steps to pre-process the original data.

The first step is data smoothing by calculating three-year moving averages. The original data are defined as $A = [A_1; A_2]$ (1). Here $A_1$, $A_2$ represent the original data of TFT-LCD and CRT respectively.

Then the smoothed data of TFT-LCD and CRT are defined as $\bar{A} = [\bar{A}_1; \bar{A}_2]$ (2).

$\bar{A}_1$, $\bar{A}_2$ represent the smoothed data of TFT-LCD and CRT respectively. The next step is to divide the smoothed data by their maximums.

The normalised data are defined as $\hat{A} = [\hat{A}_1; \hat{A}_2]$ (5), with $\hat{A}_1(i,j) = \bar{A}_1(i,j) / \max_j \bar{A}_1(i,j)$, $i = 1, \dots, 13$, $j = 1, \dots, 30$ (6), and $\hat{A}_2(i,j)$ defined analogously.

$\hat{A}_1$, $\hat{A}_2$ represent the normalised data of TFT-LCD and CRT respectively. We then apply the same normalisation steps to the NBS data.

The smoothed data and the final normalised data of NBS are defined as $\bar{B}$ and $\hat{B}$ respectively, with $\bar{B}(i,k) = [B(i,k+1) + B(i,k) + B(i,k-1)]/3$ (9). Then the nearest neighbour (NN) classifier is applied to the normalised data to measure the stage status of NBS.
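A minimal sketch of the two-step pre-processing described above (three-year moving average, then division of each indicator row by its maximum); the indicator matrix is invented, and the treatment of the first and last years is an assumption of this sketch.

```python
import numpy as np

def smooth_three_year(A):
    """Three-year moving average along the year axis (columns)."""
    kernel = np.ones(3) / 3.0
    return np.apply_along_axis(
        lambda row: np.convolve(row, kernel, mode="same"), axis=1, arr=A)

def normalise_by_row_max(A):
    """Divide each indicator (row) by its maximum over the years."""
    return A / A.max(axis=1, keepdims=True)

# Illustrative stand-in for the 13-indicator x 30-year TFT-LCD matrix.
rng = np.random.default_rng(0)
A1 = rng.random((13, 30)).cumsum(axis=1)

A1_normalised = normalise_by_row_max(smooth_three_year(A1))
print(A1_normalised.shape, float(A1_normalised.max()))   # (13, 30) 1.0
```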

In the paper, we employ it to process the multidimensional (13-D) data. The normalised data of TFT-LCD and CRT form the training set O (O ⊂ R^13),

and the normalised data of NBS are considered as a test set (⊂ R^13). There are 30 training points in the TFT-LCD training set,

36 training points in the CRT training set, and 24 test points in the NBS test set.
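A minimal nearest-neighbour sketch consistent with that setup, assuming each year is a 13-dimensional normalised vector and that Euclidean distance is used; the vectors and the life-cycle stage labels attached to the training years are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Training set: 30 TFT-LCD + 36 CRT year vectors (13-D), each with an invented
# life-cycle stage label; in the study these come from the normalised patent data.
train_X = rng.random((66, 13))
train_stage = np.array(["emerging"] * 20 + ["growth"] * 20 + ["maturity"] * 26)

# Test set: 24 NBS year vectors.
test_X = rng.random((24, 13))

def nearest_stage(x, X, labels):
    """Assign the stage of the closest training point (Euclidean distance)."""
    return labels[np.argmin(np.linalg.norm(X - x, axis=1))]

nbs_stages = [nearest_stage(x, train_X, train_stage) for x in test_X]
print(nbs_stages[:5])
```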

Technology managers might inform their NBS R&D investments by analysing patent application data from 1997 to the present to identify hot research topics or technological gaps.

to process the 13-D data by calculating the nearest distance among the test point

since data for all the indicators can be downloaded from most patent databases. Certainly, our study possesses limitations.

and obtain more data to validate the method. Second, we did not consider the technology type.

European Management Forum, Davos, 1981. 10 H. Ernst, The use of patent data for technological forecasting: the diffusion of CNC-technology in the machine tool industry, Small Bus. Econ. 9 (4) (1997) 361-381. 11 T. H. Lee, N. Nakicenovic, Life cycle of technology

use of patent data, IEEE in Beijing, 2008. 22 M. Meyer, Does science push technology? Patents citing scientific literature, Res.


ART8.pdf

complex networks, simulation modeling of CAS and the search of vast databases. Such convergence has led to a rejuvenation and growth in FTA methods and practice,

which otherwise open the way to the revival of Joseph Schumpeter's ideas of an evolutionary global economy driven by the clustering of basic innovations

Complex network analysis. This is a new and emergent scientific branch that is finding increasing application in a wide range of fields, from the physical sciences to the life sciences and the social sciences.


ART80.pdf

and explore thousands of plausible scenarios using simulation models, data mining techniques, and robust optimization. The proposed approach,
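A minimal sketch of that ensemble idea: sample many plausible futures over uncertain drivers, evaluate a simple model for each, and compare candidate policies with a regret-style robustness measure. The toy model, parameter ranges and policies are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)
n_scenarios = 5000

# Uncertain drivers sampled over plausible ranges (illustrative only).
demand_growth = rng.uniform(0.00, 0.05, n_scenarios)   # annual growth rate
cost_shock = rng.uniform(0.00, 0.30, n_scenarios)      # fraction of benefit lost

def net_benefit(investment, growth, shock):
    """Toy model: ten-year net benefit of an investment level under one scenario."""
    return investment * (1 + growth) ** 10 * (1 - shock) - investment

policies = {"low": 10.0, "medium": 50.0, "high": 100.0}
outcomes = np.vstack([net_benefit(p, demand_growth, cost_shock)
                      for p in policies.values()])

# Regret of each policy in each scenario, relative to the best policy there;
# a robust policy keeps its worst-case regret small.
regret = outcomes.max(axis=0) - outcomes
print({name: float(regret[i].max()) for i, name in enumerate(policies)})
```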

Res. 41 (1993) 435-449. 39 J. H. Friedman, N. I. Fisher, Bump hunting in high-dimensional data, Stat.

1998) 769-805. 58 A. Ben-Tal, A. Nemirovski, Robust solutions of linear programming problems contaminated with uncertain data, Math.


ART81.pdf

EMA is first and foremost an alternative way of using the available models, knowledge, data, and information.

Another example is the case where there is ample data available but also disagreement or uncertainty about which data to use.

EMA can be used to identify the extent to which the choice of data influences the model outcomes.

Instead of debating the choice of the right data, the debate can then shift to the development of policies or plans that produce satisfying results across the alternative sets of data.

Other possible uses of EMA include the identification of extreme cases, both positive and negative,

in order to get insight into the bandwidth of expected outcomes, and the identification of conditions under which significant shifts in performance can be expected.

Therefore, there is a need for data reduction techniques. One way of analyzing the results is to identify runs that share the same dynamic behavior over time.

Clustering then takes place on the basis of the concatenations. If a hard clustering algorithm is used,

which is to say that the entire concatenation needs to be identical, then Table 2 is the result.
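A minimal sketch of that hard grouping, assuming each run has already been reduced to a sequence of qualitative behaviour labels over time; runs are placed in the same cluster only when their entire concatenated label sequence is identical. The runs and labels are invented.

```python
from collections import defaultdict

# Each run reduced to a sequence of qualitative behaviour labels (invented).
runs = {
    "run_1": ["growth", "growth", "stagnation"],
    "run_2": ["growth", "decline", "decline"],
    "run_3": ["growth", "growth", "stagnation"],
    "run_4": ["growth", "decline", "decline"],
    "run_5": ["stagnation", "decline", "decline"],
}

# Hard clustering: the entire concatenation must match exactly.
clusters = defaultdict(list)
for run_id, behaviour in runs.items():
    clusters["-".join(behaviour)].append(run_id)

for pattern, members in clusters.items():
    print(pattern, members)
```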

There is an emerging field that studies the clustering of time series data. A wide variety of methods and techniques are being explored 34.

Classification trees are a frequently employed data mining technique 46. They are used to predict class membership based on a set of attributes.

else it is coded as 1. Fig. 6 shows a classification tree that results from this analysis. The tree was generated using the open source data mining package Orange 47.

This is a C++ library with Python bindings to many useful data mining and machine learning algorithms. This tree can be used to see how the uncertainties jointly affect the extent of a transition towards more sustainable generation
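The article reports generating its tree with the Orange package; as a stand-in, the sketch below uses scikit-learn's decision tree to predict the 0/1-coded transition outcome from two uncertain inputs, with all values and the labelling rule invented for illustration.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(7)

# Uncertain inputs for 500 simulated runs (invented): e.g. a carbon price and a
# learning rate for renewable technologies.
X = rng.uniform([0.0, 0.05], [100.0, 0.30], size=(500, 2))

# Outcome coded 0/1: did a transition towards more sustainable generation occur?
# An invented threshold rule stands in for the actual simulation model.
y = ((X[:, 0] > 40.0) & (X[:, 1] > 0.15)).astype(int)

tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(export_text(tree, feature_names=["carbon_price", "learning_rate"]))
```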

and different sources and types of information and data. EMA offers practitioners a model-based method for handling such situations.

The three cases also illustrate the need for combining EMA with machine learning or data mining techniques

That is, the systematic exploration of a wide variety of uncertainties produces large datasets that need to be analyzed further using machine learning or data mining techniques in order to extract decision relevant information from it.

Chang. 17 (2007) 73-85. 33 J. H. Friedman, N. I. Fisher, Bump hunting in high-dimensional data, Stat.

Clustering of time series data: a survey, Pattern Recog. 38 (2005) 1857-1874. 35 J. H. Kwakkel, W. E. Walker, V. A. W. J. Marchau

I. Bratko, G. Shaulsky, B. Zupan, Microarray data mining with visual programming, Bioinformatics 21 (2005) 369-398. 48 B. P. Bryant, R. J. Lempert


ART82.pdf

it is not sensible to extrapolate the future from data and relationships of the past.

unstructured data that implicate potential discontinuities 72. In addition, including perspectives from the different stakeholders can reveal new areas for innovation 73. 4.2.3.

not only deal with the collection of data and models; they also involve the interaction of the stakeholders, their ideas, values and capacities for social change.


ART84.pdf

and to create scenarios by clustering trends that are assumed to occur simultaneously. The INFU project followed a similar approach by combining the inductive scenario building concept with a weak signal scanning activity.

and to suggest a clustering of the visions. Finally, it was discussed which visions were most interesting

Amplification: Today, data on the behaviour of people is already collected constantly and used for individual marketing based on user behaviour.

At the same time more and more companies look into diverse databases and use crowdsourcing to foster their innovation, to get inspiration and to benchmark creative dynamics in their sectors.


ART87.pdf

but apparently, the authors still find the clustering of such regions useful. We recognise, of course, that for some authors,

However, data on this dimension are only available for a much smaller number of countries,

and no data are available for Denmark. Two dimensions are of special interest for this paper:

This fact challenges Keenan and Popper's factors for explaining variations and similarities in regional foresight data.

a discussion paper was prepared that contained the government's overall objectives for the theme and key data and prerequisites.

The expert panel had relative freedom to carry out the clustering of the themes, but it was stressed that their work should reflect the main thrust of


ART88.pdf

and data sources. We compared experiences at the local level with experiences at the national level.

and whether there was sufficient willingness to cooperate with the study and access to civil servants for interviews and other data sources.

One of the authors conducted both national level inquiries that were used as data sources for this article.


ART89.pdf

Such understanding may be supported by earlier research and available databases. At the same time, a spectrum of foresight methods can be applied to develop a better understanding of possible future developments of the systems under analysis 7. In this context

patent databases as well as existing research and worldwide roadmaps on manufacturing. The mapping results were brought together with partners' and stakeholders' experience,

dissemination activities; research partners' databases (formal); online surveys, wiki platform, website, dissemination activities; personal contacts and Internet (informal). C. Cagnin,


ART9.pdf

He focuses on text mining for technology intelligence, forecasting and assessment. Michael Rader, Dr. Phil, Sociologist: studied sociology, psychology, political science and economics.


ART90.pdf

After clustering, this resulted in 13 unique ideas: (1) 3D images, (2) community functions through DTV, (3) DTV as an embedded open source platform where everyone can develop applications,

In the first, exploratory phase, both primary and secondary data sources were used to scan the TV industry

and 'fleshed' based on the secondary data that were gathered. Personas are 'fictitious, specific, concrete representations of target users' that are used for conveying information about a (future) user population in product design and innovation processes 27.

Personas are usually based on empirical data and real-world observations. In most cases however, researchers need to fall back on secondary sources 28

which were explored further in the cultural probing. 3.2.2.2. Phase 2. Fig. 2 provides a schematic overview of the different personas that were developed in phase 2, based on the gathered data on current

Appendix A. Supplementary data. Supplementary material related to this article can be found, in the online version, at http://dx.doi.org/10.1016/j.futures.2014.01.009.


ART91.pdf

and the regularity by which related and relevant data is collected 32. 5. Strategic management of initiatives: these refer to the actions selected in each of the four BSC perspectives to achieve the defined strategic targets (step 1 above).


ART92.pdf

To collect data for the EICT and EIT ICT Labs case studies a participant-observer approach was utilized. 2 In both cases

data collection instruments included access to key documents, such as reports, internal documents, presentations and meeting minutes, and observations through active participation within the organizations and, to some extent, in the build-up phase in the WINN

Also, a stand-alone and self-sustaining foresight process run by EICT could draw on the broad data basis available through the involvement of all partners.

It should be noted that this article is based on data from three cases. Although these give important impulses for research addressing foresight


Science.PublicPolicyVol37\1. Introduction to a special section.pdf

threats) and scorecard analyses (Sripaipan, 2006), analytical hierarchy process, data envelopment analysis, multicriteria decision analyses. Combinations: scenario-simulation (gaming),

and data of three governmental horizon scans. Table 2. FTA scores for modelling and horizon scanning (columns: characteristic, score, comment, for each of the two methods).

The analysis leads to specific process recommendations for national horizon scans related to how data are gathered, analysed, synthesised and used.

It concludes with a proposal to build a European network for using joint scan data


Science.PublicPolicyVol37\2. Joint horizon scanning.pdf

not only during the collection of data, but also to guide the interpretation and synthesis of data and to create support for the implementation of results.

Who engages in horizon scanning? Horizon scans are initiated and used by different private and public organisations, mainly for strategic reasons.

compare basic data (lists of issues and issue descriptions from the horizon scans of the UK, the Netherlands and Denmark);

develop a model for continuous data sharing and comparison; compare working methods and methodologies used by the different horizon scans

and the ways in which the scan data were used. Joining up the data: To compare the data of the different scans

and create a common corpus for further analysis, a joint database was developed on the basis of the Sigma Scan of the UK Foresight HSC.

This database was adapted to incorporate the data from the Danish and Netherlands horizon scans. Comparison of the scan data: The comparison of data was based on the data of the UK HSC Sigma Scan 11 and Delta Scan 12 as published on the internet

and the data in the report on Denmark (OECD, 2007) and the Netherlands' Horizon Scan Report 2007 (In't Veld et al.

2008). To facilitate the comparison, some relabelling of the categories that were used was necessary (see Table 1). From these categories 13 we derived the following set of main categories:

society (including demographical issues) without public services; S&T (including S&T policy); economy and finance (including its governance);

Analysing data: Data were compared on the subcategory level. An attempt was made also to select some issue clusters with estimated high impact to investigate the usefulness of joint horizon scanning as preparation for more in-depth foresight to design common policies

The possible use of the horizon scan data at the European Commission (EC) level was discussed in interviews with representatives of different directorates within the EC.

-and decision-makers (by supplying systematically gathered and analysed data on opportunities, challenges and options to provide the basis for resilient

Development of the national horizon scans. Data collection: All three scans were developed in phases. In the first phase

these essays were then published in the Sigma and Delta Scan databases. In the Netherlands, papers were drafted after issue clusters had been developed using creative group thinking exercises.

The gathering of data for the UK Sigma Scan was facilitated by Outsights-Ipsos MORI, while the Delta Scan of the S&t developments was carried out by the Institute for the Future.

The primary data for the Danish scan were delivered by the OECD International Futures Programme Unit with support from DASTI,

discussions with representatives from different ministries. The primary data for the Netherlands scan were collected by the COS Horizon Scanning team

After completion, the data in the OECD DASTI scan were published on the OECD website. Principal use of the scan data: The UK horizon scan (see Figure 1) has tended to be used as part of a client-oriented project approach

where the starting point is a client (for example, a government department reflecting on its strategic direction or policy).

Part of the HSC engagement with the client will be an analysis of scan data (and data from other specialist sources) relevant to the client's policy domain. Depending on the issues encountered in this analysis,

workshops may be organised with different stakeholders, providing a broad range of inputs to the policy and creating relevant new networks that cross not only policy domains

In Denmark, the scan issues were used as input for the selection of prioritised research themes in a four-year cycle of research funding (see Figure 3). The scan data were used alongside the outcome of a public internet 'hearing' process that delivered an additional input

of 366 proposals from the general public. (Figure 1. Sigma Scan development: scan the scans; categorise data; create the e-database; workshops and synthesis; cross-linkages with policy; themed scenarios; extranet, peer review and discussion groups; updated and final databases; reports and multimedia from the data analysis phase.)

The launch of the scan data was covered well by the UK and some international press.

Joint database: A joint database has been established containing 430 issues, of which 159 are from the Netherlands horizon scan, 125 from the Danish scan and 146 from the UK scan.

The distribution of the issues over the different categories in each scan is shown in Table 3. Analysis of the joint data

the data can quite easily be compared at the revised subcategory level. This comparison led to the conclusion that the scans contained many similar issues that were closely related and that were taken up in all three scans (or at least in two).
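A minimal sketch of such a comparison, assuming the joint database can be exported as a flat table of issues with a source-scan column and a subcategory column; the column names and example rows are assumptions for illustration.

```python
import pandas as pd

# Assumed flat export of the joint database: one row per issue.
issues = pd.DataFrame({
    "scan":        ["UK", "UK", "NL", "DK", "NL", "DK", "UK", "NL"],
    "subcategory": ["ageing", "energy", "ageing", "ageing",
                    "energy", "pandemics", "pandemics", "pandemics"],
})

# Cross-tabulate issues per subcategory and scan to spot themes taken up
# in all three scans (or at least in two).
table = pd.crosstab(issues["subcategory"], issues["scan"])
print(table)
print("In all three scans:", list(table[(table > 0).all(axis=1)].index))
```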

if data can be incorporated from scans developed by countries on the other side of the world, at different stages of economic development or with contrasting political (and geopolitical) systems.

or link all scan data in one central database that will be used to develop proposals for joint foresight on common themes (through EC

but also that the shared scan data provide a common basis for further joint foresight to develop joint research programs and even policies.

Cooperation: The use of joint scan data at the European level could offer a useful way of addressing the complex challenges the world

and which will focus on the use of scan data to address particular challenges that were indicated in the EC's World 2025 exercise (Fauroult, 2009).

and countries and organisations that contribute relevant data and expertise. This network would then be available to policy groups within the EC (and other international groups),



