Data

Clustering (43)
Data (486)
Data analysis (6)
Data gathering (7)
Data mining (18)
Database (127)
Network analysis (12)
Qualitative data (8)
Quantitative data (13)
Text mining (31)

Synopsis: Data


ART1.pdf

The main proposal in terms of process advanced the view that full use should be made of ICT in enabling data collection

There was no discussion of data-based systems, only judgement-based systems. A wide range of techniques and tools were used in complex combinations

and moulding expert opinions into good conclusions remains an elusive goal. 4. Tales from the frontier. The contributions to this session had a fairly common theme in that they focussed on the establishment of databases and the associated data collection,


ART11.pdf

the Project Team analyzed issues based on the assessment data. For each issue, key statistics were calculated (e.g.,


ART12.pdf

Threats to health, safety and the environment can be identified by searches both in the patent data

Blind [25] shows, based on international and inter-sectoral cross-section data, that the output of formal standardisation bodies can be explained significantly by patent applications, a reliable indicator of the dynamics of the respective technologies.

Some studies based on OECD data and other internationally comparable data investigated the influence of the regulatory framework on R&D activities [27] or product innovation [28].

Data requirements/indicators: The simple quantitative use of science and technology indicators in order to detect future challenges for the regulatory framework is not sufficient.

and regions; collection of survey data and preparation of the data set; definition of goal variables of the organisation depending on the possible requirements for regulations and standards;

whose data permits the assessment of the future needs for and impacts of regulations and standards.

who used the data to analyse the interrelationship between standardisation, research and export activities, taking subjective attitudes into account.

which reveals that standards for safety aspects, data security, data formats and customer interaction are most important for the surveyed German service companies.

furthermore, standards to improve data security and information systems in general. In addition, the impacts of standards on central issues and assets of service companies were also asked about

since they require the development of a questionnaire, the performance of a survey either via traditional postal mail or via online survey, the collection and cleaning of the data and finally, the analysis of the data.

Data requirements/indicators: The main advantage of surveys is that they allow the consideration of very specific regulatory challenges in the future,

Hence, they are able to provide unique data in this respect. Depending on the size of these surveys,

and lead to representative results, the data can be combined with indicator-based approaches representing the universe in science and technology.

and automatically checking the content of image data unsuitable for children which are available over networks.

Peta-bps per optical fibre (2011): 4.20, 4.29, 2.39, 2.14, 3.61. Widespread use of an SCM (supply chain management) system to handle data

Practical use of systems capable of understanding and automatically checking the content of image data unsuitable for children

which confirms the positive linkage found in historical data [25], whereas the statistical connection between R&D support and regulation is rather vague. 3.3.4. General assessment. In general,

Data requirements/indicators: The application of the Delphi method to the issue of regulations and standards requires the development of questionnaires,

and assessment of regulatory foresight methodologies (table columns: Methodology; Type; Data requirements; Strengths; Limitations). Indicators: quantitative, also providing qualitative information; adequate science

and even stakeholders; influence of non-technology-related factors cannot be considered. Surveys: quantitative; micro-data of the respondent

the universe. Processing and analysis of data requires large human resources; identification of adequate samples; some types of information are difficult to obtain (answers to counterfactual questions

and semi-quantitative data from Delphi surveys. Consensus-building to reduce uncertainty about regulatory priorities and impacts; impossibility to detect major technological breakthroughs and their regulatory requirements. Semi-quantitative: in case of conflicting interests, missing consensus about priorities; identification of experts; uncertainty increases with complexity of the context (technology, markets


ART13.pdf

both the integration of multiple functions and automated analysis and data handling remain to be accomplished in a self-contained cell-on-a-chip.


ART15.pdf

Second, both employment and financial data, that is, spending on R&D activities by research-performing sectors, suggest a great diversity in terms of the 'weight' of these sectors.

([38], p. 65). Space limits prevent presenting data here; an extensive statistical annex can be found in the original report for DG Research, EC, on which this article draws.

This section, in turn, relies on OECD data, published in [39]. A detailed analysis of some recent trends in universities' research activities can be found in [5]. Other key trends

Data also indicate that universities not only conduct basic research and it is not only universities who conduct basic research (on average,


ART17.pdf

Swanson demonstrates integrative capability by revealing new links between technologies, inherent in the data, which were not readily apparent to the respective scientific communities.

Thus, there is a rich basis of theoretical support for structuring technological component data in a hierarchical format.

The raw network data is not useful for this purpose. A structured representation of technology is needed for multiple reasons:

Without a theory of the data the technology analyst cannot distinguish between meaningful structure and possibly accidental corruption of the knowledge base.

Therefore, without a generative model of the data, the interpretation of the data may not be robust.

A structured representation of the data provides a principled account of where technological change is most likely to occur. The article

and comparing and analyzing these results given observed data. Real world evidence is used to tune and parameterize the specific model representations used.

Parent nodes are not directly observable in the data. Child nodes can be observed directly in the data,

and are given the corresponding labels. 3.1. Example of a hierarchical random graph. An example of a hierarchical random graph is presented below.

which cannot be observed directly in the data, represent morphological principles actively at work in structuring the data.

In the example given below there is a 70% chance that nodes C and D are linked with nodes A and B;

and how concerned we are with a robust representation of the data in the presence of noise.

The hierarchical representation of the data grows more attractive as the network grows larger,

Thus, the hierarchical random graph is a very expressive formalism capable of capturing many possible network relationships. 3.3. Fitting graphs to data. For each of these structures we must also estimate the associated probabilities of network linkages

The best fit is achieved by fitting probabilities according to the actual proportion of linkages observed in the data.

This is the maximum likelihood estimate of the model parameters given the data [21]. With only fifteen possibilities, we can exhaustively search the space of possible network structures.

Every possible network consistent with the data can be enumerated, and the likelihood of each network model given the data can be calculated.

The analyst can then choose the network or networks which provide the best fit to the data.
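To make the fitting procedure concrete, here is a minimal Python sketch (not Cunningham's or Clauset's code; the toy edge set is invented for illustration) that enumerates all fifteen rooted dendrograms over four nodes and scores each by the maximum-likelihood rule described above, where each internal node r receives p_r = E_r/(L_r*R_r), the observed fraction of links between its left and right leaf groups.

```python
from itertools import combinations
from math import log

def dendrograms(leaves):
    """Enumerate every rooted binary dendrogram over the given leaf labels.
    A tree is a nested pair (left, right); a lone label is a leaf."""
    if len(leaves) == 1:
        yield leaves[0]
        return
    first, rest = leaves[0], leaves[1:]
    # Fix `first` in the left subtree so each unordered split is counted once.
    for k in range(len(rest)):
        for combo in combinations(rest, k):
            left = (first,) + combo
            right = tuple(x for x in rest if x not in combo)
            for lt in dendrograms(left):
                for rt in dendrograms(right):
                    yield (lt, rt)

def leaves_of(tree):
    return [tree] if isinstance(tree, str) else leaves_of(tree[0]) + leaves_of(tree[1])

def log_likelihood(tree, edges):
    """Score a dendrogram against an observed edge set: each internal node r
    takes p_r = E_r / (L_r * R_r), the observed fraction of links between
    its left and right leaf groups (the maximum-likelihood choice)."""
    if isinstance(tree, str):
        return 0.0
    left, right = leaves_of(tree[0]), leaves_of(tree[1])
    pairs = len(left) * len(right)
    e = sum((a, b) in edges or (b, a) in edges for a in left for b in right)
    p = e / pairs
    here = e * log(p) + (pairs - e) * log(1 - p) if 0 < p < 1 else 0.0
    return here + log_likelihood(tree[0], edges) + log_likelihood(tree[1], edges)

# Toy four-node network: exactly fifteen candidate dendrograms exist,
# so the space can be searched exhaustively, as the text notes.
edges = {("A", "B"), ("C", "D"), ("A", "C")}
trees = list(dendrograms(("A", "B", "C", "D")))
best = max(trees, key=lambda t: log_likelihood(t, edges))
print(len(trees), "structures; best fit:", best)
```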

Larger networks prevent this exhaustive search process. Nonetheless, a systematic technique for searching through the space of models is still necessary.
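For such larger networks, one possibility is a Metropolis-style stochastic search over dendrograms, sketched below. It reuses log_likelihood from the sketch above but, for brevity, draws naive independent proposals; Clauset et al.'s actual sampler uses local subtree swaps with proper acceptance ratios, so treat this as an illustration of the idea rather than the published procedure.

```python
import random
from math import exp

def random_dendrogram(leaves):
    """Grow a random dendrogram by recursive random splits of the leaf set."""
    leaves = list(leaves)
    if len(leaves) == 1:
        return leaves[0]
    random.shuffle(leaves)
    k = random.randint(1, len(leaves) - 1)
    return (random_dendrogram(leaves[:k]), random_dendrogram(leaves[k:]))

def stochastic_fit(leaves, edges, steps=20000):
    """Metropolis-style search: always accept uphill moves, accept downhill
    moves with probability exp(delta log-likelihood). Independent proposals
    make this a heuristic search, not a calibrated MCMC sampler."""
    current = random_dendrogram(leaves)
    cur_ll = log_likelihood(current, edges)  # from the sketch above
    best, best_ll = current, cur_ll
    for _ in range(steps):
        proposal = random_dendrogram(leaves)
        prop_ll = log_likelihood(proposal, edges)
        if prop_ll >= cur_ll or random.random() < exp(prop_ll - cur_ll):
            current, cur_ll = proposal, prop_ll
            if cur_ll > best_ll:
                best, best_ll = current, cur_ll
    return best, best_ll
```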

include grid computing, the iPod and iPhone, virtualization and LAMP. 4.1. Data collection and comparative analysis. For the case study we collect data about Ajax and component technologies from the Internet.

even if a strict structuralist account of the data is not adopted. A graphical presentation of a subset of the Wikipedia network near AJAX is given.

which can be interpreted only in light of a more elaborate model of the data. A concluding section of the paper reflects upon the sociology of science,

in an effort to confront the observed data with sociological theory. In short, some descriptive statistics of the network are provided

despite the fact that the author does not endorse a structuralist account of the data. The network grows rapidly in size.

of hierarchical analysis of data [21]. These include a metabolic network, a grassland ecology, and a social network of terrorists.

Fitting the data. The component technologies of Ajax may be represented in hierarchical random graph form. We apply the Monte Carlo simulation procedure of Clauset [21] to fit the 41 pages within one hop of Ajax (Programming) into a hierarchical random graph.

data. The resultant hierarchical random graph usefully distinguishes between high-level concepts

while external technologies do not reveal much hierarchical structure, at least in this sample of the data. One challenge to classification revealed by Figs. 5 and 6 is the placement of the various web browsers.

(table columns: Claim; Claimant; Data) Scientific and technical knowledge consists of a set of interdependent claims (Popper [31]); networks of knowledge can be structured readily from science

Furthermore, the structured representation of the data may help identify areas where competences may need to be strengthened further or even completely restored.

and case studies. 6. Interpretations from the philosophy and sociology of science The hierarchical random graph is one possible model of science, technology and innovation data.

such as patenting data, would be an interesting item for extending the method. Acknowledgements. The author gratefully acknowledges the use of C++ code

He has worked as a data miner for large database companies, developed patents in the fields of pricing and promotion algorithms, been a research fellow at the Technology Policy Assessment Center at Georgia Tech,


ART18.pdf

we have developed two graphical representations of the assessment data. The first one relates social preference for each option in each scenario to its potential social conflict level (see Fig. 2). For simplicity's sake,

However, these aspects can be taken into account in the interpretation of the specific empirical data.

we introduce a second visual representation of the data (see Fig. 3). As a first dimension,

uncertainties, trade-offs and decision making. The data generated in the workshops and core team sessions are finally synthesized by the core team into a recommendation for strategic planning.


ART19.pdf

and interpreting existing data, information and expert opinions. Creating shared understandings among the stakeholders about the possible future developments is also important in each field;

data on the system being analysed and on all the associated substances; an operational model of the system under analysis; a systematic hazard identification procedure and risk estimation techniques,

Relevant probability data is seldom available and, as such, fully quantitative risk estimations are not normally performed in industry.


ART2.pdf

g) The potential offered by new sources of social data. © 2005 American Council for the United Nations University.

There are few attempts to aggregate futures data and build current work on proven prior work. The result, for better or worse, is that the field lacks the consistency and coherence that mark more scientific fields.

through networks, with diverse and changing sets of people, continually cross-referencing data, and monitoring decisions.

the possible use of behavioral data from which values may be inferred, the use of large numbers of computer-generated scenarios to optimize policy choices [2],

First, analysts should recognize that random-appearing data and bizarre behavior may not be what they seem.

In the old days validity was tested by building models with data through some date in the past

8. New sources of social data. As large-scale databases become available in the future it will be possible to perform cluster analyses

These data will also be a stimulant to the search for correlates: what kind of behavior, for example, leads to a propensity for particular diseases.
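As a small illustration of the kind of cluster analysis such databases would enable, the sketch below (Python with scikit-learn; all behavioural values are invented) groups hypothetical records into clusters whose profiles could then be inspected for correlates.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical behavioural records: rows are individuals, columns are
# frequencies of three observed behaviours (all values invented).
rng = np.random.default_rng(0)
behaviour = np.vstack([
    rng.normal(0.0, 0.5, size=(50, 3)),  # one behavioural profile
    rng.normal(3.0, 0.5, size=(50, 3)),  # a second, distinct profile
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(behaviour)
print(np.bincount(kmeans.labels_))        # size of each discovered cluster
print(kmeans.cluster_centers_.round(2))   # profiles to inspect for correlates
```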


ART20.pdf

This attainment raised national interest and critical debate about the reliability of the data basis and methodologies used in comparisons.

The criticism is related to the ways data and methodologies are used in comparisons. For example, one problem of comparisons based on composite indicators is that they give a backward-looking mirror perspective,

i.e. they are based only on past and often outdated data, and not on examination of future development.

Consequently the barometer gives both a compilation of ex-post data and strategic perspectives on how well the Finnish innovation environment is positioned now

The purpose of a technology barometer is to give data on how favorable and competitive the Finnish innovation environment is assessed to be now and in the future.

The data used by the barometer illustrate transitional phases and provide an overall image of how far the developed nations have come in a journey towards a knowledge-value society.

albeit the most important data is related to outcomes and impacts of inputs, like embedding of ICT into private

Developments which have already taken place are depicted in one element based on statistical data. The indicator-based data can be used to generate index figures displaying the nations' techno-scientific base and level of societal development in comparison with the reference group.

The reference group used in the first three implementation rounds consisted of Denmark, Finland, Germany, Japan, the Netherlands, Sweden,

and the barometer publications contain a wealth of complementary and comparative data and analysis of the considered indicators.

The processes for analyzing the collected data and synthesizing it into meaningful conclusions remain among the key tasks in technology barometer exercises.

calls for a high transparency of the methods used as well as transparency of all the utilized data. Transparency is of paramount importance for retaining the attention of the target groups

Implementing change and guiding desired actions through the decision-making chain requires sound analysis based on quantifiable data that is presented in an understandable format.

Recent, relatively radical changes in Finnish innovation policy are challenging the data basis and indicators of research and innovation,

and needs new data and novel indicators to be included in the barometer. In Finland, the sectoral research system of government administrations will be renewed,


ART22.pdf

which would start a new cycle. The European Environment Agency is a specialised agency of the European Union with the prime task of providing targeted, timely, relevant and reliable data and information on the state and prospects of Europe's environment.

There are also data available on the types of businesses that use scenarios most often: large firms in capital-intensive industries with long (greater than 10 years) planning horizons.

This has been confirmed by studies that gather data on individual participants in a scenario planning project [20].


ART24.pdf

This framework, which can help in structuring large amounts of heterogeneous data, aids the construction of complexity scenarios,

a firm developing food-packaging sensors uses the blog to collect data on user preferences, allowing targeting strategies.

but confidentiality of development hampers transparency (issues of competition) and thus watchdogs find it difficult to access data to assess practices.


ART27.pdf

international benchmarking data, and future-oriented 'intelligence'), the organisation of dialogic spaces that are not hijacked solely by special interests

the collection of statistical data and bibliometric data; and a series of face-to-face interviews with stakeholders, including senior researchers within Luxembourg and abroad,

Setting Context/Identifying Priorities: data collection; bibliometrics; interviews; international research trends; evaluation of FNR programmes; mapping of Lux.

to avoid short-termism; to collect necessary background data and to ensure its use in the process;

the availability and use of background data; and the nature of processes of deliberation. 5.1. Variety and change in the meanings of Foresight. The FNR Foresight was born out of the necessity for the FNR to define new research programmes.

The consultants employed to coordinate the exercise in Phase 1 did a sterling job in such a limited time to collect baseline data

and benchmarking data, and more time spent on data collection and analysis. Similar shortcomings have also been noted by Meyer (2008), who comments that Luxembourg's 'current science policy appears to be almost too ambitious, ..

too impatient in wanting to implement change'. Everyone (finally) realised that further discussions would be needed with the research community before agreement could be reached on priorities

Second, it is clear that a forward-looking process like foresight needs to be underpinned by sufficient and appropriate 'objectivised' data, e.g. publication data, statistics on the national R&D environment, reports on the state of the economy, environment or society

as much national data was missing, while international benchmarking was of limited use owing to Luxembourg's small size.

A productive use of data requires a thorough scanning of what's available; its analysis and preparation in order to capture its essence;

and its introduction into the foresight process at specifically designed points in order to supply participants with the necessary data as and when required.


ART29.pdf

participation as well as structured/non-structured conversations and interviews are equally important sources of data' (Thygesen 2009, 56, n7).

The following data, however, shows that the collection of young people's contributions was preceded by the construction of a specific image of them as stakeholders.

adding survey data and material from other sources. These future pictures were then presented in a workshop with communal and cultural organisations to discuss which of these were most desirable.

and recommendations need to be based upon sound data about the past and present, as well as projections of those trends that can be projected with reasonable confidence of accuracy,


ART3.pdf

Namely, many aspects of data cleaning, statistical analyses, trend analyses, and information visualization can be done quite briskly.

That implies 'working back' from the decision support requirements to the data. It makes less sense for a 'data mining' mindset in

which we muck around in the data looking for things that might be of interest,

One would adapt these to one's data sources and managerial concerns to posit particular indicators.

Often, an information services unit handles all data requests. In the past this typically meant that a researcher

Dialog. We accessed data via Dialog, a leading gateway to over 400 different databases.

and Dialog for access to the fuel cell data. Whatever the route

Well, it turned out that actual data were better. Compiling and making available performance histories for machines

There would be no 'Six Sigma' quality standards without empirical manufacturing process data and analyses thereof.

Technology management, somewhat surprisingly, is among the least data-intensive managerial domains. One would think that scientists, engineers,

A script runs data fusion and duplicate removal. An additional script profiles the leading researchers at each of the 'top 3 + Georgia Tech' American universities in the SOFC domain.
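As an illustration only (not the actual script used in the paper), the following Python sketch shows one simple way data fusion and duplicate removal across search results might be implemented: records from two hypothetical databases are merged on a punctuation- and case-insensitive key. Production tech-mining tools apply much fuzzier matching.

```python
import re

def normalize(record):
    """Crude matching key: lowercase, collapse punctuation and whitespace."""
    return re.sub(r"[\W_]+", " ", record.lower()).strip()

def fuse(*sources):
    """Merge records from several databases, keeping the first of each
    group of near-duplicates."""
    seen, merged = set(), []
    for source in sources:
        for record in source:
            key = normalize(record)
            if key not in seen:
                seen.add(key)
                merged.append(record)
    return merged

# Hypothetical hits from two databases for the same SOFC query.
dialog_hits = ["Solid Oxide Fuel Cells: A Review", "SOFC anode materials"]
wos_hits = ["Solid oxide fuel cells: a review.", "SOFC cathode degradation"]
print(fuse(dialog_hits, wos_hits))  # three unique records survive
```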


ART30.pdf

so that it supported the other panels by collecting statistical data on R&I systems and economic forecasts.


ART4.pdf

The open intelligence concept contrasts sharply with the more common concept of targeted intelligence or the understanding of business intelligence as an analytical function dealing with internal corporate data.

and synergies among massive amounts of data and inputs. The Scan process provides a framework with

In the first method, a cluster of several abstracts characterizes a conceptual overlay that an organization can lift off the scanning data


ART42.pdf

We can examine their plausibility and limits, their internal consistency and conformity with models and data,

and be reexamined in the light of emerging data on circumstances and trends, and on ways of thinking about problem situations.

than on more systematic accumulation of data about comparable cases, varying in terms of specific features. Thus the FTA field itself resembles many of the challenging problems,

and indeed many specific methods involve cycles of data production and analysis, modelling, choice among alternatives,

(whether using codified statistical data or inputs based more on group or individual judgement), as opposed to techniques that are designed to foster creative thinking

although it may be less easy to capture in a structured way as would be the information from, for example, statistical data or trend extrapolations.

and another differentiation between KM strategies emphasising codification (these are centred on IT systems, with extensive organisation of data and information resources,

and visualising data and information. Miles et al. [23] discuss numerous ways in which new IT is liable to be employed in FTA in coming years.


ART43.pdf

and process relevant data makes its relevance obscure [31]. Advocates of CSR have put forward pragmatic arguments that its pursuit would limit regulation


ART44.pdf

and data from which relevant Foresight information might be inferred. Sometimes, mistakenly, wild cards and weak signals are considered as synonyms,

Data set: total surveys submitted: 293; substantive completion: 106 (about 50% of FTA Conference attendees;

Analysing the data, the following observations were made (Fig. 6): strong emphasis again on ecology-environment and economy, with Society and Culture and S&T close behind;

virtual science discredited for unreliable, biased data; biochips for human implants; nanotechnology radically changes production methods

The results reveal that the data is useful, insightful and diverse. More data and analysis will be required to fully develop the potential of this survey

but an excellent base now exists, one that could provoke a more consistent and comprehensive response over time.

subsequent work will concentrate more on the interpretation of the rich data set that has been acquired

but a limited attempt at further interpretation of the BPS data has already been made by using social network analysis in a paper by Nugroho and Saritas [17].

even though time did not permit a full analysis of the data. Further analysis will include:

Acknowledgement. We are grateful to our colleague, PhD researcher Ms. Graciela Sainz de la Fuente, for her valuable contribution to the analysis of the Big Picture Survey data.


ART47.pdf

These tools are used to integrate data of different character and sources. There are a huge variety of possible combinations in this field,

as long as solid data on relevant factors and the relation between these factors is available. Limitations of models and other quantitative approaches have to be discussed in relation to the data that is included in the process.

Grunwald (2009, p. 1129) argues in relation to quantitative tools: "quantitative" is often equated with "objective". Subjective questioning of evaluation should be "objectivised".

The approach does not solve problems such as inaccuracy in data; it does not directly provide new knowledge;


ART49.pdf

based on this new data, rewritten the four scenario drafts, each depicting one possible path to the desired sustainable future (no rise higher than two degrees Celsius in average Earth temperature).


ART5.pdf

A promising application of nanotubes is to use them as electromechanical components in nonvolatile memories. Nonvolatile means that the data remains intact

but there is virtually no environmental or toxicological data on them.' As well as the ETC Group [22, page 72], who propose that:

To conclude, the method proposed in this paper appeared useful to organise the data and to structure it into a credible story.


ART50.pdf

Nowadays, GIS technology provides a wide array of functionalities to display alphanumeric data on a digital map.
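A minimal sketch of that capability, using the open-source folium library rather than any particular GIS product (the site records are invented):

```python
import folium  # open-source wrapper around Leaflet.js web maps

# Hypothetical site records: (name, latitude, longitude, indicator value).
sites = [
    ("Site A", 60.17, 24.94, 42),
    ("Site B", 60.45, 22.27, 17),
]

m = folium.Map(location=[60.3, 23.6], zoom_start=7)
for name, lat, lon, value in sites:
    # Attach the alphanumeric attribute data to each marker as a popup.
    folium.Marker([lat, lon], popup=f"{name}: value={value}").add_to(m)
m.save("sites_map.html")  # a self-contained interactive map
```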


ART51.pdf

speeding the present towards the future by providing knowledge about tomorrow through data about today.

a modelling system with the ambitious plan of turning massive amounts of data into knowledge and technological progress.

The project proposes using real-time data (financial transactions, health records, logistics data, carbon dioxide emissions, or knowledge databases such as Wikipedia) to construct a model of society capable of simulating what the future holds for us.

collecting data in real-time and allowing 'one to do reality mining on a global scale and to measure the socioeconomic-environmental footprint of human actions,

At a more general level, the increasing availability of information in electronic form and the computing techniques and processes for exploiting such data constitute the most recent methodological developments in the field of FTA.

in this respect, has been used in the US to define the work of computer scientists in exploring data models that predict

interpreting patterns of data to better deploy police resources. Constrained to do more with less, predictive policing marks a paradigm shift in fighting crime,

and compiling data are necessary but not sufficient to increase public safety. The public safety community relies heavily on reporting

As Beck (2009) explains, 'by bringing all crime and arrest data together by category and neighbourhood,

and simulate different data-models of the future world. Jurists will then be able to assert

and the empirical data required for this new generation of evidence-based legislative procedures and policy actions,

and blindly on data crunching exercises. Laws should not come out of calculators but from qualified and sensitive human beings.

or through data model analysis or simulation platform) captured and colonized in favour of particular interests,

Here, it is important not to overrate the importance of the data output achieved through such tools,

as laws should not be made dependent on data crunching mechanisms, but use them as a valuable and supportive instrument.

www.futurict.ethz.ch/data/Whatfuturictwilldo4media.pdf HIIL (2011), 'Law scenarios to 2030. Signposting the legal space of the future', available at:

and technology (including biotechnology, neuroscience, artificial intelligence, genetics and genomics, digital environments, ambient intelligence), data protection and privacy law, intellectual property, philosophy of law and legal theory.


ART64.pdf

'and showing how innovation leads to unpredictability that cannot be removed by more accurate data or incremental improvements in existing predictive models.


ART65.pdf

'and shows how innovation leads to unpredictability that cannot be removed by more accurate data or incremental improvements in existing predictive models.

and innovation, instead of relying on data collected using historically important categories and measurement instruments. Economic and social trends measure what used to be important

If we only had accurate data and models, we could have good predictions. In this view, our data and models are only approximations,

and epistemic progress can occur through incremental improvement. Although there may be cognitive and economic limitations, in this view,

and we become able to start to gather facts and data about the new phenomenon.

or data that could be used to model imagined futures; we are, however, perfectly able to imaginatively expand current ontologies

it is often believed that conflict can be reduced by decision processes that emphasise data and facts. The above discussion indicates that such approaches have only limited potential in future-oriented analysis.

formal models cannot be made more accurate by collecting more data or measuring the observables more accurately.

In practice, many future-oriented models are based on time-series data. Such data can be collected only if the ontology and its encodings and the measurement instruments that generate the data remain stable.

In general, the data required for formal models are available only in domains where innovation has not been important,

and it will have predictive value only if innovation remains unimportant. For example, data on phone calls or callers could not have been used to predict industry developments

when short messaging became the dominant source of growth in the industry. Similarly, historical data on national accounts can tell very little about future economic developments,

as the data are collected on categories that used to be important in the industrial economies and value production models of the twentieth century.
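A tiny worked example of this limitation (all figures invented): a trend model fitted to a stable category extrapolates smoothly, yet the innovation that redefines the industry never appears in its inputs.

```python
import numpy as np

# Invented figures: yearly voice-call traffic, a category that was stable
# while the measurement ontology held.
years = np.arange(1994, 2000)
minutes = np.array([10.0, 12.0, 15.0, 18.0, 22.0, 26.0])

coeffs = np.polyfit(years, minutes, deg=1)   # least-squares trend line
print(f"Extrapolated 2005 traffic: {np.polyval(coeffs, 2005):.1f}")

# The model contains no variable for SMS at all; the innovation that came
# to dominate industry growth is invisible to it, however good the fit.
```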

Although many researchers believe that methodologically sound research requires that they stick to well-known and frequently used historical data sets,

this approach cannot lead to methodologically robust predictions. Similarly, reactive what if models can only provide predictive value

if innovation is unimportant. Specifically, there is little reason to believe that conventional 'impact analysis' models could lead to useful insights if innovation matters.

extrapolations from demographic data lead to an unsustainable state. These assumptions, however, are difficult to maintain

and time-series data and instead facilitate creativity and embrace innovation. Notes 1. Uncertainty, of course, has been a central theme in much of economic theory since Knight.

