Synopsis: Data


ART30.pdf

so that it supported the other panels by collecting statistical data on R&I systems and economic forecasts.

Management and modelling of biological knowledge. 7. Information and communications: sensor technology applications; data mining, analysis, management and retrieval; bio-information technology. 8. Understanding and human interaction: multicultural


ART39.pdf

Divergence is interpreted as the relative position of sub-networks or clusterings, and complementarity is assessed through analysis of the intensity of relations within the network 45.
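The "intensity of relations within the network" mentioned above can be operationalised in several ways; a minimal sketch, assuming mean pairwise edge weight as the intensity measure (the function name and the co-publication example are illustrative, not from the source):

```python
from itertools import combinations

def relation_intensity(nodes, edges):
    """Mean edge weight over all possible node pairs: a simple proxy
    for the 'intensity of relations' within a sub-network.
    `edges` maps frozenset({a, b}) -> weight; absent pairs count as 0."""
    pairs = list(combinations(nodes, 2))
    if not pairs:
        return 0.0
    total = sum(edges.get(frozenset(p), 0) for p in pairs)
    return total / len(pairs)

# Hypothetical co-publication counts between three research groups
edges = {frozenset({"A", "B"}): 4, frozenset({"B", "C"}): 2}
print(relation_intensity(["A", "B", "C"], edges))  # 2.0
```

Comparing this value across sub-networks would then indicate where relations are densest.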

Table 3 summarises the network analysis-based toolbox designed for characterising search regime dimensions, with an initial focus on the cells highlighted in grey. 5. Tailoring Foresight to Knowledge Dynamics. In this section,

evolutionary theory, network analysis and postsocialism, Regional Studies 31(5) (1997) 533-544. 36 G. C. Unruh, Understanding carbon lock-in, Energy Policy 28(12) (2000


ART4.pdf

The open intelligence concept contrasts sharply with the more common concept of targeted intelligence or the understanding of business intelligence as an analytical function dealing with internal corporate data.

and synergies among massive amounts of data and inputs. The Scan process provides a framework with

the database administrator closes off submissions for the month and directs the continuing stream of abstracts into a new set for the next month.

In the first method, a cluster of several abstracts characterizes a conceptual overlay that an organization can lift off the scanning data

This type of clustering allows companies to gain ideas from other industries or other product domains.
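The clustering of abstracts described above can be sketched with a simple keyword-overlap heuristic; this is an illustrative stand-in (greedy Jaccard clustering), not the organisation's actual method:

```python
def jaccard(a, b):
    """Overlap of two keyword sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

def cluster_abstracts(abstracts, threshold=0.3):
    """Greedy single-pass clustering: attach each abstract to the first
    cluster whose seed shares enough keywords, else start a new cluster."""
    clusters = []  # list of (seed_keywords, [abstract indices])
    for i, kw in enumerate(abstracts):
        for seed, members in clusters:
            if jaccard(seed, kw) >= threshold:
                members.append(i)
                break
        else:
            clusters.append((set(kw), [i]))
    return [members for _, members in clusters]

# Hypothetical keyword sets extracted from three scanned abstracts
docs = [{"fuel", "cell", "membrane"},
        {"fuel", "cell", "catalyst"},
        {"retail", "logistics"}]
print(cluster_abstracts(docs))  # [[0, 1], [2]]
```

A cluster that cuts across industries (here, the two fuel-cell abstracts) is exactly the kind of "conceptual overlay" an organisation can lift off the scanning data.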


ART40.pdf

Fig. 1 summarises the results of an analysis of 50 foresight exercises described in the European Foresight Monitoring Network (EFMN) database. 1 These exercises listed a total of 199 objectives


ART41.pdf

The model and modelling techniques in use guided the data gathering of the system analysis part. Autonomous: there was still a significant degree of freedom to adapt to the perceived needs during the process and the development of roadmaps and scenarios.

It consisted of data gathering and a combination of qualitative scenarios and quantitative modelling. Exclusive: the project was conducted mainly by the research partners.


ART42.pdf

We can examine their plausibility and limits, their internal consistency and conformity with models and data,

and be reexamined in the light of emerging data on circumstances and trends, and on ways of thinking about problem situations.

and displaying information, such as network analysis, dynamic graphs and charts, even computer simulations and use of images and dramatic interpretations of posits

than on more systematic accumulation of data about comparable cases, varying in terms of specific features. Thus the FTA field itself resembles many of the challenging problems,

and indeed many specific methods involve cycles of data production and analysis, modelling, choice among alternatives,

(whether using codified statistical data or inputs based more on group or individual judgement), as opposed to techniques that are designed to foster creative thinking

what is at stake is ways of handling “hard facts”, through, for example, websites, databases, publications, dissemination procedures, etc.

although it may be less easy to capture in a structured way than would be the information from, for example, statistical data or trend extrapolations.

There may well be more clustering of ideas, with discussion about the connections between ideas proving a good basis for exchanging information about implicit models and theories.

and another differentiation between KM strategies emphasising codification (these are centred on IT systems, with extensive organisation of data and information resources,

and stored in databases for rapid access) and those emphasising personalization (centred on direct person-to-person contacts; IT systems are used here to help communication and location of key informants).

and visualising data and information. Miles et al. 23 discuss numerous ways in which new IT is liable to be employed in FTA in coming years.

Developments in text mining, visualisation, and related techniques will allow for enhanced use of new IT in scanning for scientific and technological developments and their implications.


ART43.pdf

and process relevant data makes its relevance obscure 31. Advocates of CSR have put forward pragmatic arguments that its pursuit would limit regulation


ART44.pdf

and data from which relevant Foresight information might be inferred. Sometimes, mistakenly, wild cards and weak signals are considered as synonyms,

Data set. Total surveys submitted: 293; substantive completion: 106 (about 50% of FTA Conference attendees;

Analysing the data, the following observations were made (Fig. 6): strong emphasis again on ecology-environment and economy, with Society and Culture and S&T close behind;

virtual science discredited for unreliable, biased data; biochips for human implants; nanotechnology radically changes production methods

The results reveal that the data are useful, insightful and diverse. More data and analysis will be required to fully develop the potential of this survey

but an excellent base now exists, one that could provoke a more consistent and comprehensive response over time.

subsequent work will concentrate more on the interpretation of the rich data set that has been acquired

but a limited attempt at further interpretation of the BPS data has already been made by using social network analysis in a paper by Nugroho and Saritas 17.

even though time did not permit a full analysis of the data. Further analysis will include:

Acknowledgement. We are grateful to our colleague, PhD researcher Ms. Graciela Sainz de la Fuente, for her valuable contribution to the analysis of the Big Picture Survey data.


ART47.pdf

and/or produce qualitative or quantitative data. We prefer to use “structurally closed” and “structurally open” as main categories,

Quantitative data might well play a role, but the main characteristic of these approaches is the way in

These tools are used to integrate data of different character and sources. There is a huge variety of possible combinations in this field,

as long as solid data on relevant factors and the relation between these factors is available. Limitations of models and other quantitative approaches have to be discussed in relation to the data that is included in the process.

Grunwald (2009, p. 1129) argues in relation to quantitative tools: “‘quantitative’ is often equated with ‘objective’.” Subjective questioning of evaluation should be “objectivised”.

The approach does not solve problems such as inaccuracy in data; it does not directly provide new knowledge;


ART49.pdf

based on this new data, rewrote the four scenario drafts, each depicting one possible path to the desired sustainable future (a rise of no more than two degrees Celsius in average earth temperature).


ART5.pdf

Fig. 1 shows the growing attention in journals for a certain topic and indicates that the term “nanotubes” was used increasingly in the titles of scientific articles (extracted from the Picarta database).

A promising application of nanotubes is to use them as electromechanical8 components in nonvolatile memories. 9 Nonvolatile means that the data remains intact

but there is virtually no environmental or toxicological data on them. As well as the ETC group 22, page 72, who propose that:

To conclude, the method proposed in this paper appeared useful to organise the data and to structure it into a credible story.


ART50.pdf

Nowadays, GIS technology provides a wide array of functionalities to display alphanumeric data on a digital map.


ART51.pdf

speeding the present towards the future by providing knowledge about tomorrow through data about today.

a modelling system with the ambitious plan of turning massive amounts of data into knowledge and technological progress.

The project proposes using real time data (financial transactions, health records, logistics data, carbon dioxide emissions, or knowledge databases such as Wikipedia) to construct a model of society capable of simulating what the future holds for us.

For this purpose FuturICT will build a sophisticated simulation, visualisation and participation platform, called the Living Earth Platform,

collecting data in real-time and allowing “one to do reality mining on a global scale and to measure the socioeconomic-environmental footprint of human actions,

(along with its data mining procedures) to better enable and assist decision making and political participation processes. In effect, the use of modelling systems corresponds to one of the most recent trends in FTA.

At a more general level, the increasing availability of information in electronic form and the computing techniques and processes for exploiting such data constitute the most recent methodological developments in the field of FTA.

in this respect, has been used in the US to define the work of computer scientists in exploring data models that predict

interpreting patterns of data to better deploy police resources. Constrained to do more with less, predictive policing marks a paradigm shift in fighting crime,

and compiling data are necessary but not sufficient to increase public safety. The public safety community relies heavily on reporting

As Beck (2009) explains, “by bringing all crime and arrest data together by category and neighbourhood,
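Bringing crime and arrest data together "by category and neighbourhood" is, at its core, a group-by count; a minimal sketch (the record fields and figures are purely hypothetical, not drawn from Beck's data):

```python
from collections import Counter

# Hypothetical incident records; field names are illustrative only
incidents = [
    {"category": "burglary", "neighbourhood": "North"},
    {"category": "burglary", "neighbourhood": "North"},
    {"category": "assault",  "neighbourhood": "East"},
]

# Group-by count over the (category, neighbourhood) key
by_cat_and_hood = Counter(
    (r["category"], r["neighbourhood"]) for r in incidents
)
print(by_cat_and_hood[("burglary", "North")])  # 2
```

Predictive-policing systems then look for patterns in how these aggregates shift over time and space.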

and simulate different data-models of the future world. Jurists will then be able to assert

and the empirical data required for this new generation of evidence-based legislative procedures and policy actions,

and monitoring their performance (i.e. data gathering and reporting strategies) and practices to review existing regulations (Blind, 2006).

and blindly on data crunching exercises. Laws should not come out of calculators but from qualified and sensitive human beings.

Based on data mining techniques, intelligence-based tactics and information communication strategies, predictive policing demonstrates that Law,

or through data model analysis or simulation platform) captured and colonized in favour of particular interests,

Here, it is important not to overrate the importance of the data output achieved through such tools,

as laws should not be made dependent on data crunching mechanisms, but use them as a valuable and supportive instrument.

as its interface facilitates data analysis, display of forecasting results and scenario analysis. For further details see Hughes et al.

For an overview of data mining technologies and their use for competitive advantages, see Porter and Cunningham (2005).

www.futurict.ethz.ch/data/Whatfuturictwilldo4media.pdf HIIL (2011), “Law scenarios to 2030. Signposting the legal space of the future”, available at:

and technology (including biotechnology, neuroscience, artificial intelligence, genetics and genomics, digital environments, ambient intelligence), data protection and privacy law, intellectual property, philosophy of law and legal theory.


ART64.pdf

and showing how innovation leads to unpredictability that cannot be removed by more accurate data or incremental improvements in existing predictive models.


ART65.pdf

and shows how innovation leads to unpredictability that cannot be removed by more accurate data or incremental improvements in existing predictive models.

and innovation, instead of relying on data collected using historically important categories and measurement instruments. Economic and social trends measure what used to be important

If we only had accurate data and models, we could have good predictions. In this view, our data and models are only approximations,

and epistemic progress can occur through incremental improvement. Although there may be cognitive and economic limitations, in this view,

and we become able to start to gather facts and data about the new phenomenon.

or data that could be used to model imagined futures; we are, however, perfectly able to imaginatively expand current ontologies

it is often believed that conflict can be reduced by decision processes that emphasise data and facts. The above discussion indicates that such approaches have only limited potential in future-oriented analysis.

formal models cannot be made more accurate by collecting more data or measuring the observables more accurately.

In practice, many future-oriented models are based on time-series data. Such data can be collected only if the ontology and its encodings and the measurement instruments that generate the data remain stable.

In general, the data required for formal models are available only in domains where innovation has not been important,

and it will have predictive value only if innovation remains unimportant. For example, data on phone calls or callers could not have been used to predict industry developments

when short messaging became the dominant source of growth in the industry. Similarly, historical data on national accounts can tell very little about future economic developments,

as the data are collected on categories that used to be important in the industrial economies and value production models of the twentieth century.

Although many researchers believe that methodologically sound research requires that they stick to well-known and frequently used historical data sets,

this approach cannot lead to methodologically robust predictions. Similarly, reactive “what if” models can only provide predictive value

if innovation is unimportant. Specifically, there is little reason to believe that conventional “impact analysis” models could lead to useful insights if innovation matters.

extrapolations from demographic data lead to an unsustainable state. These assumptions, however, are difficult to maintain

and time-series data and instead facilitate creativity and embrace innovation. Notes: 1. Uncertainty, of course, has been a central theme in much of economic theory since Knight.


ART66.pdf

including data before its transformation and its integration, via learning, appreciation and anticipation, into a systemic outcome.

Transformation of quantitative data from science, technology and pseudo-science into information then plays a role, in conjunction with the STEEPV constituents,

Questioning quantitative data to understand its genesis needs to occupy a prime place in FTA. How often the nature of measurements is dissected according to the NUSAP2 system (Funtowicz and Ravetz 1990),

For example, the location and nature of measuring instruments can have important implications for the data reported.

objective data), while subjectivism (deriving explanation from interpretation and artificial reconstruction of reality) lies at the artificial pole.

and external validity (e.g. surveys provide reliable data distributions but their validity in actually measuring constructs is suspect).

and the use of quantitative data that needs verification and due diligence (e.g. following the lines of NUSAP (Funtowicz

while the outcome will persist for very much longer. Data mining of very large (and open) databases, including blogs;

(and sometimes less than conventional) databases arising from academia, business, government and elsewhere, is already possible but limited by human and search engine factors.

Data mining is far from a new idea, but its possibilities now far exceed those of the 1960s or 1970s.

More advanced search engines and massively fast computers, with architectures not unlike the human brain, are likely to change data mining out of all recognition, possibly specifying how FTA should be conducted.

in all its STEEPV components, towards that fuzzy “event horizon” beyond which lie the unknown unknowns. (6) The quantitative data that feature in FTA do not escape from behavioural influences in their transformation into information

the known unknown category having been banished by algorithmic searches through immense unstructured but interconnected sets of databases.

whereas the more common “fail-safe” principle is akin to the other two forms of ignorance. 2. The NUSAP system examines quantitative data as follows:
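The five NUSAP categories (Numeral, Unit, Spread, Assessment, Pedigree; Funtowicz and Ravetz 1990) can be pictured as a record attached to every reported number; a minimal sketch, with purely illustrative field values:

```python
from dataclasses import dataclass

@dataclass
class NusapRecord:
    """One quantitative datum qualified along the five NUSAP
    categories. Field semantics follow the acronym; the concrete
    values used below are illustrative, not from the source."""
    numeral: float     # N: the reported number
    unit: str          # U: its unit of measurement
    spread: str        # S: inexactness, e.g. "+/- 0.5"
    assessment: str    # A: judged reliability of the number
    pedigree: str      # P: how the number was produced

m = NusapRecord(2.0, "degrees Celsius", "+/- 0.5",
                "contested", "model ensemble")
print(m.unit)  # degrees Celsius
```

Dissecting a measurement this way makes its genesis explicit before it is transformed into FTA information.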


ART67.pdf

The promised future situation contains sequencing of genes, characterisation of proteins, databases, dynamic models and so on.

more data and more developments (Konrad 2006). (Downloaded by University of Bucharest at 05:02 03 December 2014; 776 H. van Lente.)


ART68.pdf

Methods and data The research design is based on an inductive and multiple-case study of a group of selected firms.

Data were collected through the combination of various sources and through an iterative process. First, we collected publicly available data on the industry

and the selected firms, including historical annual reports, financial analysts'reports, conference presentations by top managers,

and technical papers supplemented publicly available data. Third, we interviewed a sample of senior and mid-level managers

Data analysis was highly iterative and used traditional approaches for inductive research (Eisenhardt 1989; Yin 2003).

Macro forces and their likely evolution are described in BASF “Global Economy Scenarios”, where econometric models elaborate basic data in both qualitative and quantitative terms,

A model for uncertainty and strategic foresight In the prior sections, we sketched the strategic foresight approaches that emerged from our data through

Data collection and data analyses were designed in order to improve the construct and internal validity of our conceptual framework.


ART69.pdf

heavily dependent on the flow of ideas, data and information into a business and its network decision-making in its place in society.

and allow data-sharing stimulus to support decisions. Information-based; IT used to build applications centred on processes rather than on functions,


ART7.pdf

Recently, information visualization techniques have been used with corporate data to map several LDRD investment areas for the purpose of understanding strategic overlaps and identifying potential opportunities for future development outside of our current technologies.

and given the availability of relevant data, we have embarked on a program to map our LDRD IAS.

This paper describes the project plan, detailed processes, data sources, tool sets, and sample analyses and validation activities associated with the mapping of Sandia's LDRD IAS. 2. Project plan The original plan associated with this assessment activity consisted of several steps,

Meetings with the Table 1 Data used to produce Sandia-specific and DOE LDRD maps

and mapping between fields from different data sources: Calls (RFP); New proposals; Continuation proposals; Project reports; Publications; U.S. DOE LDRD data; ID number; ID

a second set of visualizations was created to include data on all U.S. Department of Energy (DOE)-funded R&D activities related to the IAs.

Copies of the data, visuals, and navigation tools were also provided to IA leaders to allow them to explore the data independently. 3. Process,

data, and tools. Two different types of visualizations, each designed to provide different types of information,

were created for this activity. The first can be described as a landscape map, which is particularly suited to looking for patterns and trends in large data sets.

The second type is a link analysis map, which is valuable for identifying specific topic-based relationships within large data sets.

The landscape maps were created using a process consistent with commonly accepted methods of mapping knowledge domains 1 (see Fig. 1):

and combined in a database. Latent semantic analysis (LSA) 2,3 was used on the titles

The same textual records and database described for the landscape maps above were used here. A rule-based unstructured text tagging module was used on titles

N. Rahal / Technological Forecasting & Social Change 72 (2005) 1122-1136. 3.1. Data collection. Two different sets of data were compiled from multiple sources

The data for the Sandia-specific visualizations consisted of 1209 records from the five IAs

These data are proprietary to Sandia, and are not generally available externally. To create the DOE LDRD visualizations,

FY2003 data were not yet available. Of these, 180 duplicated existing Sandia-specific records and another 200 had no titles or descriptive text,

and were thus removed from the data set. A total of 990 of the new records had both titles and descriptive text,

With the Sandia-specific and additional DOE data, this set consisted of 5112 records. 3.2. Similarity calculation. LSA is a technique based on the vector space model that has found recent application in information retrieval.

We also used an optimized stopword list prior to construction of the initial term. Fig. 1. Process of putting data into a VxInsight map. 3 FY = fiscal year,
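LSA builds on the vector space model: documents become term vectors (after stopword removal) and similarity is the cosine between them. A minimal bag-of-words sketch of that underlying step (the stopword list and example titles are illustrative; real LSA additionally applies dimensionality reduction to the term matrix):

```python
import math
from collections import Counter

STOPWORDS = {"the", "of", "and", "a", "for"}  # minimal illustrative list

def term_vector(text):
    """Bag-of-words term counts with stopwords removed."""
    return Counter(w for w in text.lower().split() if w not in STOPWORDS)

def cosine(u, v):
    """Cosine similarity between two sparse term vectors."""
    dot = sum(u[t] * v[t] for t in u)
    nu = math.sqrt(sum(c * c for c in u.values()))
    nv = math.sqrt(sum(c * c for c in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

a = term_vector("laser diagnostics for combustion")
b = term_vector("combustion modeling and laser sensing")
print(round(cosine(a, b), 3))  # 0.577
```

Pairwise similarities like this feed the ordination step that assigns each record its x, y position on the map.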

This step is referred to as ordination rather than clustering because VxOrd generates x, y coordinates for each record (calls, proposals, reports, etc.)

the data set is loaded into VxInsight for exploration and analysis. VxInsight is a tool that allows visualization and navigation of an abstract information space,

or restricting the data displayed to a certain time span and sliding through sequences of years with a slider.

Relationships among the individual data records may be displayed as arrows between documents and understood at many levels of detail.

Details about any data record are also available upon demand. Effective use of the labels, zooming,

Fig. 4 shows the same data in a scatterplot view, where different symbols are used for the different IAS.

The VxInsight views are meant more for active navigation of data than for presentation of results.

and data sets on his computer so that he could explore the data independently and draw his own conclusions related to both assessment

and potential future directions. 4.2. Link analysis of IAs. The analyses of the visualizations in Section 4.1 tend to strongly convey the patterns

compare, and leverage objective technological strengths to attract new external customers. 4.3. Landscape mapping of DOE LDRD. A map of the DOE LDRD data set was created using the same technique described previously

and is shown in Fig. 6. The purpose of this map was primarily to identify additional opportunities by comparison of Sandia IA data with work of national interest that is being funded at other DOE laboratories.

The roughly 3800 records added to the Sandia IA data add significant context and content that provide fodder for new ideas.

However, such a map would take much more data and time to construct. Fig. 6 shows that significant areas of the graph, especially at the top and right, are not covered at all by any of the Sandia IAs.

and circles within the dashed region of Fig. 6. All of the non-Sandia records have been marked as black dots in Fig. 7. Examination shows several small clusters of data in areas that are very related to our computational

The data used for this analysis consisted of LDRD calls, proposals, and projects for the IAS,

but also much data from industry and academia. This will allow us to broaden the technology intelligence that forms the context of our maps

in interpretation of clustering, Proc. IEEE Inf. Vis. 2001 (2001) 23-30. 5 K. W. Boyack, B. N. Wylie, G. S. Davidson, Domain visualization using VxInsight for science and technology management, J. Am.


ART70.pdf

or cognitive mapping could provide useful data for the identification of potential “boundary” competencies. Third, research should pay more attention to the systemic and temporal relativity of the organisations, that is, to how the interplay of past, present,


ART71.pdf

http://www.tandfonline.com/loi/ctas20 Text mining of information resources to inform Forecasting Innovation Pathways Ying Guo a, Tingting Ma a, Alan L. Porter b & Lu

Ying Guo, Tingting Ma, Alan L. Porter & Lu Huang (2012) Text mining of information resources to inform Forecasting Innovation Pathways, Technology Analysis & Strategic Management, 24:8, 843-861, DOI:

8 September 2012, 843-861. Text mining of information resources to inform Forecasting Innovation Pathways. Ying Guo (a), Tingting Ma (a), Alan L. Porter (b) and Lu Huang (a)*. (a) School of Management and Economics, Beijing Institute of Technology, Beijing, China;

Once a set of multi-database, emerging technology search results has been obtained, we devise a means to help extract intelligence on key technology components and functions, major stakeholders,

These include innovation system modelling, text mining of Science, Technology & Innovation(‘ST&I')information resources, trend analyses, actor analyses,

We treat DSSC abstract records through 2010 based on searches in four databases. We employ a set of multi-database NEST search results,

sharing progress on our efforts to devise algorithms to help extract key technology components, significant actors, and potential applications.*

With the expansion of databases that compile abstract records and of desktop computing power, text mining of these records further enriches the empirical base.

Tech Mining (Porter and Cunningham 2005) is our shorthand for such activities. “Research profiling” (Porter, Kongthon,

Here, we go further to apply text mining tools (see www.thevantagepoint.com) to such compilations of research article and patent abstracts.

and work to relate the content of the data searches to particular innovation process trajectories. 2.2. Analysing NESTs. NESTs comprise a loose category (Foxon et al. 2005;

Robinson and Propp 2008). Classical technology forecasting methods were devised to address incrementally advancing technological systems. These methods keyed on technical system parameters, somewhat more than on socioeconomic system aspects.

3. Framework and data. 3.1. Framework. The FIP framework includes four stages, broken down into 10 steps (Figure 1). We label Steps A-J,

We search for R&D activity in suitable “ST&I” databases and profile that activity and the associated actors from these data (Steps C and D). Many analytical tools can serve to profile R&D,

including bibliometric analyses, social network analyses, and trend analyses. We adapt these to facilitate our study as a function of the state of development of NESTs.

Step J,

and are less equipment intensive than other solar cell technologies. 3.3. Data. We chose a modular,

Boolean term search approach (Porter et al. 2007) to identify DSSC-related activity in four databases: Web of Science (WOS), EI Compendex, Derwent World Patent Index (DWPI), and Factiva.

We added exclusion terms for the publication databases. We tested the performance of the search modules.

We created search algorithms somewhat tailored for each of the four databases (details in Appendix 1). Data-cleaning in VantagePoint software (Porter et al. 2007) refined the data downloaded from the four databases.

eliciting information on stakeholders and potential applications from text mining (Steps D and E; and consolidating empirical

Figure 2. TDS for DSSCs in the USA. In the long term, we believe that general economic forces will favour innovation

We developed this initial DSSC TDS by mining our database search results for leads on important stakeholders

and text mining analyses of the database search results. Tech Mining the various publication and patent abstract records can track the emergence of key terms over time to spotlight new (appearing only in the most recent time period) and hot subtechnologies (i.e. those appearing
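Spotting "new" and "hot" terms from dated abstract records reduces to tracking first appearances and recent counts; a minimal sketch (the function, its hotness proxy, and the example terms are illustrative, not the authors' actual algorithm):

```python
from collections import Counter

def spotlight_terms(records, latest_period):
    """records: iterable of (term, period). Returns the 'new' terms
    (seen only in the latest period) and a count of latest-period
    mentions per term, a crude 'hotness' signal."""
    first_seen, latest_counts = {}, Counter()
    for term, period in records:
        first_seen[term] = min(first_seen.get(term, period), period)
        if period == latest_period:
            latest_counts[term] += 1
    new = {t for t, p in first_seen.items() if p == latest_period}
    return new, latest_counts

# Hypothetical (term, publication year) pairs from abstract records
recs = [("tio2 film", 2008), ("tio2 film", 2010),
        ("perovskite", 2010), ("perovskite", 2010)]
new, hot = spotlight_terms(recs, 2010)
print(sorted(new))        # ['perovskite']
print(hot["perovskite"])  # 2
```

Run period by period, this separates subtechnologies just appearing from those accelerating.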

We begin by showing trends based on the annual activity from each database in Figure 3. It is clear that the research publications drawn from the SCI

and Compendex databases keep growing and show similar trends. This suggests that fundamental research on DSSC continues to increase essentially exponentially.

The data from both DWPI and Factiva show a small peak in 2005 and suddenly decrease in 2006.

Actually, the data from Compendex also grow more slowly in 2006.

although the data in 2009 and 2010 had not been collected completely by Thomson Reuters at the time of the downloading.

After 2008, the Factiva records suddenly climb more quickly than the activity in the other databases does.

The rapid growth of DWPI and Factiva data suggests that DSSC technology is becoming more mature

Figure 4. DSSC science overlay map.

Based on the SCI data set, we identified the top 11 research publishing institutions (Table 1)

without doing extensive data-cleaning); US National Renewable Energy Lab (NREL) is second with 4780,

Leading DSSC companies' prevalence in various data sources:
Company               SCI  EI  DWPI  Factiva
Samsung SDI Co. Ltd   52*  38  65*   4
Sharp Co. Ltd         27*  24  17*   4
Nippon Oil

identifying the leading organisations active in each of the different data sources. Table 2 compares selected organisations in this way. 5 Note the variation in prominence across these data sets.

For instance, Samsung is the leading patentee and publisher (in this compilation) on DSSCs but has not been mentioned frequently in conjunction with business actions (Factiva database).

Dainippon Printing is extremely active in patent families but does not publish. The use of multiple information sources in conjunction with each other enriches perspective on how the NEST is being developed.

Once such highly active players have been identified,

executed an initial database search, and text mined the database search results. We carried out preliminary searches to identify local expertise to help guide us.

We also contacted Georgia Tech and Emory University colleagues with a background in solar cells. One professor invited us to meet him.

Figure 6. Ingredients for the multipath exploration.

Figure 7 raises the desirability of life-cycle analyses to consider likely life span, maintainability, material transformation

We are investigating DSSC technical component developments through patent analyses that combine text mining, semantic/syntactic analyses,

and then zooming into these through augmented expert engagement exercises. The richness of the data is unquestionable,

The amount of available data, time horizons for innovation, and scope of study all reinforce the need to adapt these 10 steps to one's priorities.

Guo, Y., L. Huang,

*) or (systemic sclerosis) or (diffuse scleroderma) or (Deep Space Station Controller) or (Data Storage Systems Center) or (decompressive stress strain curve) or (double-sideband suppressed carrier) or (Flexible AC Transmission Systems

and exclude (2) noisy data. #3 330 TS=((dye-Photosensiti*) or (dye same Photosensiti*) or (pigment-Photosensiti*) or (pigment same Photosensiti*)) same ((solar or Photovoltaic or photoelectr

* or cancer) to exclude noisy data. #4 188 TS=((dye adj (sensiti* or photosensiti*)) and (conduct* or semiconduct*)) same electrode*) and electrolyte*) not (wastewater or wastewater or degradation)) Search term

or wastewater or degradation) to exclude noisy data. Total 4104: #1 or #2 or #3 or #4 combined search terms. Appendix 2. Different generations of solar cells. Material

Main research target | Functional objectives | Examples | Commercialisation
First generation: Single-crystalline silicon | To make use of solar energy | To convert solar energy into current | Conventional solar cells | Now,




Overtext Web Module V3.0 Alpha
Copyright Semantic-Knowledge, 1994-2011