Synopsis: ICT: Data


Survey on ICT and Electronic Commerce Use in Companies (SPAIN-Year 2013-First quarter 2014).pdf

According to data from the first quarter of 2014, 67.7% of micro-companies had Internet access, and 99% of them used some broadband access solution.


Survey regarding reistance to change in Romanian Innovative SMEs From IT Sector.pdf

Data collection was done over a two-month period during September-October 2014. To reliably identify trends, only respondents with long tenure

organizational- or marketing-related innovation as defined by the Oslo Manual (a set of integral guidelines for the collection of innovation data,


Tepsie_A-guide_for_researchers_06.01.15_WEB.pdf

WP2 analysed available data to better understand the growth, impact and potential for social innovation in Europe.

which will require much more broad-scale data. WP3: Removing barriers to social innovation. The development and growth of social innovation is impeded by factors such as limited access to finances, poorly developed networks and intermediaries and limited skills and support structures.

Data and monitoring. Most of the future research questions we identified would benefit greatly from advanced databases containing information on social innovation, social needs, the social economy and its innovative potential, other environments of social innovation, relevant

and employs up to 10% of the total workforce in Germany.47 In other countries (as is the case in Greece) there is no data to be found on employment in the social economy.

Thus we are still lacking more comprehensive and comparable data on the sector. The Third Sector Impact project that started early in 2014 will help to make this data available. 48 Nonetheless, the extent to

which social economy organisations are in fact innovators depends on numerous variables, e.g. the size of the social economy and also on the welfare regime.

scope and impact of social innovation and adds complications to producing reliable data. Concerning metrics for social innovation

We should therefore try to harness relevant knowledge in the field and tap into existing data sources on national technological innovation systems.

It has become clear that survey-based data related to social innovation are necessary. Considering the importance of entrepreneurial activities as push-factors for social innovation, we need empirical survey data on organisations that are socially innovative

in order to better understand how social innovation emerges and how well it develops in societies.

and data sources on national technological innovation systems and make attempts to identify patterns in these systems. 3. Empirical testing of the proposed indicator system.

and contexts and could lead to more homogeneous data about social innovation and opportunities for social innovations in future.

[Figure: support structures for social innovation] ...and advocacy; new flows of information (open data); developing the knowledge base. INTERMEDIARIES: social innovation networks; centres for information and evidence; hubs for diffusion and adoption; platforms for open data/exchange of ideas; providing programmes/interventions; networking opportunities/events; information and brokerage support; knowledge transfer programmes; learning forums and insight; legal advice, marketing services, fiscal and accounting services, HR advice.

for example by civil organisations or the public sector who use data to better target pockets of social need

which are data- and analytics-heavy, and where high speed and global reach are important through reductions in transaction costs and increases in process efficiency.

Data and monitoring It is clear that we require more and better data on social innovation, social needs, the social economy and its innovative potential, other environments of social innovation, relevant actors and networks, technological

It will be a task for future research to develop a standard structure that allows such data to be combined

the connection between social economy organizations and social innovation requires more data for sound analyses.

All of this requires much more empirical data, in particular data separately considering socially innovative organisations.

Effective collaborations: It is evident that the nature of social innovations requires various actors to collaborate to make them successful (e.g. for reasons of resource acquisition and allocation, for raising legitimacy, for reducing barriers or for spreading).

and tap into existing data sources on national technological innovation systems. Social movements, power and politics What can we learn from the literature on social movements?


The 2013 EU Industrial R&D Investment Scoreboard.pdf

Data have been collected by Bureau van Dijk Electronic Publishing GmbH under the supervision of Mark Schwerzel, Petra Steiner, Annelies Lenaerts and Roberto Herrero Lorenzo.

Our goal is to ensure that the data are accurate. However, the data should not be relied on as a substitute for your own research or independent advice.

We accept no responsibility or liability whatsoever for any loss or damage caused to any person as a result of any error,

omission or misleading statement in the data or due to using the data or relying on the data.

If errors are brought to our attention we will try to correct them. EUR 26221 EN. ISBN 978-92-79-33743-7 (print), 978-92-79-33742-0 (pdf). ISSN 1018-5593 (print).

Summary: The 2013 "EU Industrial R&D Investment Scoreboard" (the Scoreboard) contains economic and financial data for the world's top 2000 companies ranked by their investments in research and development (R&D).

The Scoreboard data are drawn from the latest available companies' accounts, i.e. usually the fiscal year 2012 or 2012/13.1

The more salient facts observed from the analysis of 2012 and historic company data since 2003 include:

Figure S1 below shows the longer-term R&D trends for a subset of Scoreboard companies with available data for the past nine years.

For 1496 out of the top world 2000 companies in the Scoreboard with data for the whole period.

Figures S2-S4 below show the longer-term R&D trends for subsets of Scoreboard companies with available data for the past nine years.

For 334 out of the top EU 527 companies in the Scoreboard with data for the whole period.

For 547 out of the top US 658 companies in the Scoreboard with data for the whole period.

For 324 out of the top Japanese 353 companies in the Scoreboard with data for the whole period.

For 350 EU and 566 US out of the top world 2000 companies in the Scoreboard with data for the whole period.

The relative size has been calculated as the ratio of sector R&D expenditures in the EU over the US, considering the 136 companies with R&D data for the whole period.12

Inflows of FDI in R&D by main world regions, 2003-2012. Data: FT fDi Markets database.

so that the companies' economic and financial data can be analysed over a longer period of time. For the second year, data are now being collected by Bureau van Dijk Electronic Publishing GmbH,

following basically the same approach and methodology applied since the first Scoreboard edition in 2004.

Please see the main methodological limitations summarised in Box 1 and detailed methodological notes in Annex 2. The capacity of data collection is being improved by gathering information about the ownership structure of the Scoreboard parent companies

An analysis of the main indicators of the company data aggregated by world regions is included together with the performance of companies over the period 2004-2012.

Finally, chapter 6 presents an analysis based on data about foreign direct investments (FDIs) made by the Scoreboard companies.

and the listing of companies ranked by their level of R&D investment is provided in Annex 3. The complete data set is freely accessible online at:

http://iri.jrc.ec.europa.eu/scoreboard13.html In the next edition, this website will allow user-friendly and interactive access to the individual company data

Box 1. Methodological caveats: Users of Scoreboard data should take into account the methodological limitations summarised here,

when comparing data from different currency areas. The Scoreboard data are expressed in nominal terms and in euros, with all foreign currencies converted at the exchange rate of the year-end closing date (31.12.2012).

The variation in the exchange rates from the previous year directly affects the ranking of companies,

When analysing data aggregated by country or sector, be aware that in many cases, the aggregate indicator depends on the figures of a few firms.

Every Scoreboard comprises data of several financial years allowing analysis of trends for the same sample of companies.

It comprises an analysis of the company data aggregated by main world region for the period 2004-2012.

The 2000 Scoreboard companies invested €538.8 billion in R&D, 6.2% more than in 2011.6 Due to data availability some companies may be missed

which data are fully available. Source: The 2013 EU Industrial R&D Investment Scoreboard. European Commission, JRC/DG RTD.

for 388 EU out of the 2000 companies with R&D and net sales data for the whole period. Source:

for 547 US out of the 2000 companies with R&D and net sales data for the whole period. Source:

The R&D data are broken down into groups of industrial sectors with characteristic R&D intensities (see definition in Box 1.1). The following points can be observed regarding the overall R&D changes in the period 2004-2012

for 324 Japanese out of the 2000 companies with R&D and net sales data for the whole period. Source:

It is important to remember that data reported by the Scoreboard companies do not reveal the actual geographic distribution of their employees.

and Japanese companies and those from the Rest of the World that reported employment data for the whole period 2004-12.

years based on our history database containing company data for the period 2002-2012. Results of companies showing outstanding R&D and economic results are underlined.

for 135 German out of the EU 1000 companies with data for the whole period. *Profitability expressed as companies' profits as a percentage of net sales. Source:

for 81 French out of the EU 1000 companies with data for the whole period. *Profitability expressed as companies' profits as a percentage of net sales. Source:

for 122 UK out of the EU 1000 companies with data for the whole period. **Profitability expressed as companies' profits as a percentage of net sales. Source:

12% in 2004 (data from EvaluatePharma's 2013 report). But before discussing the details and the companies involved we need to describe the main features of the business environment in

Table 5.4 shows key data for Gilead Sciences, Celgene, Life Technologies, Illumina, United Therapeutics, Alkermes, Emergent Biosolutions, Viropharma, BTG, Acorda Therapeutics, Genus, Genomic Health, Spectrum Pharmaceuticals and Luminex.

The data is taken mainly from the companies' own websites. The first is Abcam, a biotech

It provides comprehensive technical data sheets and quality control for these products, which are all marketed through its website.

Matching the first 1500 Scoreboard companies13 with data on greenfield FDIs14, the objective is to show how the top world R&D spenders are locating

the developer is hired to finish the entire project without owner input. 13 Sample corresponding to the 2012 EU Industrial R&D Investment Scoreboard edition. 14 Greenfield investment data is derived from the 2013 fDi

which data is available for the period 2003-2012. Figure 6.7 reports the number of projects by type of FDI (R&D versus manufacturing and other types of FDIs16) and R&D intensity (high, medium-high, medium-low,

The data for the Scoreboard are taken from companies' publicly available audited accounts. As in more than 99% of cases these accounts do not include information on the place where R&D is actually performed,

therefore, fundamentally different20 from that of statistical offices or the OECD when preparing Business Enterprise Expenditure on R&D (BERD) data,

The Scoreboard data are primarily of interest to those concerned with benchmarking company commitments and performance (e.g. companies, investors and policymakers),

while BERD data are used primarily by economists, governments and international organisations interested in the R&D performance of territorial units defined by political boundaries.

which provides reliable up-to-date information on R&d investment and other economic and financial data, with a unique EU-focus.

The data in the Scoreboard are published as a four-year time-series to allow further trend analyses to be carried out, for instance,

The sources of data also differ: the Scoreboard collects data from audited financial accounts and reports whereas BERD typically takes a stratified sample,

covering all large companies and a representative sample of smaller companies. Additional differences concern the definition of R&D intensity (BERD uses the percentage of R&D in value added,

Annex 2 - Methodological notes: The data for the ranking of the 2013 EU Industrial R&D Scoreboard (the Scoreboard) have been collected from companies

BvD data for the years prior to 2012 have been checked against the corresponding data of the previous Scoreboards, adjusted for the corresponding exchange rates of the annual reports.

Main characteristics of the data: The data correspond to companies' latest published accounts, intended to be their 2012 fiscal year accounts,

Therefore, the current set represents data that are heterogeneous in timing. In order to maximise completeness and avoid double counting,

The data used for the Scoreboard are different from data provided by statistical offices, e.g.

BERD data. The Scoreboard refers to all R&D financed by a particular company from its own funds,

Further, the Scoreboard collects data from audited financial accounts and reports. BERD typically takes a stratified sample,

For companies outside the Euro area, all currency amounts have been translated at the euro exchange rates ruling at 31 December 2012, as shown in Table A3.1. The exchange rate conversion also applies to the historical data.

The original domestic currency data can be derived simply by reversing the translations at the rates above.
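
To make the conversion and its reversal concrete, here is a minimal Python sketch; the rates below are illustrative placeholders, not the actual Table A3.1 values.

```python
# Sketch of the Scoreboard's currency handling: all amounts are converted to
# euros at the year-end closing rate (31.12.2012), and the original
# domestic-currency figures are recovered by reversing the translation.
EUR_PER_UNIT = {"EUR": 1.0, "GBP": 1.22, "USD": 0.76, "JPY": 0.0088}  # illustrative only

def to_euro(amount: float, currency: str) -> float:
    """Translate a domestic-currency amount into euros at the fixed rate."""
    return amount * EUR_PER_UNIT[currency]

def from_euro(amount_eur: float, currency: str) -> float:
    """Reverse the translation to recover the domestic-currency amount."""
    return amount_eur / EUR_PER_UNIT[currency]

rd_gbp = 1000.0                   # R&D spend in million GBP (hypothetical)
rd_eur = to_euro(rd_gbp, "GBP")   # as it would be reported in euros
assert abs(from_euro(rd_eur, "GBP") - rd_gbp) < 1e-9  # round-trip recovers the original
```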

Table A3.1. Euro exchange rates applied to Scoreboard data of companies based in different currency areas (as of 31 Dec 2012).

which data exist for both R&D and net sales in the specified year. The calculation of R&D intensity in the Scoreboard is different from that in official statistics, e.g.

only if data exist for both the current and previous year. At the aggregate level, 1yr growth is calculated only by aggregating those companies for

which data exist for both the current and previous year. 6. Three-year growth is the compound annual growth over the previous three years,

only if data exist for the current and base years. At the aggregate level, 3yr growth is calculated only by aggregating those companies for

which data exist for the current and base years. 7. Capital expenditure (Capex) is expenditure used by a company to acquire
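
As an illustration of the growth and intensity definitions above, here is a minimal Python sketch; the series is hypothetical, and the "only if data exist" convention is modeled with None.

```python
# Illustrative implementation of the Scoreboard's definitions, not official code.

def rd_intensity(rd: float, net_sales: float) -> float:
    # Scoreboard R&D intensity: R&D investment as a share of net sales
    # (unlike BERD, which uses the percentage of R&D in value added).
    return 100.0 * rd / net_sales

def one_year_growth(current, previous):
    # Simple growth over the previous year; defined only if both years exist.
    if current is None or previous is None:
        return None
    return 100.0 * (current / previous - 1.0)

def three_year_growth(current, base):
    # Compound annual growth over the previous three years;
    # defined only if the current and base years exist.
    if current is None or base is None:
        return None
    return 100.0 * ((current / base) ** (1.0 / 3.0) - 1.0)

rd_by_year = {2009: 80.0, 2010: None, 2011: 95.0, 2012: 100.0}  # hypothetical series
print(one_year_growth(rd_by_year[2012], rd_by_year[2011]))    # ~5.26
print(three_year_growth(rd_by_year[2012], rd_by_year[2009]))  # ~7.72
print(one_year_growth(rd_by_year[2011], rd_by_year[2010]))    # None: data missing
```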

Annex 4 - Access to the full dataset: The 2013 Scoreboard comprises two data samples:

The following links provide access to the two Scoreboard data samples containing the main economic and financial indicators and main statistics over the past four years.


The 2013 EU SURVEY on R&D Investment Business Trends.pdf

and provides data and analysis on companies from the EU and abroad investing the largest sums in R&d (see:

and some occasional country-specific statistics, were the main sources of these data.32 A mapping of available transnational data sources on industrial R&D33 from the European Commission,

OECD and European industry associations showed that data on business enterprise R&D essentially drew upon retrospective surveys

Statistical offices generally collect R&D data in the form of Business R&D Expenditure (BERD), which defines R&D from a top-down perspective.

Private data sources and surveys by industrial associations existed but were published rarely, and there was a shortage of qualitative and forward-looking information on industrial R&d.

and policy making in this area was usually based on analyses of partial or incomplete data.

The survey complements other R&D investment related surveys and data collection exercises (e.g. Innobarometer, Eurostat data collection and other ongoing surveys).

Link to the R&D Investment Scoreboards: The EU R&D survey is part of the Industrial Research

Mapping Surveys and other Data Sources on Industrial R&D in the EU-25 countries, Seville,

Description of Information Sources on Industrial R&D data: European Commission, OECD and European Industry Associations, Seville, July 2004.34 The rationale for the IRIMA activities emerged in the context of the European Commission's "3% Action Plan" established to implement

and provides data and analysis on the largest R&D investing companies in the EU and abroad (see:

To maintain the maximum information in the data, outliers were eliminated only in extreme cases and after assessing the impact on the result.37 One-year growth is simple growth over the previous year,

only if data exist for both the current and previous year. At the aggregate level, 1yr growth is calculated only by aggregating those companies for

which data exist for both the current and previous year. Three-year growth is the compound annual growth over the previous three years,

only if data exist for the current and base years. At the aggregate level, 3yr growth is calculated only by aggregating those companies for

which data exist for the current and base years. Unless otherwise stated, the weighted figures presented in this report are weighted by R&D investment.

an online site was provided to facilitate data entry via the European commission's Interactive Policy-making (IPM) tool,

The Controller commits himself to dealing with the data collected with the necessary confidentiality and security as defined in the regulation on data protection, and to processing it only for the explicit and legitimate purposes declared

Purpose and data treatment: The purpose of data collection is to enable the analysis of the 2013 EU Survey of R&D Investment Business Trends.

and aggregated for analysis. Data verification and modification: In case you want to verify the personal data or to have it modified or corrected,


The antecedents of SME innovativeness in an emerging transition economy.pdf

After examining and cleaning the data, 448 firms were used in this analysis. In this study, we define a list of possible factors that have a bearing on innovation (Tables 1-4). Our goal is to find those factors that have a significant impact on innovation in SMEs in a small developing country.

Data show that there is no difference in process innovation between firms that report obstacles and those that do not (N = 172, χ² = 1.9, p = 0.17). Regarding product innovation, there is a weak relationship showing that, far from being less innovative, firms that reported obstacles are more innovative compared with firms that did not report obstacles (81.16% of those that reported obstacles innovated, compared with 68.93% of those that did not report obstacles; N = 172, χ² = 3.2 and p
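
For illustration, the kind of test reported above can be reproduced with a 2x2 contingency table. The cell counts below are reconstructed so that N = 172 and the innovation shares match the quoted 81.16% and 68.93%; they are not taken from the paper's dataset.

```python
from scipy.stats import chi2_contingency

# Rows: firms reporting obstacles / not reporting obstacles.
# Columns: innovated / did not innovate. Counts reconstructed for illustration:
# 56/69 = 81.16% and 71/103 = 68.93%, total N = 172.
table = [[56, 13],
         [71, 32]]

chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.2f}")  # chi2 = 3.2, p ~ 0.07
```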


THE CULTURE OF INNOVATION AND THE BUILDING OF KNOWLEDGE SOCIETIES.pdf

such as the digital divide, which increases the development gap, free circulation and equal access to data, information and good practices, and the knowledge of information societies,


The future internet.pdf

and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version,

Volume and nature of data: the sheer volume of Internet traffic and the change from simple text characters to audio and video, as well as the demand for very immediate responses.

For example, Cisco's latest forecast predicts that global data traffic on the Internet will exceed 767 Exabytes by 2014.

Data traffic for mobile broadband will double every year until 2014, increasing 39 times between 2009 and 2014.13

Physical objects on the net: small devices enable the emergence of the Internet of Things, where practically any physical object can now be on the net, sending location and local context data when requested.

and Oscar Corcho: Towards a RESTful Architecture for Managing a Global Distributed Interlinked Data-Content-Information Space...

and Matthias Schunter: Data Usage Control in the Future Internet Cloud... Michele Bezzi and Slim Trabelsi. Part IV:

and Sergio Takeo Kofuji: Fostering a Relationship between Linked Data and the Internet of Services...

The "Towards a RESTful Architecture for Managing a Global Distributed Interlinked Data-Content-Information Space" chapter analyses the concept of Content-Centric architecture, lying between the Web of Documents and the generalized Web of Data, in

which explicit data are embedded in structured documents enabling consistent support for the direct manipulation of information fragments.

and type of data currently exchanged over the Internet. Based on 2, out of the 42 exabytes (10^18 bytes) of consumer Internet traffic likely to be generated every month in 2014, 56% will be due to Internet video,

In the following, we use the term data to refer to any organized group of bits, a.k.a. data packets, data traffic, information, content (audio, video, multimedia),

etc., and the term service to refer to any action performed on data or other services and the related Application Programming Interface (API).

Processing/handling of data: refers to forwarders (e.g. routers, switches, etc.) and computers (e.g. terminals, servers, etc.)

Storage of data: refers to memory, buffers, caches, disks, etc., and associated logical data structures.

Transmission of data: refers to physical and logical transferring/exchange of data. Control of processing, storage, transmission of systems and functions:

refers to the action of observation (input), analysis, and decision (output) whose execution affects the running conditions of these systems and functions.

transmission and control functions applied to data. The term control is used here to refer to control functionality but also management functionality, e.g. of systems, networks, services, etc.

Lack of data identity is damaging the utility of the communication system. As a result, data,

as an 'economic object', traverses the communication infrastructure multiple times, limiting its scaling, while lack of content 'property rights' (not only author- but also usage-rights) leads to the absence of a fair charging model. iii.

the limited capability for processing data on a real-time basis poses limitations in terms of the applications that can be deployed over the Internet.

Data are not inherently associated with knowledge of their context. This information may be available at the communication end-points (applications)

but not when data are in transit. So, it is not feasible to make efficient storage decisions that guarantee fast storage management, fast data mining and retrieval,

refreshing and removal optimized for different types of data 18. ii. Lack of inherent user and data privacy:

data cannot be stored/handled efficiently. On the other hand, lack of encryption violates user and data privacy. More investigations into the larger privacy and data protection ecosystem are required to overcome current limits of how current information systems deal with privacy and protection of information of users,

Lack of data integrity, reliability and trust, targeting the security and protection of data; this issue covers both unintended disclosure and damage to integrity from defects or failures,

Multimedia content-oriented traffic comprises much larger volumes of data as compared to any other information flow,

while its inefficient handling results in retransmission of the same data multiple times. Content Delivery Networks (CDN) and more generally architectures using distributed caching alleviate the problem under certain conditions

when massive amounts of data are exchanged. ii. Lack of integration of devices with limited resources to the Internet as autonomous addressable entities.

not only mean protecting/encrypting the exchanged data but also not disclosing that communication took place. It is not sufficient to just protect/encrypt the data (including encryption of protocols/information/content,

tamper-proof applications, etc.) but also to protect the communication itself, including the relation/interaction between (business

Improper segmentation of data and control. The current Internet model segments (horizontally) data and control,

whereas from its inception the control functionality has had a transversal component. Thus, on one hand, the IP functionality is no longer limited to the network layer,

The IP data plane is itself relatively simple but its associated control components are numerous and sometimes overlapping,

The huge number of (mobile) terminals combined with a sudden peak in demand for a particular piece of data may result in phenomena that cannot be handled;

The amount of foreseen data and information5 requires significant processing power/storage/bandwidth for indexing/crawling

All the aforementioned issues imply the need for addressing new architectural challenges capable to cope with the fast and scalable identification and discovery of and access to data.

the world's largest index of the Internet, estimated the size at around 5 million terabytes of data (2005).

Accessibility (open and by means of various/heterogeneous wireless/radio and wired interfaces) to the communication network but also to heterogeneous data, applications,

concerning storage and processing, several architectural enhancements might be required, e.g. for the integration of distributed but heterogeneous data and processes.

and associated data traffic such as non/real-time streams, messages, etc.,independently of the shared infrastructure partitioning/divisions,

Note that the AMS is responsible for obtaining management data describing the physical resource. The vCPI is responsible for providing dynamic management data to its governing AMS that states how many virtual resources are currently instantiated,

and how many additional virtual resources of what type can be supported. 2.4 Knowledge Plane Overview: The Knowledge Plane was proposed by Clark et al. 1 as a new dimension to a network architecture, contrasting with the data and control planes;

its purpose is to provide knowledge and expertise to enable the network to be self-monitoring, self-analysing,

A narrow functionality Knowledge Plane (KP), consisting of context data structured in information models and ontologies,

The KP brings together widely distributed data collection, wide availability of that data, and sophisticated and adaptive processing or KP functions, within a unifying structure.

Knowledge extracted from information/data models forms facts. Knowledge extracted from ontologies is used to augment the facts,

which is then used to transform received data into a common form that enables it to be managed.

In our implementation, each sensor runs in its own thread allowing each one to collect data at different rates

The reader collects the raw measurement data from all of the sensors of a CCP. The collection can be done at a regular interval or as an event from the sensor itself.

The reader collects data from many sensors and converts the raw data into a common measurement object used in the CISP Monitoring framework.

The format contains meta-data about the sensor and the time of day, and it contains the retrieved data from the sensor.

The filter takes measurements from the reader and can filter them out before they are sent on to the forwarder.

and to set the rate at which they collect data; (ii) the filtering process, by changing the filter
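
A minimal sketch of such a pipeline (not the CISP Monitoring framework itself) might look as follows: each sensor thread collects at its own rate, the reader wraps raw values into a common measurement object with sensor metadata and a timestamp, and a threshold filter decides what reaches the forwarder. All names, rates and values here are invented for illustration.

```python
import queue
import threading
import time

class Measurement:
    """Common measurement object: sensor meta-data, time of day, raw value."""
    def __init__(self, sensor_id: str, value: float):
        self.sensor_id = sensor_id
        self.timestamp = time.time()
        self.value = value

def sensor(sensor_id: str, rate_hz: float, out: queue.Queue, stop: threading.Event):
    """Each sensor runs in its own thread and collects at its own rate."""
    while not stop.is_set():
        out.put(Measurement(sensor_id, 42.0))  # 42.0 stands in for a real probe reading
        time.sleep(1.0 / rate_hz)

def reader(q: queue.Queue, stop: threading.Event, threshold: float):
    """Reader/filter/forwarder: drop measurements below threshold, forward the rest."""
    while not stop.is_set() or not q.empty():
        try:
            m = q.get(timeout=0.1)
        except queue.Empty:
            continue
        if m.value >= threshold:                      # the filter step
            print(m.sensor_id, m.timestamp, m.value)  # stands in for network forwarding

stop = threading.Event()
q = queue.Queue()
threads = [threading.Thread(target=sensor, args=(f"cpu{i}", 5.0 + i, q, stop)) for i in range(2)]
threads.append(threading.Thread(target=reader, args=(q, stop, 0.0)))
for t in threads:
    t.start()
time.sleep(1.0)
stop.set()
for t in threads:
    t.join()
```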

Mapping logic enables the data stored in models to be transformed into knowledge and combined with knowledge stored in ontologies to provide a context-sensitive assessment of the operation of one or more virtual resources.

The framework provides data sources, data consumers, and a control strategy. In a large distributed system there may be hundreds or thousands of measurement probes,

which can generate data. APE (Autonomic Policy-based Engine), a component of the MP, supports contextaware policy-driven decisions for management and orchestration activities.

There are many existing solutions aiming to handle the capacity problems of current mobile Internet architectures caused by the evolution of mobile data traffic.

it is foreseen that due to the development of data-hungry entertainment services like television/radio broadcasting and Vod,

66% of mobile traffic will be video by 2014 2. A significant amount of this data volume will be produced by mobile Web browsing

and smart data caching technologies might have further impact on the traffic characteristics and obviously on mobile architectures.

The most drastic among them is that IP has become the unique access protocol for data networks

and the continuously increasing future wireless traffic is also based on packet data (i.e., Internet communication).

With increasing IP-based data traffic, the flattening of hierarchical and centralized functions became the main driving force in the evolution of 3GPP network architectures.

i.e. the Mobility Management Entity (MME), the Serving GW (S-GW) and the Packet Data Network GW (PDN GW).

This results in a centralized, unscalable data plane and control plane with non-optimal routes, overhead and high end-to-end packet delay even in the case of motionless users,

and data packets traverse the centralized or hierarchized mobility anchor. Since the volume of user-plane traffic is much higher compared to the signaling traffic,

control packets are hence separated from data messages after a short route optimization procedure.

, both data plane and control plane are distributed). This implies the introduction of special mechanisms in order to identify the anchor that manages mobility signaling

and data forwarding of a particular mobile node, and in most cases this also requires the absolute distribution of the mobility context database (e.g.,

Global Mobile Data Traffic Forecast Update, 2009-2014 (Feb. 2010). 3. Dohler, M., Watteyne, T., Alonso-Zárate, J.:

Federation, Management, Reference Model, Future Internet, Architectures and Systems, Autonomics, Service Management, Semantic Modelling and Management, Knowledge Engineering, Networking Data and Ontologies, Future Communications

Demands on data model integration are requirements to be considered during the design and implementation phases of any ICT system.

The aggregation and subsequent understanding of monitoring/fault data is a problem that has not yet been completely solved; here is where federation takes place.

These cross-domain interactions demand a certain level of abstraction to deal with mapping requirements from different information and data domains.

A goal of autonomic systems is to provide rich usage data to guide rapid service innovation.

and also identify particular management data at application service, middleware and hardware levels (3. Analysis) that can be gathered,

We support the idea that monitoring data at the network and application level can be used to generate knowledge that can be used to support enterprise application management in the form of control loops in the information;

and resolve high-level representation and mapping of data and information. Negotiations in the form of data representation between different data and information models by components in the system(s) are associated with this feature.

Management Control-Administration functionality for the establishment of cross-domain regulations considering service and network regulations and requirements as negotiations.

and sustain service offering between various communities of users (heterogeneous data & infrastructure). The federated architecture must be enabled for ensuring the information is available allowing useful transfer of knowledge (information interoperability) across multiple interfaces.

information and data can be integrated, and the power of machine-based learning and reasoning can be exploited more fully.

Autonomic control loops and their formalisms 29 30, such as FOCALE 25 and AutoI 21 23, translate data from a device-specific form to a device

In a federated autonomic architecture, information is used to relate knowledge, rather than only map data,

and correlation techniques that can process relevant data in a timely and decentralised manner and relay it as appropriate to federated management decision-making functions are necessary to investigate (federation).

Techniques for analysis, filtering, detection and comprehension of monitoring data in federated enterprise and networks.

and the data they deliver has to be associated with some quality-of-information parameters before further processing. 3 Reference Architecture: In this section we present an initial model on

, retrieve sensor data from a sensor. However, while the concept of the web resource refers to a virtual resource identified by a Uniform Resource Identifier (URI),

actuation, processing of context and sensor data or actuation loops, and management information concerning sensor/actuator nodes, gateway devices or entire collections of those.

the tags act as hosts for the resources in the form of Electronic Product Codes (EPCs), IDs or other information, as well as for value-added information in the form of, e.g., sensor data.

and abstracting data about the environment, workflow-based specifications of system behaviour and semantically-enabled service discovery.

streaming and static data sources in manners that were not necessarily foreseen when the sensor networks were deployed

or the data sources made available. The architecture may be applied to almost any type of real world entity,

streaming data sources, normally containing historical information from sensors; and even relational databases, which may contain any type of information from the digital world (hence resource hosts are multiple).

These resources are made available through a number of data-focused services (acting as resource endpoints),

which are based on the WS-DAI specification for data access and integration and which are supported by the SemsorGrid4Env reference implementation.

These services include those focused on data registration and discovery (where a spatiotemporal extension of SPARQL, stSPARQL, is used to discover data sources from the SemsorGrid4Env registry),

data access and query (where ontology-based and non-ontology-based query languages are provided to access data:

SPARQL-Stream and SNEEql, a declarative continuous query language over acquisition sensor networks, continuous streaming data,

and traditional stored data), and data integration (where the ontology-based SPARQL-Stream language is used to integrate data from heterogeneous and multimodal data sources).

Other capabilities offered by the architecture are related to supporting synchronous and asynchronous access modes, with subscription/pull

and push-based capabilities, and actuating over sensor networks, by in-network query processing mechanisms that take declarative queries

and to mash up these real-world services with traditional services and data available in the Web.

supporting heterogeneous and resource-constrained devices, its extensive use of existing Web standards such as RESTful interfaces and Linked Open Data,

and selects and combines those services to achieve the (abstract) service goals. SemsorGrid4Env: using an RDF-based registry of data sources,

and corresponding stSPARQL queries; in-network query processing capabilities (SNEE) with mote-based sensor networks; data services are generated dynamically according to WS-DAI (Web Services Data Access and Integration) indirect

The Author(s). This article is published with open access at SpringerLink.com. Towards a RESTful Architecture for Managing a Global Distributed Interlinked Data-Content-Information Space. Maria Chiara Pettenati, Lucia

The current debate around the future of the Internet has brought to the fore the concept of Content-Centric architecture, lying between the Web of Documents and the generalized Web of Data

in which explicit data are embedded in structured documents enabling the consistent support for the direct manipulation of information fragments.

Web of Data; future Web; Linked Data; RESTful; read-write Web; collaboration. 1 Introduction: There are many evolutionary approaches to the Internet architecture

which are at the heart of the discussions both in the scientific and industrial contexts:

Web of Data/Linked Data, Semantic Web, REST architecture, Internet of Services, SOA and Web Services, and Internet of Things approaches.

Table 1. Rough classification of main driving forces in current Future Network evolutionary approaches:
Content-centric approaches: Web of Data/Linked Data, REST.
Service-centric approaches: Internet of Services, WS-*, SOA.
User-centric approaches: Web 2.0, Web 3.0, Semantic Web, Internet of Things.

The three views can be interpreted as emphasizing different aspects rather than expressing opposing statements.

therefore a Transitional Web lying between the Web of Documents and the generalized Web of Data in

which explicit data are embedded in documents enabling the consistent support for the direct manipulation of information as data without the limitation of current data manipulation approaches.

Abstracting from the different use of terms related to the concepts data, content and information which can be found in literature with different meanings 4,

the grounding consistency that can be highlighted is the need of providing an evolutionary direction to the network architecture hinging on the concept of a small, Web-wide addressable data/content/information unit

and handled by the network architecture so as to provide basic services at an infrastructural level

Among the different paths to the Web of Data the one most explored is adding explicit data to content.

Directly treating content as data has instead had little analysis. In this paper we discuss the evolution of InterDataNet (IDN), a high-level Resource Oriented Architecture proposed to enable the Future Internet approaches (see 5 6

the more we get away from the data and move in the direction of information, the fewer solutions are available that are capable of covering the following requirements:

addressable and reusable information fragments (as in the Web of Data); 2. IDN adopts a URI-based addressing scheme (as in Linked Data); 3. IDN provides a simple and uniform Web-based

interface to distributed heterogeneous data management (REST approach); 4. IDN provides, at an infrastructural level, collaboration-oriented basic services, namely:

This will relieve applications of sharing arbitrary pieces of information in an ad hoc manner while providing compliance with current network architectures and approaches such as Linked Data, RESTful Web Services, Internet of Services,

IDN-SA (InterDataNet Service Architecture).

The Information Model is based on the graph data model (see Figure 3) to describe interlinked data representing a generic document model in IDN

Generic information modeled in IDN-IM is formalized as an aggregation of data units. Each data unit is assigned at least a global identifier

and contains generic data and metadata; at a formal level, such a data unit is a node in a Directed Acyclic Graph (DAG).

The abstract data structure is named IDN-Node. An IDN-Node is the content-item handled by the content-centric IDN-Service Architecture.

The degree of atomicity of the IDN Nodes is related to the most elementary information fragment

An IDN-document structures data units: it is composed of nodes related to each other through directed links. Three main link types are defined in the Information Model:

Replica Management (RM) provides a delocalized view of the resources to the upper layer.

and therefore be enabled for the manipulation of data on a global scale within the Web. A REST interface has been adopted in the IDN-SA implementation, as the actions allowed on IDN-IM can be translated into CRUD-style operations over IDN-Nodes, with the assumption that an IDN-document can be thought of as a collection of IDN-Node resources.
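
The following toy model, assuming invented URIs and a plain in-memory store rather than the actual IDN-SA, illustrates both ideas: IDN-Nodes as globally identified data units linked into a DAG, and CRUD-style operations that would map onto the HTTP verbs of a REST interface.

```python
from dataclasses import dataclass, field

@dataclass
class IDNNode:
    uri: str                                    # global identifier of the data unit
    data: str = ""                              # generic data
    metadata: dict = field(default_factory=dict)
    links: list = field(default_factory=list)   # directed links to child node URIs (DAG edges)

class IDNStore:
    """CRUD-style operations over IDN-Nodes, mirroring POST/GET/PUT/DELETE."""
    def __init__(self):
        self.nodes = {}

    def create(self, node: IDNNode):        # HTTP POST
        self.nodes[node.uri] = node

    def read(self, uri: str) -> IDNNode:    # HTTP GET
        return self.nodes[uri]

    def update(self, uri: str, data: str):  # HTTP PUT
        self.nodes[uri].data = data

    def delete(self, uri: str):             # HTTP DELETE
        del self.nodes[uri]

store = IDNStore()
store.create(IDNNode("urn:idn:doc/1", "root", links=["urn:idn:doc/1/frag"]))
store.create(IDNNode("urn:idn:doc/1/frag", "an addressable information fragment"))
# An IDN-document is then the collection of nodes reachable from the root node.
```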

and deploy specific functionalities of the architecture (Fig. 5: InterDataNet Service Architecture scalability features) without the need to achieve the complete

The presented approach is not an alternative to current Web of Data and Linked Data approaches; rather, it aims at viewing the same data handled by the current Web of Data from a different perspective,

where a simplified information model, representing only information resources, is adopted and where the attention is focused on collaboration around documents

or suggesting new methods of handling data, relying on standard Web techniques. Interdatanet could be considered to enable a step ahead from the Web of Document

and possibly grounding the Web of Data, where an automated mapping of IDN-IM serialization into the RDF world is made possible using the Named Graph approach 9. Details on this issue are beyond the scope of the present paper.

thus providing a contribution in the direction of taking full advantage of the Web of Data potential.

A Data Web Foundation for the Semantic Web Vision. IADIS International Journal on WWW/Internet 6(2) (December 2008). 6. Pirri, F., Pettenati, M.C., Innocenti, S., Chini, D., Ciofi, L.:

A Scalable Middleware Infrastructure for Smart Data Integration. In: Giusto, D., et al. (eds.)

Web of Data. "Oh it is data on the Web", posted on April 14, 2010; accessed September 8, 2010,

http://webofdata.wordpress.com/2010/04/14/ohit-is-data-on-the-web/ J. Domingue et al.

(Eds.): Future Internet Assembly, LNCS 6656, pp. 91-102, 2011. The Author(s). This article is published with open access at SpringerLink.com. A Cognitive Future Internet Architecture. Marco Castrucci, Francesco Delli Priscoli, Antonio Pietrabissa,

[Figure: Cognitive Manager internals - enriched data/services/contents; monitored Actor-related information; aggregated metadata (present context); metadata exchanged to/from other peer Cognitive Managers; application API functionalities.]

and of Resource-related information (Sensing functionalities embedded in the Resource Interface); this monitoring has to take place according to transparent techniques; (ii) the formal description of the above-mentioned heterogeneous parameters/data/services/contents in homogeneous

(ii) providing enriched data/services/contents to the Actors. In addition, these enablers control the sensing, metadata handling, actuation and API functionalities (these control actions,

(ii) provisioning to the appropriate Actors the enriched data/contents/services produced by the Cognitive Enablers (Provisioning functionalities embedded in the Actor Interface);

to transfer data from a file, or the content of an email/instant message, it is necessary to have a delivery guarantee in communication.

or more entities and ensure that data exchange occurs at the link level and takes place according to the understanding made by the service layer.

Another look at data. In: Proceedings of the Fall Joint Computer Conference, AFIPS, November 14-16, Volume 31, pp. 525-534.

Due to the fact that overlay applications as of today still generate large volumes of data

An example is locality promotion based on BGP routing data. Insertion of Additional Locality-Promoting Peers/Resources involves (a) the insertion of ISP-owned Peers (IoPs) in the overlay

but an increase of the outgoing traffic due to the data exchange also with remote peers;

due to the separate handling of MPTCP's signalling and data. Incremental: the story is good,

However, there may be NATs on the data path, and MPTCP's signalling messages must get through them.

and to decrease the latency of data delivery. The CDN server sends premium packets (perhaps for IPTV) as ConEx-Not-Marked or ConEx-Re-Echo.

Adding concurrent data transfer to the transport layer. ProQuest ETD Collection for FIU, Paper AAI3279221 (2007), http://digitalcommons.fiu.edu/dissertations/AAI3279221 17.

Improved data distribution for multipath TCP communication. In: IEEE GLOBECOM (2005). 19. Kelly, F., Voice, T.:

when they are creating data that a business would like to sell, with or without their knowledge and consent,

Repurposing tussles occur in regard to the privacy of user communication data between users, ISPs, service providers and regulators.

they must be given access to network communication data. Furthermore, ISPs and other companies such as Google and Amazon have increasingly been able to monetize their user transaction data and personal data.

Google is able to feed advertisements based on past searching and browsing habits, and Amazon is able to make recommendations based on viewing and purchasing habits.

These applications of user data as marketing tools are largely unregulated. And in many cases, users have proved willing to give up some of their privacy in exchange for the economic benefit of better deals that can come from targeted advertising.

Responsibility tussles occur with ISPs, which often inhabit a middle ground: they are the bodies with direct access to the data

And, finally, the principle of sharing and collaboration reaches to the applications and business models, ranging from the exchange of data of physical objects for the optimization of business scenarios in, e g.,

and in a controlled way, allowing the owner of the data to decide and control how,

Data travel through a multitude of different domains, contexts and locations while being processed by a large number of entities with different ownership.

and treated according to the data owner's policy, in balance with the processing entities'policies.

while distribution and exchange of data serve as additional entry points that can potentially be exploited to penetrate a system.

The chapter, Security Design for an Inter-domain Publish/Subscribe Architecture by K. Visala et al. looks into security implications of a data-centric approach for the Future Internet,

and scoping that ensure the availability of data and maintain their integrity. It is a good example of how clean-slate approaches to the Future Internet can support security needs by design,

but also in most other Future Internet scenarios like the Internet of Services, the need for data exchange leads to sensitive data, e g.,

and exploit these data, posing a challenge to the enforcement of the users'protection needs and privacy regulations.

which does not allow one to predict by whom data will be processed or stored. To provide transparency and control of data usage

the chapter Data Usage Control in the Future Internet Cloud proposes a policy-based framework for expressing data handling conditions

and enforcing them. Policies relating events and obligations are coupled with data (sticky policies) and, hence, cannot get lost in transition.

A common policy framework based on tamper-proof event handlers and obligation engines allows for the evaluation of user-defined policies
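
A toy sketch of the sticky-policy idea (not the chapter's actual framework): the policy object travels with the data, and an obligation engine consults it before any use. All names and fields below are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class StickyData:
    payload: str
    policy: dict  # travels with the data, e.g. {"allowed_purposes": [...]}

def obligation_engine(item: StickyData, purpose: str) -> bool:
    """Evaluate the user-defined policy attached to the data before granting use.
    Because the policy is coupled with the data, it cannot get lost in transition."""
    return purpose in item.policy.get("allowed_purposes", [])

record = StickyData("patient-record", {"allowed_purposes": ["treatment"]})
print(obligation_engine(record, "marketing"))   # False: purpose not permitted
print(obligation_engine(record, "treatment"))   # True
```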

Several new architectures have been proposed recently to replace the Internet Protocol Suite with a data-centric

In this paper we present a security design through the network stack for a data-centric pub/sub architecture that achieves availability, information integrity,

network security. 1 Introduction: Data-centric pub/sub as a communication abstraction 2, 3, 4 reverses the control between the sender and the receiver.

but our goal is to replace the whole Internet protocol suite with a clean-slate data-centric pub/sub network waist 14.

and minimal in complexity and trust assumptions between stakeholders. 2 Basic Concepts Data-or content-centric networking can be seen as the inversion of control between the sender

the receiver expresses its interest in some identified data that the network then returns when it becomes available taking advantage of multicast

3. We use the term information-centric for this communication pattern to emphasize that the data items can link to other named data

and that the data has structure. An immutable association can be created between a rendezvous identifier (Rid)

and a data value by a publisher and we call this association a publication. At some point in time, a data source may then publish the publication inside a set of scopes that determine the distribution policies such as access control

routing algorithm, reachability, and Qos for the publication and may support transport abstraction specific policies such as replication and persistence for data-centric communication.

scope must be trusted by the communicating nodes to function as promised, and much of the security of our architecture is based on this assumption, as we explain in 5. Scopes are identified with a special type

operates solely using the data-centric pub/sub model; it can be used to set up communication using any kind of transport abstraction on the data-plane fast path,

that is used for the payload communication. The data-centric paradigm is a natural match with the communication of topology information that needs to be distributed typically to multiple parties

and the ubiquitous caching considerably reduces the initial latency for the payload communication as popular operations can be completed locally based on cached data.

Below the control plane, the network is composed of domains that encapsulate resources such as links,

the roles for the endpoints are a source and a destination or for data-centric transport:

a data source and a subscriber. The topic is identified with an Rid and is used to match the end nodes in correct interaction instances by the scope.

For example, for data-centric communication, the topic identifies the requested publication. A graphlet defines the network resources used for the payload communication

and L is a variable-length label of binary data. Only a fixed-length hash of the identifier is used in-network

where the data source uses the label as an argument to produce the publication on the fly.
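
A small sketch of this identifier scheme, with invented field choices rather than the paper's wire format: the Rid is the (P, L) pair, only its fixed-length hash circulates in-network, and a publication immutably associates that hash with a data value inside one or more scopes.

```python
import hashlib

def rid(P: bytes, L: bytes) -> bytes:
    """Fixed-length (32-byte) in-network form of the (P, L) identifier pair."""
    return hashlib.sha256(P + b"/" + L).digest()

publications = {}   # rid -> data value (the immutable association)
scopes = {}         # scope id -> set of rids whose distribution policies it governs

def publish(P: bytes, L: bytes, data: bytes, scope_id: bytes):
    r = rid(P, L)
    publications.setdefault(r, data)            # the association is immutable
    scopes.setdefault(scope_id, set()).add(r)   # the scope sets distribution policy

publish(b"publisher-key", b"my-movie-edit", b"...meta-data...", b"my-home-scope")
```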

Fig. 1 depicts a simplified example of My movie edit meta-data publication that has Rid (PN

The contents of this publication point to another movie frame data publication indirectly using a so called application level identifier (Aid) of the referred publication.

where publications are made available are orthogonal to the structure of the data. In Fig. 1, the publication on the left is published inside My home scope that is fully controlled by the local user.

In this example, it is easy to see that the logical structure of the data, e.g. the link between the two publications, is orthogonal to the scoping of the data that determines the communication aspects for each publication. 2.2 Interdomain Structure: Each node has access to a set of network resources. In the current Internet,

most policy compliant paths have the so-called valley-free property 16, which means that, on the AS business relationship level,

which implements a data-centric pub/sub primitive as a recursive, hierarchical structure, which first joins node local rendezvous implementations into rendezvous networks (RN)

and produces an end-to-end path between the service container (e.g. a data source) and the client (e.g. a subscriber) and returns the information to the client, which can then use this information to join a graphlet (e.g. a delivery tree) that can then be used for the fast-path payload communication.

in order to keep the publication data or pending subscription alive.

but this type of applications should be supported by adding a data-centric transport to the data plane as we did in 2. Topology manager (TM) is another function that is implemented by each independently managed domain.

Each scope also publishes a meta-data publication inside itself named (DKX, scope meta-data) describing which transports the scope supports, among others.

It should be noted that the upgraph combination based routing does not require any type of central entity to manage addresses

The upgraph data itself is published by the provider domain of the node. Because many nodes share the same upgraph,

the data-centric rendezvous system caches them orthogonally close to the scope homes that are nodes implementing the scope in question.

If the transport in question is multicast data dissemination then a separate resource allocation protocol could be coupled with the protocol as we did in 2. The client side implementation of the transport would then take the resource description from rendezvous as an input

A data-oriented network architecture DONA 4 replaces a traditional DNS-based namespace with self-certifying flat labels,

which owns the data and L is a label. DONA utilizes an IP header extension mechanism to add a DONA header to the IP header,

Consumers of data send interest packets to the network, and nodes possessing the data reply with the corresponding data packet.

Since packets are named independently, a separate interest packet must be sent for each required data packet.

In CCN, data packets are signed by the original publisher, allowing independent verification; however, interest packets are not always protected by signatures.
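
The interest/data exchange and publisher signing can be illustrated with the following toy code; an HMAC stands in for the publisher's public-key signature, and the packet layout is invented, not CCN's actual format.

```python
import hashlib
import hmac

KEY = b"publisher-secret"   # stand-in for the publisher's signing key

def make_data_packet(name: str, content: bytes) -> dict:
    """The publisher signs each named data packet at creation time."""
    sig = hmac.new(KEY, name.encode() + content, hashlib.sha256).hexdigest()
    return {"name": name, "content": content, "signature": sig}

store = {p["name"]: p for p in [make_data_packet("/movie/frame/1", b"...")]}

def send_interest(name: str) -> dict:
    """A node possessing the data replies with the corresponding data packet."""
    return store[name]

packet = send_interest("/movie/frame/1")   # one interest per required data packet
expected = hmac.new(KEY, packet["name"].encode() + packet["content"],
                    hashlib.sha256).hexdigest()
assert hmac.compare_digest(packet["signature"], expected)  # independent verification
```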

Security issues of the content-based pub/sub system have been explored in 7. The work proposes secure event types

and Future Work: In this paper we introduced a data-centric inter-domain pub/sub architecture addressing availability and data integrity.

We used the concept of scope to separate the logical structure of linked data from the orthogonal distribution strategies used to determine how the data is communicated in the network.

An Inter-Domain Data-Oriented Routing Architecture. In: ReArch'09, Rome, Italy (2009). 3. Jacobson, V., Smetters, D.K., Thornton, J.D., Plass, M., Briggs, N., Braynard, R.L.:

A Data-Oriented (and Beyond) Network Architecture. In: ACM SIGCOMM 2007, Kyoto, Japan (2007). K. Visala, D. Lagutin,

and cloning of data on RFID tags (identity theft). Applications that involve such deployments typically cross organization boundaries.

In that case, it is possible to develop test data generation that specifically targets the integration of services

We will study approaches for run-time monitoring of data flow, as well as technologies for privacy-preserving usage control.

Clients want to be sure that their data outsourced to other domains, which the clients cannot control,

and authentication/integrity of the communicated data. More elaborate goals are structural properties (which can sometimes be reduced to confidentiality and authentication goals) such as authorization (with respect to a policy), separation or binding of duty,

However, in inter-organizational business processes it is crucial to protect sensitive data of each organization;

The idea is to organize data by means of sets and to abstract data by set membership.

These must contain all relevant information required to determine the access to private data and to the meta-policies that control them.

making clinical and nonclinical data available anywhere and anytime in a health care organization, while lowering infrastructure costs.

granting unauthorized access to private data and services (email, docs, etc.). The vulnerability was detected by the SATMC backend of the AVANTSSAR Platform

FIA projects like RESERVOIR or VISION are conducting research on core technological foundations of the cloud-of-clouds such as federation technologies, interoperability standards or placement policies for virtual images or data

in the sense that it will ensure that data mobility is limited so as to comply with a wide range of different national legislation, including privacy legislation such as the EU Data Protection Directive 95/46/EC.

, for the Office Live Workspace (in analogy to what Google does with Gmail), unencrypted data transfer between the cloud and the user

As a consequence, data leakage and service disruptions gain importance and may propagate through such shared resources.

An important requirement is that data cannot leak between customers and that malfunction or misbehavior by one customer must not lead to violations of the service-level agreement of other customers.

and data wiping before reuse. Sharing of resources and multi-tenant isolation can be implemented on different levels of abstraction (see Figure 2). Coarse-grained mechanisms such as shared datacenters, hosts,

In particular the last example of Software-as-a-service requires that each data instance is assigned to a customer

Shared resources must moderate potential data flow and ensure that no unauthorized data flow occurs between customers.

To limit such flows, mechanisms such as access control can be used to ensure that machines and applications of one customer cannot access data

or resources of other customers. Actual systems then need to implement this principle for all shared resources [4] (see, e.g.,
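As an illustration, the following minimal Python sketch shows the principle of tenant-scoped access control over a shared resource; the class and method names are hypothetical, not taken from any of the cited systems.

```python
# Every stored value is labeled with its owning customer, and every request
# is checked against that label before any data can flow.
class IsolationError(Exception):
    pass

class SharedStore:
    """A shared resource that refuses cross-tenant access."""
    def __init__(self):
        self._data = {}                  # key -> (tenant_id, value)

    def put(self, tenant_id: str, key: str, value: bytes) -> None:
        self._data[key] = (tenant_id, value)

    def get(self, tenant_id: str, key: str) -> bytes:
        owner, value = self._data[key]
        if owner != tenant_id:           # deny any cross-customer flow
            raise IsolationError(f"{tenant_id} may not read {owner}'s data")
        return value

store = SharedStore()
store.put("customer-a", "invoice-1", b"...")
store.get("customer-a", "invoice-1")     # allowed
# store.get("customer-b", "invoice-1")   # raises IsolationError
```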

Examples may include a network administrator impacting database operations or administrators stealing and disclosing data.

Customer employees can access their respective data and systems (or parts thereof) but cannot access infrastructure

or data owned by different customers. This so-called privileged identity management system is starting to be implemented today

, trusted computing [21], or computations on outsourced data [20]. 3.3 Failures of the Cloud Management Systems: Due to the highly automated nature of the cloud management systems

For building such resilient systems, important tools are data replication, atomic updates of replicated management data,

and integrity checking of all data received (see, e.g., [24]). In the longer run, usage of multiple clouds may further improve resiliency (e.g.,

, as pursued by the TCLOUDS project, www.tclouds-project.eu, or proposed in [11]). 3.4 Lack of Transparency

Data corruption may not be detected for a long time. Data leakage by skilled insiders is unlikely to be detected. Furthermore, the operational state and potential problems are usually not communicated to the customer until after an outage has occurred.

An important requirement in a cloud setting is to move away from today's black-box approach to cloud computing, where customers cannot obtain insight into or evidence of correct cloud operations.

and no data is corrupted or leaked. In practice, these problems are largely unsolved. Cryptographers have designed schemes such as homomorphic encryption [9] that allow verifiable computation on encrypted data.

However, the proposed schemes are too inefficient and do not meet the complete range of privacy requirements [23].
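As a small, concrete taste of computing on encrypted data, the sketch below uses the far simpler additively homomorphic Paillier scheme (via the third-party 'phe' package) rather than the fully homomorphic scheme cited above; it only illustrates the principle that a provider can compute on ciphertexts it cannot read.

```python
# Additively homomorphic encryption with Paillier: the cloud adds values
# without ever seeing the plaintexts.
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair()

# Customer encrypts its data before outsourcing.
enc_a = public_key.encrypt(42)
enc_b = public_key.encrypt(58)

# The cloud provider computes on ciphertexts only.
enc_sum = enc_a + enc_b          # homomorphic addition
enc_scaled = enc_sum * 3         # multiplication by a plaintext constant

# Only the customer, holding the private key, can decrypt the result.
assert private_key.decrypt(enc_sum) == 100
assert private_key.decrypt(enc_scaled) == 300
```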

In simple terms, data privacy aims at protecting personally identifiable data (PID). In Europe, Article 8 of the European Convention on Human Rights (ECHR) provides a right to respect for one's private and family life, home and correspondence.

, limited collection of data, the authorization to collect data either by law or by informed consent of the individual whose data are processed (the data subject),

the right to correction and deletion as well as the necessity of reasonable security safeguards for the collected data.

Since cloud computing often means outsourcing data processing, the user as well as the data subject might face risks of data loss,

corruption or wiretapping due to the transfer to an external cloud provider. Related to these de facto obstacles regarding the legal requirements, there are three particular challenges that need to be addressed by all cloud solutions:

and liabilities concerning the data. This means that the user must be able to control

and comprehend what happens to the data in the cloud and which security measures are deployed. Therefore, the utmost transparency regarding the processes within the cloud is required to enable the user to carry out his legal obligations.

, installing informative event and access logs which enable the user to retrace in detail what happens to his data,

So to avoid unwanted disclosure of data, sufficient protection mechanisms need to be established. These may also extend to the level of technical solutions, such as encryption,

data minimization or enforcement of processing according to predefined policies. 4 Open Research Challenges: Today's technology for outsourcing

Furthermore, data generated by systems need to be assigned to one or more customers to enable access to critical data such as logs and monitoring data.

A particularly hard challenge will be to reduce the amount of covert and side channels. Today

Today, regulations often mandate that data needs to be processed in a particular country. This does not align well with today's cloud architectures

and data integrity through authentication. However, we expect that they will then move on to the harder problems such as providing verifiable transparency,

Controlling data in the cloud: outsourcing computation without outsourcing control. In: ACM Workshop on Cloud Computing Security (CCSW'09), pp. 85-90.

Token-Based Cloud Computing: Secure Outsourcing of Data and Arbitrary Computations with Lower Latency. In: Acquisti, A., Smith, S., Sadeghi, A.-R. (eds.)

Cloud Computing und Datenschutz (2009), http://www.datenschutzzentrum.de/cloud-computing/ Data Usage Control in the Future Internet Cloud. Michele Bezzi and Slim Trabelsi, SAP Labs

in order to try to control the terms of usage of these collected data, but generally without providing a truly efficient solution.

the data owners and the data collectors to verify the compliance of the data usage conditions with the regulations.

Recent studies address these issues by proposing a policy-based framework to express data handling conditions

and enforce the restrictions and obligations related to the data usage. In this paper, we first review recent research findings in this area, outlining the current challenges.

and visualize the use of their data stored in a remote server or in the cloud.

In the cloud, data may flow around the world, ignoring borders, across multiple services, all in total transparency for the user.

In fact, when data cross borders, they have to comply with privacy laws in every jurisdiction,

Similarly, honest businesses may lose confidence in handling data when usage conditions are uncertain. To face these challenges,

expressing that the data should be used for specific purposes only, or that the retention period should not exceed six months,

when data are transferred to a third party). The sticky policy is propagated with the information throughout its lifetime,

and data processors along the supply chain of the cloud have to handle the data in accordance with their attached policies.

Providing the data owner with a user-friendly way to express their preferences, as well as to verify the privacy policy the data are collected with.

Develop mechanisms to enforce these sticky policies in ways that can be verified and audited. In this paper

and data handling policies; we then describe the corresponding policy engine, enabling the deployment, interpretation and enforcement of PPL policies.

the current framework lacks mechanisms to provide the data owner with the guarantee that policy

users are asked to provide various kinds of personal information, starting from basic contact information (addresses, telephone, email) to more complex data such as preferences, friends'list, photos.

(Fig. 1. PPL high-level architecture.) Service providers describe how the users' data are handled using a privacy policy,

which is presented, more or less explicitly, to users during the data collection phase. Privacy policies typically consist of a long text written in legal terms that is rarely fully understood,

which their data are handled. Therefore, there is a need to support the user in this process, providing an as-automatic-as-possible means of handling privacy policies.

which makes it possible to describe, in a machine-readable XML format, the conditions of access to and usage of the data.

A PPL policy can be used by a service provider to describe his privacy policies (how the data collected will be treated and with

or by a user to specify his preferences about the use of his data (who can use it

which is bound to the data (sticky policy) and travels with them. In fact, this sticky policy will be sent to the server

and follow the data throughout their lifecycle to specify the usage conditions. The PPL sticky policy defines the following conditions:

Data handling: the data handling part of the language defines two conditions: Purpose, expressing the purpose of usage of the data,

which can be, for example, marketing, research, payment or delivery; and Downstream usage, a multilevel nested policy describing the data handling conditions that are applicable to any third party collecting the data from the server.

This nested policy is applicable when a server storing personal data decides to share the data with a third party.

Obligations: obligations in sticky policies specify the actions that should be carried out after collecting or storing a datum.

For example, notification of the user whenever his data are shared with a third party, or deletion of the credit card number after the payment transaction is finished.
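The following Python sketch models, under loose assumptions, a sticky policy bound to a datum, with purposes, a retention period, obligations and a nested downstream policy; real PPL policies are XML documents, and all field names here are illustrative.

```python
# A minimal model of a sticky policy travelling with a datum.
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from typing import Optional

@dataclass
class StickyPolicy:
    purposes: set                                    # e.g. {"payment", "delivery"}
    retention: timedelta                             # e.g. at most six months
    obligations: list = field(default_factory=list)  # e.g. ["notify_on_sharing"]
    downstream: Optional["StickyPolicy"] = None      # nested policy for third parties

@dataclass
class ProtectedDatum:
    value: bytes
    policy: StickyPolicy                             # the policy is bound to the data
    collected_at: datetime = field(default_factory=datetime.utcnow)

    def expired(self) -> bool:
        return datetime.utcnow() > self.collected_at + self.policy.retention

card = ProtectedDatum(
    value=b"4111-1111-1111-1111",
    policy=StickyPolicy(
        purposes={"payment"},
        retention=timedelta(days=182),               # roughly six months
        obligations=["delete_after_payment", "notify_on_sharing"],
    ),
)
```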

Introducing PPL policies requires the design of a new framework for the processing of such privacy rules.

In particular, it is important to stress that during the lifecycle of personal data, the same actor may play the role of both data collector and data provider.

For this reason, PrimeLife proposed the PPL engine based on a symmetric architecture, where any data collector can become a data provider

if a third party requests some data (see Figure 1). According to the role played by an entity (data provider

or data collector) the engine behaves differently by invoking the appropriate modules. In more detail

on the data provider side (user), the modules invoked are: The access control engine: it checks whether there is any access restriction on the data before sending it to any server.

For example, we can define black or white lists for websites with whom we do not want to exchange our personal information.

Policy matching engine: after verifying that a data collector is on the white list, the data provider retrieves the server's privacy policy

in order to compare it to its preferences and verify whether they are compatible in terms of data handling and obligation conditions.
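A minimal sketch of this matching step, reusing the StickyPolicy class from the sketch above; the compatibility rules shown (purpose subset, retention bound, required obligations) are simplified assumptions, not the actual PPL matching algorithm.

```python
# The data provider checks whether the server's declared policy is
# compatible with the user's preferences before releasing any data.
from datetime import timedelta

def matches(preferences: StickyPolicy, server_policy: StickyPolicy) -> bool:
    # The server may only use the data for purposes the user allows...
    if not server_policy.purposes <= preferences.purposes:
        return False
    # ...must not keep it longer than the user permits...
    if server_policy.retention > preferences.retention:
        return False
    # ...and must commit to every obligation the user requires.
    return all(o in server_policy.obligations for o in preferences.obligations)

prefs = StickyPolicy({"payment", "delivery"}, timedelta(days=182),
                     obligations=["notify_on_sharing"])
server = StickyPolicy({"payment"}, timedelta(days=30),
                      obligations=["notify_on_sharing", "delete_after_payment"])
assert matches(prefs, server)   # compatible: the agreed result becomes the sticky policy
```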

The result of this matching may be displayed through a graphical interface, where a user can clearly understand how the information is handled

if he accepts to continue the transaction with the data collector. The result of the matching conditions,

as agreed by the user, is transformed into a sticky policy. On the data collector side

after receiving the personal information with its sticky policy, the invoked modules are: Event handler:

it monitors all the events related to the usage of the collected data. These event notifications are handled by the obligation engine

For example, if a sticky policy provides for the logging of any information related to the usage of a datum, the event handler reports

whenever an access (read, write, modification, deletion, etc.)

to the data is detected, in order to keep track of this access. Obligation engine: it triggers all the obligations required by the sticky policy.
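The interplay of the two modules can be sketched as follows; the event names and obligation identifiers are invented for illustration.

```python
# Data-collector side: the event handler observes every access to a
# collected datum and forwards the event to the obligation engine,
# which reacts as the sticky policy requires.
import logging

logging.basicConfig(level=logging.INFO)

class ObligationEngine:
    def __init__(self, obligations):
        self.obligations = obligations

    def on_event(self, action: str, key: str) -> None:
        if action == "share" and "notify_on_sharing" in self.obligations:
            logging.info("obligation: notify data owner that %s was shared", key)
        if action == "payment_done" and "delete_after_payment" in self.obligations:
            logging.info("obligation: delete %s", key)

class EventHandler:
    """Monitors usage events and forwards them to the obligation engine."""
    def __init__(self, engine: ObligationEngine):
        self.engine = engine

    def record(self, action: str, key: str) -> None:
        logging.info("event: %s on %s", action, key)   # access log for auditing
        self.engine.on_event(action, key)

handler = EventHandler(ObligationEngine(["notify_on_sharing",
                                         "delete_after_payment"]))
handler.record("read", "credit-card")
handler.record("share", "credit-card")   # triggers the notification obligation
```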

If a third party requests some data from the server, the latter becomes a data provider and acts as a user-side engine invoking access control and matching modules,

and the third party plays the role of data collector, invoking the obligation engine and the event handler. 3 Open Challenges: Although the PPL framework represents an important advancement in fulfilling many privacy requirements of the cloud scenario,

there are still some issues which are not addressed by the PPL framework. Firstly, in the current PPL framework

the data owner has no guarantee of actual enforcement of the data handling policies and obligations.

Indeed, the data collector may implement the PPL framework, thus having the technical capacity of processing the data according to the attached policies,

but it could always tamper with this control system, or simply access the data directly without using the PPL engine.

In practice, the data owner should trust the data collector to behave honestly. A second problem relates to the scalability of the sticky policy approach.

Clearly, the policy processing adds a significant computational overhead. Its applicability to realistic scenarios, where large amounts of data have to be transmitted

and processed, has to be investigated. A last issue relates to the privacy business model. The main question is:

What should motivate the data collectors/processors to implement such technology? Actually, in many cases, their business model relies on the least restricted possible use of private data.

On the user side, a related question is: are the data owners ready to pay for privacy [9]?

Both questions are difficult to address, especially when dealing with such a loosely defined concept as privacy.

In fact, in a typical Web 2.0 application the user is disclosing his own data,

and they tend to disclose their data quite easily. In the cloud world organizations store the data they have collected (under specific restrictions) with the cloud provider.

These data have a clear business value, and typically companies can evaluate the amount of money they are risking

if such data are lost or made public. For these reasons, it is likely that they are ready to pay for a stronger privacy protection.

All these issues need further research work to be addressed. In the next section, we present our initial thoughts on how we may extend the PrimeLife framework to address the first problem mentioned above, i.e.,

there is no guarantee of enforcement of the data handling policies and obligations. In other words, we suppose that the server correctly enforces the sticky policies,

as well as giving the user control over the released data. The main idea is to introduce a tamper-proof [6] obligation engine and event handler, certified by a trusted third party,

which mediate the communication and the handling of private data in the cloud platform. The schedule of the events,

and event handler with a tamper-proof event handler and a tamper-proof obligation engine certified by a trusted third party (e.g.,

If the data owner has the guarantee from a trusted authority (governmental office, EU Commission, etc.)

he will tend to transfer his data to the certified host. In order to certify the compliance of an application,

if the stored data are handled correctly. The difficulty comes with access to the database by the service provider.

(Fig. 3. A sketch of the data track administration console.) The particularity of this API is that all the methods used to access the data can be detected by the event handler.

For example, if the service adds a new element (data and sticky policy) this action should be detected,

The monitoring is accessible to any data owner who, once authenticated, can list all the data (or sets of data) with their related events and pending or enforced obligations.

The data owner can at any time control how his data are handled, under which conditions the information is accessed,

and compare them with the corresponding stored sticky policy. Fig. 3 shows a very simple example of how the remote administrative console could be structured,

and more control to the data hosted within the cloud. It also allows the user to detect any improper usage of his data

and, in this case, notify the host or the trusted authority. The advantages of the proposed solution are twofold.

First, from the data owner perspective, there is a guarantee that actual enforcement has taken place, and that he can monitor the status of his data and corresponding policies.

Second, from the auditors' point of view, it limits the perimeter of their analysis, since the confidence zone provided by the tamper-proof elements

possibly owned by different entities in different locations, the conditions of the data usage, agreed upon collection, may be lost in the lifecycle of the personal data.

From the data consumer point of view, businesses and organizations seek to ensure compliance with the plethora of data protection regulations

it notably requires a high level of trust in the data collector/processor. We presented some initial thoughts about how this problem can be mitigated through the usage of a tamper-proof implementation of the architecture.

Enterprise privacy authorization language (EPAL 1.1). IBM Research Report (2003). 3. Bonneau, J.

Privacy-enabled management of customer data. In: Dingledine, R.,Syverson, P. F. eds. PET 2002.

W3C Workshop on Privacy and Data Usage Control (October 2010), http://www.w3.org/2010/policy-ws/ 11.

or even get monitoring status data properly after the VCT is provisioned and while the testing is in progress.

and that implement a general data transport service are designated as routing slices [13]. Routing slices as an architectural concept are known as Transport Virtualization (TV) [23,24].

, they permit data transport resources to be accessed without knowledge of their physical or network location.

and data exchange among providers (e.g. [8]). Intrusion detection systems can increase situation awareness (and with it overall security) by sharing information.

For example, transmission delay constraints of real-time multimedia streaming are much stricter than that of bulk data transfer.

which requires two routers at the user premises: one for sending data to the uplink

and service layers cooperation for more efficient end-to-end self management (Fig. 1). The term cooperation is used to describe the collection of the service-level monitoring data and the usage of service-level adaptation actions for efficient network adaptation.

The Service-level NECM undertakes to collect service-level data. The Service-level NECM could be placed at the service provider's side

The decision making engine of the NDCM filters the collected monitoring data from the network and the service level

Trust Management and Security, privacy and data protection mechanisms for distributed data. An addressing scheme where identity and location are not embedded in the same address.

In many cases, the network operator is obliged to search through vast amounts of monitoring data to find any irregularities in his network's behaviour

a layered architecture and an agreed upon set of protocols for the sharing and transmission of data over practically any medium.

Indeed, IT resources are processing data that should be transferred from the user's premises or from the data repository to the computing resources.

and the data deluge will fall in it, the communication model offered by the Internet may break the hope for fully-transparent remote access and outsourcing.

This concept has little to do with the way data is processed or transmitted internally, while enabling the creation of containers with associated nonfunctional properties (isolation, performance, protection, etc.).

It also connects heterogeneous data resources in an isolated virtual infrastructure. Furthermore, it supports scaling (up and down) of services and load.

For example, sensor networks will be composed of ad-hoc collections of devices with low-level interfaces for accessing their status and data online.

Mobile platforms will need access to external data and functionality in order to meet consumer expectations for rich, interactive, seamless experiences.

Linked Data is the Semantic Web in its simplest form and is based on four principles: Use URIs (Uniform Resource Identifiers) as names for things.

Given the growing take-up of Linked Data for sharing information on the Web at large scale, there has begun a discussion on the relationship between this technology and the Future Internet.

In particular, the Future Internet Assemblies in Ghent and Budapest both contained sessions on Linked Data.

Fostering a Relationship Between Linked Data and the Internet of Services discusses the relationship between Linked Data and the Internet of Services.

Specifically, the chapter outlines an approach which includes a lightweight ontology and a set of supporting tools.

and data service support to other enterprise services and lines of business. This brings varied expectations of availability, mean-time-to-recover, Quality of Service, transaction throughput capacity, etc.

Taking a holistic cost view, it provides fine-grained SLA-based data to influence future investment decisions based on capital

From the evaluation perspective, the application scenario is particularly critical due to sensitive data on the health status of the citizens

and compared with the trends in the real data extracted from the past behaviours of the systems at the service providers.

It is responsible for supporting the data link communication to guarantee the correct delivery of data transfer between links.

The main difference between these two layers is that the Net-Ontology layer is responsible for supporting service needs beyond simple data transfers.

These protocols generally can only send information in the data field and do not support semantics in their stacks.

and the needs of the data flow that will start. With the understanding of the application needs, the Net-Ontology layer sends to the DL-Ontology layer another OWL object with the requirements of the data communication, such as the way of addressing, for example.

and the data is sent through the layers also using raw sockets. At the current stage of development, the FINLAN library is implemented at the application level.

Fostering a Relationship between Linked Data and the Internet of Services. John Domingue, Carlos Pedrinaci, Maria Maleshkova, Barry Norton,

We outline a relationship between Linked Data and the Internet of Services which we have been exploring recently.

Linked Data is a lightweight mechanism for sharing data at web-scale which we believe can facilitate the management and use of service-based components within global networks.

Keywords: Linked Data, Internet of Services, Linked Services. 1 Introduction: The Future Internet is a fairly recent EU initiative

(Frederic Gittler, FIA Stockholm) The Web of Data is a relatively recent effort derived from research on the Semantic Web [1],

and interlinking data previously enclosed within silos. Like the Semantic Web, the Web of Data aims to extend the current human-readable Web with data formally represented

so that software agents are able to process and reason with the information in an automatic and flexible way.

sharing and linking of data on the Web. From a Future Internet perspective a combination of service-orientation and Linked Data provides possibilities for supporting the integration, interrelationship and interworking of Future Internet components in a partially automated fashion through the extensive use of machine

-processable descriptions. From an Internet of Services perspective, Linked Data with its relatively simple formal representations and inbuilt support for easy access and connectivity provides a set of mechanisms supporting interoperability between services.

In fact the integration between services and Linked Data is increasingly gaining interest within industry and academia.

Examples include, for instance, research on linking data from RESTful services by Alarcon et al. [3], work on exposing datasets behind Web APIs as Linked Data by Speiser et al. [4],

and Web APIs providing results from the Web of Data, like Zemanta. We see that there are possibilities for Linked Data to provide a common 'glue' as service descriptions are shared amongst the different roles involved in the provision,

aggregation, hosting and brokering of services. In some sense, service descriptions as, and interlinked with, Linked Data are complementary to SAP's Unified Service Description Language [5], within their proposed Internet of Services framework,

as it provides appropriate means for exposing services and their relationships with providers, products and customers in a rich, yet simple manner

which is tailored to its use at Web scale. In this paper we discuss the relationship between Linked Data and services based on our experiences in a number of projects.

Using what we have learnt thus far, at the end of the paper we propose a generalization of Linked Data

and service principles for the Future Internet. 2 Linked Data The Web of Data is based upon four simple principles,

known as the Linked Data principles [6], which are: 1. Use URIs (Uniform Resource Identifiers) as names for things. 2. Use HTTP URIs so that people can look up those names. 3. When someone looks up a URI, provide useful information, using the standards (RDF*, SPARQL). 4. Include links to other URIs,

so that they can discover more things. (Footnotes: 1 http://developer.zemanta.com/ 2 http://www.internet-of-services.com/index.php?

id=260&l=0) RDF (Resource Description Framework) is a simple data model for semantically describing resources on the Web.

SPARQL is a query language for RDF data which supports querying diverse data sources, with the results returned in the form of a variable-binding table,

or an RDF graph. Since the Linked Data principles were outlined in 2006, there has been a large uptake, most notably by the Linking Open Data project supported by the W3C Semantic Web Education and Outreach Group.
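A small sketch of the four principles and of SPARQL in practice, using Python's rdflib package; the example.org URIs are hypothetical, while the FOAF vocabulary and the DBpedia resource are real.

```python
# Minting HTTP URIs, describing them in RDF, linking to another data set,
# and querying the result with SPARQL.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import FOAF, RDF

g = Graph()
EX = Namespace("http://example.org/")      # hypothetical HTTP URIs (principles 1-2)

alice = EX["alice"]
g.add((alice, RDF.type, FOAF.Person))      # useful RDF when dereferenced (principle 3)
g.add((alice, FOAF.name, Literal("Alice")))
g.add((alice, FOAF.based_near,             # link into another data set (principle 4)
       URIRef("http://dbpedia.org/resource/Innsbruck")))

# SPARQL query: results come back as a variable-binding table.
results = g.query("""
    PREFIX foaf: <http://xmlns.com/foaf/0.1/>
    SELECT ?name ?place WHERE {
        ?p foaf:name ?name ;
           foaf:based_near ?place .
    }""")
for name, place in results:
    print(name, place)
```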

As of September 2010, the coverage of the domains in the Linked Open Data Cloud is diverse (Figure 1). The cloud now has nearly 25 billion RDF statements

and over 400 million links between data sets that cover media, geography, academia, life sciences and government data sets.

Fig. 1. Linking Open Data cloud diagram as of September 2010, by Richard Cyganiak and Anja Jentzsch.

From a government perspective, significant impetus to this followed Gordon Brown's announcement, when he was UK Prime Minister, on making Government data freely available to citizens through a specific Web of Data portal, facilitating the creation of a diverse set of citizen-friendly applications. (Footnotes: 4 http://esw.w3.org/SweoIG/TaskForces/CommunityProjects/LinkingOpenData 5 http://lod-cloud.net/ 6 http://www.silicon.com/management/public-sector/2010/03/22/gordon-brown-spends-30m-to-plug-britain-into-semantic-web-39745620/ 7 http://data.gov.uk/)

On the corporate side, the BBC has been making use of RDF descriptions for some time. BBC Backstage allows developers to make use of BBC programme data available as RDF.

The BBC also made use of scalable RDF repositories for the back-end of the BBC World Cup website to facilitate agile modeling [10].

(Footnotes: http://developers.facebook.com/docs/opengraph 13 http://news.cnet.com/8301-13577_3-20003053-36.html)

4 Linked Services: The advent of the Web of Data, together with the rise of Web 2.0 technologies and social principles, constitutes, in our opinion,

1. Publishing service annotations within the Web of Data, and 2. Creating services for the Web of Data, i.e.,

services that process Linked Data and/or generate Linked Data. We have since devoted significant effort to refining the vision [10]

and implementing diverse aspects of it, such as the annotation of services and the publication of service annotations as Linked Data [11,12],

as well as on wrapping, and openly exposing, existing RESTful services as native Linked Data producers, dubbed Linked Open Services [13,14].

It is worth noting in this respect that these approaches and techniques are different means contributing to the same vision

and the Web of Data through their integration based on the two notions highlighted above. As can be seen in Figure 2 there are three main layers that we consider.

in which we provide, in essence, a Linked Data-oriented view over existing functionality exposed as services.

(Fig. 2. Services and the Web of Data.) either by interpreting their semantic annotations (see Section 4.1)

data from legacy systems, state-of-the-art Web 2.0 sites, or sensors, which do not directly conform to Linked Data principles, can easily be made available as Linked Data.

In the second layer are Linked Service descriptions. These are annotations describing various aspects of the service, which may include:

Following Linked Data principles these are given HTTP URIS, are described in terms of lightweight RDFS vocabularies, and are interlinked with existing Web vocabularies.

Note that we have already made our descriptions available in the Linked Data Cloud through iServe; these are described in more detail in Section 4.1. The final layer in Figure 2 concerns services which are able to consume RDF data

or continue with the activity it is carrying out using these newly obtained RDF triples combined with additional sources of data.

RDF-aware and their functionality may range from RDF-specific manipulation functionality up to highly complex processing beyond data fusion that might even have real-life side-effects.

The use of services as the core abstraction for constructing Linked Data applications is therefore more generally applicable than that of current data integration oriented mashup solutions.

Data-based descriptions of Linked Services allowing them to be published on the Web of Data and using these annotations for better supporting the discovery, composition and invocation of Linked Services.

(Fig. 3. Conceptual model for services used by iServe.) As can be seen in Figure 3,

During the annotation both tools make use of the Web of Data as background knowledge so as to identify

since they are adapted to existing sources of Linked Data. The annotation tools are both connected to iServe for one-click publication. iServe,

the first system to publish Web service descriptions on the Web of Data, as well as the first to provide advanced discovery over Web APIs comparable to that available for WSDL-based services.

service descriptions are exposed following the Linked Data principles and a range of advanced service analysis and discovery techniques are provided on top.

It is worth noting that, as service publication is based on Linked Data principles, application developers can easily discover services able to process

or provide certain types of data, and other Web systems can seamlessly provide additional data about service descriptions in an incremental and distributed manner through the use of Linked Data principles.

One such example is LUF (Linked User Feedback) [16], which links service descriptions with users' ratings, tags and comments about services in a separate server.

kmi.open.ac.uk/soa4all-studio/consumption-platform/rs4all/ In summary,

Which Produce and Consume Linked Data: In this section we consider the relationship between service interactions and Linked Data;

that is, how Linked Data can facilitate the interaction with a service and how the result can contribute to Linked Data.

In other words, this section is not about annotating service descriptions by means of ontologies and Linked Data,

but about how services should be implemented on top of Linked Data in order to become first class citizens of the quickly growing Linking Open Data Cloud.

Note that we take a purist view of the type of services which we consider.

These services should take RDF as input and the results should be available as RDF;

that is, services consume Linked Data and services produce Linked Data. Although this could be considered restrictive, one main benefit is that everything is instantaneously available in machine-readable form.

Within existing work on Semantic Web Services, considerable effort is often expended in lifting from a syntactic description to a semantic representation and in lowering from a semantic entity to a syntactic form.
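A minimal sketch of such a 'purist' service in Python with rdflib: it consumes an RDF graph and produces an RDF graph, so no lifting or lowering step is needed; the example.org vocabulary and the request/response shape are assumptions made for illustration.

```python
# A toy Linked Open Service: RDF in, RDF out.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import XSD

EX = Namespace("http://example.org/vocab/")

def weather_service(request: Graph) -> Graph:
    """Consume Linked Data (a point of interest) and produce Linked Data (a reading)."""
    response = Graph()
    for point in request.objects(predicate=EX.requestsWeatherFor):
        reading = URIRef(str(point) + "/weather")
        response.add((point, EX.hasWeather, reading))        # explicit input/output link
        response.add((reading, EX.temperature,
                      Literal(21.5, datatype=XSD.decimal)))  # dummy observation
    return response

req = Graph()
req.add((URIRef("http://example.org/client"), EX.requestsWeatherFor,
         URIRef("http://sws.geonames.org/6299669/")))        # Innsbruck Airport
print(weather_service(req).serialize(format="turtle"))
```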

and platform to interpret them, following Linked Data and (Footnotes: 18 http://soa4all.isoco.net/spices/about/ 19 http://technologies.kmi.open.ac.uk/soa4all-studio/)

As a general motivation for our case, we consider the status quo of the services offered over the GeoNames data set,

a notable and 'lifelong' member of the Linking Open Data Cloud, which are offered primarily using JSON

it conveys neither the result's internal semantics nor its interlinkage with existing data sets.

in Linked Data, on the other hand, GeoNames itself provides a predicate and values for country codes, and the WGS84 vocabulary is widely used for latitude and longitude information.

(and indeed also within the OurAirports and DBpedia Linked Data sets) [20], but the string value does not convey this interlinkage.

A solution more in keeping with the Linked Data principles, as seen in our version of these services,

reusing URIs from Linked Data sources for representing features in input and output messages; making explicit the semantic relationship between input and output.

In order to make the statement of this relationship more useful as Linked Data, the approach of Linked Data Services (LIDS) [25] is to URL-encode the input.

For instance, the latitude and longitude are used as query parameters so that the point is represented in a URI, forming a new resource. (Footnote 20: The three identifiers for the Innsbruck Airport resource are http://sws.geonames.org/6299669/

and http://dbpedia.org/resource/Innsbruck_Airport, respectively. 21 http://www.linkedopenservices.org/services/geo/geonames/weather/)
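A sketch of the LIDS idea of URL-encoding the input: the endpoint below is the weather service URL cited in the footnote, but the parameter names are assumptions.

```python
# Encode the service input (a point) into the URI itself, so invoking the
# service is just dereferencing a URI, and the result can state facts
# about that URI with an explicit RDF predicate.
from urllib.parse import urlencode

def lids_uri(lat: float, lng: float) -> str:
    base = "http://www.linkedopenservices.org/services/geo/geonames/weather"
    return f"{base}?{urlencode({'lat': lat, 'lng': lng})}"

print(lids_uri(47.26, 11.34))
# http://www.linkedopenservices.org/services/geo/geonames/weather?lat=47.26&lng=11.34
```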

In aligning LOS and LIDS principles, pursued via a Linked Services Wiki and a Linked Data and Services mailing list,

it can first be POSTed as a new resource (Linked Data and Linked Data Services so far concentrate on resource retrieval

and therefore primarily the HTTP GET verb), in the standard REST style, and then a resource-oriented service can be offered with respect to it.

it aims at the greatest familiarity and ease for Linked Data developers. It is not without precedent in semantic service description [26].

5 Conclusions: In this paper we have outlined how Linked Data provides a mechanism for describing services in a machine-readable fashion

and enables service descriptions to be connected seamlessly to other Linked Data. We have also described a set of principles for how services should consume

and produce Linked Data in order to become first-class Linked Data citizens. From our work thus far, we see that integrating services with the Web of Data,

as depicted before, will give birth to a services ecosystem on top of Linked Data, whereby developers will be able to collaboratively

and incrementally construct complex systems exploiting the Web of Data by reusing the results of others.

The systematic development of complex applications over Linked Data in a sustainable, efficient, and robust manner shall only be achieved through reuse.

We believe that our approach is a particularly suitable abstraction to carry this out at Web scale.

We also believe that Linked Data principles and our extensions can be generalized to the Internet of Services. That is,

to scenarios where services sit within a generic Internet platform rather than on the Web.

which integrates service orientation with the principles underlying Linked Data. We are also hopeful that our approach provides a viable starting point for this.

and also note that proposals already exist for integrating Linked Data at the network level.

http://www.soa4all.eu/

Linking Data from RESTful Services. In: Workshop on Linked Data on the Web at WWW 2010 (2010). 4. Speiser, S., Harth, A.:

Taking the LIDS off Data Silos. In: 6th International Conference on Semantic Systems (I-SEMANTICS) (October 2010). 5. Cardoso, J., Barros, A., May, N., Kylau, U.:

Towards a Unified Service Description Language for the Internet of Services: Requirements and First Developments.

Linked Data - Design Issues (July 2006), http://www.w3.org/DesignIssues/LinkedData.html. 7. Fielding, R.T.:

Services and the Web of Data: An Unexploited Symbiosis. In: AAAI Spring Symposium Linked Data Meets Artificial Intelligence, March 2010. AAAI Press, Menlo Park (2010). 10.

Pedrinaci, C., Domingue, J.: Toward the Next Wave of Services: Linked Services for the Web of Data.

Journal of Universal Computer Science 16(13), 1694-1719 (2010). 11. Maleshkova, M., Pedrinaci, C., Domingue, J.:

Consuming Dynamic Linked Data. In: 1st International Workshop on Consuming Linked Data (November 2010). 15.

Benslimane, D., Dustdar, S., Sheth, A.: Services Mashups: The New Generation of Web Applications. IEEE Internet Computing 12(5), 13-15 (2008). 16.

Towards Linked Data Services. In: Int'l Semantic Web Conference (Posters and Demonstrations) (November 2010). 26.

and provision of very high-volume video data. Second, the development of advanced networking technologies in the access and core parts,

no duplicates), making it free for (other) data (e.g., more enhancement layers). The key innovations of this approach to service/content adaptation are distributed,

Management, Control and Data Planes (MPL, CPL, DPL), parallel and cooperating (not represented explicitly in the picture).

The upper data plane interfaces are at the CAN layer and transport the packets between the VCAN layer and the Home-Box layer in both directions.

The media data flows are intelligently classified at ingress MANEs and associated with the appropriate VCANs

Figure 2 shows the process of VCAN negotiation (action 1 in the figure) and installation in the networks (action 2). Then (action 3) MANE1 is instructed how to classify the data packets, based on information such as:

the SM@SP instructs the SP/CP servers how to mark the data packets. The information to be used in content-aware classification can be:

The data packets are analysed by the classifier, assigned and forwarded to one of the VCANs for further processing.

Special algorithms are needed to reduce the amount of processing of MANE in the data plane based on deep analysis of the first packets of a flow

1) data confidentiality, integrity and authenticity; and 2) intelligent and distributed access control policy-based enforcement.

The second objective will pursue a content-aware approach that will be enforced by MANE routers over data in motion.

Content-aware security technologies typically perform deep content inspection of data traversing a security element placed in a specific point in the network.

and offer the basis for spatial and temporal scalability. The ST decomposition results in two distinct types of data:

the resulting data are mapped into the scalable stream in the bit-stream organisation module,

which creates a layered representation of the compressed data. This layered representation provides the basis for low-complexity adaptation of the compressed bit-stream. 3 Scalable Multiple Description Coding (SMDC): SMDC is a source coding technique,

General principles and different approaches for MDC are reviewed in [5]. Approaches for generating multiple descriptions include data partitioning (e.g.,

representation schemes for its semantic context can be constructed by learning from data. In the target representation scheme, metadata is divided into three levels:

a Bayesian network model is built from a small amount of training data. Semantic inference and reasoning is then performed based on the model to decide the relevance of a video.

a representation scheme for its semantic context is learned directly from data and will not be restricted to the predefined semantic structures in specific application domains.

a Bayesian network model is learned from a small amount of training data. Semantic inference and reasoning is then carried out based on the learned model to decide

This selection criterion is basically a measure of how well the given graph correlates to the data.

when an unannotated data item is present, the Bayesian network model derived from the training stage conducts automatic semantic inference for the high-level query.
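As an illustration of this inference step, the toy Bayesian network below relates two detected low-level concepts to the high-level relevance of a video, using the pgmpy package; the structure, probabilities and concept names are invented for illustration, whereas in the approach described here they would be learned from training data.

```python
# Toy semantic inference: P(video is relevant | detected concepts).
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

model = BayesianNetwork([("Beach", "Relevant"), ("Sea", "Relevant")])

cpd_beach = TabularCPD("Beach", 2, [[0.7], [0.3]])   # P(Beach detected) = 0.3
cpd_sea = TabularCPD("Sea", 2, [[0.6], [0.4]])
cpd_rel = TabularCPD(
    "Relevant", 2,
    # columns: (Beach=0,Sea=0) (Beach=0,Sea=1) (Beach=1,Sea=0) (Beach=1,Sea=1)
    [[0.95, 0.60, 0.50, 0.05],      # P(Relevant = 0 | parents)
     [0.05, 0.40, 0.50, 0.95]],     # P(Relevant = 1 | parents)
    evidence=["Beach", "Sea"], evidence_card=[2, 2],
)
model.add_cpds(cpd_beach, cpd_sea, cpd_rel)
assert model.check_model()

# Query: how relevant is an unannotated video in which both concepts appear?
posterior = VariableElimination(model).query(
    variables=["Relevant"], evidence={"Beach": 1, "Sea": 1})
print(posterior)   # P(Relevant=1 | Beach, Sea) = 0.95
```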

Tech. rep., Institute for Image Data Research, University of Northumbria at Newcastle (1999), http://www.jisc.ac.uk/uploaded_documents/jtap-039.doc 8.

A Bayesian method for the induction of probabilistic networks from data. Machine Learning 9(4), 309-347 (1992). 9. Fan, J., Gao, Y., Luo, H., Jain, R.:

IEEE Transactions on Knowledge and Data Engineering, 665-677 (2005). Part VIII: Future Internet Applications

and effectiveness of the health value chain e g. through enabling access to and sharing of patient data, secure data exchange between healthcare actors,

The first topic concerns the resources of telecom operators and service providers, such as networks, switching, computing and data centres.

OOP aims at developing applications and software systems that provide a high level of data abstraction and modularity (using technologies such as COM,.

and data generated during FINERs' operations. There is no centralised database; the information will stay with the business entity to

Data flows are transferred among GSN nodes over dedicated circuits (like light paths or P2P links), tunnels over the Internet, or logical IP networks.

The GSN Data plane corresponds to the System level, including massive physical resources, such as storage servers and application servers linked by controlled circuits (i e.,

Capabilities can contribute to a resource's Business, Presentation or Data Access tier. The Tool component provides additional services, such as persistence,

which provides for a flexible set of data flows among data centers. The ability to incorporate third-party power control components is also an advantage of the IaaS Framework.

with the help of instrumentation and interconnection of mobile devices, sensors and actuators allowing real-world urban data to be collected

and actuators, offering real-time data management, alerts, and information processing, and (3) the creation of applications enabling data collection and processing, web-based collaboration,

and actualisation of the collective intelligence of citizens. The latest developments in cloud computing and the emerging Internet of Things, open data, Semantic Web,

and future media technologies have much to offer. These technologies can assure economies of scale in infrastructure

secure a continuous flow of data and information, and offer useful services. It is here that the third task for city authorities comes into play,

and open public data up to developers as well as user communities. As the major challenge facing European cities is to secure high living standards through the innovation economy

public data. 4 Emerging Smart City Innovation Ecosystems: As Table 4 illustrates, several FP7-ICT projects are devoted to research and experimentation on the Future Internet and the Internet of Things within cities,

a map of sensor data available on a smartphone) as well as urban waste management are two of the use cases from the SmartSantander project.

and a local SME providing data access from electric cars equipped with air quality sensors (VULOG) and a citizen IT platform (a regional Internet space for citizens in the NCA area).

to investigate experiential learning of the IoT in an open and environmental data context, and to facilitate the co-creation of Smart Cities

green services based on environmental data obtained via sensors. Various environmental sensors will be used,

1) to participate in the collection of environmental data; 2) to participate in the co-creation of services based on environmental data;

and 3) to access services based on environmental data, such as accessing and/or visualising environmental data in real time.

Three complementary approaches have already been identified as relevant for the green services use case: participatory/usercentred design methods;

diary studies for IoT experience analysis, and coupling quantitative and qualitative approaches for portal usage analysis. In this context of an open innovation and Living Lab innovation ecosystem,

such as specific testing facilities, tools, data and user groups, can be made accessible and adaptable to specific demands of any research and innovation projects.

common resources for research and innovation can be identified, such as testbeds, Living Lab facilities, user communities, technologies and know-how, data,

and will enable data transfer services agnostic to the underlying connection protocol. Furthermore, a major challenge in future urban spaces will be how to manage the increasing number of heterogeneous and geographically dispersed machines

so that data and information could be shared among different applications and services at global urban levels.

combination and processing of data and information from different service providers, sources and formats. The Internet of People (IoP):

Advanced location-based services, social networking and collaborative crowdsourcing collecting citizen-generated data. By analyzing these different Smart Cities application scenarios, together with the need for a broadband communication infrastructure that is becoming,

where data binds the different dimensions together, as most aspects are closely related (e.g. environment and traffic, and both of them to health, etc.).

many Smart City services will rely on continuously generated sensor data (for example for energy monitoring, video surveillance or traffic control).

This functionality will provide a repository where observations/sensors'data are stored to allow later retrieval or processing,

to extract information from data by applying semantic annotation and data linkage techniques. Publish-Subscribe-Notify:

The USN-Gateway represents a logical entity acting as a data producer to the USN-Enabler; it implements two main adaptation procedures to integrate physical or logical Sensor and Actuator Networks (SANs):

This functionality is intended to provide the USN-Enabler with both SensorML (meta-information) and O&M (observations & measurements) data from specific SANs (i.e.

The Notification Entity (NE) is the interface with any sensor data consumer that requires filtering

or information processing over urban-generated data. The main functionalities provided by this entity are the subscription (receive the filter that will be applied

like for example a request to gather data, without the need to wait for an answer.

when the desired data becomes available it will receive the corresponding alert. This is mainly used for configuration and for calling actuators.
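A minimal Python sketch of this Publish-Subscribe-Notify interaction, with invented names: consumers register a filter with the notification entity and receive an alert only when a matching observation is published.

```python
# Subscribers provide a filter predicate and a callback; the notification
# entity applies every filter to each published observation and alerts
# only the matching subscribers.
from typing import Callable, Dict, List

Observation = Dict[str, float]          # e.g. {"sensor": 7, "no2": 41.0}

class NotificationEntity:
    def __init__(self):
        self._subs: List[tuple] = []    # (filter predicate, callback)

    def subscribe(self, matches: Callable[[Observation], bool],
                  notify: Callable[[Observation], None]) -> None:
        self._subs.append((matches, notify))

    def publish(self, obs: Observation) -> None:
        for matches, notify in self._subs:
            if matches(obs):
                notify(obs)

ne = NotificationEntity()
ne.subscribe(lambda o: o.get("no2", 0) > 40,                 # air-quality filter
             lambda o: print("alert: NO2 threshold exceeded", o))
ne.publish({"sensor": 7, "no2": 41.0})    # subscriber is notified
ne.publish({"sensor": 7, "no2": 12.0})    # filtered out, no alert
```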

) Data Management in the Worldwide Sensor Web. IEEE Pervasive Computing, April-June (2007). 18. Panlab Project, Pan-European Laboratory Infrastructure Implementation, http://www.panlab.net/fire.html 19.



