data and reports are needed more promptly and must be produced more speedily. This should include progress updates on projects from
implementing bodies as well as quantitative data. The levels of bureaucracy and administrative burden on beneficiaries within
is one EU data commissioner but the legislation in countries is still different: "when we go to Germany, the
data is a goldmine for companies. Computer algorithms are better at diagnosing severe cancer than
Kenneth Cukier is data editor at The Economist and co-author, with Viktor Mayer-Schönberger, of Big Data:
data can teach us things that are extremely interesting, in fact things we would never have been able to find out with smaller
algorithms onto these large amounts of data. Let me give you an example. Google handles more than a billion searches in the
data from the Centers for Disease Control and Prevention. The idea was to see whether
against their data, Google identified 45 terms that strongly coincided with the CDC's data on flu outbreaks
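The screening step described here, testing candidate search terms for how strongly their history coincides with the CDC's flu data, can be sketched with a plain correlation test. The weekly counts below are made up for illustration; the real system screened hundreds of billions of historical searches:

```python
from math import sqrt

def pearson(xs, ys):
    # Pearson correlation coefficient between two equal-length series.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical weekly CDC flu-activity index and per-term search counts.
cdc_flu = [2, 3, 8, 15, 20, 14, 7, 3]
term_counts = {
    "flu symptoms":   [5, 7, 18, 30, 41, 29, 15, 6],    # tracks outbreaks
    "chicken recipe": [22, 20, 23, 21, 22, 20, 23, 21],  # unrelated noise
}

# Keep only terms whose history strongly coincides with the CDC data.
selected = {t: round(pearson(c, cdc_flu), 3)
            for t, c in term_counts.items()
            if pearson(c, cdc_flu) > 0.9}
print(selected)
```

The same filter, applied to millions of candidate terms, is what narrowed Google's vocabulary down to the 45 terms mentioned above.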
The Google Trends method has been criticised because it has been wrong in some instances. However, that is not the whole
collects data. Some of the data it collects has actually improved the accuracy of German
weather forecasting by 7%, which is a considerable improvement. Lufthansa now sells that data to a meteorological company, which is a
great example of how big data can be commodified. So big data can be sold? Absolutely. In fact, big data is a potential
the data they collect as they go about their everyday work. It will be a revenue generator
employing chief data officers or chief information officers, who will be responsible for this. It's not just companies.
each of us will be able to sell our data. People will upload data to online data exchanges
neutral platforms which can bring the data to the marketplace for a fair price. And there
will be a market for this data, as people realise the enormous potential of big data. Will there be an impact on how people
work? There will be a significant impact. This will be a revolution in the workplace. Both white-collar and blue-collar jobs will be
where data shape decisions more and more, what purpose will remain for people, or for intuition, or for going against the facts?
data, you have to tell them what you are collecting and why. That isn't really feasible with big data
purpose the data will be used for. Small data is like a waltz. There's a clear
tempo with known steps. Big data is like a mosh pit or jazz-improv. No one knows
a person to give consent, for that data to be used and reused and reused without
data will change the world for the better.
data mentioned is catalytic and shows us that this is the direction we need to move
and provides data, and other evidence that demonstrates how a business creates and delivers value to cus
Data, Tools and Research, held at the U.S. Department of Commerce, Washington, D.C., 25-26 May 1999. It draws
Gordon (1998a) presents more finely disaggregated data on labor productivity, which reveals the pervasiveness of the slowdown
and improved access to marketing data are indeed enabling faster, less costly product innovation, manufacturing process redesign,
By combining this with data from Bailey and Gordon (1988) on the rising number of
spread of partially networked personal computers supported the development of new database and data entry tasks, new analytical and reporting tasks,
"data display terminals, the basis for interactive data display and entry to mainframe and minicomputer systems.
have become sufficiently ubiquitous to provide the infrastructure for task-oriented data acquisition and display
and maintenance of critical company data resources must be resolved and these often are compelling enough to force redesign of the organizational structure
centralized data resources. The common standards defining Internet technology have the fortuitous feature that virtually all personal computers can be configured similarly,
with only about a fifth of the workforce time in large service sector firms providing data
and the Data Constraint," American Economic Review, Mar. 1994, 84, pp. 1-23. Griliches, Zvi, "Comments on Measurement Issues in Relating IT Expenditures to Productivity Growth," Economics
Evidence from Government and Private Data Sources, 1977-1993," Canadian Journal of Economics, 1998
Area Network (LAN) installed; according to data from the first quarter of 2014, 67.7% of micro-companies had Internet access,
Data collection was done over a two-month period during September-October 2014. To reliably identify trends, only respondents with long tenure and representing
innovation data, see OECD, 2005. Secondly, respondents had to be involved at least in one implementation of change management process during the last 5 years.
WP2 analysed available data to better understand the growth, impact and potential for social innovation in Europe.
require much more broad-scale data. WP3: Removing barriers to social innovation. The development and growth of social innovation
Data and monitoring. Most of the future research questions we identified would benefit greatly from advanced databases
into existing data sources on national technological innovation systems. Social movements, power and politics. Much of the existing literature on social innovation
is no data to be found on employment in the social economy. Thus, we still lack
comprehensive and comparable data on the sector. The Third Sector Impact project that started early in
2014 will help to make this data available. 48 Nonetheless, the extent to which social economy
to producing reliable data. Concerning metrics for social innovation, we found that there are significant overlaps between
field and tap into existing data sources on national technological innovation systems. Examples of such established metrics that
survey-based data related to social innovation are necessary. Considering the importance of entrepreneurial activities as push-factors for
social innovation, we need empirical survey data on organisations that are socially innovative in order to better understand how social innovation
existing knowledge and data sources on national technological innovation systems and make attempts to identify patterns in these systems
more homogeneous data about social innovation and opportunities for social innovations in the future. DEFINING MEASURING
• New flows of information (open data) • Developing the knowledge base INTERMEDIARIES • Social innovation networks
• Platforms for open data/exchange of ideas • Providing programmes/interventions • Networking opportunities/events
public sector who use data to better target pockets of social need and tailor interventions or services
which are data- and analytics-heavy, and where high speed and global reach are important through reductions in
Data and monitoring. It is clear that we require more and better data on social innovation, social needs, the social economy
and its innovative potential, other environments of social innovation, relevant actors and networks technological innovations, etc.
develop a standard structure that allows such data to be combined and compared. Civil society and the social economy as
data for sound analyses. We need to dig deeper into the numerous variables determining to what extent
requires much more empirical data, in particular data separately considering socially innovative organisations.
Effective collaborations It is evident that the nature of social innovations requires various actors to collaborate to make them
upon, and tap into existing data sources on national technological innovation systems Social movements, power and politics
Our data come from the Community Innovation Study performed in 2004 and cover the period from
The data presented in this study were collected as part of the Community Innovation Survey conducted on Croatian companies from manufacturing
The data were collected by mail survey followed up by two telephone prompts. This particular survey was the first CIS
the data, 448 firms were used in this analysis. In this study, we define a list of possible factors that have
drivers of innovation, our data interestingly show no evidence that having received municipality or government
for a cross-sectional data model obtained from large-scale surveys of this type. Out of the external factors, collaboration with other
Regarding internal factors, data show that the proportion of highly educated staff has a positive effect on radical
Data show that there is no difference in process innovation between firms that report obstacles and those
More detailed investigation of the data shows that sources of financing are indeed lacking: most Croatian SMEs financed
) Regardless of problems with financing, data reveal that 85.5% of the firms that reported obstacles managed to
development gap, free circulation and equal access to data, information and to good practices and
• Volume and nature of data – the sheer volume of Internet traffic and the change
For example, Cisco's latest forecast predicts that global data traffic on the Internet will exceed 767 exabytes by 2014.
Data traffic for mobile broadband will double every year until 2014, increasing 39 times between 2009 and 2014 [13].
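The two growth figures quoted here are almost, but not exactly, consistent: doubling every year over the five years from 2009 to 2014 gives a factor of 2^5 = 32, while the forecast's 39x implies a slightly higher annual growth factor. A two-line check:

```python
# Compound-growth check for the mobile-data forecast (2009 -> 2014).
years = 2014 - 2009                  # 5 yearly growth periods
doubling_total = 2 ** years          # plain yearly doubling -> 32x overall
implied_rate = 39 ** (1 / years)     # annual factor implied by the 39x forecast
print(doubling_total, round(implied_rate, 2))
```

So the forecast's "39 times" corresponds to roughly 2.08x per year, marginally faster than strict doubling.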
-cation and local context data when requested. • Commercial services – as mentioned above, the Internet is now a conduit for a
Interlinked Data-Content-Information Space...81 Maria Chiara Pettenati, Lucia Ciofi, Franco Pirri, and Dino Giuli
Data Usage Control in the future Internet Cloud...223 Michele Bezzi and Slim Trabelsi Part IV: Future Internet Foundations:
Fostering a Relationship between Linked Data and the Internet of Services...351 John Domingue, Carlos Pedrinaci, Maria Maleshkova, Barry Norton, and
The transmission can be improved by utilising better data processing & handling and better data storage, while the overall Internet performance
-linked Data-Content-Information Space" chapter analyses the concept of "Content-Centric" architecture, lying between the Web of Documents and the generalized Web
of Data, in which explicit data are embedded in structured documents enabling consistent support for the direct manipulation of information fragments.
data currently exchanged over the Internet. Based on [2], out of the 42 exabytes (10^18 bytes) of consumer Internet traffic likely to be generated every month in 2014, 56
a.k.a. data packets, data traffic, information, content (audio, video, multimedia), etc., and the term "service" to refer to any action performed on data or other services and
the related Application Programming Interface (API). Note however that this document does not take a position on the localization and distribution of these APIs
and access data. • Storage of "data": refers to memory, buffers, caches, disks, etc. and associated
refers to physical and logical transferring/exchange of data. • Control of processing, storage, transmission of systems and functions:
Lack of data identity is damaging the utility of the communication system. As a result, data,
as an "economic object", traverses the communication infrastructure multiple times, limiting its scaling, while lack of content "property rights" (not
-ture itself, the limited capability for processing data on a real-time basis poses limitations in terms of the applications that can be deployed over the Internet.
Data are not inherently associated with knowledge of their context. This information may be available at the
communication end-points (applications) but not when data are in transit. So, it is not feasible to make efficient storage decisions that guarantee fast storage man
-ferent types of data [18]. ii. Lack of inherited user and data privacy: in case data protection/encryption meth
data cannot be stored efficiently/handled. On the other hand, lack of encryption violates user and data privacy. More investigations into the larger privacy and
Lack of data integrity, reliability and trust, targeting the security and protection of data; this issue covers both unintended disclosure and damage to integrity from
defects or failures, and vulnerabilities to malicious attacks iv. Lack of efficient caching & mirroring:
-oriented traffic comprises much larger volumes of data as compared to any other information flow,
same data multiple times. Content Delivery Networks (CDN) and more generally architectures using distributed caching alleviate the problem under certain condi
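The caching idea behind CDNs, avoiding transporting the same data multiple times, can be sketched as a small LRU cache at a hypothetical edge node that consults its local store before fetching from the origin server (class and parameter names are illustrative, not any CDN's actual API):

```python
from collections import OrderedDict

class EdgeCache:
    """Minimal LRU cache for a hypothetical CDN edge node."""
    def __init__(self, capacity, origin_fetch):
        self.capacity = capacity
        self.origin_fetch = origin_fetch  # callable: content_id -> content
        self.store = OrderedDict()
        self.origin_hits = 0              # how often we had to go upstream

    def get(self, content_id):
        if content_id in self.store:
            self.store.move_to_end(content_id)   # mark as recently used
            return self.store[content_id]
        data = self.origin_fetch(content_id)     # miss: fetch from origin once
        self.origin_hits += 1
        self.store[content_id] = data
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)       # evict least recently used
        return data

cache = EdgeCache(capacity=2, origin_fetch=lambda cid: f"<content {cid}>")
for cid in ["a", "b", "a", "a", "b"]:
    cache.get(cid)
print(cache.origin_hits)  # origin contacted only twice despite five requests
```

This is why distributed caching alleviates the repeated-transport problem only "under certain conditions": it helps exactly when the request stream repeats popular items within the cache's capacity.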
massive amounts of data are exchanged 12 T. Zahariadis et al ii. Lack of integration of devices with limited resources to the Internet as autono
only mean protecting/encrypting the exchanged data but also not disclosing that communication took place. It is not sufficient to just protect/encrypt the data (in
-cluding encryption of protocols/information/content, tamper-proof applications etc) but also protect the communication itself,
Improper segmentation of data and control. The current Internet model segments (horizontally) data and control,
whereas from its inception the control functionality has a transversal component. Thus, on one hand, the IP functionality isn't lim
The IP data plane is itself relatively simple but its associated control components are numerous and sometimes
-bined with a sudden peak in demand for a particular piece of data may result in
The amount of foreseen data and information5 requires significant processing power/storage/bandwidth for indexing/crawling
the fast and scalable identification and discovery of, and access to, data. The exponential growth of information makes it increasingly harder to identify relevant in
size at around 5 million terabytes of data (2005). Eric commented that Google has indexed roughly 200 terabytes; that is 0.004% of the total size
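The quoted share is easy to verify from the two figures given:

```python
# Sanity check for the indexing figure quoted above.
total_tb = 5_000_000      # estimated Web size in terabytes (2005 figure)
indexed_tb = 200          # portion reportedly indexed by Google
share = indexed_tb / total_tb * 100   # as a percentage
print(share)              # ~0.004 percent
```

So the "0.004%" in the text is consistent with the 200 TB / 5 million TB estimate.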
wired interfaces) to the communication network but also to heterogeneous data, applications, and services, nomadicity,
-gration of distributed but heterogeneous data and processes. • Scalability, including routing
and associated data traffic such as non/real-time streams, messages, etc. independently of the shared infrastructure par
intelligent routing) and better data storage (e.g. network/terminal caches, data centers/mirrors, etc.), while the overall Internet performance would be significantly im
-agement data describing the physical resource. The vCPI is responsible for providing dynamic management data to its governing AMS that states how many virtual re
-sources are currently instantiated, and how many additional virtual resources of what type can be supported. 2.4 Knowledge Plane Overview
-work architecture, contrasting with the data and control planes; its purpose is to provide knowledge and expertise to enable the network to be self-monitoring, self
Plane (KP), consisting of context data structured in information models and ontologies, which provide increased analysis
The KP brings together widely distributed data collection, wide availability of that data, and sophisticated and adaptive processing or KP functions, within a unifying
structure. Knowledge extracted from information/data models forms facts. Knowledge extracted from ontologies is used to augment the facts,
so that they can be reasoned about. Hence, the combination of model and ontology knowledge forms a
which is used then to transform received data into a common form that enables it to be managed.
own thread allowing each one to collect data at different rates and also having the
The reader collects the raw measurement data from all of the sensors of a CCP. The collection can be done at a regular interval or as an event from the sensor itself.
The reader collects data from many sensors and converts the raw data into a common measurement object used in the CISP Monitoring framework.
meta-data about the sensor and the time of day, and it contains the retrieved data from
the sensor. The filter takes measurements from the reader and can filter them out before they
which they collect data; (ii) the filtering process, by changing the filter or adapting an
Mapping logic enables the data stored in models to be transformed into knowledge and combined with knowledge stored in ontologies
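The collection pipeline described above, a reader converting raw sensor values into common measurement objects carrying meta-data, and a swappable filter that drops measurements before they reach consumers, can be sketched as follows (class and field names are hypothetical, not the monitoring framework's actual API):

```python
import time
from dataclasses import dataclass, field

@dataclass
class Measurement:
    """Common measurement object: raw sensor value plus meta-data."""
    sensor_id: str
    value: float
    timestamp: float = field(default_factory=time.time)

class Reader:
    """Collects raw values from many sensors, converts them to Measurements."""
    def __init__(self, sensors):
        self.sensors = sensors  # mapping: sensor_id -> zero-arg read function

    def collect(self):
        return [Measurement(sid, read()) for sid, read in self.sensors.items()]

class Filter:
    """Drops measurements before they reach consumers; the predicate can be
    swapped at runtime, which is how the filtering process is adapted."""
    def __init__(self, predicate):
        self.predicate = predicate

    def apply(self, measurements):
        return [m for m in measurements if self.predicate(m)]

reader = Reader({"cpu": lambda: 0.93, "mem": lambda: 0.40})
flt = Filter(lambda m: m.value > 0.8)        # keep only high readings
alerts = flt.apply(reader.collect())
print([m.sensor_id for m in alerts])
```

Giving each reader its own thread and collection rate, as the text describes, would wrap `collect()` in a per-reader loop; the conversion and filtering steps stay the same.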
The framework provides data sources, data consumers, and a control strategy. In a large distributed system there may be hundreds or thousands of measurement
probes, which can generate data. • APE (Autonomic Policy-based Engine), a component of the MP, supports context
-rent mobile Internet architectures caused by the mobile traffic data evolution. Reserving additional spectrum resources is the most straightforward approach for increasing
it is foreseen that due to the development of data-hungry entertainment services like television/radio broadcasting and VoD, 66% of mobile traffic will be
video by 2014 2. A significant amount of this data volume will be produced by mobile Web-browsing which is expected to become the biggest source of mobile
data caching technologies might have further impact on the traffic characteristics and obviously on mobile architectures
protocol for data networks, and the continuously increasing future wireless traffic is also based on packet data (i.e.
Internet communication). Due to the collateral effects of this change, a convergence procedure started to introduce IP-based transport tech
With the increasing IP-based data traffic, flattening hierarchical and centralized functions became the main driving force in the evolution of 3GPP network architec
Entity (MME), the Serving GW (S-GW) and the Packet Data Network GW (PDN GW)
This results in centralized, unscalable data and control planes with non-optimal routes, overhead and high end-to-end packet delay even in
and data packets traverse the centralized or hierarchical mobility anchor. Since the volume of user-plane traffic is much higher compared to
packets from data messages after a short period of route optimization procedure. The second type of partially distributed mobility management is based on the ca
, both data plane and control plane are distributed). This implies the introduction of special mechanisms in order to identify the anchor that manages mobility sig
-naling and data forwarding of a particular mobile node, and in most cases this also
Global Mobile Data Traffic Forecast Update, 2009-2014 (Feb. 2010). 3. Dohler, M., Watteyne, T., Alonso-Zá
and Management, Knowledge Engineering, Networking Data and Ontologies, Future Communications and Internet. 1 Introduction. In recent years convergence on Internet technologies for communications, computa
data models integration are requirements to be considered during the design and im -plementation phases of any ICT system
of monitoring/fault data is a problem that has not yet been solved completely; this is where federation takes place
requirements from different information and data domains. This higher level of abstraction enables business
data to guide rapid service innovation. Concepts related to Federation such as Management Distribution, Management Control and process representation are clear in their implications for the network manage
2. Observation) and also identify particular management data at application, service middleware and hardware levels (3. Analysis) that can be gathered, processed, aggre
data at the network and application level can be used to generate knowledge that can be used to support enterprise application management in a form of control loops in the
high-level representation and mapping of data and information. Negotiations in the form of data representation between different data and information models by
components in the system (s) are associated to this feature â¢Management Control-Administration functionality for the establishment of
communities of users (heterogeneous data & infrastructure). The federated architecture must ensure the information is avail
formal manner, information and data can be integrated, and the power of machine -based learning and reasoning can be exploited more fully.
and its formalisms [29][30], such as FOCALE [25] and AutoI [21][23], translate data from a device-specific form to a device- and technology-neutral form to facilitate its
architecture, information is used to relate knowledge, rather than only map data, at different abstraction and domain levels, correlating independent events with each other in
correlation techniques that can process relevant data in a timely and decentralised manner and relay it as appropriate to federated management decision-making functions are
â¢Techniques for analysis, filtering, detection and comprehension of monitoring data in federated enterprise and networks
and the data they deliver has to be associated with some quality-of-information parameters before further processing
, retrieve sensor data from a sensor. However, while the concept of the web resource refers to a virtual resource iden
entities such as sensing, actuation, processing of context and sensor data or actuation loops, and management information concerning sensor/actuator nodes, gateway devices
e.g. sensor data. The resource hosts are abstracted through the RFID readers due to the passive communication of the tags.
and abstracting data about the environment, workflow-based specifications of system behaviour and semantically
use of sensor-based, streaming and static data sources in manners that were not neces
or the data sources made available. The architecture may be applied to almost any type of real world entity
streaming data sources, normally containing historical information from sensors; and even relational databases, which may contain any type of information from the digital world (hence
data-focused services (acting as resource endpoints), which are based on the WS-DAI specification for data access and integration and which are supported by the SemsorGrid4env
reference implementation. These services include those focused on data registration and discovery (where a spatiotemporal extension of SPARQL,
stSPARQL, is used to discover data sources from the SemsorGrid4env registry), data access and query (where ontology-based and non-ontology-based query lan
-guages are provided to access data: SPARQL-Stream and SNEEQL, a declarative continuous query language over acquisition sensor networks, continuous streaming
data, and traditional stored data), and data integration (where the ontology-based SPARQL-Stream language is used to integrate data from heterogeneous and multi
-modal data sources). Other capabilities offered by the architecture are related to supporting synchronous and asynchronous access modes, with subscription/pull and
push-based capabilities, and actuating over sensor networks, by in-network query processing mechanisms that take declarative queries
and transform them into code that changes the behavior of sensor networks. Context information queries are sup
-ported by using ontologies about roles, agents, services and resources. 4.5 SENSEI. The SENSEI architecture aims at integrating geographically dispersed and
Internet-interconnected heterogeneous WSAN (Wireless Sensor and Actuator Networks) systems into a homogeneous fabric for real-world information and interaction
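The stream/stored integration that SPARQL-Stream expresses declaratively (a time window over a sensor stream, joined with traditional stored data) can be illustrated in plain Python. This is not SPARQL-Stream syntax, and the sensor names and basin metadata are invented for the example:

```python
from collections import deque

# Stored (static) data about each sensor, playing the role of a
# traditional relational/RDF source in the integration.
stored = {"sensor-1": {"basin": "Ebro"}, "sensor-2": {"basin": "Duero"}}

# A sliding window over a hypothetical streaming source: keep only the
# last 3 stream tuples, like a "last N" window in a continuous query.
window = deque(maxlen=3)
for tup in [("sensor-1", 4.2), ("sensor-2", 1.1),
            ("sensor-1", 4.9), ("sensor-1", 5.3)]:
    window.append(tup)

# Join the windowed stream tuples with the stored metadata, as a
# continuous integration query would on every window evaluation.
joined = [(sid, level, stored[sid]["basin"]) for sid, level in window]
print(joined)
```

A continuous query engine re-evaluates this join each time the window slides; the declarative languages named above let the system plan that re-evaluation instead of the client coding it by hand.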
traditional services and data available in the Web. SPITFIRE extends the architectural model of this chapter by its focus on services,
interfaces and Linked Open Data, along with semantic descriptions throughout the whole architecture. The IoT-A project extends the concepts developed in SENSEI further to provide a
Distributed Interlinked Data-Content-Information Space Maria Chiara Pettenati, Lucia Ciofi, Franco Pirri, and Dino Giuli
Documents and the generalized Web of Data, in which explicit data are embedded in structured documents enabling the consistent support for the direct ma
Web of Data; future Web; Linked Data; RESTful; read-write Web; collaboration. 1 Introduction. There are many evolutionary approaches to the Internet architecture
which are at the heart of the discussions both in the scientific and industrial contexts:
Data/Linked Data, Semantic Web, REST architecture, Internet of Services, SOA and Web Services, and Internet of Things approaches.
Approaches: Web of Data / Linked Data; REST; Internet of Services; WS-* / SOA; Web 2.0;
Web 3.0; Semantic Web; Internet of Things. The three views can be interpreted as emphasizing different aspects rather than ex
Web of Data, in which explicit data are embedded in documents enabling the consistent support for the direct manipulation of information as data without the limitations
of current data manipulation approaches. To this end, Ayers identifies the need to find and develop technologies allowing the management of "micro-content", i.e. sub
-document-sized chunks (information/document fragments), in which content being managed and delivered is associated with descriptive metadata
small, Web-wide addressable data/content/information unit which should be organized according to a specific model and handled by the network architecture so as to
provide basic Services at an "infrastructural level" which in turn will ground the de
the different paths to the Web of Data, the one most explored is adding explicit data to
Directly treating content as data has instead had little analysis. In this paper we discuss the evolution of InterDataNet (IDN), a high-level Resource
the more we move away from the data toward information, the fewer available solutions there are capable of covering the following
as in the Web of Data. 2. IDN adopts a URI-based addressing scheme (as in Linked Data).
3. IDN provides a simple, uniform Web-based interface to distributed heterogeneous data management (REST approach).
4. IDN provides, at an infrastructural level, collaboration-oriented basic services, namely: privacy, licensing, security, provenance, consistency, versioning and
such as Linked Data, RESTful Web Services, Internet of Services, Internet of Things. 2.1 The InterDataNet Information Model and Service Architecture
IDN-SA (InterDataNet Service Architecture).
data model (see Figure 3) to describe interlinked data representing a generic document model in IDN, and is the starting point from
Generic information modeled in IDN-IM is formalized as an aggregation of data units. Each data unit is assigned at least a global identifier
and contains generic data and metadata; at a formal level, such a data unit is a node in a Directed Acyclic
Graph (DAG). The abstract data structure is named IDN-Node. An IDN-Node is the "content-item" handled by the "content-centric" IDN Service Architecture.
The de -gree of atomicity of the IDN Nodes is related to the most elementary information
data units is composed of nodes related to each other through directed "links". Three main link types are defined in the Information Model
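The IDN-IM structure described above, globally identified data units carrying data plus metadata, arranged as a DAG through directed links, can be sketched as a small data structure. Field names and URIs are illustrative, not the project's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class IDNNode:
    """A data unit in the spirit of IDN-IM: globally identified, carrying
    generic data and metadata, linked to child nodes (directed DAG edges).
    Field names are hypothetical, for illustration only."""
    uri: str                                   # global, URI-based identifier
    data: str = ""
    metadata: dict = field(default_factory=dict)
    links: list = field(default_factory=list)  # directed links to child nodes

    def walk(self, seen=None):
        """Yield every reachable node once (safe because the graph is acyclic;
        the seen-set also handles shared sub-units with multiple parents)."""
        seen = seen if seen is not None else set()
        if self.uri in seen:
            return
        seen.add(self.uri)
        yield self
        for child in self.links:
            yield from child.walk(seen)

leaf = IDNNode("urn:idn:addr", data="Via Roma 1", metadata={"version": 2})
doc = IDNNode("urn:idn:invoice", links=[leaf])
other = IDNNode("urn:idn:order", links=[leaf])  # two documents share one unit
print([n.uri for n in doc.walk()])
```

Sharing `leaf` between two parents shows why the model is a DAG rather than a tree: the same addressable micro-content can appear in many aggregations without duplication.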
Replica Management (RM) provides a delocalized view of the resources to the upper
therefore be enabled for the manipulation of data on a global scale within the Web. The REST interface has been adopted in the IDN-SA implementation as the actions al
without the need to achieve the complete development of the architecture before its
The presented approach is not an alternative to current Web of Data and Linked Data
approaches; rather, it aims at viewing the same data handled by the current Web of
Data from a different perspective, where a simplified information model, representing only information resources, is adopted.
naming convention or suggesting new methods of handling data, relying on standard Web techniques. InterDataNet could be considered to enable a step ahead from the Web of Docu
-ments and possibly to ground the Web of Data, where an automated mapping of IDN-IM serialization into the RDF world is made possible using the Named Graph approach
of Data potential. Acknowledgments. We would like to acknowledge the precious work of Davide Chini, Riccardo Billero, Mirco Soderi, Umberto Monile, Stefano Turchi, Matteo
A Data Web Foundation for the Semantic Web Vision. IADIS International Journal on WWW/Internet 6(2), December 2008
-dleware Infrastructure for Smart Data Integration. In: Giusto, D., et al. (eds.) The Internet of Things:
Web of Data. "Oh – it is data on the Web", posted on April 14, 2010;
-it-is-data-on-the-web/ J. Domingue et al. (Eds.): Future Internet Assembly, LNCS 6656, pp. 91-102, 2011
-sources, computing resources, device characteristics) via virtualization and data mining functionalities; the metadata produced in this way are then input to intelligent
above-mentioned heterogeneous parameters/data/services/contents into homogeneous metadata according to proper ontology-based languages (such as OWL – Web Ontol
-data exchanged among peer Cognitive Managers, in order to dynamically derive the aggregated metadata which can serve as input for the Cognitive Enablers;
(ii) providing enriched data/services/contents to the Actors. In addition, these enablers control the sensing, metadata handling, actuation and API
data/contents/services produced by the Cognitive Enablers (Provisioning functionalities embedded in the Actor Interface);
For example, to transfer data from a file, or the content of an email/instant message, it is necessary to have a delivery guarantee in commu
between two or more entities and ensure that data exchange occurs at the link level and takes place according to the understanding made by the service layer
Another look at data. In: Proceedings of the Fall Joint Computer Conference, AFIPS, November 14-16, Volume 31, pp. 525-534.
applications as of today still generate large volumes of data, Internet Service Providers (ISPs) need to address the problem of expensive interconnection charges.
An example is locality promotion based on BGP routing data. • Insertion of Additional Locality-Promoting Peers/Resources involves (a) the inser
but an increase of the outgoing traffic due to the data exchange also with remote peers;
MPTCP's signalling and data. • Incremental: the story is good, as only one stakeholder is involved, viz. the data centre
the two servers (at the bottom) travels over two paths through the switching fabric of the data
the data path, and MPTCP's signalling messages must get through them. Deployment and Adoption of Future Internet Protocols 139
and to decrease the latency of data delivery. The CDN server sends "premium" packets (perhaps for IPTV) as ConEx-Not-Marked or ConEx
Adding concurrent data transfer to the transport layer. ProQuest ETD Collection for FIU, Paper AAI3279221 (2007), http://digitalcommons.fiu.edu/dissertations
Improved data distribution for multipath TCP communication. In: IEEE GLOBECOM (2005). 19. Kelly, F., Voice, T.:
-sumers, when they are creating data that a business would like to sell, with or without
Repurposing tussles occur with regard to the privacy of user communication data between users, ISPs, service providers and regulators.
-munication data. Furthermore, ISPs and other companies such as Google and Amazon have increasingly been able to monetize their user transaction data and personal data
Google is able to feed advertisements based on past searching and browsing habits and Amazon is able to make recommendations based on viewing and purchasing
These applications of user data as marketing tools are largely unregulated, and in many cases users have proved willing to give up some of their privacy in exchange
the bodies with direct access to the data, but are simply businesses, trying to make a
reaches to the applications and business models, ranging from the exchange of data of physical objects for the optimization of business scenarios in, e.g.,
allowing the owner of the data to decide and control how, when, and where it is going
Data travel through a multitude of different domains, contexts and locations while being processed by a large number of entities with different owners,
and are treated according to the data owner's policy in balance with the processing entities' policies.
while distribution and exchange of data serve as additional entry points that can potentially be exploited to penetrate a system.
data-centric approach for the Future Internet, replacing point-to-point communication by a publish/subscribe approach.
that ensures the availability of data and maintains its integrity. It is a good example of how clean-slate approaches to the Future Internet can support security needs by
Future Internet scenarios like the Internet of Services, the need for data exchange leads to sensitive data, e.g.,
, personally identifiable information, travelling across a number of processes, components, and domains. All these entities have the means to
collect and exploit these data, posing a challenge to the enforcement of the users' protection needs and privacy regulations.
which does not allow one to predict by whom data will be processed or stored.
To provide transparency and control of data usage, the chapter "Data Usage Control in the Future Internet Cloud" proposes a policy-based framework for
expressing data handling conditions and enforcing them. Policies relating events and obligations are coupled with the data ("sticky policies") and,
hence, cannot get lost in transit. A common policy framework based on tamper-proof event handlers and
Internet Protocol Suite with a data-centric or publish/subscribe (pub/sub) network layer waist for the Internet.
through the network stack for a data-centric pub/sub architecture that achieves availability, information integrity,
Data-centric pub/sub as a communication abstraction [2, 3, 4] reverses the control between the sender and the receiver.
whole Internet protocol suite with a clean-slate data-centric pub/sub network waist [14]. This enables new ways to secure the architecture in a much more fundamental
Data- or content-centric networking can be seen as the inversion of control between the sender and the receiver compared to message passing:
identified data that the network then returns when it becomes available, taking advantage of multicast and caching [2],
-tion pattern to emphasize that the data items can link to other named data and that the
data has structure. An immutable association can be created between a rendezvous identifier (Rid) and a data value by a publisher,
and we call this association a publication. At some point in time, a data source may then publish the publication inside a set of scopes that
determine the distribution policies, such as access control, routing algorithm, reachability, and QoS, for the publication and may support transport-abstraction-specific
policies such as replication and persistence for data-centric communication.
Security Design for an Inter-Domain Publish/Subscribe Architecture
Since the rendezvous function operates solely using the data-centric pub/sub model, it can be used to set up communication using any kind of transport abstraction on the data plane fast path
that is used for the payload communication. The data-centric paradigm is a natural match with the communication of topology information that needs to be distributed
typically to multiple parties, and the ubiquitous caching considerably reduces the initial latency for the payload communication, as popular operations can be completed
locally based on cached data. Below the control plane, the network is composed of domains that encapsulate resources.
endpoints are a source and a destination or for data-centric transport: a data source
and a subscriber. The topic is identified with an Rid and is used to match the endpoints.
For example, for data-centric communication, the topic identifies the requested publication. A graphlet defines the network resources used for the payload communication and
the identifier and L is a variable-length label of binary data. Only fixed-length hashes of
variable-length names are needed for dynamically generated content, where the data source uses the label as an argument to produce the publication on the fly.
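The mapping from variable-length names to fixed-length identifiers described above can be sketched as follows; SHA-256 and the 32-byte identifier size are illustrative assumptions of this sketch, not necessarily the architecture's actual choices.

```python
import hashlib

RID_LEN = 32  # bytes; fixed identifier size assumed for this sketch

def rendezvous_id(name: bytes) -> bytes:
    """Map a variable-length name to a fixed-length rendezvous identifier (Rid)."""
    return hashlib.sha256(name).digest()

# Names of very different lengths yield identifiers of the same fixed size,
# so rendezvous and forwarding state can treat all identifiers uniformly.
short_rid = rendezvous_id(b"weather/helsinki")
long_rid = rendezvous_id(b"weather/helsinki/2011-05-17/hourly/temperature/celsius")
assert len(short_rid) == RID_LEN and len(long_rid) == RID_LEN
```

For dynamically generated content, the data source would hash only the name and keep the label as an argument for producing the publication on demand.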
scopes, where publications are made available, are orthogonal to the structure of the data. In Fig. 1, the publication on the left is published inside "my home scope", which is fully
easy to see that the logical structure of the data, e.g. the link between the two publications, is orthogonal to the scoping of the data that determines the communication aspects for each publication.
a data-centric pub/sub primitive as a recursive, hierarchical structure, which first joins node-local rendezvous implementations into rendezvous networks (RNs) and then RNs
-end path between the service container (e.g. a data source) and the client (e.g. a subscriber) and
tion data or pending subscription alive. This pub/sub primitive is the only functionality implemented by the rendezvous core.
should be supported by adding a data-centric transport to the data plane as we did
Each scope also publishes a meta-data publication inside itself (named DKX, "scope meta-data") describing which transports the scope supports, among
The upgraph data itself is published by the provider domain of the node. Because many nodes share the same upgraph, the data-centric rendezvous system
caches them orthogonally close to the scope homes, which are the nodes implementing the scope in question. Similarly, the result of the rendezvous is cached automatically and
If the transport in question is multicast data dissemination, then a separate resource allocation protocol could be coupled with the protocol, as we
A data-oriented network architecture, DONA [4], replaces a traditional DNS-based namespace with self-certifying flat labels of the form P:L, where P is the cryptographic hash of the public key of the principal
which owns the data and L is a label. DONA utilizes an IP header extension mechanism to add a DONA header to the IP header, and sepa
Consumers of data send interest packets to the network, and nodes possessing the data reply with the corresponding
data packet. Since packets are named independently, a separate interest packet must be sent for each required data packet.
In CCN, data packets are signed by the original publisher, allowing independent verification; interest packets, however, are not always
protected by signatures. Security issues of the content-based pub/sub system have been explored in [7]. The
work proposes secure event types, where the publication's user-friendly name is tied to the publisher's cryptographic key.
In this paper we introduced a data-centric inter-domain pub/sub architecture addressing availability and data integrity.
We used the concept of scope to separate the logical structure of linked data from the orthogonal distribution strategies used to determine
how the data is communicated in the network. This is still ongoing work and, for example, the ANDL language and quantitative analysis will be covered in our future work.
Open Access. This article is distributed under the terms of the Creative Commons Attribution Noncommercial License which permits any noncommercial use, distribution, and reproduction
An Inter-Domain Data-Oriented Routing Architecture. In: ReArch'09, Rome, Italy (2009). 3. Jacobson, V., Smetters, D.K., Thornton, J.D., Plass, M., Briggs, N., Braynard, R.L.:
A Data-Oriented (and Beyond) Network Architecture. In: ACM SIGCOMM 2007, Kyoto, Japan (2007). K. Visala, D. Lagutin, and S. Tarkoma
-ing of RFID tags (privacy violation) and cloning of data on RFID tags (identity theft).
In that case, it is possible to develop test data generation that specifically targets the integration of services, access control policies or specific attacks.
of data flow, as well as technologies for privacy-preserving usage control.
Towards a Traverse Methodology.
Clients want to be sure that their data outsourced to other domains, which the clients cannot control,
of the communicated data. More elaborate goals are structural properties (which can sometimes be reduced to confidentiality and authentication goals) such as
data of each organization; and our main motivation is to take into account the security policies while computing an orchestration.
is to organize data by means of sets and to abstract data by set membership.
required to determine the access to private data and to the meta-policies that control them.
data available anywhere and anytime in a health care organization, while lowering infrastructure costs. Clearly, privacy requirements will be much more difficult
user on Google Apps, granting unauthorized access to private data and services (email, docs, etc.).
"programmatic control" over a part of the data center [1, pp. 8-9]. For this cloud-of-clouds vision, this article will investigate the related challenges
interoperability standards or placement policies for virtual images or data across providers. Many of these developments can be expected to be transferred into
in the sense that it will ensure that data mobility is limited to ensure compliance with a wide range of different national
to what Google does with Gmail (unencrypted data transfer between the cloud and the user), cloud services for more sensitive markets (such as Microsoft Health
As a consequence, data leakage and service disruptions gain importance and may propagate through such shared resources.
important requirement is that data cannot leak between customers and that malfunction or misbehavior by one customer must not lead to violations of the
through dedicated infrastructure for each individual customer and data wiping before reuse. Sharing of resources and multi-tenant isolation can be implemented
-Service requires that each data instance is assigned to a customer and that these instances cannot be accessed by other customers.
Shared resources must moderate potential data flow and ensure that no unauthorized data flow occurs between customers.
To limit flow, control mechanisms such as access control that ensure that machines and applications of one customer cannot access data or resources from other
customers can be used. Actual systems then need to implement this principle for all shared resources.
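The deny-by-default, per-tenant access check described above can be sketched as follows; the tenant and resource names are invented for illustration, and a real system would apply the same check at every shared resource (hypervisor, storage, network), not in one registry.

```python
# Hypothetical registry: every data instance is assigned to exactly one customer.
resources = {
    "vm-17/disk0": "customer-a",
    "db/orders":   "customer-b",
}

def may_access(subject_tenant: str, resource: str) -> bool:
    """Deny by default; allow only when the resource belongs to the requesting tenant."""
    owner = resources.get(resource)
    return owner is not None and owner == subject_tenant

assert may_access("customer-a", "vm-17/disk0")      # own resource: allowed
assert not may_access("customer-a", "db/orders")    # cross-tenant: denied
assert not may_access("customer-a", "unknown/res")  # unassigned resource: denied
```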
administrators stealing and disclosing data. This risk is hard to mitigate, since security controls need to strike a balance between the power needed to administrate and
or transported data.
– Security administrators can design and define policies but cannot play any
– Customer employees can access their respective data and systems (or parts thereof) but cannot access infrastructure
or data owned by different customers.
This so-called privileged identity management system is starting to be implemented
outsourced data [20].
Trustworthy Clouds Underpinning the Future Internet
3.3 Failures of the Cloud Management Systems
For building such resilient systems, important tools are data replication, atomic updates of replicated management data,
and integrity checking of all data received (see, e.g., [24]). In the longer run, usage of multiple clouds may further
improve resiliency (e.g., as pursued by the TCLOUDS project, www.tclouds-project.eu, or proposed in [11]).
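Integrity checking of replicated management data can be sketched with a keyed hash; the record format and the pre-shared key below are purely illustrative, and key distribution is out of scope here.

```python
import hashlib
import hmac

KEY = b"shared-management-key"  # assumed to be distributed securely out of band

def protect(record: bytes) -> bytes:
    """Compute an authentication tag for a management record."""
    return hmac.new(KEY, record, hashlib.sha256).digest()

def verify(record: bytes, tag: bytes) -> bool:
    """Accept a received record only if its tag checks out."""
    return hmac.compare_digest(protect(record), tag)

record = b"placement: vm-17 -> host-3"
tag = protect(record)
assert verify(record, tag)                             # intact copy accepted
assert not verify(b"placement: vm-17 -> host-9", tag)  # corrupted copy rejected
```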
Data corruption may not be detected for a long time. Data leakage by skilled insiders is unlikely to be detected.
Furthermore, the operational state and potential problems are usually not communicated to the customer except after an outage has occurred
and no data is corrupted or leaked. In practice, these problems are largely unsolved. Cryptographers have designed schemes such as homomorphic encryption [9] that allow verifiable
computation on encrypted data. However, the proposed schemes are too inefficient and do not meet the complete range of privacy requirements [23].
identifiable data (PID). In Europe, Article 8 of the European Convention on Human Rights (ECHR) provides a right to respect for one's "private and family
limited collection of data, the authorization to collect data either by law or by informed consent of the individual whose data are processed (the "data subject"),
the right to correction and deletion as well as the necessity of reasonable security safeguards for the collected data
Since cloud computing often means outsourcing data processing, the user as well as the data subject might face risks of data loss, corruption or wiretapping
due to the transfer to an external cloud provider. Related to these de facto obstructions with regard to the legal requirements, there are three particular challenges
responsibilities and liabilities concerning the data. This means that the user must be able to control
and comprehend what happens to the data in the cloud and which security measures are deployed. Therefore, the utmost transparency
happens to his data, where they are stored and who accesses them. Also, the cloud service provider could prove to have an appropriate level of security measures
Unlike local data centers residing in a single country, such cloud infrastructures often extend over
So to avoid unwanted disclosure of data, sufficient protection mechanisms need to be established. These may also extend to the level of technical solutions
such as encryption, data minimization or enforcement of processing according to predefined policies.
4 Open Research Challenges
Furthermore, data generated by systems need to be assigned to one or more customers to enable access to critical data such as logs and monitoring data
A particularly hard challenge will be to reduce the amount of covert and side channels. Today, such channels are often frozen in hardware
Today, regulations often mandate that data needs to be processed in a particular country. This does not align well with today's cloud architectures
and data integrity through authentication. However, we expect that they will then move on to the harder problems
Controlling data in the cloud: outsourcing computation without outsourcing control. In: ACM Workshop on Cloud Computing Security (CCSW'09), pp. 85-90.
Secure Outsourcing of Data and Arbitrary Computations with Lower Latency. In: Acquisti, A., Smith, S., Sadeghi, A.-R. (eds.)
Data Usage Control in the Future Internet Cloud
Michele Bezzi and Slim Trabelsi, SAP Labs, 06253 Mougins, France
collected data, but generally not providing a really efficient solution. Technical solutions are missing to help
and support the legislator, the data owners and the data collectors in verifying the compliance of the data usage
conditions with the regulations. Recent studies address these issues by proposing a policy-based framework to express data handling conditions
and enforce the restrictions and obligations related to the data usage. In this paper, we first review recent research findings in this area, outlining
the current challenges. In the second part of the paper, we propose a new perspective on how the users can control
of their data stored in a remote server or in the cloud. We introduce a
In the cloud, data may flow around the world, ignoring borders, across multiple services, all in total transparency.
In fact, when data cross borders, they have to comply with privacy laws in every jurisdiction,
handling data, when usage conditions are uncertain. To face these challenges, the concept of the sticky policy has been introduced [5],
expressing that the data should be used for specific purposes only, or that the retention period should not exceed 6 months, or that the user should be notified
when data are transferred to a third party. The sticky policy is propagated with the information throughout its lifetime, and data processors along
the supply chain of the cloud have to handle the data in accordance with their
– Providing the data owner with a user-friendly way to express their preferences, as well as to verify the privacy policy the data are collected with
– Developing mechanisms to enforce these sticky policies in ways that can be verified and audited
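As a rough illustration, the conditions carried by such a sticky policy (usage purposes, downstream usage for third parties, obligations) can be reduced to a plain data structure with a purpose check; the field values below are invented, and real PPL policies are expressed in a much richer, XML-based language.

```python
# Invented example values; a real PPL sticky policy is an XML document.
sticky_policy = {
    "purpose": {"payment", "delivery"},       # allowed usage purposes
    "downstream": {"purpose": {"delivery"}},  # nested policy for third parties
    "obligations": ["notify_on_share", "delete_cc_after_payment"],
}

def purpose_allowed(policy: dict, purpose: str, downstream: bool = False) -> bool:
    """Check a requested usage purpose against the (possibly nested) policy."""
    scope = policy["downstream"] if downstream else policy
    return purpose in scope["purpose"]

assert purpose_allowed(sticky_policy, "payment")
assert not purpose_allowed(sticky_policy, "marketing")
# A third party receiving the data is bound by the stricter nested policy:
assert not purpose_allowed(sticky_policy, "payment", downstream=True)
```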
which combines access and data handling policies; we then describe the corresponding policy engine, enabling the deploy
In particular, the current framework lacks mechanisms to provide the data owner with a guarantee that policies and obligations are actually enforced.
to more complex data such as preferences, friends' lists, photos.
Fig. 1. PPL high-level architecture
Service providers describe how the users' data are handled using a privacy policy,
which is, more or less explicitly, presented to users during the data collection phase. Privacy
policies are typically composed of a long text written in legal terms that is rarely fully understood,
which their data are handled. Therefore, there is a need to support the user in this process, providing an
conditions of access and usage of the data. A PPL policy can be used by a service
provider to describe his privacy policies (how the data collected will be treated and with whom they will be shared),
the use of his data (who can use it and how it should be treated). Before disclos
which is bound to the data (sticky policy) and travels with them. In fact, this sticky policy will be sent to the server
and follows the data throughout their lifecycle to specify the usage conditions. The PPL sticky policy defines the following conditions:
– Data Handling: the data handling part of the language defines two conditions:
• Purpose: expressing the purpose of usage of the data. Purpose can be, for example, marketing, research, payment, delivery, etc.
• Downstream usage: supporting a multilevel nested policy describing the data handling conditions that are applicable for any third party collecting the data from the server. This nested policy is applicable when a server storing personal data decides to share the data with a third party.
– Obligations: obligations in sticky policies specify the actions that should be carried out after collecting or storing data, for example notification to the user whenever his data are shared with a third party, or deleting the credit card number after the payment transaction is finished, etc.
Introducing PPL policies requires the design of a new framework for the processing of such privacy rules. In particular, it is important to stress that during the lifecycle of personal data, the same actor may play the role of both data collector
and data provider. For this reason, PrimeLife proposed the PPL engine, based on a symmetric architecture where any data collector can become a data provider
if a third party requests some data (see Figure 1). According to the role played by an entity (data provider or data collector), the engine behaves differently by
invoking the appropriate modules. In more detail, on the data provider side (the user), the modules invoked are:
– The access control engine: it checks if there is any access restriction for the data before sending it to any server.
For example, we can define black or white lists of websites with which we do not want to exchange our personal information.
– Policy matching engine: after verifying that a data collector is on the white
list, a data provider recovers the server's privacy policy in order to compare it to its preferences and verify
whether they are compatible in terms of data handling and obligation conditions. The result of this matching may be
displayed through a graphical interface, where a user can clearly understand how the information is handled
if he accepts to continue the transaction with the data collector. The result of the matching conditions,
as agreed by the user, is transformed into a sticky policy. On the data collector side, after recovering the personal information with its
sticky policy, the invoked modules are:
– Event handler: it monitors all the events related to the usage of the collected
data. These event notifications are handled by the obligation engine in order to check if there is any trigger related to an event.
of a data item, the event handler will notify the obligation engine whenever an
access (read, write, modification, deletion, etc.) to the data is detected, in order to keep track of this access.
– Obligation engine: it triggers all the obligations required by the sticky policy.
If a third party requests some data from the server,
the latter becomes a data provider and acts as a user-side engine invoking access control and matching
modules, and the third party plays the role of data collector, invoking the obligation engine and the event handler.
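The interplay between the event handler and the obligation engine on the data collector side can be sketched as follows; the event and obligation names are hypothetical, not taken from the PPL specification.

```python
# Hypothetical mapping from usage events to obligations in the sticky policy.
obligations = {
    "share_with_third_party": "notify_data_owner",
    "payment_finished":       "delete_credit_card_number",
}

triggered = []  # actions the obligation engine has carried out

def obligation_engine(event: str) -> None:
    """Trigger the obligation bound to the observed event, if any."""
    action = obligations.get(event)
    if action is not None:
        triggered.append(action)

def event_handler(event: str) -> None:
    """Monitor usage events and hand each one to the obligation engine."""
    obligation_engine(event)

event_handler("share_with_third_party")
event_handler("read_access")       # tracked, but no obligation bound to it
event_handler("payment_finished")
assert triggered == ["notify_data_owner", "delete_credit_card_number"]
```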
3 Open Challenges
Although the PPL framework represents an important advancement in fulfilling many privacy requirements of the cloud scenario, there are still some issues
Firstly, in the current PPL framework, the data owner has no guarantee of actual enforcement of the data handling policies and obligations.
Indeed, the data collector may implement the PPL framework, thus having the technical capacity of processing the data according to the attached policies,
but it could always tamper with this system, which it controls, or simply access the
data directly without using the PPL engine. In practice, the data owner has to trust the data collector to behave honestly.
A second problem relates to the scalability of the sticky policy approach. Clearly, the policy processing adds a significant computational overhead.
Its applicability to realistic scenarios, where large amounts of data have to be transmitted and processed, has to be investigated.
A last issue relates to the privacy business model. The main question is: what should motivate the data collectors/processors to implement such a technology?
Actually, in many cases, their business model relies on the as-less-restricted-as-possible use of private data.
On the user side, a related question is: are the data owners ready to pay for privacy [9]?
Both questions are difficult to address, especially when dealing with such a loosely defined concept as privacy.
Although studies exist (see [11, 3] and references therein), mainly in the context of the web,
data, balancing the value of his personal data with the services obtained. As a matter of fact, users have difficulty monetizing the value of their personal
information, and they tend to disclose their data quite easily. In the cloud world, organizations store the data they have collected (under specific restrictions) with
the cloud provider. These data have a clear business value, and typically companies can evaluate the amount of money they are risking
if such data are lost or made public. For these reasons, it is likely that they are ready to pay for a
stronger privacy protection. All these issues need further research to be addressed. In the next section, we present our initial thoughts on how we may extend the PrimeLife
In the current PPL framework, there is no guarantee of enforcement of the data handling policies and obligations.
the user's control over the released data. The main idea is to introduce a tamper-proof [6] obligation engine and event handler, certified by a trusted third party,
which mediate the communication and the handling of private data in the cloud platform. The schedule of the events,
with a tamper-proof event handler and a tamper-proof obligation engine certified
If the data owner has the guarantee from a trusted authority (governmental office, EU commission
requirements, he will tend to transfer his data to the certified host. In order to certify the compliance of an application,
if the stored data are handled correctly. The difficulty comes from the access to the database by the service provider.
The particularity of this API is that all the methods to access the data can be
data and sticky policy) this action should be detected, managed and logged by the event handler.
any data owner, who, once authenticated, can list all the data (or sets of data) with their related events and pending or enforced obligations.
The data owner can at any time control how his data are handled, under which conditions the
information is accessed, and compare them with the corresponding stored sticky policy. Fig. 3 shows a very simple example of how the remote administrative
-trol to the data hosted within the cloud. It also allows the user to detect any
improper usage of his data, and, in this case, notify the host or the trusted authority
First, from the data owner's perspective, there is a guarantee that actual enforcement has taken place and that he can monitor the status of his data and corresponding policies.
Second, from the auditors' point of view, it limits the perimeter of their analysis, since the confidence zone provided by the tamper-proof elements and the standardized API
data usage, agreed upon collection, may be lost in the lifecycle of the personal data. From the data consumer point of view, businesses and organizations seek to
ensure compliance with the plethora of data protection regulations, and minimize the risk of violating the agreed privacy policy
The concept of sticky policy may be used to address some of the privacy requirements of the cloud scenario.
a high level of trust in the data collector/processor. We presented some initial thoughts about how this problem can be mitigated through the usage of tamper-proof components.
3. Bonneau, J., Preibusch, S.: The privacy jungle:
Privacy-enabled management of customer data. In: Dingledine, R., Syverson, P.F. (eds.) PET 2002.
W3C Workshop on Privacy and Data Usage Control, p. 5 (October 2010), http://www.w3.org/2010/policy-ws
status data properly after the VCT is provisioned and while the testing is in progress. Figure 5 displays this condition, where the System Under Test (SUT) is our algorithm.
to a single application and that implement a general data transport service are designated as routing slices [13].
they permit data transport resources to be accessed without knowledge of their physical or network location
and data exchange among providers (e.g., [8]). Intrusion detection systems can increase situational awareness (and with this, overall security) by sharing infor
real-time multimedia streaming are much stricter than those of bulk data transfer. IEEE 802.16d [5], which the employed WiMAX testbed is based on,
the user premises - one for sending data to the uplink and receiving the downlink
monitoring data and the usage of service-level adaptation actions for efficient network adaptation. The NECM of the WiMAX BS constantly monitors network device statistics (e.g.
The Service-level NECM undertakes to collect service-level data. The Service-level NECM could be placed at the service provider's side, or even at the premises
The decision making engine of the NDCM filters the collected monitoring data from the network and the service level
distributed data.
• An addressing scheme, where identity and location are not embedded in the same
search through vast amounts of monitoring data to find any "inconveniences" in his network behaviour and to ensure proper services' delivery.
the sharing and transmission of data over practically any medium. The Internet's infrastructure is essentially an interconnection of several heterogeneous networks.
Indeed, IT resources are processing data that should be transferred from the user's premises or from the data repository to the computing resources
and the data deluge will fall into it; the communication model offered by the Internet may break the hope for
This concept has little to do with the way data is processed or transmitted internally, while enabling the creation of containers with associated non
It also connects heterogeneous data resources in an isolated virtual infrastructure. Furthermore, it supports scaling
accessing their status and data online. Mobile platforms will need to access external data and functionality in order to meet consumer expectations for rich interactive
seamless experiences. Thus, a second driving requirement for the Internet of Services is to provide a uniform conduit between the Future Internet architectural elements
Linked Data is the Semantic Web in its simplest form and is based on four principles:
• Use URIs (Uniform Resource Identifiers) as names for things
Given the growing take-up of Linked Data for sharing information on the Web at large scale, a discussion has begun on the relationship between this technology
Budapest both contained sessions on Linked Data. The final chapter in this section, Domingue et al., "Fostering a Relationship Between Linked Data and the Internet of
Services", discusses the relationship between Linked Data and the Internet of Services. Specifically, the chapter outlines an approach which includes a lightweight ontology
and a set of supporting tools. John Domingue. J. Domingue et al. (Eds.): Future Internet Assembly, LNCS 6656, pp. 327-338, 2011.
and data service support to other enterprise services and lines of business. This brings varied expectations of availability, mean-time-to
Taking a holistic cost view, it provides fine-grained SLA-based data to influence future investment decisions based on capital, security, compute power and
to sensitive data on the health status of the citizens and quite challenging for the key
trends in the real data extracted from the past behaviours of the systems at the service
of data transfer between links. The main difference between these two layers is that the Net-Ontology layer is responsible for supporting service needs beyond
simple data transfers. These layers, compared with the TCP/IP layers, are represented in Fig. 1,
protocols generally can just send information in the data field and do not support
the needs of the data flow that will start. With the understanding of application
and the data is sent through the layers, also using raw sockets. At the current stage of development, the implementation of the FINLAN library
Fostering a Relationship between Linked Data and the Internet of Services John Domingue1, Carlos Pedrinaci1, Maria Maleshkova1, Barry Norton2, and
We outline a relationship between Linked Data and the Internet of Services which we have been exploring recently.
Linked Data is a lightweight mechanism for sharing data at web-scale which we believe can facilitate the management and use of service-based components within global networks.
Keywords: Linked Data, Internet of Services, Linked Services
1 Introduction
The Future Internet is a fairly recent EU initiative which aims to investigate scientific
The Web of Data is a relatively recent effort derived from research on the Semantic
whose main objective is to generate a Web exposing and interlinking data previously enclosed within silos.
Like the Semantic Web, the Web of Data aims to extend the current human-readable Web with data formally represented so that software
agents are able to process and reason with the information in an automatic and
-ing of data on the Web. From a Future Internet perspective, a combination of service-orientation and Linked
Data provides possibilities for supporting the integration, interrelationship and inter -working of Future Internet components in a partially automated fashion through the
-spective, Linked Data, with its relatively simple formal representations and inbuilt support for easy access and connectivity, provides a set of mechanisms supporting
Data is increasingly gaining interest within industry and academia. Examples include research on linking data from RESTful services by Alarcon et al. [3],
work on exposing datasets behind Web APIs as Linked Data by Speiser et al. [4], and Web APIs providing results from the Web of Data, like Zemanta.
We see that there are possibilities for Linked Data to provide a common "glue" as
services descriptions are shared amongst the different roles involved in the provision aggregation, hosting and brokering of services.
In some sense, service descriptions exposed as, and interlinked with, Linked Data are complementary to SAP's Unified Service Description Language [5], within their proposed Internet of Services framework, as it provides appropriate means for exposing services and their relationships with providers.
In this paper we discuss the relationship between Linked Data and services based on our experiences in a number of projects. At the end of the paper we propose a generalization of Linked Data and service principles for the Future Internet.
2 Linked Data
The Web of Data is based upon four simple principles, known as the Linked Data principles [6], which are:
1. Use URIs (Uniform Resource Identifiers) as names for things.
2. Use HTTP URIs so that people can look up those names.
3. When someone looks up a URI, provide useful information, using the standards (RDF, SPARQL).
4. Include links to other URIs, so that people can discover more things.
RDF (Resource Description Framework) is a simple data model for semantically describing resources on the Web. Binary properties interlink terms, forming a directed graph. These terms, as well as the properties, are described by using URIs.
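As a rough illustration of this data model (using plain Python tuples and made-up example URIs, not any RDF library's actual API), triples form a directed graph that can be queried by pattern matching with variables, in the style that SPARQL standardises:

```python
# Minimal sketch of the RDF data model: (subject, predicate, object) triples
# form a directed graph. All example.org URIs below are illustrative only.

triples = [
    ("http://example.org/service/geocode", "http://purl.org/dc/terms/title", "Geocoding service"),
    ("http://example.org/service/geocode", "http://www.w3.org/1999/02/22-rdf-syntax-ns#type", "http://example.org/vocab#Service"),
    ("http://example.org/service/weather", "http://www.w3.org/1999/02/22-rdf-syntax-ns#type", "http://example.org/vocab#Service"),
]

def match(pattern, graph):
    """SPARQL-style triple-pattern matching: terms starting with '?' are
    variables; returns a variable-binding table (list of dicts)."""
    bindings = []
    for triple in graph:
        row = {}
        for term, value in zip(pattern, triple):
            if term.startswith("?"):
                row[term] = value        # bind the variable
            elif term != value:
                break                    # constant term does not match
        else:
            bindings.append(row)
    return bindings

# all things typed as a Service
services = match(("?s", "http://www.w3.org/1999/02/22-rdf-syntax-ns#type",
                  "http://example.org/vocab#Service"), triples)
```

A real deployment would of course use an RDF store and SPARQL engine; the point here is only the shape of the data and of the variable-binding results.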
SPARQL is a query language for RDF data which supports querying diverse data sources, with the results returned in the form of a variable-binding table, or an RDF graph.
Since the Linked Data principles were outlined in 2006, there has been a large uptake, impelled most notably by the Linking Open Data project supported by the W3C Semantic Web Education and Outreach Group. As of September 2010, the coverage of the domains in the Linked Open Data
Cloud is diverse (Figure 1). The cloud now has nearly 25 billion RDF statements and over 400 million links between data sets that cover media, geography, academia, life sciences and government data sets.
Fig. 1. Linking Open Data cloud diagram as of September 2010, by Richard Cyganiak and Anja Jentzsch
From a government perspective, significant impetus to this followed Gordon Brown's announcement, when he was UK Prime Minister, on making Government data freely available to citizens through a specific Web of Data portal, facilitating the creation of a diverse set of citizen-friendly applications.
4 http://esw.w3.org/SweoIG/TaskForces/CommunityProjects/LinkingOpenData
5 http://lod-cloud.net/
6 http://www.silicon.com/management/public-sector/2010/03/22/gordon-brown-spends-30m-to-plug-britain-into-semantic-web-39745620/
7 http://data.gov.uk/
On the corporate side, the BBC has been making use of RDF descriptions for some time. BBC Backstage allows developers to make use of BBC programme data available as RDF.
4 Linked Services
The advent of the Web of Data, together with the rise of Web 2.0 technologies, led us to propose two complementary notions:
1. Publishing service annotations within the Web of Data, and
2. Creating services for the Web of Data, i.e., services that process Linked Data and/or generate Linked Data.
We have since devoted significant effort to refining the vision [10] and implementing diverse aspects of it, such as the annotation of services and the publication of service annotations as Linked Data [11,12],
as well as on wrapping, and openly exposing, existing RESTful services as native Linked Data producers, dubbed Linked Open Services [13,14]. It is worth noting in this respect that these approaches and techniques are different means contributing to the same vision, and are not to be considered by any means the only possible approaches. What is essential, though, is exploiting the complementarity of services and the Web of Data through their integration based on the two notions highlighted above.
The first layer concerns existing services and Web APIs, for which we provide, in essence, a Linked Data-oriented view over existing functionality exposed as services.
Fig. 2. Services and the Web of Data
by interpreting their semantic annotations (see Section 4.1). In this way, data from legacy systems, state-of-the-art Web 2.0 sites, or sensors, which do not directly conform to Linked Data principles, can easily be made available as Linked Data.
In the second layer are Linked Service descriptions. These are annotations describing various aspects of the service, which may include the inputs and outputs and the functionality. Following Linked Data principles, these are given HTTP URIs, are described in terms of lightweight RDFS vocabularies, and are interlinked with existing Web vocabularies.
Service descriptions are made available in the Linked Data Cloud through iServe; these are described in more detail in Section 4.1.
The final layer in Figure 2 concerns services which are able to consume RDF data, using these newly obtained RDF triples combined with additional sources of data. Such an approach, based on the ideas of semantic spaces, ranges from simple data-manipulation functionality up to highly complex processing beyond data fusion that might even have real-life side-effects. This approach to constructing Linked Data applications is therefore more generally applicable than that of current data-integration-oriented mashup solutions.
We expand on the second and third layers in Figure 2 in more detail below: descriptions of Linked Services, allowing them to be published on the Web of Data, and the use of these annotations for better supporting the discovery and composition of services.
Fig. 3. Conceptual model for services used by iServe
We use the Web of Data as background knowledge so as to identify and reuse existing vocabularies. Doing so simplifies the annotation and keeps it linked to existing sources of Linked Data. The annotation tools are connected to iServe for one-click publication. iServe is the first system to publish service annotations as Linked Data, as well as the first to provide advanced discovery over Web APIs comparable to that available for WSDL-based services.
Service descriptions are exposed following the Linked Data principles, and a range of advanced service analysis and discovery techniques are provided on top. Since every service publication is linked based on Linked Data principles, application developers can easily discover services able to process or provide certain types of data,
and other Web systems can seamlessly provide additional data about service descriptions in an incremental and distributed manner through the use of Linked Data principles. One example is LUF (Linked User Feedback) [16], which links service descriptions with users' ratings, tags and comments about services in a separate server.
In summary, the fundamental objective pursued by iServe is to provide a platform for the publication and discovery of Linked Services.
Services that Produce and Consume Linked Data
In this section we consider the relationship between service interactions and Linked
Data; that is, how Linked Data can facilitate the interaction with a service and how the result can contribute to Linked Data. In other words, this section is not about annotating service descriptions by means of ontologies and Linked Data, but about how services should be implemented on top of Linked Data in order to become first-class citizens of the quickly growing Linking Open Data Cloud.
Note that we take a purist view of the type of services which we consider. These services should take RDF as input and make their results available as RDF; i.e., services consume Linked Data and services produce Linked Data. Although this could be considered restrictive, one main benefit is that everything is instantaneously available in machine-readable form. Within existing work on Semantic Web Services, considerable effort is often expended in lifting from a syntactic description to a semantic representation, and lowering from a semantic representation back again; here, instead, RDF allows both client and platform to interpret the messages directly, following Linked Data principles.
18 http://soa4all.isoco.net/spices/about
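A minimal sketch of such a "purist" Linked Service follows. The WGS84 namespace is a real vocabulary and the GeoNames URI is a real resource identifier; the hemisphere predicate is invented purely for the example:

```python
# Sketch of the "purist" Linked Service contract: RDF triples in, RDF triples out.
# The example.org hemisphere predicate is an illustrative assumption.

GEO = "http://www.w3.org/2003/01/geo/wgs84_pos#"

def hemisphere_service(input_triples):
    """Consume Linked Data (a point with a wgs84 latitude) and produce
    Linked Data (a derived triple stating the hemisphere). No lifting or
    lowering step is needed, since both sides are already RDF."""
    output = []
    for s, p, o in input_triples:
        if p == GEO + "lat":
            hemi = "Northern" if float(o) >= 0 else "Southern"
            output.append((s, "http://example.org/vocab#hemisphere", hemi))
    return output

point = "http://sws.geonames.org/2950159/"   # a GeoNames resource URI
result = hemisphere_service([(point, GEO + "lat", "52.52"),
                             (point, GEO + "long", "13.40")])
```

Because the output reuses the input's subject URI, the derived triples link straight back into the Linking Open Data Cloud.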
Consider the services offered over the GeoNames data set, a notable and "lifelong" member of the Linking Open Data Cloud, which are offered primarily using JSON- and XML-encoded messaging. A simple example is given in Table 1; it conveys neither the result's internal semantics nor its interlinkage with existing data sets.
In Linked Data, on the other hand, GeoNames itself provides a predicate and values for country codes, and the WGS84 vocabulary is used widely for latitude and longitude (across Linked Data sets) [20], but the string value does not convey this interlinkage. A solution more in keeping with the Linked Data principles, as seen in our version of these services, uses the same languages and technologies in the implementation, including:
• reusing URIs from Linked Data sources for representing features in input and output messages
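The "lifting" of a JSON-style result into triples that reuse GeoNames resource URIs and the WGS84 vocabulary could be sketched as follows; the JSON shape here is a simplified stand-in, not the actual GeoNames response format:

```python
import json

# Sketch: lift a JSON service result into Linked Data, reusing GeoNames
# resource URIs and the WGS84 vocabulary instead of bare strings.
# The payload shape is a simplified stand-in for a real GeoNames response.

raw = '{"geonameId": 2950159, "lat": "52.52", "lng": "13.40", "countryCode": "DE"}'

def lift(payload):
    rec = json.loads(payload)
    # a dereferenceable GeoNames URI, rather than an opaque numeric id
    subject = "http://sws.geonames.org/%d/" % rec["geonameId"]
    geo = "http://www.w3.org/2003/01/geo/wgs84_pos#"
    return [
        (subject, geo + "lat", rec["lat"]),
        (subject, geo + "long", rec["lng"]),
        (subject, "http://www.geonames.org/ontology#countryCode", rec["countryCode"]),
    ]

triples = lift(raw)
```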
To make this relationship more useful as Linked Data, the approach of Linked Data Services (LIDS) [25] is to URL-encode the input.
This forms a resource identifier. The URI is then used as the subject of such a triple, encoding the relationship. As discussed on the Linked Data and Services mailing list, a URI representing the input is returned using the standard Content-Location HTTP header field.
Where the input cannot conveniently be URL-encoded, it can first be POSTed as a new resource (Linked Data and Linked Data Services so far concentrate on resource retrieval, and therefore primarily on HTTP GET), which offers the greatest familiarity and ease for Linked Data developers. It is not without precedent in semantic service description [26].
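The two conventions just described (URL-encoding the input into a dereferenceable URI, and returning that URI via a Content-Location header) can be sketched without real HTTP. The endpoint and parameter names are hypothetical:

```python
from urllib.parse import urlencode

# Sketch of the LIDS-style conventions; example.org endpoint and parameter
# names are invented for illustration. No network I/O is performed.

BASE = "http://example.org/lids/nearby"

def lids_uri(lat, lng):
    """LIDS style: URL-encode the input so the request itself is a
    dereferenceable URI that can serve as the subject of output triples."""
    return BASE + "?" + urlencode({"lat": lat, "lng": lng})

def handle_post(lat, lng):
    """Alternative style: treat the input as a resource and point at it
    with the standard Content-Location header (headers modelled as a dict)."""
    input_uri = lids_uri(lat, lng)
    headers = {"Content-Location": input_uri}
    # the input URI becomes the subject of the result triples
    body = [(input_uri, "http://example.org/vocab#nearestCity",
             "http://sws.geonames.org/2950159/")]
    return headers, body

headers, body = handle_post("52.52", "13.40")
```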
Conclusions
In this paper we have outlined how Linked Data provides a mechanism for describing services in a machine-readable fashion and enables service descriptions to be seamlessly connected to other Linked Data. We have also described a set of principles for how services should consume and produce Linked Data in order to become first-class Linked Data citizens.
From our work thus far, we see that integrating services with the Web of Data, as
depicted before, will give birth to a services ecosystem on top of Linked Data, whereby developers will be able to collaboratively and incrementally construct complex systems exploiting the Web of Data by reusing the results of others. The systematic development of complex applications over Linked Data in a sustainable, efficient and robust manner shall only be achieved through reuse. We believe that our approach is a particularly suitable abstraction to carry this out at Web scale.
We also believe that Linked Data principles and our extensions can be generalized to the Internet of Services; that is, to scenarios where services sit within a generic Internet platform rather than on the Web, with the underlying Linked Data exposed at the network level. We are also hopeful that our approach provides a viable starting point for this.
Acknowledgements. This work was funded partly by the EU project SOA4All (FP7
Open Access. This article is distributed under the terms of the Creative Commons Attribution
References
3. Linking Data from RESTful Services. In: Workshop on Linked Data on the Web at WWW 2010 (2010)
4. Speiser, S., Harth, A.: Taking the LIDS off Data Silos. In: 6th International Conference on Semantic Systems (I-SEMANTICS) (October 2010)
5. Cardoso, J., Barros, A., May, N., Kylau, U.:
6. Berners-Lee, T.: Linked Data - Design Issues (July 2006), http://www.w3.org/DesignIssues/LinkedData.html
7. Fielding, R.T.:
Services and the Web of Data: An Unexploited Symbiosis. In: AAAI Spring Symposium "Linked Data Meets Artificial Intelligence", March 2010. AAAI Press, Menlo Park (2010)
10. Pedrinaci, C., Domingue, J.: Toward the Next Wave of Services: Linked Services for the Web of Data. Journal of Universal Computer Science 16(13), 1694-1719 (2010)
11. Maleshkova, M., Pedrinaci, C., Domingue, J.:
Consuming Dynamic Linked Data. In: 1st International Workshop on Consuming Linked Data (November 2010)
15. Benslimane, D., Dustdar, S., Sheth, A.: Services Mashups: The New Generation of Web Applications. IEEE Internet Computing 12(5), 13-15 (2008)
25. Towards Linked Data Services. In: Int'l Semantic Web Conference Posters and Demonstrations (November 2010)
26.
and provision of very high-volume video data
• Second, the development of advanced networking technologies in the access and
no duplicates), making it free for (other) data (e.g., more enhancement layers). The key innovations of this approach to service/content adaptation are distributed Management, Control and Data Planes (MPL, CPL, DPL), parallel and cooperating (not represented explicitly in the picture).
The upper data plane interfaces at the CAN layer and transport the packets between the VCAN layer and the Home-Box layer in both directions.
The media data flows are classified intelligently at ingress MANEs and associated to the appropriate VCANs in order to be processed accordingly. The MANEs are instructed how to classify the data packets based on information such as: VCAN IDs, content description metadata, headers to analyse, QoS class information, policies, PHBs. In turn, the SM@SP instructs the SP/CP servers how to mark the data packets.
The data packets are analysed by the classifier, assigned and forwarded to one of the VCANs for further processing. The processing of the MANE in the data plane is based on deep analysis of the first packets of a flow.
1) data confidentiality, integrity and authenticity; and 2) intelligent and distributed access control policy-based enforcement by MANE routers over data in motion. Such security enforcement will be done according to policies and filtering rules obtained from the CANMGR.
inspection of data traversing a security element placed in a specific point in the network.
distinctive types of data: wavelet coefficients representing the texture information remaining after the wavelet transform
Finally, the resulting data are mapped into the scalable stream, forming the compressed data. This layered representation provides the basis for low-complexity adaptation of the compressed bit-stream.
Approaches for generating multiple descriptions include data partitioning (e.g., even/odd sample or DCT coefficient splitting) [5], multiple description (MD) quantization
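The simplest of these data-partitioning schemes, even/odd sample splitting, can be sketched as follows. The interpolation-based concealment step shown is one common choice when a description is lost, not the only one:

```python
# Sketch of the simplest MDC data-partitioning scheme: even/odd sample
# splitting. Each description is independently usable; a lost description
# is concealed by interpolating from the surviving one.

def split_even_odd(samples):
    """Partition a sample sequence into two descriptions."""
    return samples[0::2], samples[1::2]

def reconstruct(even, odd=None):
    """Merge both descriptions, or interpolate when the odd one is lost."""
    if odd is None:                      # odd description lost: conceal
        out = []
        for i, s in enumerate(even):
            out.append(s)
            nxt = even[i + 1] if i + 1 < len(even) else s
            out.append((s + nxt) / 2)    # linear interpolation
        return out
    merged = []                          # both descriptions received
    for e, o in zip(even, odd):
        merged.extend([e, o])
    return merged

even, odd = split_even_odd([10, 12, 14, 16])
```

With both descriptions, reconstruction is exact; with only the even description, the odd samples are approximated, which is precisely the graceful-degradation property MDC targets.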
context can be constructed by learning from data. In the target representation scheme, metadata is divided into three levels:
built from a small amount of training data. Semantic inference and reasoning is then performed based on the model to decide the relevance
scheme for its semantic context is learned directly from data and will not be restricted to the pre-defined semantic structures in specific application domains
metadata extracted from multimedia content into three levels (low, mid and high) according to their levels of semantic abstraction, and try to define the mapping
amount of training data. Semantic inference and reasoning is then carried out based on the learned model to decide
the data. Due to the scope of this paper, we give only a brief introduction to K2.
When an un-annotated data item is presented, the Bayesian network model derived from the training stage conducts automatic
Tech. rep., Institute for Image Data Research, University of Northumbria at Newcastle (1999), http://www.jisc.ac.uk
networks from data. Machine Learning 9(4), 309-347 (1992)
on Knowledge and Data Engineering, 665-677 (2005)
Part VIII: Future Internet Applications
access to and sharing of patient data, secure data exchange between healthcare actors, and applications for remote and collaborative diagnosis, cure and care
operators and service providers, such as networks, switching, computing and data centres.
provide a high level of data abstraction and modularity (using technologies such as COM, .NET, EJB and J2EE)
transmitting events and data generated during FINERS' operations. There is no centralised database; the information will stay with the business entity
Data flows are transferred among GSN nodes over dedicated circuits (like lightpaths or P2P links), tunnels over the Internet, or logical IP networks. The GSN Data plane corresponds to the System level, including massive physical resources, such as storage servers and application
resources in the Business, Presentation or Data Access Tier. The Tool component provides additional services, such as persistence,
a flexible set of data flows among data centers. The ability of incorporating third-party power control components is also an advantage of the IaaS Framework. Such a virtual data center can be hosted by any physical network node, according to the power availability, and migrated to follow green energy source availability, such as solar and wind.
Interconnection of mobile devices, sensors and actuators, allowing real-world urban data to be collected and analysed, will improve the ability to forecast
(2) real-time data management, alerts, and information processing, and (3) the creation of applications enabling data collection and processing, web-based collaboration, and
actualisation of the collective intelligence of citizens. The latest developments in cloud computing and the emerging Internet of Things, open data, Semantic Web, and future media technologies have much to offer. These technologies can assure economies of scale in infrastructure, standardisation of applications, a continuous flow of data and information, and offer useful services. It is here that the third
and open public data up to developers as well as user communities. As the major challenge facing European cities is to secure high living
would mash up with the city's open, public data.
4 Emerging Smart City Innovation Ecosystems
map of sensor data available on a smart phone) as well as urban waste management are two of the use cases from the SmartSantander project
and a local SME providing data access from electric cars equipped with air quality sensors (VULOG)
the IoT in an open and environmental data context, and to facilitate the co-creation of green services based on environmental data obtained via sensors. Various environmental sensors will be used, such as fixed sensors from Atmo PACA in the NCA area
environmental data; 2) to participate in the co-creation of services based on environmental data; and 3) to access services based on environmental data, such as accessing and/or visualising environmental data in real time. Three complementary approaches have already been identified as relevant for the green services use case:
participatory/user-centred design methods; diary studies for IoT experience analysis; and coupling quantitative and qualitative approaches for portal usage analysis. In this context of an open innovation and Living Lab innovation ecosystem, focus groups involving stakeholders and/or citizens may be run either online or face-to-face.
such as specific testing facilities, tools, data and user groups, can be made accessible and adaptable to the specific demands of any research and innovation project
facilities, user communities, technologies and know-how, data, and innovation methods. Such common resources can potentially be shared in open innovation environments.
will enable data transfer services agnostic to the underlying connection protocol. Furthermore, a major challenge in future urban spaces will be how to manage the in
on top of a unified model, so that data and information can be shared among different applications and services at global urban levels.
the use of semantics for the understanding, combination and processing of data and information from different service providers, sources and formats
and collaborative crowdsourcing collecting citizen-generated data. By analyzing these different Smart Cities application scenarios, together with the
multidimensional ecosystem, where data binds the different dimensions together, as most aspects are closely related (e.g. environment and traffic, and both of them to health, etc.)
sensor data (for example for energy monitoring, video surveillance or traffic control). This functionality will provide a repository where observations/sensors' data are stored to allow later retrieval or processing, and to extract information from the data by applying semantic annotation and data linkage techniques.
• Publish-Subscribe-Notify: in other cases, services rely on some specific events happening in the city (such as traffic jams or extreme pollution situations). The platform will allow services to subscribe not just to the observations provided by the sensors,
Fig. 2. High-level architecture of a USN IoT Platform
The USN-Gateway represents a logical entity acting as a data producer to the USN-Enabler that implements two main adaptation procedures to integrate physical or logical sensor networks, decoupling (measurements) data from specific SANs (i.e. ZigBee). Adaptation and homogenization are two key requirements for the USN Platform.
• The Notification Entity (NE) is the interface with any sensor data consumer that requires filtering or information processing over urban-generated data. The main functionalities provided by this entity are the subscription (receive the filter that will be applied), the analysis of the filters (analyze the filter condition) and the notification.
the sensor network, like for example a request to gather data, without the need to wait for an answer. When the desired data becomes available, it will receive the corresponding alert. This is mainly used for configuration and for calling actuators.
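The Publish-Subscribe-Notify interaction described above can be sketched as a minimal in-memory notification entity; the filter format, entity interface and pollution threshold are illustrative assumptions, not the USN-Enabler's actual API:

```python
# Minimal sketch of the Publish-Subscribe-Notify pattern: consumers register
# a filter condition and a callback; matching observations trigger alerts.
# All names and the threshold below are illustrative assumptions.

class NotificationEntity:
    def __init__(self):
        self.subscriptions = []          # list of (filter predicate, callback)

    def subscribe(self, condition, callback):
        """Register a filter (analysis of the condition happens at publish time)."""
        self.subscriptions.append((condition, callback))

    def publish(self, observation):
        """Called by a sensor/gateway; notify every matching subscriber."""
        for condition, callback in self.subscriptions:
            if condition(observation):
                callback(observation)

ne = NotificationEntity()
alerts = []
# subscribe only to extreme-pollution events (threshold is illustrative)
ne.subscribe(lambda obs: obs["pollutant"] == "NO2" and obs["value"] > 200,
             alerts.append)

ne.publish({"pollutant": "NO2", "value": 180})   # below threshold: ignored
ne.publish({"pollutant": "NO2", "value": 240})   # triggers a notification
```

The decoupling shown here is the point of the pattern: the sensor side never needs to know which services are listening, and services never poll the sensor network.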
Data Management in the Worldwide Sensor Web. IEEE Pervasive Computing, April-June (2007)
18. Panlab Project, Pan European Laboratory Infrastructure Implementation