Synopsis: ICT and Computers


Survey on ICT and Electronic Commerce Use in Companies (SPAIN-Year 2013-First quarter 2014).pdf

Moreover, the use of computers has expanded to almost the entirety of these companies (99.2%). In turn, 87.3% had a local area network (LAN) installed,

Percentage over the total number of companies with 10 or more employees, by number of employees (TOTAL / 10 to 49 / 50 to 249 / 250 or more). % of companies with:
- Computers: 99.2 / 99.1 / 99.5 / 99.8
- Local area network

enabling connection to the Internet for company use. 36.4% of these were laptop computers, and 49% were smartphones or PDA phones.

and company database servers (54.7%). 53.4% of the companies that used cloud computing did so by paying for services hosted on the servers of shared service suppliers.

of the total sales. ICT use in companies with fewer than 10 employees: 72.3% of companies with fewer than 10 employees had computers,

- Computers: 71.6 / 72.3
- Local area network: 24.0 / 24.4
- Wireless local area network: 16.4 / 17.6
- Internet connection: 65.7 / 67.7
- Broadband (fixed or mobile) Internet connection (1): 98.5 / 99.0

The Survey has been conducted by the National Statistics Institute (INE), in cooperation with the National Foundation Centre of Reference for the Application of Information and Communication Technologies based on Open Sources (CENATIC).


Survey regarding reistance to change in Romanian Innovative SMEs From IT Sector.pdf

and also large companies, and was implemented by means of computer-assisted telephone interviewing. Data collection was done over a two-month period during September-October 2014.

54.9% of companies had principal NACE code 6201 (custom software development activities, client-oriented software) and 20.9% CAEN 6202 (information technology consultancy activities),


Targetspdf.pdf

but also display higher growth.

[Figure: online shopping by citizens (% of individuals), year 2009 and increase by 2014, by country]


Tepsie_A-guide_for_researchers_06.01.15_WEB.pdf

many have highlighted that at the core of social innovations is this intention to create something better.

consultancy and so on) and as such supporting the adoption of the core 'content' of the innovation. After this initial period, their role is likely to shift

their role is to allow the core innovation to fit into a new context. But sometimes adaptations change the nature of the original innovation,

With the rapid growth of cheap, ubiquitous and powerful tools like the Internet, the World Wide Web, social media and mobile devices, new ways of carrying out social innovation have become possible.


The 2013 EU Industrial R&D Investment Scoreboard.pdf

Western Digital, the US (49.0%); Apple, the US (39.2%); Volkswagen, Germany (32.1%); Qualcomm, the US (30.7%); Huawei, China (30.3%); Google, the US (27.7%).

Google (Internet), Oracle (Software), Qualcomm (Telecom equipment), Apple (Computer Hardware) and Broadcom (Semiconductors). The performance of EU companies compared to US companies in the ICT sectors varies by subsector...

Despite lagging behind the US in the volume of R&D investment and in the number of companies, EU-based Scoreboard companies in the Software and Computer Services sector show very strong performance:

This contrasts with negative figures in the Technology Hardware & Equipment sector for EU companies (-2.3% in R&D and -9.3% in sales);

IT Hardware, Automobiles & Parts, and Pharmaceuticals & Biotechnology. Figure S6: Relative size of EU R&D in Pharma and Biotech compared to the US. Source:

Software & Computer Services 37; Automobiles & Parts 36; Technology Hardware & Equipment 29; Chemicals 24;

Banks 23; Health care Equipment & Services 20; Aerospace & Defence 18. The top 5 sectors account for 43.8% of the 527. 1473 companies based in non-EU countries; companies by country: US 658;

The 10 most numerous sectors: Technology Hardware & Equipment 264; Pharmaceuticals & Biotechnology 156; Software & Computer Services 151;

Electronic & Electrical Equipment 139; Industrial Engineering 116; Chemicals 94; Automobiles & Parts 90; Health care Equipment & Services 63;

The US is by far the strongest region in the group of high R&D-intensity sectors, including pharmaceuticals, health, software,

and technology hardware, whereas the EU and Japan are stronger in medium R&D-intensity sectors like the automotive sector (see chapter 4). Figure 1.1: R&D investment by the top 2000

Technology Hardware & Equipment; Software & Computer Services and Aerospace & Defence. Medium-high R&D-intensity sectors (between 2% and 5%) include e.g.

Electronics & electrical equipment; Automobiles & parts; Industrial engineering & machinery; Chemicals; Personal goods; Household goods;

In 2nd position is Samsung Electronics from South Korea, with Microsoft from the US 3rd. The other companies in the top ten include four from the US, two from Switzerland and one from Japan.

Western Digital, US (49.0%); Gilead Sciences, US (46.4%). Those showing the largest decrease in R&D are Renesas, Japan (-24.9%);

Microsoft (€7.9bn), Intel (€7.7bn), Merck US (€6.0bn), Johnson & Johnson (€5.8bn) and Pfizer (€5.7bn).


PFIZER INC.'S INFANT NUTRITION | PFIZER INC. | 30/11/2012 | Acq. 100%. MICROSOFT | 6164.2 | SKYPE GLOBAL SARL | SILVER LAKE PARTNERS | 13/10

NOKIA | 1700.0 | NOKIA SIEMENS | SIEMENS | 07/08/2013 | Acq. from 50% to 100%. IBM | 1559.0 | SOFTLAYER | GLOBAL INNOVATION | 08/07/2013 | Acq. 100%. ORACLE

BEIJING FOTON | DAIMLER | 18/02/2012 | Joint venture 100%. SONY | 535.5 | SO-NET ENTERTAINMENT | 20/09/2012 | Acq. from 57.974% to 95.609%. HUAWEI | 398.4 | HUAWEI SYMANTEC

SYMANTEC | 30/03/2012 | Acq. from 51% to 100%. IBM | 275.9 | ALGORITHMICS INC. | FITCH INC. | 21/10/2011 | Acq. 100%. AMGEN | 251.6 | KAI PHARMACEUTICALS | THOMAS

'S DEVELOPMENT | CSR PLC | 04/10/2012 | Acq. 100%. VOLKSWAGEN | 139.5 | MAN SE | 05/06/2012 | Acq. from 73.76% to 75.03%. INTEL | 105.8 | CRAY

Sun Microsystems) and ten companies joined the top 50 (Abbott, Amgen, Apple, Denso, Google, Huawei, Oracle, Panasonic, Qualcomm and Takeda Pharmaceuticals).

or more places but remained within the top 50 include Siemens (now 17th), IBM (now 21st), Ford Motor (now 23rd), Ericsson (now 28th), NTT (now 49th), Hewlett-Packard (now 44th),

HEWLETT-PACKARD, USA; 44. CANON, Japan; 43. TOSHIBA, Japan; 42. BOEHRINGER INGELHEIM, Germany; 41. TAKEDA PHARMACEUTICAL, Japan; 40.

IBM, USA; 20. GLAXOSMITHKLINE, UK; 19. PANASONIC, Japan; 18. CISCO SYSTEMS, USA; 17. SIEMENS, Germany; 16.

PFIZER, USA; 9. JOHNSON & JOHNSON, USA; 8. MERCK US, USA; 7. NOVARTIS, Switzerland; 6. ROCHE, Switzerland; 5. TOYOTA MOTOR, Japan; 4. INTEL, USA; 3. MICROSOFT, USA; 2. SAMSUNG ELECTRONICS, South Korea; 1. VOLKSWAGEN, Germany. R&D investment (Euro million): USA, EU, Japan, South Korea, Switzerland

2. SAMSUNG ELECTRONICS (up 31); 3. MICROSOFT (up 10); 4. INTEL (up 10); 5. TOYOTA MOTOR (down 1); 6. ROCHE (up 11); 7. NOVARTIS

13; 18. CISCO SYSTEMS (up 13); 19. PANASONIC (up 128); 20. GLAXOSMITHKLINE (down 9); 21. IBM (down 12); 22. NOKIA (down

41. TAKEDA PHARMACEUTICAL (up 31); 42. BOEHRINGER INGELHEIM (up 20); 43. TOSHIBA (down 13); 44. CANON (down 5); 45. HEWLETT-PACKARD (down

*Rank / Company / Country / Sector / R&D in 2012 (€m): 1. GOOGLE, USA, Internet, 4997.0; 2. ORACLE, USA, Software, 3675.9; 3. QUALCOMM, USA,

Taiwan, Electronic equipment, 1191.6; 12. WESTERN DIGITAL, USA, Computer hardware, 1191.5; 13. ZTE, China, Telecommunications Equipment, 1170.5; 14. VALE, Brazil, Mining, 1120.2. *These companies

namely Software & Computer Services (11.7%), Automobiles & Parts (8.9%) and Technology Hardware & Equipment (8.8%). The top R&D-investing sector, Pharmaceuticals & Biotechnology, achieved a more modest

Companies based in the EU had the highest R&D growth in the Automobiles & Parts (14.4%), Software & Computer Services (14.2%) and Industrial Engineering (12.3%) sectors.

Pharmaceuticals & Biotechnology (17.5%) and Technology Hardware & Equipment (10.2%). The main R&D shares of those based in the US are in high R&D-intensive sectors, namely Technology Hardware & Equipment (25.2%), Pharmaceuticals & Biotechnology (22.1%) and Software & Computer Services (18.2%).

These three high R&D-intensity sectors account for 65.5% of US R&D, 30% for the EU and 26% for Japan.

Out of 40 industrial sectors, the top three (Pharmaceuticals & Biotechnology, Technology Hardware & Equipment and Automobiles & Parts) account for 50.2% of the total R&D investment by the Scoreboard companies;

It is followed by the Technology Hardware & Equipment sector with a share of 16.4% (similar to last year's 16.6%) and the Automobiles & Parts sector with 15.7%, slightly higher than last year's 15.0%.

and Technology Hardware & Equipment (10.2%); in the US, Technology Hardware & Equipment (25.2%), Pharmaceuticals & Biotechnology (22.1%) and Automobiles & Parts (6.6%);

In Japan, Automobiles & Parts (26.4%), Pharmaceuticals & Biotechnology (10.8%) and Technology Hardware & Equipment (7.3%). The contribution to the total Scoreboard R&D by EU companies is 53.0% to Aerospace

& Defence, 46.1% to Automobiles & Parts and 39.5% to Industrial Engineering; the US contributes 74.4% to Software & Computer Services, 63.8% to Health care Equipment & Services and 54.0% to Technology Hardware & Equipment; and

Japan contributes 34.5% to Chemicals, 33.3% to Electronic & Electrical Equipment and 31.8% to Automobiles & Parts.

Worldwide, the Software & Computer Services sector shows the highest one-year growth rate (11.8%), followed by the Industrial Engineering (9.8%), Automobiles & Parts (8.9%) and Technology Hardware & Equipment (8.8%) sectors.

Among the companies based in the EU, the Automobiles & Parts sector shows the highest one-year growth rate (14.4%),

followed by the Software & Computer Services (14.2%) and Industrial Engineering (12.3%) sectors. Sectors showing the lowest one-year R&D growth are Banks (for which only the EU companies report R&D, -6.8%), Fixed Line Telecom (-4.6%)

and Technology Hardware & Equipment (-2.3%). Among the companies based in the US,

the Technology Hardware & Equipment sector shows the highest one-year growth rate (14.8%), followed by Software

& Computer Services (12.6%) and Industrial Engineering (9.4%). Sectors showing the lowest one-year R&D growth are Food Producers (-12.4%) and Leisure Goods (-4.6%). For Japanese companies,

[Figure: sector composition of R&D (0%-100%) for Japan, the US and the EU; sectors: Pharmaceuticals & Biotechnology, Technology Hardware & Equipment, Automobiles & Parts, Software & Computer Services, Electronic & Electrical Equipment, Industrial Engineering, Chemicals, Aerospace & Defence, General Industrials, Leisure Goods, Other. Source:]

Japan-353. R&D change (%), 1 year / 3 years: 1. Software & Computer Services: 11.8, 14.2, 10.0, 12.6, 10.4, -4.7, -8.4

12.6, -2.6, 5.1, 6.4, 5.3; 4. Technology Hardware & Equipment: 8.8, -2.3, 1.4, 14.8

in particular the Technology Hardware & Equipment (8.8% vs. 1.9%) and Industrial Engineering (9.8% vs. 3.5%) sectors. The opposite happened for the Electronic & Electrical Equipment

Pharmaceuticals & Biotechnology, the IT sectors (Software & Computer Services and Technology Hardware & Equipment) and Leisure Goods. The sector with the lowest R&D intensity is Oil & Gas Producers (0.3

the R&D intensity of EU companies is larger than that of the US and Japan in 6 sectors (Software & Computer Services, Technology Hardware & Equipment, Industrial Engineering,

R&D intensity, %: 1. Pharmaceuticals & Biotechnology: 14.4, 13.9, 15.8, 13.2; 2. Software & Computer Services: 9.9, 12.6, 11.5, 4.8; 3. Technology Hardware & Equipment: 7.9, 14.5, 8.8, 6.1; 4. Leisure Goods: 6.3, 3.3, 5.3, 6.7
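The Scoreboard's R&D intensity figures are ratios of R&D investment to net sales. A minimal sketch of the computation (the example figures are illustrative, not taken from the Scoreboard):

```python
def rd_intensity(rd_investment_eur_m: float, net_sales_eur_m: float) -> float:
    """R&D intensity: R&D investment as a percentage of net sales."""
    if net_sales_eur_m <= 0:
        raise ValueError("net sales must be positive")
    return 100.0 * rd_investment_eur_m / net_sales_eur_m

# Illustrative only: 1440 of R&D on 10000 of sales gives 14.4% intensity,
# the level reported above for Pharmaceuticals & Biotechnology.
print(rd_intensity(1440.0, 10000.0))  # 14.4
```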

followed by Software & Computer Services (7.4%), Food Producers (7.3%) and Aerospace & Defence (6.4%). Regarding automotive sales,

The sector showing the lowest one-year sales growth is Technology Hardware & Equipment (-9.3%). Among the largest sectors in the EU,

the highest profitability is shown in Pharmaceuticals & Biotechnology (19.0%) and Software & Computer Services (18.2%).

The EU companies' negative profitability in the Technology Hardware & Equipment sector (-1.1%) is mostly due to large losses incurred by Nokia, STMicroelectronics and Alcatel-Lucent.

the Software & Computer Services sector shows the highest one-year growth rate for sales (6.9%), followed by Technology Hardware

and Oil & Gas Producers (-3.0%). The US-based companies have the highest profitability in Software & Computer Services (23.9%) and Pharmaceuticals & Biotechnology (21.7%).

*1. Automobiles & Parts: 8.8, 11.3, 5.2, 0.0, -3.2, 11.9, 5.6; 2. Software & Computer Services: 7

4.4; 10. Technology Hardware & Equipment: 1.9, -9.3, -1.1, 6.8, 14.9, -1.2, 6.6; 11.

Technology Hardware & Equipment and Software & Computer Services account for almost 90% of the total R&D investment of the US's high R&D-intensity group.

Software & Computer Services 113: UK 47, France 21, Germany 19; Pharmaceuticals & Biotechnology 112: UK 30, France 18; Industrial Engineering 112: Germany 20, UK 11, France 6; Technology Hardware & Equipment 46: UK 11, Germany 7, Sweden 7.

of EU 1000 (number of firms): Pharmaceuticals & Biotechnology 59 (23%) / 52 (21%); Software & Computer Services 37 (14%) / 74 (30%); Technology Hardware & Equipment

More than 55% of the companies in the Electronic & Electrical Equipment, Pharmaceuticals & Biotechnology and Software & Computer Services sectors have a higher R&D intensity than the average of the 527 EU companies.

The share of companies with a higher R&D intensity than that of the top European companies exceeds 40% in the sectors of Industrial Engineering and Technology Hardware & Equipment.

All the Swedish companies operating in the Technology Hardware & Equipment sector show higher performance compared with the upper-reach average.

Swedish and UK companies in the Software & Computer Services sector show high performance, as more than 80% display a higher R&D intensity than the upper-reach average.

Similar cases occur in Finland, where Nokia's R&D investment accounts for almost 74% of the total R&D by Finnish companies, and in Ireland with Seagate.

Technological innovations range from biotech drugs, software-driven MRI scanners and radiotherapy systems to micromechanical devices like drug-eluting stents and robotic-assisted surgery.

FDIs in R&D are concentrated mainly in the three sectors of Technology Hardware & Equipment, Automobiles & Parts and Pharmaceuticals & Biotechnology. (12) A turnkey contract is a business arrangement in

Table 6.4 displays in more detail the destination of the 856 FDI projects in R&D made by the EU Scoreboard companies during the period 2003-2012.

[Table 6.4, flattened rows (FDI project counts by destination; per-destination columns not recoverable): Real Estate Investment & Services, total 2; Software & Computer Services, total 111; Support Services, total 38; Technology Hardware & Equip., total 46; Tobacco ...]


The 2013 EU SURVEY on R&D Investment Business Trends.pdf

the overall expectations of all the other companies in the sample show a more positive outlook for industrial R&D, at exactly the same global level as in last year's survey (4%). For some sectors,

and technology hardware & equipment (4%). Figure 1: Expected changes in R&D investment of the surveyed companies, 2013-15, p.a. Note:

Million euros in 2011. [Figure axis: -5% to 10%; sectors: Software & Computer Services, Pharmaceuticals & Biotechnology, Technology Hardware & Equipment, Health care Equipment & Services, Electronic & Electrical Equipment, General

while maintaining an R&D focus in the EU. Low expectations for R&D in the EU (1% p.a. in 2013-15) are due to the outlook of seven automobiles

and monitors progress towards the 3% headline target. The survey complements IRIMA's core activity, the EU Industrial R&D Investment Scoreboard,

(7) which analyses private R&D investments based on the audited annual accounts of companies and shows ex-post trends.

Technology Hardware & Equipment, Software & Computer Services, and Health care Equipment & Services: 49 (47%). Medium R&D intensity: Industrial Engineering, Electronic & Electrical Equipment, Automobiles & Parts, Chemicals, Aerospace & Defence, General Industrials

Their outlook was significantly lower compared to the past (-0.7% p.a. for 2013-15 vs. around 5% in our two previous surveys

While that level is a positive outlook for corporate R&D, above the nominal EU GDP growth estimates of 1.4% for 2013 and 1.9% for 2014,(15) the R&D investment expectations are not yet at the levels

In the high R&D-intensity group, expected R&D investment changes from pharmaceuticals & biotechnology (4.4%) and technology hardware & equipment (3.6%) are slightly above those of last year's survey

[Figure axis: -5% to 15%; sectors: Software & Computer Services, Pharmaceuticals & Biotechnology, Technology Hardware & Equipment, Health care Equipment & Services, Electronic & Electrical Equipment, General

their expected R&D investment changes are in line with the expected vehicle sales outlook for the coming years,

& parts companies in China. [Figure: R&D investment of the 9 surveyed companies in China; passenger vehicle sales outlook in China]

also for US companies the 2013 outlook for R&D investment changes has been reduced to 2.3%,(18) due to more moderate growth dynamics compared to the previous period.(19) The comparison of R&D investment

In the high R&D-intensity sectors, pharmaceuticals & biotechnology and software & computer services are the drivers of expectations in the US and Canada, China and India.

Figure 13 displays the ranking of the most attractive countries for outsourcing the company's R&D to other companies.

Firms across all sector groups value the acquisition of new or highly improved machinery, equipment and software within the European Union more highly than acquisition from outside (non-EU) countries.

Companies in the technology hardware & equipment and pharmaceuticals & biotechnology (high R&d intensity) report the highest average shares.

pharmaceuticals & biotechnology, technology hardware & equipment, software & computer services, health care equipment & services,

sector group:** Pharmaceuticals & Biotechnology: 24 / 108 (22.2%), above 40%, High; Technology Hardware & Equipment: 10 / 47 (21.3%), above 40%, High; Software & Computer Services: 8

This is the result of the high share of R&D employees in the large responding companies from technology hardware & equipment and pharmaceuticals & biotechnology (high R&D intensity), automobiles & parts, industrial engineering,

(c1) Inside the European Union; (c2) In non-EU countries; (d) Acquisition of new or highly improved machinery, equipment and software: (

e-mail. The collected personal data and all information related to the above-mentioned survey are stored on servers of the JRC-IPTS, the operations

and provisions established by the Directorate of Security for these kinds of servers and services. The information you provide will be treated as confidential

and technology hardware & equipment (4%). The responding companies carry out a quarter of their R&D outside the EU. Their expectations for R&D investment over the next three years show continued participation of European companies in the global economy, in particular growth


THE CULTURE OF INNOVATION AND THE BUILDING OF KNOWLEDGE SOCIETIES.pdf

and ethnic distinctions) and a reflexive approach to knowledge and practices among the core competencies that are crucial in creating a culture of innovation.


The future internet.pdf


The core of this program will be a platform that implements and integrates new generic but fundamental capabilities of the Future Internet,

such as interactions with the real world through sensor/actuator networks, network virtualization and cloud computing, enhanced privacy and security features and advanced multimedia capabilities.

This core platform will be based on integration of already existing research results developed over the past few years,

using the properties of the core Future Internet platform. Examples of these use cases are a smarter electricity grid, a more efficient international logistics chain

Mobile devices: the Internet can now be accessed from a wide variety of mobile devices, including smartphones

enjoying multimedia communications, taking advantage of advanced software services, buying and selling, keeping in touch with family and friends,

The very success of the Internet is now creating obstacles to the future innovation of both the networking technology that lies at the Internet's core and the services that use it.

Higher degree of virtualisation for all systems: applications, services, networks, storage, content, resources and smart objects.

The Towards In-Network Clouds in Future Internet chapter explores the architectural co-existence of new and legacy services and networks, via virtualisation of connectivity and computation resources and self management capabilities,

On one hand, it aims at achieving a full interoperation among the different entities constituting the ICT environment, by means of the introduction of Semantic Virtualization Enablers.

Keywords: Internet Architecture, Limitations, Processing, Handling, Storage, Transmission, Control, Design Objectives, EC FIARCH group. 1 Introduction. The Internet has evolved from remote access to mainframe computers and slow


It is expected that the number of nodes (computers, terminals, mobile devices, sensors, etc.

, 3D videos, interactive environments, network gaming, virtual worlds, etc., compared to the quantity and type of data currently exchanged over the Internet.

computers (e.g., terminals, servers, CPUs, etc.) and handlers (software programs/routines) that generate and treat, as well as query

and access data. Storage of data: refers to memory, buffers, caches, disks, etc., and associated logical data structures.

Transmission of data: refers to physical and logical transferring/exchange of data. Control of processing, storage, transmission of systems and functions:

as the onset of the phenomenon will still cause thousands of cache servers to request the same documents from the original site of publication. 3.3 Transmission Limitations. The fundamental restrictions that have been identified in this category are:

Software & Service Architectures & Infrastructures, D4: Networked Enterprise & Radio frequency identification (RFID) and F5: Trust and Security.

Foundations for the Study of Software Architecture. ACM SIGSOFT Software Engineering Notes 17(4) (1992). 17. Papadimitriou, D., et al.

ACM Computer Communications 33(17), 2105-2115 (2010). 19. Freedman, M.: Experiences with CoralCDN: A Five-Year Operational View.

ACM SIGCOMM Computer Communication Review 39(5) (2009). 26. Eggert, L.: Quality-of-Service: An End System Perspective.

ACM SIGCOMM Computer Communication Review (Oct. 2010), http://www2.research.att.com/bala/papers/ccr10-priv.pdf. 33. W3C Workshop

This paper aims to explore the architectural co-existence of new and legacy services and networks, via virtualisation of connectivity and computation resources and self management capabilities,

Keywords: In-Network Clouds, Virtualisation of Resources, Self-management, Service Plane, Orchestration Plane and Knowledge Plane. 1 Introduction. The current Internet has been founded on a basic architectural premise, that is:


In-Network virtualisation provides flexibility, promotes diversity, and promises security and increased manageability. We define In-Network clouds as an integral part of the differentiated Future Internet architecture,

and sharing a common physical substrate of communication nodes and servers managed by multiple infrastructure providers.

Virtualisation Plane (VP), Management Plane (MP), Knowledge Plane (KP), Service Plane (SP), and Orchestration Plane (OP) as depicted in Fig. 1. These planes are new higher-level artefacts,

and servers. Fig. 1: In-Network Cloud Resources within the network.

Together these distributed systems form a software-driven network control infrastructure that will run on top of all current networks (i.e. fixed

The governance functionality of the OP monitors the consistency of the AMSS' actions; it enforces the high-level policies

This implies that the Orchestration Plane may use very local knowledge to serve real-time control, as well as more global knowledge to manage long-term processes like planning. 2.3 Virtualisation Plane Overview. Virtualisation hides the physical characteristics 14,16 of the computing

This paper uses system virtualisation to provide virtual services and resources. System virtualisation separates an operating system from its underlying hardware resources;

resource virtualisation abstracts physical resources into manageable units of functionality. For example, a single physical resource can appear as multiple virtual resources (e.g.,

, the concept of a virtual router, where a single physical router can support multiple independent routing processes by assigning different internal resources to each routing process;
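To make the virtual-router idea concrete, here is a minimal, hypothetical sketch (names invented; the chapter specifies no code) of a single physical router partitioned into independent virtual routers, each assigned a slice of the physical ports and memory:

```python
from dataclasses import dataclass, field

@dataclass
class PhysicalRouter:
    """A single physical resource with a fixed pool of internal resources."""
    ports: int
    memory_mb: int
    allocated: list = field(default_factory=list)

    def create_virtual_router(self, name, ports, memory_mb):
        """Carve a virtual router out of the remaining physical resources."""
        used_ports = sum(v["ports"] for v in self.allocated)
        used_mem = sum(v["memory_mb"] for v in self.allocated)
        if used_ports + ports > self.ports or used_mem + memory_mb > self.memory_mb:
            raise ValueError("insufficient physical resources")
        vr = {"name": name, "ports": ports, "memory_mb": memory_mb}
        self.allocated.append(vr)
        return vr

# One physical device hosting two independent routing contexts.
router = PhysicalRouter(ports=48, memory_mb=4096)
vr1 = router.create_virtual_router("tenant-a", ports=16, memory_mb=1024)
vr2 = router.create_virtual_router("tenant-b", ports=16, memory_mb=1024)
print(len(router.allocated))  # 2
```

The accounting step is the essential point: each virtual resource is bounded by the internal resources actually assigned to it, so the virtual routers cannot oversubscribe the physical device.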

Virtualisation enables optimisation of resource utilisation. However, this optimisation is confined to inflexible configurations within a single administrative domain.

This paper extends contemporary virtualisation approaches and aims at building an infrastructure in which virtual machines can be relocated dynamically to any physical node or server regardless of location, network,

and storage configurations and of administrative domain. The virtualisation plane consists of software mechanisms to abstract physical resources into appropriate sets of virtual resources that can be organised by the Orchestration Plane to form components (e.g.,

increased storage or memory), devices (e.g., a switch with more ports) or even networks. The organisation is done

the vSPI and the vCPI (Virtualisation System Programming Interface and Virtualisation Component Programming Interface, respectively). A set of control loops is formed using the vSPI and the vCPI,

Fig. 2: Virtualisation Control Loop. Virtualisation System Programming Interface (vSPI): the vSPI is used to enable the Orchestration Plane

Virtualisation Component Programming Interface (vCPI): each physical resource has an associated and distinct vCPI. The vCPI fulfils two main functions:

and to request virtual resources to be constructed from that physical resource by the vCPI of the Virtualisation Plane.
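A rough sketch of this split (method names invented; the chapter defines the vSPI and vCPI only architecturally): each physical resource gets its own vCPI for monitoring and for instantiating virtual resources, while the vSPI gives the Orchestration Plane one system-wide handle over all vCPIs, closing the control loop:

```python
class VCPI:
    """Per-resource interface: monitors one physical resource and
    carves virtual resources out of it (hypothetical sketch)."""
    def __init__(self, physical_id):
        self.physical_id = physical_id
        self.virtual_resources = []

    def monitor(self):
        # Report the state of the physical resource and its virtual resources.
        return {"physical": self.physical_id,
                "virtual": list(self.virtual_resources)}

    def create_virtual_resource(self, spec):
        self.virtual_resources.append(spec)
        return spec

class VSPI:
    """System-wide interface used by the Orchestration Plane to drive
    the per-resource vCPIs, closing the virtualisation control loop."""
    def __init__(self, vcpis):
        self.vcpis = vcpis

    def provision(self, physical_id, spec):
        return self.vcpis[physical_id].create_virtual_resource(spec)

    def system_state(self):
        return [c.monitor() for c in self.vcpis.values()]

vspi = VSPI({"router-1": VCPI("router-1")})
vspi.provision("router-1", {"type": "virtual-router", "ports": 8})
print(len(vspi.system_state()))  # 1
```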

(i) the Context Executive (CE) Module, which interfaces with other entities/context clients; (ii) the Context Processing (CP) Module, which implements the core internal operations related to context processing

The Context Information Base (CIB) provides flexible storage capabilities, in support of the Context Executive and Context Processor modules.

they monitor hardware and software for their state, present their capabilities, or collect configuration parameters.

A monitoring mechanism and framework was developed to gather measurements from relevant physical and virtual resources and CCPs for use within the CISP.

, the number of CPUs; (ii) N-time queries, which collect information periodically; and (iii) continuous queries, which monitor information in an ongoing manner.
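The three query types can be sketched as follows (a stdlib-only illustration with invented names; the monitoring framework itself is not specified at this level in the text):

```python
import itertools

def one_time_query(read_sensor):
    """(i) 1-time query: read a static attribute once, e.g. the CPU count."""
    return read_sensor()

def n_time_query(read_sensor, n):
    """(ii) N-time query: collect the measurement n times (periodically)."""
    return [read_sensor() for _ in range(n)]

def continuous_query(read_sensor):
    """(iii) continuous query: an ongoing stream of measurements."""
    while True:
        yield read_sensor()

cpu_count_sensor = lambda: 8  # stand-in for a real probe
print(one_time_query(cpu_count_sensor))           # 8
print(n_time_query(cpu_count_sensor, 3))          # [8, 8, 8]
stream = continuous_query(cpu_count_sensor)
print(list(itertools.islice(stream, 2)))          # [8, 8]
```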

CCPs should be located near the corresponding sources of information in order to reduce management overhead.

This can include common operations such as getting the state of a server with its CPU

We note that the monitoring information retrieval is handled by the Virtualisation Plane. The reader collects the raw measurement data from all of the sensors of a CCP.

which can measure attributes from CPU, memory, and network components of a server host, were created.

We can also measure the same attributes of virtualised hosts by interacting with a hypervisor to collect these values.
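In the spirit of the per-host sensors just described, here is a minimal stdlib-only sketch of a reader that collects raw CPU and load measurements from the local host (a hypothetical stand-in; the actual framework interacts with CCP sensors and hypervisors):

```python
import os
import time

def read_host_attributes():
    """Collect a few raw measurements from the local host."""
    attrs = {
        "cpu_count": os.cpu_count(),   # static attribute, a 1-time query
        "timestamp": time.time(),      # when the sample was taken
        "load_avg_1m": None,           # filled in where the platform supports it
    }
    if hasattr(os, "getloadavg"):      # not available on all platforms
        attrs["load_avg_1m"] = os.getloadavg()[0]
    return attrs

sample = read_host_attributes()
print(sorted(sample))  # ['cpu_count', 'load_avg_1m', 'timestamp']
```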

, physical nodes and servers) subject to constraints determined by the Orchestration Plane. The Management Plane is designed to meet the following functionality:

It monitors the network and operational context, as well as internal operational network state, in order to assess whether the network's current behaviour serves its service purposes.

and continuous migration of virtual routers into hosts (i.e. physical nodes and servers) subject to constraints determined by the Orchestration Plane.

and issued as open source 10, which aims to create a highly open and flexible environment for In-Network Clouds in Future Internet.

Full design and implementation of all software platforms are presented in 10. vCPI (Virtualisation Component Programming Interface) is the VP's main component dealing with the heterogeneity of virtual resources

also part of the KP, provides functionality to add powerful and flexible monitoring facilities to system clouds (virtualisation of networks and services).

control and management of programmable or active sessions over virtual entities, such as servers and routers.

RNM (Reasoning and Negotiation Module), a core element of the KP, which mediates and negotiates between separate federated domains.

V3 UCL's experimental testbed located in London, consisting of 80 cores with a dedicated 10 Gbit/s infrastructure

and Grid5000, an experimental testbed located in France consisting of 5000 cores linked by a dedicated 10 Gbit/s infrastructure.

4 Conclusion This work has presented the design of an open software networked infrastructure (In-Network Cloud) that enables the composition of fast and guaranteed services in an efficient manner,

and service resources provided by a virtualisation environment. We have also described the management architecture and system model for our Future Internet,

Virtualisation Plane (VP), Management Plane (MP), Knowledge Plane (KP), Service Plane (SP) and Orchestration Plane (OP). The resulting software-driven control network

and relevant analysis on network virtualisation and service deployments were carried out on a large-scale testbed.

Virtualising physical network and server resources has served two purposes: Managing the heterogeneity through introduction of homogeneous virtual resources and enabling programmability of the network elements.

A vital component of such a virtualisation approach is a common management and monitoring interface of virtualised resources.

Platforms and Software Systems for an Autonomic Internet. IEEE Globecom 2010, 6-10 Dec., Miami, USA (2010). 4. Galis, A., et al.:

Towards the Future Internet. IOS Press, Amsterdam (2009). 5. Chapman, C., et al.: Software Architecture Definition for On-demand Cloud Provisioning.

ACM HPDC, 21-25, Chicago, hpdc2010.eecs.northwestern.edu (June 2010). 6. Rochwerger, B., et al.:

and Protocols for Computer Communications (SIGCOMM'03), Karlsruhe, Germany, August 25-29, 2003, pp. 3-10.

A Survey of Network Virtualization. Computer Networks: The International Journal of Computer and Telecommunications Networking 54(5) (2010). 15.

Galis, A., Denazis, S., Bassi, A., Berl, A., Fischer, A., de Meer, H., Strassner, J., Davy, S., Macedo, D., Pujolle, G., Loyola, J. R.

IOS Press, Amsterdam (2009), http://www.iospress.nl/. 16. Berl, A., Fischer, A., De Meer, H.:

Using System Virtualization to Create Virtualized Networks. Electronic Communications of the EASST 17, 1-12 (2009), http://journal.ub.tu-berl.asst/article/view/218/219. J. Domingue et al.

and operators, thanks to the success of novel, highly practical smartphones, portable computers with easy-to-use 3G USB modems, and attractive business models.

SHV) and special application areas (virtual reality experience sharing and gaming) will further boost this process and set new challenges to mobile networks.

or to take online gaming to the next level, deeply interwoven with social networking and virtual reality. Even though video seems to be a major force behind the current traffic growth of the mobile Internet,

The evolution of DSL access architecture has shown in the past that pushing IP routing and other functions from the core to the edge of the network results in sustainable network infrastructure.

The core part of EPS, called the Evolved Packet Core (EPC), is continuously extended with new features in Releases 10 and 11.

Due to the collateral effects of this change, a convergence process started to introduce IP-based transport technology in the core and backhaul network:

Release 5 (2003) introduced the IP Multimedia Subsystem (IMS) core network functions for provision of IP services over the PS domain,

keep track of the location of mobile devices and participate in GTP signaling between the GGSN and RNC.

, the Evolved Packet Core (EPC). Compared to the four main GPRS PS domain entities of Release 6,

and three main functional entities in the core, i.e., the Mobility Management Entity (MME), the Serving GW (S-GW) and the Packet Data Network GW (PDN GW).

Flat Architectures: Towards Scalable Future Internet Mobility

entities in the same residential/enterprise IP network without the user plane traversing the core network entities.

LTE is linked to the Evolved Packet Core (EPC) in the 3GPP system evolution, and in EPC the main packet-switched core network functional entities remain centralized,

keeping user IP traffic anchored. There are several schemes to eliminate this residual centralization and further extend 3GPP.

3.2 Ultra Flat Architecture

One of the most important schemes aiming to further extend 3GPP standards is the Ultra Flat Architecture (UFA) 16-20.

with the exception of certain control functions still provided by the core. UFA represents the ultimate step toward flattening IP-based core networks

e.g., the EPC in 3GPP. The objective of UFA design is to distribute core functions into single nodes at the edge of the network, e.g.,

, the base stations. The intelligent nodes at the edge of the network are called UFA gateways.

to reduce the number of HIP Base Exchanges in the access and core network, and to enable delegation of HIP-level signaling of the MN by the UFA GWs.

but still remain in the core network. A good example is the Global HA to HA protocol 34

DIMA (Distributed IP Mobility Approach) 35 can also be considered a core-level scheme, as it allows distribution of the MIP Home Agent (normally an isolated central server) across many less powerful interworking servers called Mobility

Core network nodes are mainly simple IP routers. The scheme applies DHT and Loc/ID separation:

a special information server is required in the network, which can also be centralized or distributed.
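The DHT-plus-Loc/ID-separation idea described above can be sketched minimally in Python (server names and the single-hash ring are illustrative simplifications, not the actual DIMA protocol): stable host identifiers are hashed onto a ring of mobility servers, each of which stores identifier-to-locator bindings for its arc of the ring.

```python
import hashlib
from bisect import bisect_right

class MobilityDHT:
    """Consistent-hash ring mapping stable host identifiers to current locators."""

    def __init__(self, servers):
        self.ring = sorted((self._h(s), s) for s in servers)
        self.tables = {s: {} for s in servers}  # per-server ID -> locator bindings

    @staticmethod
    def _h(key):
        return int(hashlib.sha1(key.encode()).hexdigest(), 16)

    def _server_for(self, ident):
        # Successor server on the ring is responsible for this identifier.
        keys = [h for h, _ in self.ring]
        i = bisect_right(keys, self._h(ident)) % len(self.ring)
        return self.ring[i][1]

    def register(self, ident, locator):
        # Called when a mobile node attaches or moves (handover).
        self.tables[self._server_for(ident)][ident] = locator

    def lookup(self, ident):
        # Called by a correspondent node to resolve ID -> locator.
        return self.tables[self._server_for(ident)].get(ident)
```

Because responsibility is determined purely by hashing, no single central server anchors all bindings, which is exactly the scalability argument made for distributing the Home Agent.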

, by using Hi3 50 for core-level distribution of the HIP signaling plane) are also feasible.

5 Conclusion

Flat architectures imply high scalability

and IP-enabled radio base station (BS) entities are connected directly to the IP core infrastructure.

due to the lack of core controller entities, base stations are no longer managed centrally; hence failure diagnostics and recovery must be handled in a fully distributed and automated way.

but it comes with the benefits of scalability, fault tolerance and flexibility. Optimization of handover performance is another key challenge for flat networks.

Since all the BSs are connected directly to the IP core network, hiding mobility events from the IP layer is much harder.

Journal of Computer Communications 31(10), 2457-2467 (2008) J. Domingue et al. (Eds.): Future Internet Assembly, LNCS 6656, pp. 51-66, 2011.

and Alex Galis2 1 Waterford Institute of Technology (WIT), Telecommunications Software and Systems Group (TSSG), Co. Waterford, Ireland {jmserrano, sdavy, mjohnsson, wdonnelly}@tssg.org

by replacing a plethora of proprietary hardware and software platforms with generic solutions supporting standardised development and deployment stacks.

and Architecture Design Project for New Generation Network 3 argue that the importance of wireless access networks requires a more fundamental redesign of the core Internet Protocols themselves.

Section VII presents the summary and outlook of this research. Finally, some bibliographic references supporting this research are included.

2 Challenges for Future Internet Architectures

This section focuses on interdisciplinary approaches to specify data-link and cross-domain interoperability to,

The relationships between Network Virtualisation and Federation 16 21 22 23 and the relationship between Service virtualisation (service clouds) and federation 17 are the support of a new world of solutions defining the Future Internet.

the software that manages them, and the actors who direct such management. In federation management end-to-end communication services involve configuring service

In the current Internet, typical large enterprise systems contain thousands of physically distributed software components that communicate across different networks 27 to satisfy end-to-end service client requests.

middleware and hardware levels (3. Analysis) that can be gathered, processed, aggregated and correlated (4. Mapping) to provide knowledge that will support management operations of large enterprise applications (5. Federated Agreements)

6.2 Federation of Network and Enterprise Management Systems

Typical large enterprise systems contain thousands of physically distributed software components that communicate across different networks

The challenge in this scenario lies in how monitoring at the network level can provide knowledge that enables enterprise application management systems to reconfigure software components, better adapting applications to prevailing network conditions.

and Outlook

In the Future Internet, new design ideas of Federated Management in Future Internet Architectures must consider the high demands of information interoperability needed to satisfy service composition requirements controlled by diverse,

Algorithms and processes to allow federation in enterprise application systems to visualize software components, functionality and performance.

or redeploy software components realizing autonomic application functionality. Guidelines and exemplars for the exchange of relevant knowledge between network and enterprise application management systems.

Computer Communications (July 2010), 63 pp., http://www1.cse.wustl.edu/jain/papers/ftp/i3survey.pdf 12.

Platforms and Software Systems for an Autonomic Internet. In: IEEE GLOBECOM 2010, Miami, USA, 6-10 December (2010) 14.

1st IFIP/IEEE ManFI International Workshop, in conjunction with the 11th IEEE IM 2009, Long Island, NY, USA, June 2009, IEEE Computer Society Press, Los

IOS Press, Amsterdam (2009) 24. Feldmann, A.: Internet clean-slate design: what and why? ACM SIGCOMM Computer Communication Review 37(3) (2007) 25.

Strassner, J., Agoulmine, N., Lehtihet, E.: FOCALE: A Novel Autonomic Networking Architecture. ITSSA Journal 3(1), 64-79 (2007) 26.

The core contribution of this paper is the distillation of an initial model for RWI based on an analysis of these state of art architectures and an understanding of the challenges.

An identification of a core set of functions and underlying information models, operations and interactions that these architectures have in common.

software artifacts and humans connected to it. The RWI assumes that the information flow to

in order to monitor and interact with the physical entities that we are interested in. The digital world consists of:

or application software that intends to interact with Resources and EoI. Providing the services and corresponding underlying information models to bridge the physical

Accountability and traceability can be achieved by recording transactions and interactions taking place at the respective system entities.

3.2 Smart Object Model

At its core,

and the software components implementing the interaction endpoints from the user perspective (Resource End-Point, REP). Furthermore,

and their relationships in the RWI system model. A REP is a software component that represents an interaction end-point for a physical resource.

A REP Host is a device that executes the software process representing the REP. As mentioned before,

a computer in the network or an embedded server may act as the REP host for a resource,

low-power sensor nodes, from attacks by hosting their REPs on more powerful hardware. Unlike other models, the Smart Object model also considers real-world entities in its model

The system is based on the OSGi service middleware and consists of two main subsystems: the service platform openAAL and the ETALIS complex event processing system (icep.fzi.de).

i.e., an event-driven one.

4.3 PECES

The PECES architecture provides a comprehensive software layer to enable the seamless cooperation of embedded devices across various smart spaces on a global scale in a context-dependent

The PECES middleware architecture enables dynamic group-based communication between PECES applications (Resources) by utilizing contextual information based on a flexible context ontology.

Although Resources are not directly analogous to PECES middleware instances, gateways to these devices are more resource-rich

and can host middleware instances, and can be queried provided that an application-level querying interface is implemented.

must be running the PECES middleware before any interaction may occur. Both one-shot and continuous interactions are supported between components
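The context-dependent, group-based communication described above can be sketched very simply (a flat attribute-matching predicate stands in for the PECES context ontology, and all names are illustrative): devices whose context satisfies every required attribute form a communication group.

```python
def form_group(devices, required_context):
    """Select device ids whose context matches every required attribute.

    devices:          list of {"id": ..., "context": {attr: value, ...}}
    required_context: {attr: value, ...} that all group members must satisfy
    """
    return [d["id"] for d in devices
            if all(d["context"].get(k) == v for k, v in required_context.items())]
```

In the real middleware, membership is derived from an ontology-based context model and re-evaluated as contexts change, so groups are dynamic rather than computed once as here.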

N/A. According to the W3C Semantic Sensor Network Ontology; observation & measurement, role, agent, service and resource ontologies. PECES: implicit via middleware, expressive (based on ontologies),

Role-based access control for individual middleware components. N/A. EPC and value-added sensing: EPCIS standard. SENSEI: execution manager responsible for maintenance of long-lasting requests

References: ASPIRE: Advanced Sensors and lightweight Programmable middleware for Innovative RFID Enterprise applications, FP7, http://www.fp7-aspire.eu/ CONET: Cooperating Objects NoE,

PECES: PErvasive Computing in Embedded Systems, FP7, http://www.ict-peces.eu/ SemsorGrid4Env: Semantic Sensor Grids for Rapid Application Development for Environmental Management, FP7

univocally and persistently identify the resources within the IDN-middleware independent of their physical locations; in the lower layer, Uniform Resource Locators (URLs) are used to identify resource replicas as well as to access them.

The implementation of the IDN-SA is a set of software modules, one module for each layer.

Each module, implemented using an HTTP server, offers a REST interface. The interaction between IDN-compliant applications and the IDN-SA likewise follows the HTTP protocol, as defined in the REST architectural style.
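The two-layer naming scheme can be sketched with a minimal in-memory resolver (a stand-in for the actual REST modules; identifiers and URLs below are invented for illustration): the upper layer holds persistent, location-independent identifiers, the lower layer the URLs of the replicas that realise them.

```python
class IdnResolver:
    """Upper layer: persistent, location-independent identifiers.
    Lower layer: URLs of resource replicas that can actually be accessed."""

    def __init__(self):
        self._replicas = {}  # identifier -> list of replica URLs

    def bind(self, ident, url):
        # Register a replica URL for a persistent identifier.
        self._replicas.setdefault(ident, []).append(url)

    def resolve(self, ident):
        # Return one replica URL; replicas can move without changing the identifier.
        urls = self._replicas.get(ident, [])
        if not urls:
            raise KeyError(ident)
        return urls[0]
```

The point of the split is visible here: clients hold only the stable identifier, so replicas can be added, moved or retired at the URL layer without breaking references.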

IOS Press, Amsterdam (2010) 2. Ayers, D.: From Here to There. IEEE Internet Computing 11(1), 85-89 (2007) 3. European Commission Information Society and Media.

a Scalable Middleware Infrastructure for Smart Data Integration. In: Giusto, D., et al. (eds.)

The Effects of Layering and Encapsulation on Software Development Cost and Quality. IEEE Trans. Softw.

and Vincenzo Suraci2 1 University of Rome La Sapienza, Computer and System Sciences Department, Via Ariosto 25, 00185 Rome, Italy {castrucci, dellipriscoli, pietrabissa}@dis

it aims at achieving a full interoperation among the different entities constituting the ICT environment, by means of the introduction of Semantic Virtualization Enablers,

Future Internet architecture, Cognitive networks, Virtualization, Interoperation.

1 Introduction

Already in 2005, there was the feeling that the architecture

when the exponential growth of small and/or mobile devices and sensors, of services and of security requirements began to show that the current Internet is itself becoming a bottleneck.

, users, contents, services, network resources, computing resources, device characteristics) via virtualization and data mining functionalities; the metadata produced in this way are then input of intelligent cognitive modules

examples of Resources include services, contents, terminals, devices, middleware functionalities, storage, computational, connectivity and networking capabilities, etc.;

for the sake of brevity, simply referred to as "Cognitive Framework") adopting a modular design based on middleware "enablers".

the Semantic Virtualization Enablers and the Cognitive Enablers. The Cognitive Enablers represent the core of the Cognitive Framework

and are in charge of providing the Future Internet control and management functionalities. They interact with Actors, Resources and Applications through Semantic Virtualization Enablers.

The Semantic Virtualization Enablers are in charge of virtualizing the heterogeneous Actors, Resources and Applications by describing them by means of properly selected, dynamic, homogeneous,

context-aware and semantic aggregated metadata. The Cognitive Enablers consist of a set of modular, technology-independent, interoperating enablers which,

on the basis of the aggregated metadata provided by the Semantic Virtualization Enablers, take consistent control

The control and management decisions taken by the Cognitive Enablers are handled by the Semantic Virtualization Enablers,

Cognitive Future Internet Framework Actors Users Network Providers Prosumer Developers Content Providers Service Providers Applications Semantic Virtualization Enablers Cognitive Enablers Identity

thanks to the aggregated semantic metadata provided by the Semantic Virtualization Enablers, the control and management functionalities included in the Cognitive Enablers have a technology-neutral, multi-layer, multi-network vision of the surrounding Actors, Resources and Applications.
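The virtualization step that makes this technology-neutral vision possible can be sketched as a mapping from heterogeneous resource descriptions to one homogeneous metadata record (field names below are invented for illustration; the actual enablers produce semantic, context-aware metadata, not plain dictionaries):

```python
def virtualize(resource):
    """Map a heterogeneous resource description to a homogeneous metadata record
    that technology-independent control logic can consume."""
    return {
        "id": resource["id"],
        "kind": resource.get("kind", "unknown"),
        "capabilities": sorted(resource.get("caps", [])),
        "state": resource.get("state", {}),
    }
```

Whatever the source (a terminal, a content server, a network link), the downstream decision logic sees the same schema, which is the essence of the "technology-neutral, multi-layer" vision described above.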

Mobile Terminals, Base Stations, Backhaul network entities, Core network entities. The selection and the mapping of the Cognitive Framework functionalities in the network entities is a critical task

It can be realized through the implementation of appropriate Cognitive Middleware-based Agents (in the following referred to as Cognitive Managers)

Core Network entities. There does not exist a unique mapping of the proposed conceptual framework onto an existing telecommunication network.

Indeed the software nature of the Cognitive Manager allows a transparent integration in the network nodes.

The Metadata Handling and the Elaboration functionalities (and in particular, the Cognitive Enablers which are the core of the proposed architecture) are independent of the peculiarities of the surrounding Resources, Actors and Applications.

With reference to Fig. 2, the Sensing, Metadata Handling, Actuation and API functionalities are embedded in the Semantic Virtualization Enablers,

5) The transparency and the middleware (firmware-based) nature of the proposed Cognitive Manager architecture make it relatively easy to embed in any fixed/mobile network entity (e.g.

Mobile Terminals, Base Station, Backhaul network entities, Core network entities: the most appropriate network entities for hosting the Cognitive Managers have to be selected environment by environment.

Moreover, the Cognitive Managers functionalities (and, in particular, the Cognitive Enabler software) can be added/upgraded/deleted through remote (wired and/or wireless) control.

The framework has been implemented as a Linux Kernel Module and it has been installed in test-bed machines and in a legacy router1 for performance evaluation.

and two IEEE 802.3u links at 100 Mbit/s. Fig. 3. Test scenario. 1 We have modified the firmware of a Netgear router (Gigabit open-source router with Wireless

-N and USB port; 453 MHz Broadcom processor with 8 MB flash memory and 64 MB RAM;

a WAN port and four LAN ports up to 1 Gbit/s) and cross-compiled the code,

Interoperation among heterogeneous entities is achieved by means of their virtualization, obtained thanks to the introduction of Semantic Virtualization Enablers.

At the same time, the Cognitive Enablers, which are the core of the Cognitive Managers, can potentially benefit from information coming from all layers of all networks

and can take consistent and coordinated context-aware decisions impacting on all layers. Clearly, which Cognitive Enablers have to be activated,

Section 1 presents works in the area of Future Internet and ontology in computer systems. Section 2 describes the concepts of the Entity Title Model and the ontology at network layers.

Other kinds of classification can also be created, such as hardware, software and network, among others. Some of them (not all) are used as resources in other relevant literature.

The benefit of using propositional logic for network formalization is the ease of implementation in software and hardware.

For the communication between the layers running in a Distributed Operating System, without the traditional sockets used in TCP/IP,

by the direct use of the Raw Socket to communicate with the Distributed Operating System, without the use of IP, TCP, UDP and SCTP.

for example, 4WARD, the AutoI OSKMV planes (Orchestration, Service Enablers, Knowledge Management and Virtualisation planes) and the Content-Centric approach can use this model collaboratively.

International Journal of Human and Computer Studies 43(5-6): 907-928 (1995) 16 ITU-T:

Proceedings of the Fall Joint Computer Conference, AFIPS, November 14-16, Volume 31, pp. 525-534.

IOS Press, Amsterdam (2009) 32 Tselentis, G., et al.: Towards the Future Internet - Emerging Trends from European Research.

IOS Press, Amsterdam (2010) 33 Tsiatsis, V., Gluhak, A., Bauge, T., Montagut, F., Bernat, J., Bauer, M., Villalonga, C., Barnaghi, P.

IOS Press, Amsterdam (2010) 34 Vissers, C., Logrippo, L.: The Importance of the Service Concept in the Design of Data Communications Protocols.

An Ontological Approach to Computer System Security. Information Security Journal: A Global Perspective (2010) 36 Wong, W.:

Another dimension of a service provider's win is decreased traffic volume from its own content servers and reduced load on those servers,

Peers and the ETMS servers, providing rating information, are located in these stub-ASes, which are interconnected via a hub-AS containing the initial seed.

19th IEEE International Conference on Computer Communications and Networks (ICCCN 2010), Zürich, Switzerland (August 2010) 11.

since the software has already been developed for the initial scenario and it is simply a matter of deploying

for example, mobile devices have multiple interfaces. MPTCP supports the use of multiple paths between source and destination.

one issue today is how to choose which path to use between two servers amongst the many possibilities; MPTCP naturally spreads traffic over the available paths.
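The path-spreading idea can be illustrated with a toy round-robin scheduler (purely illustrative; real MPTCP schedulers are congestion-aware and react to per-path round-trip times rather than alternating blindly):

```python
def spread(segments, paths):
    """Toy round-robin scheduler: assign each segment to one of the available paths."""
    return {seg: paths[i % len(paths)] for i, seg in enumerate(segments)}
```

Even this naive policy shows the load-balancing effect the text describes: with two paths, consecutive segments alternate between them, so no single path carries the whole flow.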

However, the protocol implementation should not impact hardware offloading of segmentation and check-summing. One reason that MPTCP uses TCP-Options for signalling (rather than the payload) is that it should simplify offloading by network cards that support MPTCP,

In this example, traffic between the two servers (at the bottom) travels over two paths through the switching fabric of the data centre (there are four possible paths).

(which involves the OS vendor updating their stack) and adoption (which means that MPTCP is actually being used

and deployment is decided mainly by the OS (Operating System) vendor and not the end user). Therefore we believe that a more promising initial scenario is an end user that accesses content, via wireless LAN and 3G, from a provider that controls both end user devices and content servers 26 for example,

Nokia or Apple controls both the device and the content server, Nokia Ovi or Apple App store.

Benefits: MPTCP improves resilience; if one link fails on a multi-homed terminal, the connection still works over the other interface.

Both the devices and servers are under the control of one stakeholder, so the end user 'unconsciously' adopts MPTCP.

For instance, it is necessary to think about the benefits and costs for OS vendors, end users, applications and ISPs (Internet Service Providers).

The CDN server sends premium packets (perhaps for IPTV) as ConEx-Not-Marked or ConEx-Re-Echo.

and then the host's software would automatically send the user's premium traffic (VoIP, say) as ConEx-enabled.

or end user at a time. 5 Enhancing the Framework One important development in telecoms is virtualisation. Although the basic idea is longstanding,

Roll-out of the software should be cheaper; therefore the expected benefits of deployment can be smaller.

Every user can immediately use the new (virtualised) software, so effectively a large number of users can be enabled simultaneously.

if there is some problem with the new software. Virtualisation is not suitable for all types of software, for instance new transport layer functionality, such as MPTCP and CONEX,

needs to be on the actual devices. There is an analogy with the digitalisation of content

Virtualisation should similarly lower the cost of distribution; in other words, it eases deployment. Another aspect is the interaction of a new protocol with existing protocols.

ACM SIGCOMM Computer Communications Review 40(2) (2010) 8. Kostopoulos, A., Warma, H., Leva, T., Heinrich, B., Ford, A., Eggert, L.:

Computer Communication Review 35(2) (2005) 20. Key, P., Massoulie, P., Towsley, D.: Combined Multipath Routing and Congestion Control:

MSR-TR-2005-111 (2005), http://research.microsoft.com/pubs/70208/tr-2005-111.pdf 21.

HTTP Extensions for Simultaneous Download from Multiple Mirrors, draft-ford-http-multi-server, work in progress (2009) 25.

Furthermore, the Internet is increasingly pervading society 3. Widespread access to the Internet via mobile devices, an ever-growing number of broadband users worldwide,

such as processing and storage capabilities of servers and networking infrastructure. For example, routing table memory of core Internet routers can be considered a public good that retail ISPs have an incentive to over-consume by performing prefix de-aggregation with the Border Gateway Protocol (BGP).
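The routing-table cost of de-aggregation is easy to quantify with Python's standard `ipaddress` module (addresses below are documentation prefixes used purely as an example): two announced /25s occupy two table entries, whereas their aggregate /24 occupies one.

```python
import ipaddress

# Two de-aggregated announcements (as a traffic-engineering ISP might inject)
deaggregated = [ipaddress.ip_network("198.51.100.0/25"),
                ipaddress.ip_network("198.51.100.128/25")]

# collapse_addresses merges adjacent prefixes back into their covering aggregate,
# halving the number of routing-table entries in this case
aggregated = list(ipaddress.collapse_addresses(deaggregated))
```

Every router in the default-free zone pays the memory cost of each extra entry, while only the de-aggregating ISP reaps the benefit, which is exactly the public-good over-consumption the text describes.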

Another type of scarce Internet resource is network identifiers, like IPv4 addresses and especially Provider Independent ones that ease network management and avoid ISP lock-in.

when multiple candidate servers are available, a consumer may prefer the one offering better QoS,

while a provider selects the server that minimizes its cost; e.g., this is possible if the provider operates a local DNS service.

Due to economies of scale, the thin-client paradigm, where most applications run on a remote server, is considered to achieve energy savings, but to the disadvantage of the server provider.

However, under some assumptions, WiFi hotspots can consume much less energy than UMTS (Universal Mobile Telecommunications System) networks.

One challenge for the technologists designing new hardware, software systems, and platforms, however, is to be aware that technology is not value-free,

To some extent, this message has already been taken on board by many policy makers, computer scientists, and systems designers.

and WiMAGIC try to design technical solutions that achieve efficient spectrum usage for mobile devices. Following the increasing consensus on the benefits of incorporating economic incentive mechanisms in technical solutions, several projects like Trilogy, SmoothIT, ETICS,

Towards the Future Internet-Emerging Trends from European Research, IOS Press, Amsterdam (2010) 16. Trilogy:

The concept of Platform-as-a-service provides joint development and execution environments for software and services, with common framework features and easy integration of functionality offered by third parties.

and provide an outlook to their mitigation, embedded in a systematic security risk management process. In cloud computing,

For example, it must be assumed that the core routers forward packets at line-speeds of tens of Gigabits per second

and can refer to abstractions of any granularity, such as software components, individual nodes, or ASes.

An FPGA-based hardware accelerator has been developed for PLA 24, accelerating cryptographic operations. Fig. 1. Publications can refer to other publications persistently using long-term AIds.

In another dimension, the rendezvous system is split into common rendezvous core and scope-specific implementations of scope home nodes that implement the functionality for a set of scopes.

This pub/sub primitive is the only functionality implemented by the rendezvous core. We refer to our work in 5 for a detailed description of the rendezvous security mechanisms.
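The lookup path described here, a cache check in the rendezvous core followed by forwarding to the responsible scope home node on a miss, can be sketched minimally (class and scope names are illustrative, and the security mechanisms of the real design are omitted):

```python
class Rendezvous:
    """Rendezvous core caches publication bindings; misses go to scope home nodes."""

    def __init__(self, scope_homes):
        self.cache = {}                 # (scope, pub_id) -> publisher binding
        self.scope_homes = scope_homes  # scope id -> {pub_id -> publisher}

    def subscribe(self, scope, pub_id):
        key = (scope, pub_id)
        if key in self.cache:           # cached result found in the rendezvous core
            return self.cache[key]
        # Miss: the subscription reaches the scope home node for resolution.
        result = self.scope_homes.get(scope, {}).get(pub_id)
        if result is not None:
            self.cache[key] = result
        return result
```

The split mirrors the architecture above: the core implements only the generic pub/sub primitive, while scope-specific policy lives entirely in the scope home nodes.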

When a cached result cannot be found in the rendezvous core, the subscription reaches the scope,

5th International Workshop on Software Engineering and Middleware, pp. 98-105 (2005) 8. Merkle, R.: Secrecy, authentication,

ACM Transactions on Computer Systems 2(4), 277-288 (1984) 14. Lagutin, D., Visala, K., Tarkoma, S.:

IEEE Computer Society Press, Los Alamitos (2004) 20. Carpenter, B.: RFC 1958: Architectural Principles of the Internet.

Hardware subtask final report. Helsinki University of Technology, Tech. Rep. (2008), http://www.tcs.hut.fi/Software/PLA/new/doc/PLA HW FINAL REPORT.pdf 25.

Lagutin, D.: Securing the Internet with Digital Signatures. Doctoral dissertation, Department of Computer Science and Engineering, Aalto University, School of Science and Technology (2010)

Engineering Secure Future Internet Services

Wouter Joosen1, Javier Lopez2, Fabio Martinelli3,

It will be essential to integrate various activities that need to be addressed in the scope of secure service engineering into comprehensive software and service life cycle support.

yet the Future Internet stretches the present know how on building secure software services and systems:

and reassessed continuously.

1.2 The Need for Engineering Secure Software Services

The need to organize,

integrate and optimize the research on engineering secure software services to deal effectively with this increased challenge is pertinent and well recognized by both the research and industrial communities.

and damaged reputation.

1.3 Research Focus on Developing Secure FI Services

Our focus is on the creation and correct execution of a set of methodologies, processes and tools for secure software development.

approving that the developed software is secure. Assurance must be based on justifiable evidence, and the whole process designed for assurance.

integrating the former results in (5) a risk-aware and cost-aware software development life-cycle (SDLC),

The first three activities represent major and traditional stages of (secure) software development: from requirements over architecture and design to the composition and/or programming of working solutions.

and methodologies for software construction as well as researching about new ways to take this complexity into account in a holistic manner.

The design phase of the software service and/or system is a timely moment to enforce

The software architecture encompasses the most relevant elements of the application, providing a static and/or dynamic view of the application.

which comprise software elements, the externally visible properties of those elements, and the relationships among them.

assess and reason about security mechanisms at an early phase in the software development cycle. The research topics one must focus on in this subarea relate to model-driven architecture and security, the compositionality of design models and the study of design patterns for FI services and applications.

Until this point in the software and service development process, different concerns of the whole application (security among them) have been separated into different models,

A design pattern is a general repeatable solution to a commonly occurring problem in software design.

both from a general perspective and from a security perspective for security-critical software systems.

4 Security Support in Programming Environments

Security support in programming environments is not new;

Securing Future Internet services is inherently a matter of secure software and systems. The context of the Future Internet services sets the scene in the sense that (1) specific service architectures will be used,

and (3) a broad range of programming technologies will be used to develop the actual software and systems.

Some of these properties have been embedded in the security-specific elements of the software design; others may simply be high-priority security requirements that have been articulated, such as the appropriate treatment of concurrency control and the avoidance of race conditions in the code,

Middleware Aspects. The research community should re-investigate service-oriented middleware for the Future Internet

with a special emphasis on enabling deployment, access, discovery and composition of pervasive services offered by resource-constrained nodes. 4.2

Lock-free and wait-free algorithms for common software abstractions (queues, bags, etc.) are one of the most effective approaches to exploiting multi-core parallelism.

Programming support must include methods to ensure the adherence of a particular program to well-known programming principles or best-practices in secure software development.

Trustworthy applications need run-time execution monitors that can provably enforce advanced security policies 19, 3, including fine-grained access control policies and usage control policies
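A run-time execution monitor in the style of a truncation (security) automaton can be sketched as follows; the policy shown ("no file read after the network has been used") and all names are invented for illustration, not taken from the cited work:

```python
class MonitorViolation(Exception):
    """Raised when an event would violate the enforced policy."""

class ExecutionMonitor:
    """Truncation-style monitor: halts the step that would violate the policy."""

    def __init__(self, policy):
        self.policy = policy      # callable: (history, event) -> bool (allowed?)
        self.history = []

    def step(self, event):
        if not self.policy(self.history, event):
            raise MonitorViolation(event)   # truncate the bad execution
        self.history.append(event)
        return event

# Hypothetical example policy: no file read after the network has been used.
def no_read_after_send(history, event):
    return not (event == "read" and "send" in history)
```

Because the monitor only ever rejects a step the policy forbids, it provably enforces any safety property expressible as such a predicate over execution histories, which is the class of policies these monitors target.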

Assurance will play a central role in the development of software based services to provide confidence about the desired security level.

seamlessly informing and giving feedback at each stage of the software life cycle by checking that the related models

Obviously the security support in programming environments that must be delivered will be essential to incept a transverse methodology that enables managing assurance throughout the software and service development life cycle (SDLC).

security assurance and risk and cost management during the SDLC.

5.1 Security Assurance

The main objective is to enable assurance in the development of software-based services to ensure confidence about their trustworthiness.

Our core goal is to incept a transverse methodology that enables managing assurance throughout the software development life cycle (SDLC).

penetration testing that leverages on the high-level models that are generated in early stages of the software life cycle,

and cost aware SDLC should be based on an incremental and iterative process that is accommodated to an incremental software development process.

While the software development proceeds through incremental phases, the risk and cost analysis will undergo new iterations for each phase.

and cost analyses will propagate through the software development phases and become more refined. In order to support the propagation of analysis results through the phases of the SDLC, one needs to develop methods and techniques for the refinement of risk analysis documentation.

In order to accommodate to a modular software development process, as well as effectively handling the heterogeneous and compositional nature of Future Internet services,

Work partially supported by EU FP7-ICT project NESSOS (Network of Excellence on Engineering Secure Future Internet Software Services and Systems) under the grant agreement n. 256980.

Software Architecture in Practice, 2nd edn. Addison-Wesley, Boston (2003) 3. Bauer, L., Ligatti, J., Walker, D.:

An agent-oriented software development methodology. Autonomous Agents and Multi-Agent Systems 8, 203-236 (2004) 6. Clavel, M., da Silva, V., de O. Braga, C., Egea, M.:

IEEE Computer Society Press, Los Alamitos (1981), doi:10.1109/SFCS.1981.32 10. Erlingsson, U., Schneider, F. B.:

IEEE Computer Society Press, Los Alamitos (2000) 11. France, R., Fleurey, F., Reddy, R., Baudry, B., Ghosh, S.:

IEEE Computer Society Press, Los Alamitos (2007) 12. Giorgini, P., Mouratidis, H., Zannone, N.: Modelling security and trust with Secure Tropos.

SPAQU'08 (Int. Workshop on Software Patterns and Quality) (2008) 18. Lazouski, A., Martinelli, F., Mori, P.:

and public acceptance. 1 Introduction. The vision of the Internet of Services (IoS) entails a major paradigm shift in the way ICT systems

In the IoS, services are business functionalities that are designed and implemented by producers, deployed by providers,

However, the new opportunities opened by the IoS will only materialize if concepts, techniques and tools are provided to ensure security.

thereby significantly improving the all-round security of the IoS. In this chapter, we give a brief overview of the main scientific and industrial challenges for such verification tools,

and public acceptance of the IoS. We proceed as follows. In Sections 2 and 3, we discuss, respectively,

For instance, TulaFale [6], a tool by Microsoft Research based on ProVerif [7], exploits abstract interpretation for the verification of web services that use SOAP messaging, using logical predicates to relate the concrete

, a layer of software modules that carry out the translation from application-level specification languages (such as BPMN and BPEL,

Google and the US Computer Emergency Readiness Team (US-CERT) were informed and the vulnerability was kept confidential until Google developed a new version of the authentication service

and that could have allowed a malicious web server to impersonate a user on any Google application.

, USB tokens or smart cards. Sensitive cryptographic keys, stored inside the token, should not be revealed to the outside

and translators to and from the core formal models should be devised and migrated to the selected development environments.

Two valuable migration activities have been carried out by building contacts with core business units. First, in the trail of the successful analysis of Google's SAML-based SSO, an internal project has been run to migrate AVANTSSAR results within SAP NetWeaver Security

and Outlook. As exemplified by these case studies and success stories, formal validation technologies can have a decisive impact on the trust

and security of the IoS. The research innovation put forth by AVANTSSAR aims at ensuring global security of dynamically composed services

These advances will significantly improve the all-round security of the IoS, and thus boost its development and public acceptance.

IEEE Computer Society Press, Los Alamitos (2001) 8. Bodei, C., Buchholtz, M., Degano, P., Nielson, F., Nielson, H. R.:

Proceedings of the 17th ACM Conference on Computer and Communications Security (CCS 2010), pp. 260-269.

IEEE Computer Society Press, Los Alamitos (2008) 12. Ciobâcă, Ş., Cortier, V.: Protocol composition for arbitrary primitives.

IEEE Computer Society Press, Los Alamitos (2010) 13. Clarke, E. M.,Grumberg, O.,Peled, D. A.:

Proceedings of the 17th ACM Conference on Computer and Communications Security (CCS 2010), pp. 351-360. ACM Press, New York (2010) 22.

Web Services Business Process Execution Language vers. 2.0 (2007), http://docs.oasis-open.org/wsbpel/2.0/OS/wsbpel

-v2.0-OS.pdf 25. Pnueli, A.: The Temporal Logic of Programs. In: Proceedings of the 18th IEEE Symposium on Foundations of Computer Science, pp. 46-57.

IEEE Computer Society Press, Los Alamitos (1977) 26. Dierks, T., Rescorla, E.: The Transport Layer Security (TLS) Protocol, Version 1.2. IETF RFC 5246 (Aug. 2008) 27.

and Matthias Schunter2 1 Maastricht University, The Netherlands, glott.ruediger@gmail.com 2 IBM Research Zürich, Rüschlikon, Switzerland, huselmar@de.ibm.com, mts@zurich.ibm.com 3 TU Darmstadt, Germany, ahmad.sadeghi@trust.rub.de Abstract.

Cloud computing goes beyond technological infrastructure that derives from the convergence of computer server power, storage and network bandwidth.

FIA projects like RESERVOIR or VISION are conducting research on core technological foundations of the cloud-of-clouds such as federation technologies, interoperability standards or placement policies for virtual images or data

Many of these developments can be expected to be transferred into the Future Internet Core Platform project that will launch in 2011.

Similarly, IBM launched a FISMA-compliant Federal Community Cloud in 2010. Other cloud providers also adapt basic service security to the needs of specific markets and communities.

Following its software-plus-services strategy announced in 2007, Microsoft has developed in the past years several SaaS cloud services, such as the Business Productivity Online Suite (BPOS).

While all of them may be delivered from a multi-tenant public cloud for the entry level user, Microsoft offers dedicated private cloud hosting

and supports third-party or customer-site hosting. This allows tailor-made solutions to specific security concerns, in particular in view of the needs of larger customers.

In the same way, the base security of Microsoft public cloud services is adapted to the targeted market.

Whereas Microsoft uses, e.g., for the Office Live Workspace (in analogy to what Google does with Gmail) unencrypted data transfer between the cloud and the user,

cloud services for more sensitive markets (such as Microsoft HealthVault) use SSL encryption by default. On the other hand, commodity public cloud services such as Amazon EC2 are still growing

, Novell, IBM), virtual private networking (e.g., Amazon Virtual Private Cloud), encryption (e.g., Amazon managed encryption services)

Sharing resources such as operating systems, middleware, or actual software requires a case-by-case design of isolation mechanisms.

In particular, the last example, Software-as-a-Service, requires that each data instance is assigned to a customer

this machine may use a database server (Middleware isolation) and provide services to multiple individual departments (Application isolation).
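The application-level isolation requirement described above (each data instance assigned to a customer) can be sketched in a few lines. This is an illustrative sketch of the principle, not code from the chapter; all names are our own:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of tenant isolation in a shared (multi-tenant) SaaS store: every record
// lives under a tenant id, and every lookup is scoped to the calling tenant, so
// one customer can never observe another customer's data instance.
public class TenantStore {
    private final Map<String, Map<String, String>> byTenant = new HashMap<>();

    public void put(String tenant, String key, String value) {
        byTenant.computeIfAbsent(tenant, t -> new HashMap<>()).put(key, value);
    }

    // A tenant only ever sees its own slice of the shared store.
    public String get(String tenant, String key) {
        return byTenant.getOrDefault(tenant, Map.of()).get(key);
    }
}
```

In a real deployment the same scoping is typically enforced in the database layer (e.g., a mandatory tenant-id predicate on every query) rather than in application memory.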

software quality plays an important role in avoiding disruptions and service outages: Clouds gain efficiency by industrializing the production of IT services through complete end-to-end automation.

Another source of failure stems from the fact that large-scale computing clouds are often built using low-cost commodity hardware that fails (relatively) often.

The consequence of these facts is that automated fault tolerance, problem determination, and (self-)repair mechanisms will commonly be needed in the cloud environment

or recover from software and hardware failures. For building such resilient systems, important tools are data replication,
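The data-replication tool mentioned here can be illustrated by the simplest resilience pattern built on it: read failover across replicas. This is a hedged sketch under our own assumptions (the replica interface and names are illustrative), not an implementation from the chapter:

```java
import java.util.List;
import java.util.function.Function;

// Sketch of read failover over replicated data: try each replica in turn and
// return the first successful answer. Replicas are modelled as functions from
// key to value; a throwing replica stands in for a failed commodity machine.
public class ReplicatedRead {
    public static String read(List<Function<String, String>> replicas, String key) {
        for (Function<String, String> replica : replicas) {
            try {
                String value = replica.apply(key);
                if (value != null) return value;   // first healthy replica wins
            } catch (RuntimeException failure) {
                // commodity hardware fails (relatively) often: skip to the next copy
            }
        }
        throw new IllegalStateException("all replicas failed for " + key);
    }
}
```

Self-repair mechanisms would additionally re-create the lost copy in the background so the replication degree is restored without operator intervention.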

A more practical solution is to use Trusted Computing to verify correct policy enforcement [6]. The trusted computing instantiation proposed by the Trusted Computing Group (TCG) uses secure hardware to allow a stakeholder

such channels are often frozen in hardware and thus cannot easily be reduced.

Trustworthy Clouds Underpinning the Future Internet. 5 Outlook: The Path Ahead. Cloud computing is not new; it constitutes a new outsourcing delivery model that aims to be closer to the vision of true utility computing.

IEEE Computer Society Press, Los Alamitos (2010), doi:10.1109/ICDCSW.2010.39 4. Cabuk, S., Dalton, C. I., Eriksson, K., Kuhlmann, D., Ramasamy, H. V., Ramunno, G., Sadeghi, A.-R., Schunter

From http://www.symantec.com/connect/blogs/w32stuxnet-dossier 6. Chow, R., Golle, P., Jakobsson, M., Shi, E., Staddon, J., Masuoka, R

Top threats to cloud computing, version 1.0 (March 2010), http://www.cloudsecurityalliance.org/topthreats/csathreats.v1.0.pdf 8. Computer and Communication

Proceedings of the 4th International Workshop on Large Scale Distributed Systems and Middleware (LADIS'10), Zürich, Switzerland, pp. 12-17.

& service brokers (2010), http://www.processor.com/editorial/article.asp?article=articles%2fp3203%2f39p03%2f39p03.asp 15.

Proceedings of the 16th ACM Conference on Computer and Communications Security (CCS'09), Chicago, Illinois, USA, pp. 199-212.

and visualize the use of their data stored in a remote server or in the cloud.

which monitors and informs the user on the compliance with a previously agreed privacy policy.

, servers, services, applications) provided by the cloud that are provisioned rapidly with a minimal management effort

such as setting and comparing user preferences with server privacy policies, expressing conditions on complex secondary usage cases,

In fact, this sticky policy will be sent to the server and follow the data throughout their lifecycle to specify the usage conditions.

supporting a multilevel nested policy describing the data handling conditions that are applicable for any third party collecting the data from the server.

when a server storing personal data decides to share the data with a third party. Obligations: obligations in sticky policies specify the actions that should be carried out after collecting

it checks if there is any access restriction for the data before sending it to any server.

a data provider recovers the server's privacy policy in order to compare it to its preferences and verify
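This preference-versus-policy comparison step can be sketched minimally. The check below is an illustrative assumption of ours (field names, purposes and retention semantics are invented for the example), not the actual matching module of the architecture:

```java
import java.util.Set;

// Sketch of the matching step: before releasing data, the data provider compares
// its own preferences with the server's declared privacy policy. The policy is
// acceptable only if the server's purposes are a subset of the allowed purposes
// and the server does not retain the data longer than the owner permits.
public class PolicyMatcher {
    public static boolean matches(Set<String> allowedPurposes, int maxRetentionDays,
                                  Set<String> serverPurposes, int serverRetentionDays) {
        return allowedPurposes.containsAll(serverPurposes)
                && serverRetentionDays <= maxRetentionDays;
    }
}
```

A full matching engine would additionally cover conditions on secondary usage and nested third-party policies, as described in the text.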

it monitors all the events related to the usage of the collected data. These event notifications are handled by the obligation engine

If a third party requests some data from the server, the latter becomes a data provider and acts as a user-side engine invoking access control and matching modules,

What should motivate the data collectors/processors to implement such technology? Actually, in many cases, their business model relies on the as-less-restricted-as-possible use of private data.

In other words, we suppose that the server enforces the sticky policies correctly, but, actually, nothing prevents it from creating a back door in its database

and obligation engine also makes it possible to provide a monitoring console. The monitoring console can be accessed by any data owner

Fig. 3 shows a very simple example of how the remote administrative console could be structured,

it notably requires a high level of trust in the data collector/processor. We presented some initial thoughts about how this problem can be mitigated through the usage of a tamper proof implementation of the architecture.

Enterprise privacy authorization language (EPAL 1.1). IBM Research Report (2003) 3. Bonneau, J

Trust and tamper-proof software delivery. In: Proceedings of the 2006 international workshop on Software engineering for secure systems.

It provides a core infrastructure, and also a playground for future discoveries and innovations, combining research with experimentation.

The chapter by Zseby et al. entitled Multipath Routing Experiments in Federated Testbeds demonstrates the practical usefulness of federation and virtualisation in heterogeneous testbeds.

A Resource Adapter (a concept similar to device drivers) wraps a domain's resource API in order to create a homogeneous API defined by Panlab.
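The Resource Adapter concept can be sketched as a plain adapter pattern: a vendor-specific API is hidden behind the homogeneous interface. The interface, the vendor API and the parameter name below are illustrative assumptions of ours, not the actual Panlab API:

```java
// Sketch of the Resource Adapter idea: a homogeneous configure() interface
// (Panlab-style) wraps a domain-specific resource API, similarly to how a
// device driver hides hardware details. All names here are illustrative.
interface ResourceAdapter {
    void configure(String parameter, String value);  // homogeneous API
}

// Hypothetical vendor API with its own naming conventions (e.g., a Xen domain).
class XenDomainApi {
    private int cpuCapPercent = 100;
    void setCpuCap(int percent) { cpuCapPercent = percent; }
    int getCpuCap() { return cpuCapPercent; }
}

// The adapter translates homogeneous configure() calls into vendor calls.
class XenResourceAdapter implements ResourceAdapter {
    final XenDomainApi domain = new XenDomainApi();
    public void configure(String parameter, String value) {
        if (parameter.equals("CPU_CAPACITY")) {
            domain.setCpuCap(Integer.parseInt(value));
        }
    }
}
```

With one such adapter per resource type, the testbed framework can provision heterogeneous resources through a single uniform interface.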

an auction site prototype modeled after ebay.com. It provides a virtualized distributed application that consists of three components (a web server, an application server, and a database) plus a workload generator,

Furthermore it can be deployed in a virtualized environment using Xen server technology, which allows regulating system resources such as CPU usage and memory,

and provides also a monitoring tool, Ganglia, that measures network metrics, such as round trip time and other statistics,

and using Xen server technology to regulate CPU usage. During this scenario the adaptive admission control and resource allocation algorithm is tested against network metrics, like round trip time and throughput.

so that resources like CPU usage and network throughput reach high values. During the setup, the researcher wants to test HTTP proxy software written in the C programming language that implements an admission algorithm.

Figure 1 displays the setup for the discussed scenario.

The setup consists of three workload HTTP traffic generators, making requests through a hosting unit.

needs to monitor the CPU usage of the Web application and Database machines. Then the algorithm should be able to set new CPU capacity limits on both resources.
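The monitor-and-adjust behaviour described here can be sketched as a single iteration of a control loop. The thresholds and step size below are illustrative assumptions of ours, not the adaptive algorithm under test:

```java
// Sketch of one iteration of the admission/allocation control loop: read the
// observed CPU usage of a monitored machine and compute a new CPU capacity
// limit for it. Thresholds (0.9 / 0.3) and the 10% step are illustrative only.
public class AdmissionLoop {
    // Returns the new capacity limit in percent, given the current limit and
    // the observed CPU utilisation in [0, 1].
    public static int adjust(int currentLimit, double observedCpuUsage) {
        if (observedCpuUsage > 0.9 && currentLimit < 100) {
            return Math.min(100, currentLimit + 10);   // overloaded: grant more CPU
        }
        if (observedCpuUsage < 0.3 && currentLimit > 10) {
            return Math.max(10, currentLimit - 10);    // idle: reclaim capacity
        }
        return currentLimit;                            // within band: keep the limit
    }
}
```

In the testbed this new limit would then be pushed to the XEN-hosted resource through the resource's configuration interface.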

Additionally the algorithm should be able to start and stop the workload generators on demand. 3 Technical Environment, Testbed Implementation and Deployment. From the requirements of the use case,

- Linux machines for the RUBiS-based workload generators
- A Linux machine for hosting the algorithm unit, capable of compiling C and Java software
- Linux machines running XEN server, on top of which run the RUBiS Web app

compile the software and execute it. The user will not have access to the RUBIS resources

all the components are managed as virtual machines by a XEN server. The implemented RAs instantiate all these virtual machines and configure the internal components according to end-user needs.

used IP for the testbed, memory, hard disk size, number of clients, ramp up time for the requests and a parameter used during the execution of the experiment called Action

The Proxy Unit exposes parameters such as used IP for the testbed, memory, hard disk size, username,

and a CPU CAPACITY parameter used to set the maximum CPU capacity of the resource. Fig. 2. The resource adapters of the available testbed resources. Fig. 3. RADL definition for the RUBiS application

Figure 3 displays the RADL definition for the RUBIS application server. The Configuration Parameters section describes the exposed parameters to the end user.

Figure 4 displays the use case setup as can be done inside the VCT tool of Panlab.

Figure 5 displays this condition where the System Under Test (SUT) is our algorithm. FCI automatically creates all the necessary code that the end user can then inject inside the algorithm's code.

the java listing displays how we can access the resources of this VCT. FCI creates a java class,

() is able to give back the CPU usage of the database resource. 5 Conclusions. The results of running an experiment in Panlab are encouraging in terms of moving the designed algorithms from simulation environments to near-production environments.

In such an environment federation and virtualization of resources are key features that should be supported in a future Internet.

Network Virtualization (NV) techniques [5,17] allow the establishment of such separate slices on top of a joint physical infrastructure (substrate).

Routing slices as an architectural concept is known as Transport Virtualization (TV) [23,24]. These concepts have roots in the work on active networks,

Also the acquisition of specific measurement equipment is often difficult in local labs due to the high costs of such hardware.

and outlook to the enhancements of federated facilities. 2 Experiment Objectives and Requirements for a Concurrent Multipath Transport Alternative multipath transport services in future federated networks might employ concurrent or consecutive

and installation of arbitrary software, but is distributed only within Germany, has limited access, and currently provides no federation method.

Booking of Resources. With the SFA software it was possible to book nodes in PlanetLab, PlanetLab Europe and in the VINI testbed.

and use arbitrary software on the G-Lab nodes. We assume that such features are of interest for many experimenters,

and hardware to support active and passive high precision measurement. Such an infrastructure helps experimenters to perform measurements

and software tools to the public and to share their experience. Further, free T-Rex seeks to employ standardized instruments to improve the comparability and openness of scientific results in the field of future Internet research.

Overcoming the Internet impasse through virtualization. IEEE Computer, 34-41 (April 2005) 6. Anerousis, N., Hjálmtýsson, G.:

Service level routing on the Internet. In: IEEE GLOBECOM'99, vol. 1, pp. 553-559 7. Becke, M., Dreibholz, T., Iyengar, J., Natarajan, P., Tuexen, M.:

Network virtualization: Breaking the performance barrier. ACM Queue (Jan./Feb. 2008) 18. Santos, T., Henke, C., Schmoll, C., Zseby, T.:

ACM SIGCOMM Computer Communication Review 35(5), 71-74 (2005) 20. Tran-Gia, P.: G-Lab:

Re-sequencing Buffer Occupancy of a Concurrent Multipath Transmission Mechanism for Transport System Virtualization. In:

Using Concurrent Multipath Transmission for Transport Virtualization: Analyzing Path Selection. In: Proceedings of the 22nd International Teletraffic Congress (ITC), Amsterdam, Netherlands (Sep. 2010)

in order to experiment on the improvement of QoS features by using the Self-NET software for self-management over a WiMAX network environment.

or Monitor-Decide-Execute Cycle (MDE) and consists of the Network Element Cognitive Manager (NECM)

which is a software tool that generates traffic at both UoA end machines. This is a Java-based platform that manipulates two independent entities,

and Self-NET software federation (ITGLOG), printing and plotting specific metrics (ITGDEC, ITGPLOT) and remotely controlling the traffic generation (ITGAPI).

The experiment required development of additional BS control software and deployment of IP routing

We implemented the BS control software (i.e., the NECM) to dynamically collect WiMAX link information from the BS

The NECM of the WiMAX BS constantly monitors network device statistics (e.g., UL/DL used capacity, TCP/UDP parameters, service flows),

IOS Press, Amsterdam (2010) 7. Airspan homepage, http://www.airspan.com 8. Distributed Internet Traffic Generator, http://www.grid.unina.it/software

The very success of the Internet is creating obstacles to the future innovation of both the networking technology that lies at the Internet's core and the services that use it.

providing a natural complement to the virtualization of resources by setting up and tearing down composed services, based on negotiated SLAs.

Manageability of the current network typically resides in client stations and servers, which interact with network elements (NEs) via protocols such as SNMP (Simple Network Management Protocol).

Furthermore, the diversity of services as well as of the underlying hardware and software resources makes management highly challenging, meaning that currently,

a diversity in terms of hardware resources leads to a diversity of management tools (distinguished per vendor).

providing an accurate reflection of the real world, delivering fine-grained information and enabling almost real-time interaction between the virtual world and real world.

Among the core drivers for the FI are increased reliability, enhanced services, more flexibility, and simplified operation.

[Table residue: per-phase timings, including Communication Phase 22.192 / 1.711 / 2.601 / 22.405 and Monitor Phase 2.760 / 2.561 / ...; column headers not recoverable]

new methods (related to embedded and/or autonomous management, virtualization of systems and network resources, advanced and cognitive networking of information objects),

IOS Press, Amsterdam (2009) 10. Organization for Economic Co-operation and Development (OECD): The Seoul Declaration for the Future of the Internet Economy.

IBM Corporation (2008) 30. Prehofer, C., Bettstetter, C.: Self-organization in Communication Networks: Principles and Design Paradigms.

Proceedings of the International Conference on Ultra Modern Telecommunications (ICUMT 2009), pp. 1-6. IEEE Computer Society Press, Los Alamitos (2009) 32.

open-source multimedia framework, player and server, http://www.videolan.org/vlc

In [7], the issue of server selection is investigated by proposing a node selection algorithm with respect to the worst-case link stress (WLS) criterion.

laptops and other network-enabled devices). Thus, a fitness function is presented which is able to evaluate the eligibility of each candidate node

According to this scenario, a node which acts as a traffic source like a laptop or a camera is out of the coverage of the infrastructure.

INFOCOM 2006: 25th IEEE International Conference on Computer Communications (2006) 3. Rong, B., Hafid, A.:

Computer Communications 31, 1763-1776 (2008) 5. Verma, A., Sawant, H., Tan, J.: Selection and navigation of mobile sensor nodes using a sensor network.

Future Internet, Virtualization, Dynamic Provisioning, Virtual Infrastructures, Convergence, IaaS, Optical Network, Cloud. 1 Introduction. Over the years, the Internet has become a central tool for society.

which in the core-network segment are mostly based on optical transmission technology, but also in the access segments gradual migration to optical technologies occurs.

Cloud technologies are emerging as a new provisioning model [2]. Cloud stands for on-demand access to IT hardware or software resources over the Internet.

and the virtualization paradigm with dynamic network provisioning as a way towards such a sustainable future Internet.

and the service middleware layer. Each layer is responsible for implementing different functionalities covering the full end-to-end service delivery from the service layer to the physical substrate.

Central to this novel architecture is the infrastructure virtualization layer which abstracts, partitions and interconnects infrastructure resources contained in the physical infrastructure layer.

and managing the network resources constituting the Virtual Infrastructure) is closely interacting with the virtualization layer. 3. Finally,

a service middleware layer is introduced to fully decouple the physical infrastructure from the service level.

Network Control Plane; NIPS: Network+IT Provisioning Services; PIP: Physical Infrastructure Provider; SML: Service Middleware Layer; VI: Virtual Infrastructure; VIO: Virtual

It provides means to continuously monitor what the effect of scaling will be on response time performance, quality of data security, cost aspect, feasibility, etc.

These procedures are based on a strong inter-cooperation between the NCP+ and the service middleware layer (SML) via a service-to-network interface, named NIPS UNI, during the entire VI service life cycle.

These requirements describe not only the characteristics of the required connectivity in terms 19 http://www.ens-lyon.fr/LIP/RESO/Software/vxdl/home.html

In anycast services the SML provides just a description of the required IT resources (e.g., in terms of amount of CPU),

Finally, another key element for the control plane is the interaction with the infrastructure-virtualization layer,

The overall architectural blueprint complemented by the detailed design of particular components feeds the development activities of the GEYSERS project to achieve the complete software stack

and evaluate prototypes of the different software components creating and managing optical virtual infrastructures. The other goal is to evaluate the performance and functionality of such a virtualized infrastructure in a realistic production context.

CCGRID'09, p. 1. IEEE Computer Society Press, Los Alamitos (2009), doi:10.1109/CCGRID.2009.97 20. Mintotalpower:

A Novel Architecture for Virtualization and Co-Provisioning of Dynamic Optical Networks and IT Services.

The economic importance of the service sector is a major motivation for services research both in the software industry and academia.

but cloud computing is generally acknowledged to be the provision of IT capabilities, such as computation, data storage and software on demand, from a shared pool, with minimal interaction or knowledge by users.

service providers, software developers and users as follows [6]: - Infrastructure as a Service, offering resources such as a virtual machine or storage services.

- Platform as a Service, providing services for software vendors such as a software development platform or a hosting service.

The ability to trade IT-services as an economic good is seen as a core feature of the Internet of Services.

and Marco Pistore7 1 Intel, Ireland, {joe.m.butler, michael.nolan}@intel.com 2 Telefónica Investigación y Desarrollo, Spain, juanlr@tid.com 3 SAP AG, Germany, wolfgang.theilmann@sap.com 4 ENG, Italy, francesco.torelli@eng.com 5 Technische Universität Dortmund, Germany, ramin-yahyapour@udo.edu

Furthermore, we propose an SLA management framework that can become a core element for managing SLAs in the future Internet.

The service paradigm is a core principle for the Future Internet which supports integration, interrelation and inter-working of its architectural elements.

e.g. the offering of a software service requires infrastructure resources, software licenses or other software services.

We propose an SLA management framework that offers a core element for managing SLAs in the future Internet.

, business, software, and infrastructure) on the other. With a set of four complementary use case studies, we are able to evaluate our approach in a variety of domains

) supports arbitrary service types (business, software, infrastructure) and SLA terms, (3) covers the complete SLA and service lifecycle with consistent interlinking of design-time, planning

business, software and infrastructure. The framework communicates to external parties, namely customers who (want to) consume services

On the highest level, we distinguish the Framework Core, Service Managers (infrastructure and software), deployed Service Instances with their Manageability Agents and Monitoring Event Channels.

The Framework Core encapsulates all functionality related to SLA management, business management, and the evaluation of service setups.

Infrastructure and Software Service Managers contain all service-specific functionality. The deployed Service Instance is the actual service delivered to the customer

and managed by the framework via Manageability Agents. Monitoring Event Channels serve as a flexible communication infrastructure that allows the framework to collect information about the service instance status. Furthermore,

[Fig.: framework overview showing Business, Software and Infrastructure SLA Managers, Business Manager, Service Evaluation, Infrastructure and Software Service Managers, Customer and 3rd-party relations, Manageability Agents, deployed infrastructure and software services, and a Monitored Event Channel, connected by negotiate, control/track, evaluate, prepare/manage, publish and adjust interfaces around the framework core]
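The role of the Monitoring Event Channels can be sketched as a tiny publish/subscribe channel through which measurements flow from service instances to an SLA check. The event shape (a response-time number) and the 200 ms threshold are illustrative assumptions of ours, not part of the framework:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Sketch of a monitoring event channel: deployed service instances publish
// measurements, and SLA-management logic subscribes to observe them and detect
// guarantee-term violations. Event shape and threshold are illustrative only.
public class EventChannel {
    private final List<Consumer<Double>> subscribers = new ArrayList<>();

    public void subscribe(Consumer<Double> subscriber) {
        subscribers.add(subscriber);
    }

    // A manageability agent would call this with each new measurement.
    public void publish(double responseTimeMs) {
        for (Consumer<Double> s : subscribers) s.accept(responseTimeMs);
    }
}
```

An SLA manager can then subscribe a check such as "response time must stay below 200 ms" and count or react to violations as they arrive.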

The ERP hosting use case (Section 4) contains many aspects of a software cloud. 3.3 Interlinkage with System Management. SLA-driven system management is the primary approach discussed in this paper.

We assume a virtualisation-enabled data centre style configuration of server capacity, and a broad range of services in terms of relative priority, resource requirement and longevity.

Software services could potentially be selected by choosing a virtual machine template which contains pre-loaded applications,

but software layer considerations are not considered core to this Use Case and are dealt with more comprehensively in the ERP Hosting Use Case.

Such a solution typically consists of a software package (an application) but also some business-level activities,

At the next level, there are the actual software applications, such as a hosted ERP software package. At the next level, there are the required middleware components

which are used equally for different applications. At the lowest layer, there are the infrastructure resources, delivered through an internal or external cloud.

Each service layer is associated with a dedicated SLA, containing service level objectives which are specific to this layer.

The Application SLA is mainly about the throughput capacity of the software solution, its response time,

The Middleware SLA specifies the capacity of the middleware components, the response time guarantee of the middleware components

and the costs required for the offering. The Infrastructure SLA specifies the characteristics of the virtual or physical resources (CPU speed, memory,

and storage) and again the costs required for the offering. The use case successfully applies the SLA framework by realizing distinct SLA Managers for the 4 layers and also 4 distinct Service Managers that bridge to the actual support department

the application, the middleware, and the infrastructure artefacts. From a technical perspective, the most difficult piece in the realization of the whole use case was the knowledge discovery about the non-functional behaviour of the different components, e.g. the performance characteristics of the middleware.

We collected a set of model-driven architecture artefacts, measurements, best practise rules and managed to consistently interlink them

-enabling of core Telco services and their combination with services from third parties (such as Internet, infrastructure, media or content services).

Additionally, the Service Aggregator integrates the software layer (from the SLA@SOI framework architecture). Finally, the Bank prototype is implemented using the top layer, business.

It is also necessary to outline that the provision of Telco web service wrappers is executed by the Software SLA Manager in an application server

The SMS wrappers deployed in the application server of the corresponding virtual machine have to connect to and execute different tasks with the core mobile network systems behind the Telefónica Software Delivery Platform (SDP).

The components that can also be connected in the use case are the monitors of the services (SMS and Infrastructure services).

To handle violations, track interfaces are used to connect the adjustment components in each SLA Manager.

while typical software/hardware guarantee terms constrain the quality of each single execution of a service, in this use case the guarantee terms constrain the average value of KPIs computed over hundreds of executions
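The difference between per-execution and aggregated guarantee terms can be made concrete with a small sketch. The windowing and threshold below are illustrative assumptions of ours, not the use case's actual KPI definitions:

```java
// Sketch of an aggregated guarantee term: instead of constraining every single
// execution, the term constrains the AVERAGE KPI over a window of executions,
// so one slow execution does not by itself cause a violation.
public class AverageKpiTerm {
    // Returns true if the average of the recorded per-execution KPIs
    // (e.g., response times in ms) stays within the agreed bound.
    public static boolean satisfied(double[] executionKpis, double maxAverage) {
        double sum = 0;
        for (double kpi : executionKpis) sum += kpi;
        return executionKpis.length > 0 && sum / executionKpis.length <= maxAverage;
    }
}
```

This is why monitoring for such terms must aggregate events over time rather than evaluate each monitoring event in isolation.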

We presented a general-purpose SLA management framework that can become a core element for managing SLAs in the future Internet.

and capabilities on arbitrary service artefacts, including infrastructure, network, software, and business artefacts. Four complementary industrial use cases demonstrated the applicability and relevance of the approach.

Last, we plan to open up our development activities via an open-source project. The first framework version fully published as open source can be found at [5]. Open Access.

This article is distributed under the terms of the Creative Commons Attribution Noncommercial License which permits any noncommercial use, distribution,

Whitepaper, IBM developerWorks (March 2008), http://www.ibm.com/developerworks/autonomic/library/ac-edge4/ 4. Theilmann, W., Winkler, U

LNCS, vol. 6369. Springer, Heidelberg (2010) 5. SLA@SOI Open Source Framework. First full release by December 2010, http://sourceforge.net/projects/sla-at-soi 6. SLA@SOI project:

Knowledge management and Virtualisation planes), presented in [3], to better monitor the networks, as the semantic information can directly be handled by the Net-Ontology and DL-Ontology layers.

Using the TCP/IP protocol architecture, there are some limitations for the software-driven control network infrastructure

can also contribute to the translations of the MBT (Model-based Translator) software package, by the use of the FINLAN formal representation in OWL.

use of the CPU, memory assignment, packets lost and others. The invocation of the methods can be done by the AMSS,

The AutoI open-source software implements a scalable and modular architecture for the deployment, control and management of active sessions used by virtual entities.

like the Diverter, the Session Broker and the Virtualisation Broker. There are many others, but these are the essential ones.

as maximum and minimum requisites for an instance (memory size, storage pool size, number of virtual CPUs,
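Such min/max requisites can be modelled by a small value class like the following sketch; the field names and admission rule are assumptions for illustration, not the AutoI schema:

```java
// Illustrative min/max requisites for a virtual-machine instance.
public class InstanceRequisites {
    final int minMemoryMb, maxMemoryMb;
    final int minVcpus, maxVcpus;
    final int minStoragePoolGb, maxStoragePoolGb;

    public InstanceRequisites(int minMemoryMb, int maxMemoryMb,
                              int minVcpus, int maxVcpus,
                              int minStoragePoolGb, int maxStoragePoolGb) {
        this.minMemoryMb = minMemoryMb; this.maxMemoryMb = maxMemoryMb;
        this.minVcpus = minVcpus; this.maxVcpus = maxVcpus;
        this.minStoragePoolGb = minStoragePoolGb; this.maxStoragePoolGb = maxStoragePoolGb;
    }

    // A concrete instance request is admissible if every value lies in range.
    public boolean admits(int memoryMb, int vcpus, int storagePoolGb) {
        return memoryMb >= minMemoryMb && memoryMb <= maxMemoryMb
            && vcpus >= minVcpus && vcpus <= maxVcpus
            && storagePoolGb >= minStoragePoolGb && storagePoolGb <= maxStoragePoolGb;
    }

    public static void main(String[] args) {
        InstanceRequisites r = new InstanceRequisites(512, 4096, 1, 4, 10, 100);
        System.out.println(r.admits(1024, 2, 50)); // within all bounds
    }
}
```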

Based on the AutoI Java open-source code, in the ANPI demo, the ANPISDD class is prepared to use the IP and TCP (port 43702) protocols.

as in the following sample code, extracted from ANPISDD.java:

    public class ANPISDD extends Thread {
        private ServerSocket server;
        ...
        s1 = server.accept();

With the use of the FINLAN library, this communication can be done replacing the IP
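The excerpt can be fleshed out into a self-contained, runnable sketch. Only the Thread/ServerSocket/accept() shape follows the ANPISDD code; the class name, the echo payload and the ephemeral port are illustrative (the original binds TCP port 43702):

```java
import java.io.*;
import java.net.*;

// Minimal sketch of the accept/exchange pattern in the ANPISDD excerpt.
public class AnpiSddSketch extends Thread {
    private final ServerSocket server;

    public AnpiSddSketch() throws IOException {
        server = new ServerSocket(0); // ephemeral port; ANPISDD uses 43702
    }

    public int port() { return server.getLocalPort(); }

    @Override
    public void run() {
        try (Socket s1 = server.accept(); // as in: s1 = server.accept();
             BufferedReader in = new BufferedReader(new InputStreamReader(s1.getInputStream()));
             PrintWriter out = new PrintWriter(s1.getOutputStream(), true)) {
            out.println("echo: " + in.readLine()); // illustrative payload
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    // Hypothetical round-trip helper used only for demonstration.
    public static String roundTrip(String msg) throws Exception {
        AnpiSddSketch srv = new AnpiSddSketch();
        srv.start();
        try (Socket c = new Socket("127.0.0.1", srv.port());
             PrintWriter out = new PrintWriter(c.getOutputStream(), true);
             BufferedReader in = new BufferedReader(new InputStreamReader(c.getInputStream()))) {
            out.println(msg);
            String reply = in.readLine();
            srv.join();
            return reply;
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundTrip("hello"));
    }
}
```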

Nevertheless, the future intention is to implement the FINLAN ontology at Linux operating system kernel level,

since the methods proposed would be available at the operating system level. 4 Conclusions This paper has presented the FINLAN ontology work from a collaboration perspective with some Future Internet projects.

Future work will implement the FINLAN ontology at the Linux kernel level and run performance

Platforms and Software Systems for an Autonomic Internet. In: IEEE Global Communications Conference (2010) 14. Rubio-Loyola, J., Astorga, A., Serrat, J., Lefevre, L., Cheniour, A., Muldowney, D., Davy, S., Galis

The Internet of Services is seen as a core component of the Future Internet: the Future Internet is a polymorphic infrastructure,

so that software agents are able to process and reason with the information in an automatic and flexible way.

and consuming functionalities of existing pieces of software. In particular, WSDL is used to provide structured descriptions of services, operations and endpoints,

The use of services as the core abstraction for constructing Linked Data applications is therefore more generally applicable than that of current data integration oriented mashup solutions.

which links service descriptions with users ratings, tags and comments about services in a separate server.

This addressing scheme should be easily resolvable such that software clients are able to access easily underlying descriptions.

Architectural Styles and the Design of Network-based Software Architectures. PhD thesis, University of California (2000) 8. McIlraith, S. A., Son, T. C., Zeng, H.:

Second, the development of advanced networking technologies in the access and core parts, with QoS assurance, can be seen.

Third, today's software technologies support the creation and composition of services while being able to take into account information regarding the transport/terminal contexts

Based on virtualization, the network can offer enhanced transport and adaptation-capable services. This chapter will introduce

9. The virtualisation as a powerful tool to overcome the Internet ossification by creating overlays is discussed in 10-11.

or multiple core network domains having content-aware processing capabilities in terms of QoS, monitoring, media flow adaptation,

while a more radical approach can also be envisaged towards full virtualization (i.e., independent management and control per VCAN).

the SM@SP instructs the SP/CP servers how to mark the data packets. The information to be used in content-aware classification can be:

L2, L3 and L4 headers, as well as high-level headers. [Figure: VCANs (VCAN1/MQC1, VCAN2/MQC2, VCAN3/MQC3) spanning domains AS1-AS3, with AN, HB and the SP/CP server]
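A marking rule of this kind can be sketched as a lookup from an L3 header field to a VCAN; the DSCP values and VCAN assignments below are purely illustrative, not the SM@SP protocol:

```java
import java.util.Map;

// Illustrative content-aware classifier: maps a DSCP value (L3 header)
// to a VCAN identifier; high-level headers could refine the decision.
public class ContentAwareClassifier {
    private static final Map<Integer, String> DSCP_TO_VCAN =
        Map.of(46, "VCAN1/MQC1",   // EF: real-time media (assumed policy)
               26, "VCAN2/MQC2");  // AF31: streaming (assumed policy)

    public static String classify(int dscp) {
        // Unmatched traffic falls through to a best-effort VCAN.
        return DSCP_TO_VCAN.getOrDefault(dscp, "VCAN3/MQC3");
    }

    public static void main(String[] args) {
        System.out.println(classify(46)); // VCAN1/MQC1
        System.out.println(classify(0));  // VCAN3/MQC3
    }
}
```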

Scalability is achieved by largely avoiding per-flow signalling in the core part of the network. In the new architecture, MANEs can also act as content caches,

or co-locating the CP's content servers on the NP's premises; nevertheless, an individual CC may also be a private CP.

IOS Press, Amsterdam (2009) 4. Schönwälder, J., et al.: Future Internet = Content + Services + Management. IEEE Communications Magazine 47(7), 27–33 (2009) 5. Zahariadis, T., et al.:

IOS Press, Amsterdam (2009) 6. Huszák, Á., Imre, S.: Content-aware Interface Selection Method for Multi-Path Video Streaming in Best-effort Networks.

IOS Press, Amsterdam (2009) 8. Martini, M. G., et al.: Content Adaptive Network Aware Joint Optimization of Wireless Video Transmission.

IOS Press, Amsterdam (2009) 10. Anderson, T., et al.: Overcoming the Internet Impasse through Virtualization. Computer 38(4), 34–41 (2005) 11. Chowdhury, N. M., Boutaba, R.: Network Virtualization: State of the Art and Research Challenges. IEEE Communications Magazine 47(7), 20–26 (2009) 12. Levis, P., et al.: The Meta-QoS-Class Concept: a Step Towards Global QoS Interdomain Services. Proc. IEEE SoftCOM, Oct. 2004. 13. Flegkas, P., et al.: Provisioning for Interdomain Quality of Service:

due to a bandwidth bottleneck at the server side from which all users request the content.

and end-user characteristics such as decoding and display capabilities usually tend to be non-homogeneous and dynamic.

and MDC offers an efficient encoding for applications where content needs to be transmitted to many non-homogeneous clients with different decoding and display capabilities.

MDC combined with path/server diversity offers robust video delivery over unreliable networks and/or in peer-to-peer streaming over multiple multicast trees.
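A toy illustration of the MDC idea, assuming the simplest possible scheme (odd/even sample splitting); real MDC video codecs are far more elaborate, but the sketch shows why any single surviving description still yields a usable, lower-quality signal:

```java
import java.util.Arrays;

// Two-description MDC toy example over an integer sample stream.
public class MdcSketch {
    // Description 0 carries even-index samples, description 1 the odd ones.
    static int[][] split(int[] samples) {
        int n = samples.length;
        int[] even = new int[(n + 1) / 2], odd = new int[n / 2];
        for (int i = 0; i < n; i++) {
            if (i % 2 == 0) even[i / 2] = samples[i];
            else odd[i / 2] = samples[i];
        }
        return new int[][]{even, odd};
    }

    // Both descriptions received: lossless reconstruction.
    static int[] merge(int[] even, int[] odd) {
        int[] out = new int[even.length + odd.length];
        for (int i = 0; i < even.length; i++) out[2 * i] = even[i];
        for (int i = 0; i < odd.length; i++) out[2 * i + 1] = odd[i];
        return out;
    }

    // Only the even description survived: conceal missing samples by repetition.
    static int[] concealFromEven(int[] even, int length) {
        int[] out = new int[length];
        for (int i = 0; i < length; i++)
            out[i] = even[Math.min(i / 2, even.length - 1)];
        return out;
    }

    public static void main(String[] args) {
        int[] s = {10, 11, 12, 13, 14};
        int[][] d = split(s);
        System.out.println(Arrays.toString(merge(d[0], d[1])));              // exact
        System.out.println(Arrays.toString(concealFromEven(d[0], s.length))); // degraded
    }
}
```

Sending the two descriptions over distinct paths or servers is what gives the robustness mentioned above: losing either path degrades rather than interrupts the stream.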

Performance evidence of software proposal for Wavelet Video Coding Exploration group, ISO/IEC JTC1/SC29/WG11/MPEG2006/M13146, 76th MPEG Meeting, Montreux

and search engines are expected to be able to understand underlying semantics in content and match it to the query.

and this process took only a few seconds on a PC with a Pentium D CPU at 3.40 GHz and 2.00 GB of RAM.

IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 994–999 (1997) 4. Chang, E., Goh, K., Sychay, G., Wu, G.:

IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 2 (2003) 12. Hoiem, D., Sukthankar, R., Schneiderman, H., Huston, L.:

IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 2 (2004) 13. Kherfi, M. L., Ziou, D.:

and Signal Processing ICASSP'04, vol. 3. IEEE Computer Society Press, Los Alamitos (2004) 16.

Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 1, pp. 200–206 (1999) 19.

Innovation lies at the core of smart enterprises and includes not only products, services and processes but also the organizational model and full set of relations that comprise the enterprise's value network.

and these combinations require the federation and integration of appropriate software building blocks. A new generation of enterprise systems comprising applications

fine-tuned to the needs of enterprise users by leveraging a basic infrastructure of utility-like software services.

Future Internet, Future Enterprise Systems, component-based software engineering, COTS, SOA, MAS, smart objects, FINES, FINER. 1 Introduction In recent years, software

and realising enterprise software applications. In essence, while enterprise management and planning services will be increasingly available from the 'cloud', in a commoditised form,

the Internet of Services (IoS), the Internet of Things (IoT) and smart objects, the Internet of Knowledge (IoK), and the Internet of People (IoP).

Together, they need to cooperate in developing a new breed of services, tools, software packages, interfaces and user interaction solutions that are not available at the present time.

In particular, the first and the second GRC concern the development of new FINESs capable of offering business experts the possibility of directly governing the development of software architectures.

if such software architectures correspond to the enterprise architectures and are composed of elements tightly coupled with business entities.

seen as the new frontier of software components aimed at achieving agile system architectures. Section V provides some conclusions

methods and tools, supporting the idea that large software systems can be created starting from independent, reusable collections of preexisting software components.

This technical area is often referred to as Component-Based Software Engineering (CBSE). The basic idea of software componentization is much the same as software modularization,

but mainly focused on reuse. CBSE distinguishes the process of "component development" from that of "system development with components" 9. CBSE laid the groundwork for the Object-Oriented Programming (OOP) paradigm, which in a short time imposed itself over the preexisting modular software development techniques.

OOP aims at developing applications and software systems that provide a high level of data abstraction and modularity (using technologies such as COM, .NET, EJB and J2EE). Another approach to componentization is that of Multi-Agent Systems (MAS),

heterogeneous, interacting software agents. Agents mark a fundamental difference from conventional software modules in that they are inherently autonomous and endowed with advanced communication capabilities 10.

On the other side, the spread of Internet technologies and the rise of new communication paradigms have encouraged the development of loosely coupled and highly interoperable software architectures through the spread of the service-oriented approach,

and the consequent proliferation of Service-Oriented Architectures (SOA). SOA is an architectural approach whose goal is to achieve loose coupling among interacting software services, i.e., units of work performed by software applications, typically communicating over the Internet 11. In general, a SOA will be implemented starting from a collection of components (e-services) of two different sorts.

Some services will have a 'technical' nature, conceived for the specific needs of ICT people; others will have a 'business' nature,

there is an active entity (a person, an organization, a computer, a robot, etc.) that provides the services, with a given cost and time (not to mention SLA, etc.)

where business experts can directly manage a new generation of enterprise software architectures. Cloud computing represents an innovative way to architect

and the hardware and system software in the datacenters that provide those services 12. Cloud computing may be considered the basic support for a brand new business reality where FINERs can easily be searched,

to ease software development processes. Conversely, we propose to base a FINES architecture on building blocks based on business components.

with the protocols for issuing (as client) or responding (as server) to request messages. It is structured according to the grounding of OWL-S. 2 http://esw.w3.org/SweoIG/TaskForces/CommunityProjects/LinkingOpenData 3 Universal Resource Identifier,

Tangible entity, from computers to aircrafts, to buildings and furniture. Intangible entity, for which a digital image is mandatory.

summarised in the sentence 'The Network is the Computer'. As it happens with early intuitions,

As a next prophecy we propose the Enterprise is the Computer, meaning that an enterprise,

IoT, IoS, Multi-Agent Systems, Cloud Computing, Autonomic Systems) and, in parallel, some key areas of the enterprise that will start to benefit from the FINES approach.

Environmental Modelling & Software 24(5) (2009) 9. Crnkovic, I., Larsson, S., Chaudron, M.: Component-based Development Process and Component Lifecycle.

Component-oriented software development, Special issue on analysis and modeling in software development, pp. 160–165 (1992) 11.

IOS Press, Amsterdam (2010) 16. Papazoglou, M. P.: Web Services: Principles and Technology. Prentice-Hall, Englewood Cliffs (2007) 17.

Research projects following this direction have focused on microprocessor design, computer design, power-on-demand architectures and virtual machine consolidation techniques.

Large ICT companies, like Microsoft, which consumes up to 27 MW of energy at any given time 1,

and in the server farms is not considered, since no special equipment is deployed in the GSN.

Management and technical policies will be developed to leverage virtualization which helps to migrate virtual infrastructure resources from one site to another based on power availability.

Core nodes are linked by an underlying high-speed optical network with up to 1,000 Gbit/s bandwidth capacity, provided by CANARIE.

in comparison to electronic equipments such as routers and aggregators 4. The migration of virtual data centers over network nodes is indeed a result of a convergence of server and network virtualizations as virtual infrastructure management.

During the service, the user monitors and controls resources as if they were the owner, allowing them to run their application in a virtual infrastructure powered by green energy sources. 2 Provisioning of ICT Services over Mantychore FP7

MANTICORE II continued in the steps of its predecessor to implement stable and robust software while running trials on a range of network equipment.

[Figure: GSN-Montreal node architectures. Wind power node (spoke): switch (Allied Telesis), Raritan PDU, UPS (APC), Dell PowerEdge R710 servers, link to core network. Hydroelectricity power node (hub): MUX/DEMUX, GbE transceivers, backup disk arrays, Dell PowerEdge R710 servers, link to core network]

wind and solar types) by green energy and adjust the network to the needs controlled by software.

such as routers and servers, is not considered, because no special hardware equipment is used in the GSN.

Figure 2 illustrates the architectures of a hydroelectricity node and two green nodes: one is powered by solar energy

The solar panels are grouped in bundles of 9 or 10 panels; each panel generates a power of 220-230 W.

The wind turbine system is a 15 kW generator. After being accumulated in a battery bank, the electrical energy is processed by an inverter/charger in order to produce an appropriate output current for computing and networking devices.
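Using the figures above, the per-source capacities can be compared with a quick calculation; the panel counts and ratings come from the text, while the comparison itself is only illustrative:

```java
// Back-of-the-envelope power budget for the green nodes described above.
public class GreenNodePower {
    // Peak output of a bundle of identical panels.
    static double solarBundleWatts(int panels, double wattsPerPanel) {
        return panels * wattsPerPanel;
    }

    public static void main(String[] args) {
        double low = solarBundleWatts(9, 220.0);    // smallest bundle: 1980 W
        double high = solarBundleWatts(10, 230.0);  // largest bundle: 2300 W
        System.out.println("Solar bundle: " + low + "-" + high + " W peak");
        System.out.println("Wind turbine: 15000 W");
    }
}
```

So a single solar bundle delivers roughly 2 kW peak, an order of magnitude below the 15 kW wind generator, before battery and inverter losses.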

Within each node, servers are linked by a local network, which is then connected to the core network through GbE transceivers.

Data flows are transferred among GSN nodes over dedicated circuits (like lightpaths or P2P links), tunnels over the Internet, or logical IP networks.

then pushes virtual machines (VMs) or software virtual routers from the hub to a sun or wind node (spoke node) when power is available.

which is a new software platform specifically for dealing with the delivery of computing infrastructure 5. Figure 3 compares the layered architecture of the GSN with a general architecture of a cloud comprising four layers.

such as storage servers and application servers linked by controlled circuits (i.e., lightpaths). The Platform Control plane corresponds to the Core Middleware layer,

implementing the platform-level services that provide a running environment enabling cloud computing and networking capabilities for GSN services.

The Cloud Middleware plane corresponds to the User-Level Middleware, providing Platform-as-a-Service capabilities based on IaaS Framework components 5. The top Management plane or User level focuses on application services by making use of services provided by the lower layers

Such a migration is required for large-scale applications running on multiple servers with a high density connection local network.

This results in a reconfiguration of a large number of servers and network devices in a multi-domain environment.

Given that each VM occupies one processor and that each server has up to 16 processors,

20 servers can be moved in parallel. If each VM consumes 4 GB of memory space, the time required for such a migration is 1,000 seconds.
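The figures above imply a sustained transfer rate that can be back-computed. The text does not state the link bandwidth, so the derived throughput is only what the stated numbers imply, not a measured value:

```java
// Back-of-the-envelope check of the migration figures quoted above.
public class MigrationEstimate {
    // Total VM state to move: servers x VMs per server x GB per VM.
    static double totalGigabytes(int servers, int vmsPerServer, double gbPerVm) {
        return servers * vmsPerServer * gbPerVm;
    }

    // Throughput implied by moving totalGb in the given time (gigabits/second).
    static double impliedGbps(double totalGb, double seconds) {
        return totalGb * 8 / seconds;
    }

    public static void main(String[] args) {
        double gb = totalGigabytes(20, 16, 4.0); // 20 servers, 16 VMs each, 4 GB
        System.out.println(gb + " GB in 1000 s -> "
            + impliedGbps(gb, 1000.0) + " Gbit/s sustained");
    }
}
```

That is 1,280 GB in 1,000 seconds, i.e. roughly a saturated 10 Gbit/s path, which suggests the quoted migration time assumes a 10 Gbit/s class interconnect.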

The migration of data centers among GSN nodes is based on cloud management. The whole network is considered as a set of clouds of computing resources

and converges server and network virtualizations. Whilst most cloud management solutions on the market focus particularly on computing resources,

Renewable Energy Provisioning for ICT Services in a Future Internet. 5 Federated Network GSN takes advantage of virtualization to link virtual resources together to span multiple cloud and substrate types.

An orchestration middleware is built to federate clouds across domains, coordinate user registration, resource allocation, stitching,

and to leverage and interoperate with software outside of the GSN. Along with the participation of international nodes, there is an increasing need to support dynamic circuits on the GSN

including virtual servers and virtual routers and/or virtual switches interconnecting the servers. Such a virtual data center can be hosted by any physical network node, according to the power availability.

There is a domain controller within each data center or a set of data centers sharing the same network architecture/policy.

Virtualization techniques are shown to be the most appropriate solution to manage such a network and to migrate data centers following green energy source availability,

Extending the Argia software with a dynamic optical multicast service to support high performance digital media.

while also encompassing peripheral and less developed cities. It also emphasises the process of economic recovery for welfare and well-being purposes.

Living Lab-driven innovation ecosystems may evolve to constitute the core of 4P (Public-Private-People-Partnership) ecosystems providing opportunities to citizens

section 5 presents conclusions and an outlook. 2 City and Urban Development Challenges In the early 1990s the phrase"smart city"was coined to signify how urban development was turning towards technology,

such as mobile devices (e.g. smartphones), the Semantic Web, cloud computing, and the Internet of Things (IoT) promoting real-world user interfaces.

middleware and agent technologies as they become embedded into the physical spaces of cities. The emphasis on smart embedded devices represents a distinctive characteristic of smart cities compared to intelligent cities

Digital cities, from digital representation of cities, virtual cities, digital metaphor of cities, cities of avatars, second life cities, simulation (sim) city.

Smart cities, from smart phones, mobile devices, sensors, embedded systems, smart environments, smart meters, and instrumentation sustaining the intelligence of cities.

with the help of instrumentation and interconnection of mobile devices, sensors and actuators allowing real-world urban data to be collected

and wireless networks, offering high connectivity and bandwidth to citizens and organisations located in the city,(2) the enrichment of the physical space and infrastructures of cities with embedded systems, smart devices, sensors,

Future media research and technologies offer a series of solutions that might work in parallel with the Internet of things and embedded systems, providing new opportunities for content management 12,13.

[Fig. 1. Smart city e-service domains: large-scale ontologies and semantic content; cloud services and software components; city-based clouds; open and federated content platforms; cloud-based fully connected city; smart systems based on the Internet of Things; smart power management; portable systems; smart systems enabling integrated solutions, e.g. health and care; software agents and advanced sensor fusion; telepresence]

Demand for e-services in the domains outlined in Fig. 1 is increasing,

There is a critical gap between software applications and the provision of e-services in terms of sustainability and financial viability.

Open source communities may also substantially contribute to the exchange of good practices and open solutions.

such as IBM, Cisco, and Microsoft, are strongly involved in and are contributing to shaping the research agenda.

At the core of Periphèria lies the role of Living Labs in constituting a bridge between Future Internet technology push

and developers. 5 Conclusions and Outlook In this paper we explored the concept of smart cities as environments of open

IBM Journal of Research & Development 53(3), 338–353 (2009) 11. European Commission: Growing Regions, Growing Europe:

and its particular components, the Internet of Things (IoT) and the Internet of Services (IoS), can become building blocks to progress towards a unified urban-scale ICT platform, transforming a Smart City into an open innovation platform.

and at the service level (IoS as a suite of open and standardized enablers to facilitate the composition of interoperable smart city services).

control, and monitor complex interdependent systems of dense urban life 3. Therefore in the design of urban-scale ICT platforms,

three main core functionalities can be identified: Urban Communications Abstraction. One of the most urgent demands for sustainable urban ICT development is to solve the inefficient use (i.e. duplication) of existing or new communication infrastructures.

and interoperable communication protocols where physical and virtual things are integrated seamlessly into the information network 5. The Internet of Services (IoS):

namely IoT and IoS, can be essential building blocks in future Smart City open innovation platforms.

and IoS as ICT Building Blocks for Smart Cities. In the analysis from Forrester Research 9 on the role that ICT will play in creating the foundation for Smart Cities,

IoS evolution must undoubtedly be correlated with IoT advances. Otherwise, a number of future Smart City services will never have an opportunity to be conceived, due to the lack of the required links to the real world.

and challenges of implementing IoT and IoS at the city scale. Starting with the benefits of IoT technologies, they are twofold:

Considering now the IoS, it must be stressed that it is widely recognized (see for example 12) that the real impact of future IoT developments is tied heavily to the parallel evolution of the IoS. So,

a Smart City could only become a true open innovation platform through the proper harmonization of IoS and IoT.

Thus the integration of the innovative principles and philosophy of the IoS will engage collective end-user intelligence from Web 2.0

The technological challenge of developing the IoS has been assumed at the EU level, and actions are being initiated to overcome the undesirable dissociation between technological

[Fig. 1. Global Service Delivery Platform (GSDP) integrating IoT/IoS building blocks: experimental testbeds and ad hoc WSN deployments expose IoT resources (sensor and actuator networks) through a USN-Enabler adaptation and homogenization layer; an IoT federation level and an IoS federation level (GSDP, SDP, entity exposure, service exposure, control layer) connect to NGN/Telco 2.0 and Web 2.0 services]

3 Developing Urban IoT Platforms. At present, some works have been reported on practical implementations

This capability will allow a seamless link between IoT and IoS, as discussed in Section 2. Also relevant will be the definition of open APIs,

the IERC cluster 24 or the emerging PPP IoT Core Platform Working Group discussion 25, multiple different approaches for first-generation IoT platforms are currently being implemented.

As a connection point between two networks (sensors networks deployed throughout the city and the core IP communication network),

and the proper basis for the new heterogeneous sensor network infrastructures needed to enable an evolving FI based on the IoT and IoS paradigms.

[Figure: node software stack: WISELIB and user-developed applications running on TinyOS, Contiki and SunSPOT nodes over OpenCom middleware, with mobility support, horizontal support, federation support, and security, privacy and trust]

Tourism information in different parts of the city through mobile devices using visual and interactive experiences and in different languages.

research and service-oriented initiatives in both IoT and IoS areas, such as WISEBED 25, SENSEI 8 and the USN IoT Platform (presented in Section 3), including Web 2.0 and Telco 2.0 design principles.

the SmartSantander middleware) that provides the functionality described by these requirements and is expected to accommodate additional requirements coming up from the different smart city services (use cases).

through IoT and IoS, for creating new real-life applications and services is huge in the smart city context.

providing the key components required to intertwine the IoT and IoS worlds. The referred IoT USN platform is currently being evolved with the addition of new capabilities

Research Challenges for the Core Platform for the Future Internet. In: M. Boniface, M. Surridge, C. U (Eds.

) Towards the Future Internet, IOS Press, Amsterdam (2009) 17. Fisher, S.:Towards an Open Federation Alliance.

