Data

(*)data_mining (2104)
Backup (32)
Big data (1251)
Computer data storage (37)
Data (11657)
Data center (89)
Data compression (2)
Data format (102)
Data sharing (53)
Data storage devices (40)
Data stream (18)
Data structure (12)
Data transmission (38)
Data type (16)
Data warehouse (108)
Database (1443)
Digital data (569)
Input data (15)

Synopsis: ICT: Data


(Focus) Eunika Mercier-Laurent-The Innovation Biosphere_ Planet and Brains in the Digital Era-Wiley-ISTE (2015).pdf

2015933946 British Library Cataloguing-in-Publication Data. A CIP record for this book is available from the British Library. ISBN 978-1-84821-556-6. Contents. ACKNOWLEDGEMENTS...

(smithsonian.com) The satellites and the spread of the Internet and mobile devices, smartphones and tablets have led to a veritable deluge of data, further accelerating the move toward the Internet of Things.

It allows the large-scale dissemination, analysis and use of data for the benefit of consumers and citizens.

Analytics are used mainly to find information in large amounts of data. Other techniques of knowledge discovery, such as neural networks, genetic algorithms, induction or other multistrategy machine-learning hybrid tools [PIA 91]

Why do sharks seem drawn to the data cables that rest on the ocean floor? Do they feel attacked?

3. http://www.digiworldsummit.com. Google has evolved from a search engine to many other services related to data collected from users

Their vision, strategy and innovation attitude have been fruitful; in possession of a huge amount of data, they have become the masters of the world through data, the new capital.

All these data are stored in data centers that must be powered and cooled. The first European data center of Facebook was established in Luleå, Sweden.

Google also developed a machine-learning algorithm (artificial intelligence (AI)) that learns from operational data to model plant performance

How many times are the same or similar data, including the same pictures, registered in different databases?

and mass data simulation for medical research. The connected objects may serve to dynamically control energy consumption

increasing in this way the need for data storage. They have to respect a code of conduct.

data, text and image mining tools, creativity amplifiers, robots and drones, etc., mostly not eco-designed [MER 11, MER 13a].

IT is limited to data and information processing, whereas AI is about knowledge, thinking and problem-solving.

Three-dimensional (3D) printers, invented in the 1980s, are able to print a 3D object of almost any shape from a 3D model or other electronic data source, primarily through an additive process in

The Internet has facilitated the exchange of medical data and experiences. Health care practices are now supported by electronic processes and communication (e-health).

The Internet's quick access to patients' data is useful in an emergency, but it may also be used maliciously.

Researchers point out the absence of data related to the risk of skin contact with or ingestion of nanoparticles [BAT 14].

Most computer science training teaches how to think about data (classifying things), whereas artificial intelligence enables us to learn how to think about knowledge (problem-solving).

Their method uses data thinking and a model with 30 variables; the progress is presented only by comparing the values variable by variable.

is in essence a way to use statistical analysis of historical data to transform seemingly non-routine tasks into routine operations that can be computerized.

One participant plans to match public employment data with data from companies to create a decision support system dedicated to both recruiters and jobseekers.

and Earth science data recently made available on the Open NASA Earth Exchange (OpenNEX) platform on Amazon Web Services (AWS) in new and creative ways;

This policy analysis and measurement use a data approach and are based on statistical methods, not on real knowledge of the current situation.

as well as the contribution of open data to the smartness of growing cities. Numerous initiatives supported by the EU programs are described in Research*eu Results Magazine.

To deal with information overload, new representations allow people to assimilate data in a simpler way not only through the use of existing visual channels,

Econophysics, a new field of science, tackles the analysis of economic data based on a methodology developed in physics.

and analyzes administrative (billing) and lab-result health data to detect care gaps and neglected patients.

(http://vbrant.eu) supporting the development of virtual research communities involved in biodiversity science for managing biodiversity data on the Web, bringing together biologists and computer scientists.

of 14 December 2012, http://ec.europa.eu/research/participants/data/ref/fp7/132141/hwp-201301_en.pdf. [EUR 13] EUROPEAN COMMISSION, Europe in a Changing World:

critical assessment of the use of analogies to derive best estimates from existing nonspecific data, Journal of Environmental Radioactivity, vol. 136, pp. 152-161, October 2014.

[OEC 05b] OECD, Oslo Manual: Guidelines for Collecting and Interpreting Innovation Data, 3rd ed., 2005.


(Management for Professionals) Jan vom Brocke, Theresa Schmiedel (eds.)-BPM - Driving Innovation in a Digital World-Springer International Publishing (2015).pdf

and Richard Welch Part III Driving Innovation Through Advanced Process Analytics Extracting Event Data from Databases to Unleash Process Mining...

Another characteristic of the digital age relates to the fact that data is not only available anywhere (irrespective of location) but also anytime (irrespective of time).

Particularly, it also relates to the idea of real-time availability of data. The possibility to receive up-to-date information at any point in time is key for essential innovations in many business processes.

Integrating multiple kinds of real-time data, analytics today already enables the prediction of events like the spread of the flu

It is intriguing to think how such data integration will innovate our professional and private lives in the near future,

or to rest for a few minutes based on body data taken from the skin (vom Brocke,

and big data analytics that allow for real-time process decisions based on data available from products in use. Overall, we can observe distinct ways in which BPM can serve as a source of innovation.

Four chapters present the latest findings on the role of analyzing extant data for realizing innovations in a process context.

Wil van der Aalst reports on Extracting Event Data from Databases to Unleash Process Mining. He introduces an approach to create event logs from underlying databases as a fundamental prerequisite for the application of process-mining techniques

and processing the data in real time is known. However, the existing organization and business processes become a barrier to improvements.

In general, the availability of data is never an issue. The usability of data, on the other hand, hinders the concepts from flourishing in the factories.

Interoperability is consistently an issue. Visibility in the value chain is a prerequisite for a proactive reaction.

The data processing, from the time an event occurs in manufacturing (for example a measurement of quality data) until management is able to make sense of the event and its consequences

We want to be able to make decisions based on real-time data, which can be done using, e.g.,

Dispersed intelligence vs. distributed intelligence: consistent and managed application of models; data, information, knowledge, models and expertise are made available

and data service to its targeted customers (primarily teens and young adults) consistent with its youthful, innovative brand.

To offer this service (sign up to ongoing voice/data provisioning) it could have created all the secondary,

relations, Procurement, Service fulfillment, Compensation, Invoicing, Product data management, Service provisioning, Component fabrication, IT service management, Product design & development, Shipping, Corporate communications, Knowledge management

removing the need to re-enter data when they return to their office, and receive immediate feedback on potential drug interactions and suggested next steps.

An industrial site inspector can input inspection data directly, triggering maintenance requests. Enterprise mobile applications can improve efficiency

The result is a flood of data that may contain valuable information, if that information can be detected.

A variety of data-focused technologies are combined to achieve these goals, including complex event processing, pattern analysis and detection, big data processing, predictive analytics and automated decisioning.

Using predetermined process models, historical data from the executing and past processes, and simulation techniques to project forward from a point in time,

what-if scenarios to determine optimal preemptive actions based on the current context of the process instance and historical data for similar instances.
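Illustrating the idea above, the following is a minimal sketch of such a forward projection: it resamples historical activity durations from similar instances to estimate the remaining cycle time of a running instance under an as-is and a what-if scenario. The activity names, duration values and the uniform speedup factor are assumptions for illustration, not taken from the chapter.

import random
import statistics

# Hypothetical historical durations (hours) per activity, taken from similar past instances.
historical_durations = {
    "credit_check": [2.0, 3.5, 1.8, 4.1, 2.6],
    "approval":     [8.0, 12.5, 6.2, 9.9],
    "payout":       [1.0, 1.2, 0.9, 1.5],
}

def simulate_remaining_time(remaining_activities, n_runs=10_000, speedup=1.0):
    """Project remaining cycle time by resampling historical activity durations.

    `speedup` scales all sampled durations, a crude what-if knob:
    e.g. 0.5 assumes every remaining step runs twice as fast after a preemptive action.
    """
    samples = []
    for _ in range(n_runs):
        total = sum(random.choice(historical_durations[a]) for a in remaining_activities)
        samples.append(total * speedup)
    # Return the mean and (roughly) the 95th percentile of the projected remaining time.
    return statistics.mean(samples), statistics.quantiles(samples, n=20)[18]

mean_h, p95_h = simulate_remaining_time(["approval", "payout"])
print(f"as-is: mean {mean_h:.1f} h, 95th percentile {p95_h:.1f} h")
mean_w, p95_w = simulate_remaining_time(["approval", "payout"], speedup=0.5)
print(f"what-if (faster execution): mean {mean_w:.1f} h, 95th percentile {p95_w:.1f} h")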

Absence of separation between content contributors and consumers as well as low input efforts mean lowered thresholds for contributing data and knowledge.

or risks of data loss, may prevent organizations from (successfully) implementing SM in a business process life cycle (Kemsley, 2010).

Data put online can quickly go viral. A typical case of the virality and unpredictability of SM is the 'United Breaks Guitars' video clip

or information, could be used as incriminating data in court proceedings.

The monitoring phase can benefit from including SM for (1) receiving the (quantitatively measured) data and feedback from all stakeholders of the network and (2) sharing the process performance results with co-workers and customers/end-users alike.

(3) statistical analysis of SM data to provide possibilities for process improvement. 4.1 Modeling Phase for Internal Participants: The modeling of business processes provides a shared and comprehensive understanding of the business

Employees are involved actively in preparing the process model by contributing the needed data or knowledge,

Table 1 (A framework for classification of SM inclusion in the business process life cycle; columns: Internal participants / External participants): Process modeling phase: involving the employees in process modeling / gathering data

present and share data or artifacts (video, pictures or other forms of non-textual content).

All participants should have access to the monitored data and thus, in some way receive feedback about the business process they are participating in

Gathering the data required for the analysis can be time-consuming and fragmentary. Achieving a high response rate with surveys

and similar data gathering tools can be challenging. The already existing involvement of users in SM can simplify data contribution.

Including SM in the monitoring process provides stakeholders throughout the organization with a chance to contribute

4.6 Monitoring Phase for External Participants: One possibility of incorporating SM in the monitoring phase is also to make acquired data publicly accessible.

The statistical analysis of available data flows and other SM measures enables the evaluation of alternative process designs.

The findings refer to data integration and business process support as the main benefits of enterprise systems

An insurer can access actual driving behaviour data through an insurance telematics program. As a result, the insurance premium can be adjusted individually based on driving behaviour,

semi-fixed installation using the power and data outlets, or a smartphone, as illustrated in Fig. 1. The probe monitors

Through use of this data, a particular driver's behaviour can be assessed. 2.2 The Vendor Movelo's Motivation to Commercialize the Application: In late 2009,

and the price calculation is done based on risk-criteria data, i.e., age, number of years with a driver's license, and type of car that the end user wants to insure.

Usage grade is calculated by the application by comparing the odometer data (mileage) with the mileage recorded in the application.
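As a rough illustration of that comparison (the vendor's actual formula is not given in the text), the usage grade could be derived as the ratio between the probe's odometer mileage and the mileage recorded in the application:

def usage_grade(odometer_km: float, app_recorded_km: float) -> float:
    """Share of the recorded mileage actually driven, per the comparison described above.

    Both inputs are assumed to be cumulative kilometres over the same policy period;
    the ratio itself is an illustrative assumption, not Movelo's actual calculation.
    """
    if app_recorded_km <= 0:
        raise ValueError("recorded mileage must be positive")
    return odometer_km / app_recorded_km

print(usage_grade(odometer_km=11_250, app_recorded_km=12_000))  # 0.9375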

filtering GPS data combined with sensor fusion from the accelerometer and gyroscope in the smartphone,

and combined with map data in the smartphone (Händel et al., 2014). The complementary parameters for the risk-assessment process
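The two snippets above mention filtering GPS data and fusing it with accelerometer and gyroscope readings. A minimal single-axis Kalman filter conveys the general idea; the state model, noise levels and toy measurements below are illustrative assumptions, not the smartphone implementation described by Händel et al.

import numpy as np

# State: [position, velocity]; the accelerometer reading acts as the control input,
# the GPS position as the noisy measurement. All noise levels are made up for illustration.
dt = 1.0                                   # sample interval in seconds
F = np.array([[1, dt], [0, 1]])            # constant-velocity state transition
B = np.array([[0.5 * dt**2], [dt]])        # effect of measured acceleration
H = np.array([[1.0, 0.0]])                 # we only observe position (GPS)
Q = 0.05 * np.eye(2)                       # process noise covariance
R = np.array([[25.0]])                     # GPS noise (std. dev. ~5 m)

x = np.zeros((2, 1))                       # initial state estimate
P = np.eye(2) * 100.0                      # initial uncertainty

def kalman_step(x, P, accel, gps_pos):
    # Predict using the accelerometer, then correct using the GPS fix.
    x_pred = F @ x + B * accel
    P_pred = F @ P @ F.T + Q
    y = np.array([[gps_pos]]) - H @ x_pred          # innovation
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)             # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new

for accel, gps in [(0.3, 2.1), (0.2, 4.8), (0.0, 7.2)]:   # toy accelerometer/GPS samples
    x, P = kalman_step(x, P, accel, gps)
print("estimated position, velocity:", x.ravel())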

The pilot generated in total big data containing 4,500 driving hours and 250,000 km of road vehicle traffic data (Händel et al.

The data quality was assured by rigorous soft computing methods. However, the results did not fulfil all the initiative goals.

road type, distance driven). The rich driving data help predict driving risks and the loss costs for the highest-risk driving behaviour.

Price calculation: based on static demographic data and historical statistics (traditional) vs. based on the dynamic changes of driving behaviour (UBI); customers get an accurate and personalized price.

information (data generated by the disruptive technology), business process design for the core technology implementation, product/services implementation, individual organization readiness for innovation implementation, towards business models and the outer

Part III: Driving Innovation Through Advanced Process Analytics. Extracting Event Data from Databases to Unleash Process Mining, Wil M.

This paper uses a novel perspective to conceptualize a database view on event data. Starting from a class model and corresponding object models it is shown that events correspond to the creation, deletion,

binds, and classifies data to create flat event logs that can be analyzed using traditional process-mining techniques.

data is rapidly changing the Business Process Management (BPM) discipline (Aalst, 2013a; Aalst & Stahl, 2011;

and only organizations that intelligently use the vast amounts of data available will survive (Aalst, 2014).

Today's main innovations are intelligently exploiting the sudden availability of event data. Out of the blue, Big data has become a topic in board-level discussions.

The abundance of data will change many jobs across all industries. Just like computer science emerged as a new discipline from mathematics

we now see the birth of data science as a new discipline driven by the torrents of data available in our increasingly digitalized world. 1 The demand for data scientists is rapidly increasing.

all data related to social interaction. The IoP includes e-mail, Facebook, Twitter, forums, LinkedIn, etc. The Internet of Things (IoT:

refers to all data that have a spatial dimension. With the uptake of mobile devices (e.g.,

It is not sufficient to just collect event data. The challenge is to exploit it for process improvements.

Process-mining techniques form the toolbox of tomorrow's process scientist. (Footnote: we use the term digitalize to emphasize the transformational character of digitized data.)

Process mining connects process models and data analytics. It can be used: to automatically discover processes without any modeling (not just the control-flow,

but also other perspectives such as the data-flow, work distribution, etc.), to find bottlenecks and understand the factors causing these bottlenecks,

Despite the abundance of powerful process-mining techniques and success stories in a variety of application domains, 2 a limiting factor is the preparation of event data.

The Internet of Events (IoE) mentioned earlier provides a wealth of data. However, these data are not in a form that can be analyzed easily,

and need to be extracted, refined, filtered, and converted to event logs first. The starting point for process mining is an event log.

or data elements recorded with the event (e.g., the size of an order). If a BPM system or some other process-aware information system is used,
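An event log of the kind described here, with a case identifier, activity name, timestamp and optional attributes such as the size of an order, can be sketched as a small table; the column names and values below are illustrative, not the paper's Table 1.

import pandas as pd

# Minimal flat event log: one row per event, as sketched in the text.
log = pd.DataFrame(
    [
        ("order-1", "create order", "2014-06-01 09:00", 120.0),
        ("order-1", "check stock",  "2014-06-01 09:15", 120.0),
        ("order-1", "ship order",   "2014-06-02 14:30", 120.0),
        ("order-2", "create order", "2014-06-01 10:05",  80.0),
        ("order-2", "cancel order", "2014-06-01 16:40",  80.0),
    ],
    columns=["case_id", "activity", "timestamp", "order_size"],
)
log["timestamp"] = pd.to_datetime(log["timestamp"])

# Per case, order events by time and list the activity sequence (the "trace").
traces = (
    log.sort_values("timestamp")
       .groupby("case_id")["activity"]
       .apply(list)
)
print(traces)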

Therefore, we provide a database view on event data and assume that events leave footprints by changing the underlying database.

technology often provides so-called redo logs that can be used to reconstruct the history of database updates.

Although the underlying databases are loaded with data, there are no explicit references to events, cases, and activities.

Therefore, we need to relate raw event data to process instances using a single well-defined view on the process.

In the remainder we assume that data is stored in a database management system and that we can see all updates of the underlying database.

or data elements recorded with the event (e.g., the size of an order). Table 1 shows a small fragment of a larger event log.

2013b) for more information on the data possibly available in event logs. Flat event logs such as the one shown in Table 1 can be used to conduct four types of process mining (Aalst, 2011).

The ProM framework provides an open-source process-mining infrastructure.

Instead, we focus on the event data used for process mining. 3 Guidelines for Logging The focus of this paper is on the input side of process mining:

event data. Often we need to work with the event logs that happen to be available,

There can be various problems related to the structure and quality of data (Aalst, 2011; Jagadeesh Chandra Bose, Mans, & Aalst, 2013).

These guidelines make no assumptions on the underlying technology used to record event data. In this section, we use a rather loose definition of event data:

events simply refer to things that happen, and they are described by references and attributes.

and analyzing event data. Different stakeholders should interpret event data in the same way. GL2:

There should be a structured and managed collection of reference and variable names. Ideally, names are grouped hierarchically (like a taxonomy or ontology).

(d) Celonis process mining (Celonis GmbH); specific extensions (see for example the extension mechanism of XES (IEEE Task Force

, usage of data. GL7: If possible, also store transactional information about the event (start, complete,

Event data should be as raw as possible. GL11: Do not remove events and ensure provenance.

Sensitive or private data should be removed as early as possible (i.e., before analysis). However, if possible, one should avoid removing correlations.

We aim to exploit the hidden event data already present in databases. The content of the database can be seen as the current state of one or more processes.

, BPM/WFM systems) record event data in the format shown in Table 1. To create an event log

we often need to gather data from different data sources where events exist only implicitly.

In fact, for most process-mining projects event data need to be extracted from conventional databases. This is often done in an ad hoc manner.

Definition 1 (Unconstrained Class Model): Assume V to be some universe of values (strings

there cannot be two concerts on the same day in the same concert hall. (Fig. 2: Example of a constrained class model.)

Definition 6 (Events): Let CM = (c;

Next we define the effect of an event occurrence, i.e.,

Dedicated process-mining formats like XES or MXML allow for the storage of such event data.

one may convert it into a conventional event by taking ts_i as timestamp and en_i as activity.

(1) scope the event data, (2) bind the events to process instances (i.e., cases), and (3) classify the process instances. 6.1 Scope:

Determine the Relevant Events: The first step in converting a change log into a collection of conventional event logs is to scope the event data.

One way to scope the event data is to consider a subset of event names ens ⊆ EN.
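A rough sketch of the three conversion steps (scope, bind, classify) on a hypothetical change log follows; the column names and the use of a customer attribute as classifier are assumptions for illustration, not the paper's formal definitions.

import pandas as pd

# Hypothetical change log: every insert/update/delete recorded with a timestamp.
changes = pd.DataFrame(
    [
        ("2014-06-01 09:00", "insert Order",   "o1", "gold"),
        ("2014-06-01 09:05", "insert Payment", "o1", "gold"),
        ("2014-06-01 09:20", "update Order",   "o2", "silver"),
        ("2014-06-01 09:30", "delete Payment", "o1", "gold"),
    ],
    columns=["timestamp", "event_name", "order_id", "customer_type"],
)
changes["timestamp"] = pd.to_datetime(changes["timestamp"])

# (1) Scope: keep only a subset ens of the event names EN.
ens = {"insert Order", "insert Payment", "delete Payment"}
scoped = changes[changes["event_name"].isin(ens)]

# (2) Bind: relate each event to a process instance (case); here the order id plays that role.
traces = scoped.sort_values("timestamp").groupby("order_id")["event_name"].apply(list)
print(traces)

# (3) Classify: split the instances into classes, e.g. by customer type, one event log per class.
for customer_type, part in scoped.groupby("customer_type"):
    print(customer_type, sorted(part["order_id"].unique()))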

Process cubes are inspired by the well-known OLAP (Online Analytical Processing) data cubes and associated operations such as slice,

However, there are also significant differences because of the process-related nature of event data. For example, process discovery based on events is incomparable to computing the average or sum over a set of numerical values.

Next to the automated discovery of the underlying process based on raw event data,

but about getting the event data needed for all of these techniques. We are not aware of any work systematically transforming database updates into event logs.

but cannot be applied easily to selections of the Internet of Events (IoE) where data is distributed and heterogeneous

Process mining seeks the confrontation between real event data and process models (automatically discovered or handmade).

and filtering the event data. The twelve guidelines for logging presented in this paper show that the input-side of process mining deserves much more attention.

This paper focused on supporting the systematic extraction of event data from database systems. Regular tables in a database provide a view of the actual state of the information system.

Slicing, dicing, rolling up and drilling down event data for process mining. In M. Song, M. Wynn,

Data scientist: The engineer of the future. In K. Mertins, F. Benaben, R. Poler, & J. Bourrieres (Eds.),

Aalst, W. van der, Barthelmess, P., Ellis, C.,

IEEE Transactions on Knowledge and Data Engineering, 16(9), 1128-1142. ACSI. (2013). Artifact-centric service interoperation (ACSI) project home page.

Towards a unified view of data. ACM Transactions on Database Systems, 1, 9-36. Cohn, D.,

A data-centric approach to modeling business operations and processes. IEEE Data Engineering Bulletin, 32(3), 3-9. Cook, J.,

, & Wolf, A. (1998). Discovering models of software processes from event-based data. ACM Transactions on Software Engineering and Methodology, 7(3), 215-249.

Cook, J., & Wolf, A. (1999). Software process validation: Quantitatively measuring the correspondence of a process to a model.

It's high time we consider data quality issues seriously. In B. Hammer, Z. Zhou, L. Wang,

Reichert, M., & Weber, B. (2012).

Rediscovering workflow models from event-based data using Little Thumb. Integrated Computer-Aided Engineering, 10(2), 151-162.

Moving to reliable, valid and ultimately credible decisions about innovations through evidence-based decision-making requires an ability to work with data

available and quality data that can be used as evidence. They also require a capability to collect

analyze and interpret such data to prepare for decisions. Table 2 summarizes relevant requirements. These scientific capabilities can obviously be provided by universities and research institutions.

and interpret data using rigorous scientific methods, research can provide additional innovation support services: Novel conceptual perspectives:

Evidence that is converted from data gathered and analyzed scientifically can provide a solid and trustworthy platform for decision-making about innovations, their potential, pitfalls and consequences.

Table 2 (Requirements for evidence-based innovation decisions): the capability 'Data awareness' requires identifying appropriate data, finding available data and understanding the quality of data; the capability 'Data science'

and gather objective data that can be used as facts in innovation decisions. I use the term digital infrastructure in a deliberately loose manner,

or complement historical transaction data with real-time data and analytics, such as in-memory technology. These digital infrastructures provide ample opportunities for evidence-based management in process innovation.

that is, while more data can be generated, more can also be analyzed and used. A classical example is that of Google Analytics, which offers free analysis of web browsing behavior, ready at the fingertips of any decision-maker.

and analyze large volumes of data in real time (vom Brocke, Debortoli, Müller, & Reuter, 2014).

Traditionally, fact finding in support of decision-making in the context of BPM methodologies such as Six Sigma and others has always been hampered by sheer pragmatic concerns about the feasibility, resourcing and costing of data collection efforts.

Data that is generated on digital platforms is located typically at the other end of the scale:

Data points are generated well beyond the sample size required to reach conclusive findings about the data.

It is no longer acceptable not to peruse available data and evidence in making process-related decisions.

this is a research challenge where data such as store size, quality of baking, number of competitors in the market, customer demographics,

Having examined these factors by studying technology data (such as point-of-sale, HR and payroll systems, census data about customer demographics) as well as empirical data from studying the stores and process participants themselves,

conclusions can be made about the occurrence of positive deviance. In a nutshell, in our example the findings were as follows:

who use data from an information system, together with their detailed knowledge of local customers, local events and all other factors that will influence sales.

Combining evidence from past sales data, forecasting algorithms as well as observations and evidence from how store managers operate,

(Fig. 4: Data analytics in the replenishment process; system and trend-line MAPE and MAE.) In replenishment,
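Fig. 4 contrasts forecast accuracy in terms of MAPE and MAE. Both measures are standard and can be computed as below for any pair of actual and forecast series; the numbers are made up.

import numpy as np

actual   = np.array([120, 95, 130, 80, 110], dtype=float)   # e.g. daily units sold
forecast = np.array([110, 100, 125, 90, 100], dtype=float)  # system or trend-line forecast

mae  = np.mean(np.abs(actual - forecast))                    # mean absolute error (units)
mape = np.mean(np.abs((actual - forecast) / actual)) * 100   # mean absolute percentage error (%)

print(f"MAE = {mae:.1f} units, MAPE = {mape:.1f} %")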

data scientists are becoming an essential resource in developing a capability to identify, understand, analyze and interpret evidence in support of innovation decisions about business processes.

Data scientist: The sexiest job of the 21st century. Harvard Business Review, 90(10), 70-76.

what data to gather and when and how to make decisions. A number of approaches for flexible process management have emerged in recent years.

and to collect different subsets of data at different points in the process, with as few restrictions as possible.

In a second step, data is collected with respect to the chosen performance measures. This is followed (third and fourth steps) by identifying samples of exceptional performance from the data

and analyzing the data in an exploratory manner in order to identify what factors might underpin the identified exceptional performance (positive deviance).

In a fifth step, statistical tests are used to identify correlations and causal links between the identified factors and positive deviance.
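As a sketch of this fifth step, one could test whether a candidate factor such as store size differs significantly between positively deviant stores and the rest, for example with a two-sample t-test and a point-biserial correlation; the data and the choice of factor below are illustrative assumptions.

import numpy as np
from scipy import stats

# Hypothetical weekly performance index and a candidate explanatory factor (store size in m^2).
store_size = np.array([310, 450, 290, 520, 480, 300, 610, 350, 400, 560], dtype=float)
performance = np.array([0.82, 1.10, 0.78, 1.25, 1.18, 0.80, 1.31, 0.88, 0.95, 1.22])

# Label the top quartile of performers as positively deviant.
deviant = performance >= np.quantile(performance, 0.75)

# Two-sample t-test: does store size differ between deviant and non-deviant stores?
t_stat, p_value = stats.ttest_ind(store_size[deviant], store_size[~deviant], equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

# Point-biserial correlation between deviance (0/1) and store size.
r, p = stats.pointbiserialr(deviant.astype(int), store_size)
print(f"point-biserial r = {r:.2f}, p = {p:.3f}")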

and (2) the values of data attributes after each activity execution in a case. As an example, consider a doctor who needs to choose the most appropriate therapy for a patient.

Historical data referring to patients with similar characteristics can be used to predict what therapy will be the most effective one

and for every data input that can be given to this activity, the probability that the execution of the activity with the corresponding data input will lead to the fulfillment of the business goal.

To this aim, we apply a combination of simple string matching techniques with decision tree learning. An approach for the prediction of abnormal terminations of business processes has been presented by Kang
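A minimal sketch of the decision-tree part: given historical cases described by the data inputs of an activity and whether the business goal was eventually fulfilled, a classifier can estimate the fulfillment probability for a new data input. The feature names and values are invented for illustration, and the string-matching step is omitted.

from sklearn.tree import DecisionTreeClassifier

# Historical cases: [age, prior_treatments, severity (0-3)] and whether the goal (recovery) was met.
X = [
    [45, 0, 1], [62, 2, 3], [38, 1, 0], [71, 3, 2],
    [55, 1, 2], [30, 0, 0], [67, 2, 3], [50, 1, 1],
]
y = [1, 0, 1, 0, 1, 1, 0, 1]   # 1 = business goal fulfilled

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Probability that executing the activity with this data input leads to goal fulfillment.
new_input = [[58, 1, 2]]
print(clf.predict_proba(new_input)[0][1])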

conversion and transformation as well as versioning of model data are available. Generally, two file formats are supported:

(3) creating a homogeneous data basis for different application and analysis scenarios. Moreover, the authors aim at publishing the model corpus in terms of open models,

It contains 98 reference model entries with lexical data and metadata, such as the number of contained single models.

Data and Knowledge Engineering, 68(9), 793-818. doi:10.1016/j.datak.2009.02.015. Vogelaar, J. J. C. L., Verbeek, H. M. W., Luka, B.,

data view or organizational view) are modeled, it is important to maintain a systematic relationship between modeling elements from different views to ensure all models are integrated properly.

Data management (2nd ed.). Norwood, MA: Artech House. Sweller, J. (1988). Cognitive load during problem solving:

The empirical data was derived from a series of workshops and interviews with the key stakeholders along the process steps, in conjunction with observations.

Due to an increased degree of digital connectedness and increased flow of data from assets and actions within the ecosystem, there are great possibilities to ensure that the value production becomes even more coordinated

and collaboration between engaged actors through its ability to enable actors to share desired data.

and performance metrics, allowing correct measurement data to be obtained and for the results to be interpreted based on relevant contextual factors (explanatory factors),

what data should be replicated in a management dashboard. 3.8 Innovation 2: Information Sharing Platforms for Situational Awareness:

The ambition with a management dashboard is to provide digital images of the status of the D2D process for key stakeholders, with relevant data in real time, for the purpose of increased punctuality and customer satisfaction.

and small third party developers design the latest traveller support services using commonly available data. There are a number of novel insights to be made.

Second, consumers of digital services (e.g. the travellers) are also suppliers of feedback data, encompassing feedback on digital services, new ideas on digital services, the use of physical infrastructures and transport

which data should be provided. (Fig. 7: Example of a passenger dashboard channelized via different media.)

Extracting event data from databases to unleash process mining. In J. Brocke & T. Schmiedel (Eds.

It is also responsible for obtaining data about the performance indicators, their historical data and current values,

and that uses business intelligence mechanisms to extract data from performance indicators. Both regular business processes and adapters are modeled using BPMN notation

All variables from the process instance are copied to the adapters so all adapters can have access to the process data.

On the basis of this data, the adapters can compute recommendations and store them in the Context Provider.

, data elements, values, organizational units, or temporal characteristics. Finally, similar to classical (figure labels: by design, Flexible Workflow Management System (WFMS), Adapted Workflow Instance, Run-time, Design-time, Control Process, Control Requirement, Reference Control)

Current technological progress and the ongoing trends to analyze business data quasi in real-time will,

Data & Knowledge Engineering, 53, 129-162. Betke, H., Kittel, K., & Sackmann, S. (2013). Modeling controls for compliance: An analysis of business process modeling languages.

Data and Knowledge Engineering, 50, 9-34. Sackmann, S. (2011). Economics of controls. In Proceedings of the International Workshop on Information Systems for Social Innovation 2011 (ISSI 2011) (pp. 230-236).

Zachman's enterprise architecture framework (1987) categorizes different artifacts of organizational data that are required for IT development, e.g. design documents, specifications, and models.

(1) what (data), (2) how (function or process), (3) where (network), (4) who (people), (5) when (time),

Furthermore, Process Responsibility takes over ownership of processes, master data, and customized system settings. This includes the definition of process trainings and participation in the appointments of process management roles.

Master data management. Burlington: Morgan Kaufmann. Markus, M. L., & Jacobson, D. D. (2010). Business process governance.

He has published more than 200 research papers, among others in ACM Transactions on Software Engineering and Methodology, IEEE Transactions on Software Engineering, Information Systems, Data & Knowledge Engineering,

B: Big data, 3, 7, 10, 22, 53, 95, 106, 250. Bottom-up approach, 61. BPM. See Business process management (BPM). BPMN.

BPM Driving Innovation in a Digital World, Management for Professionals, DOI 10.1007/978-3-319-14430-6. D: Dashboard, 206-210. Data awareness

, 13, 18, 21, 75-84, 139, 297. Event data, 13, 105-125. Event log, 13, 107-113, 121-125, 151, 152. Evidence-based BPM, 13, 129-142. Evidence-based management, 130, 132-134, 136-137. Execution, 24, 36, 40, 66-67, 146, 147, 150



