Do you know the value of your machine data?

WAGO Analytics

When it comes to optimizing your own machines or systems, the challenge is usually to improve and quantify process knowledge and to transfer it back into the process. With WAGO Analytics, WAGO, based in Minden, supports users from data acquisition to analysis and provides intuitive visualization of the dependencies within their systems. The connections that are uncovered are integrated back into the processes, allowing you to fully exploit the potential for optimization.

In joint projects, WAGO works closely with customers to develop tailored solutions for profitable data use within the specific application.

The Benefits for You:

  • Identification of optimization potential
  • Improved quality levels
  • Tailored analytics solution
  • Greater efficiency and lower costs
  • Improved process stability

Six Steps from Data Acquisition to Profitable Use


Step 1: Gathering Raw Data from Various Data Sources


Step 2: Processing the Data


Step 3: Continuous Data Acquisition


Step 4: Exploratory Data Analysis and Selection of the Right Representation


Step 5: Integration into the Operating Process


Step 6: Leveraging Correlations and Optimization Potential

What Is Necessary for Successful Implementation?


Raw Data from Various Data Sources

To capture machine and sensor data, users need different hardware products that provide the corresponding data pool. WAGO offers a wide range of components for this purpose. The corresponding devices support all standard interfaces and established industrial protocols. In addition to the WAGO I/O System 750 with PFC family controllers and various modules for measurement and sensor data acquisition, users have access to IoT Boxes from WAGO for retrofitting. They are incredibly versatile and ideal for simple machine and system connections. The data can be forwarded to a cloud service or edge computer. The WAGO product portfolio also covers these areas.

Software Platform

Data Analysis – Centralized or Decentralized

The machine and system data that has been collected is then available for both centralized and decentralized analysis. The advantage of the centralized approach is that all data is in the cloud and can be accessed at any time. By contrast, with the decentralized approach, the machine and system data can be analyzed directly in the system, for example. This is where the advantages of Docker® technology come into play. The PFC200 Series Controller and the new Edge devices are already Docker®-ready. This allows use of state-of-the-art software and numerous applications within the custom analytics solution.

WAGO Helps You Use Your Data Profitably

We help you get the tailored analytics solution you need.

1. Gathering Raw Data from Various Data Sources

In the first step, the relevant data sources are identified together with the relevant domain expert. The various machine and system interfaces are read out independently of the respective protocol. Values are accessed from the controller directly, and additional sensors are installed if necessary. Because the analytics solution should be integrated into the existing controller, the automation engineer responsible for the system is consulted on the data acquisition setup.
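Reading out heterogeneous data sources behind one uniform interface can be sketched as follows. This is a minimal, stdlib-only Python illustration, not WAGO's actual implementation: the source names and values are hypothetical, and in a real project each callable would wrap a protocol-specific read (e.g., from the controller or a retrofit sensor).

```python
import time
from typing import Callable, Dict, List, Tuple

# Hypothetical stand-ins for real data sources (controller values,
# retrofit sensors): each source is a callable returning a float,
# so every protocol is hidden behind the same uniform interface.
sources: Dict[str, Callable[[], float]] = {
    "motor_temperature_C": lambda: 61.5,  # e.g., a value read from the controller
    "vibration_mm_s": lambda: 2.31,       # e.g., a value from an added sensor
}

def gather_raw_samples(
    sources: Dict[str, Callable[[], float]]
) -> List[Tuple[float, str, float]]:
    """Poll every configured source once and tag each value with a timestamp."""
    now = time.time()
    return [(now, name, read()) for name, read in sources.items()]

samples = gather_raw_samples(sources)
for ts, name, value in samples:
    print(f"{ts:.0f} {name} = {value}")
```

Adding a new data source then only means registering one more callable; the acquisition loop itself stays unchanged.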

2. Processing the Data

In the second step, the data is time-synchronized. The relevant information is extracted and decoded in a uniform format. Irrelevant data is filtered out and removed. In addition, relevant key figures are calculated on an ongoing basis. This step is particularly important, because a clean database is the basis for the success of an analytics project.
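The processing step described above can be sketched in a few lines of stdlib Python. All timestamps and values below are hypothetical; the point is the pattern: align two streams onto a common time grid, filter implausible readings, and compute an ongoing key figure.

```python
from bisect import bisect_left

# Two raw streams with different sample times (hypothetical values),
# already decoded into (timestamp_s, value) pairs.
temperature = [(0.0, 60.1), (1.0, 60.4), (2.0, 61.0), (3.0, 60.8)]
pressure = [(0.2, 2.01), (1.1, 2.05), (1.9, 2.00), (3.2, 2.07)]

def nearest(stream, t):
    """Value from `stream` whose timestamp is closest to t (stream is time-sorted)."""
    times = [ts for ts, _ in stream]
    i = bisect_left(times, t)
    candidates = stream[max(i - 1, 0):i + 1]
    return min(candidates, key=lambda s: abs(s[0] - t))[1]

# Time-synchronize: align pressure onto the temperature timestamps.
merged = [(ts, temp, nearest(pressure, ts)) for ts, temp in temperature]

# Filter out implausible readings (e.g., a sensor fault outside the physical range).
clean = [row for row in merged if 0.0 < row[2] < 10.0]

# Ongoing key figure: mean pressure over the cleaned window.
mean_pressure = sum(p for _, _, p in clean) / len(clean)
print(merged)
print(mean_pressure)
```

The same pattern scales up: in practice the filtering thresholds and key figures come from the domain expert, which is exactly why this step is done together.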

3. Continuous Data Acquisition

Then in the third step, an individual data logger for the machine or system is put into operation. The data is stored and used for in-depth analysis. A variety of useful data is generated through continuous data acquisition. This can be implemented in the form of test plans, together with the domain expert. Depending on the use case, it may also be sufficient to let the machine run for a longer period of time.
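A data logger of this kind can be sketched as a small Python class. This is an illustrative stdlib-only sketch with hypothetical field names: it writes timestamped CSV rows to an in-memory buffer, whereas a production logger on a controller would write to persistent storage or forward the rows to a database or cloud service.

```python
import csv
import io
import time

class DataLogger:
    """Minimal continuous data logger: appends timestamped rows in CSV form."""

    def __init__(self, fieldnames):
        self.buffer = io.StringIO()
        self.writer = csv.writer(self.buffer)
        # First row is the header: a timestamp column plus one column per signal.
        self.writer.writerow(["timestamp"] + list(fieldnames))
        self.rows = 0

    def log(self, values, timestamp=None):
        """Append one row; the timestamp defaults to the current time."""
        ts = time.time() if timestamp is None else timestamp
        self.writer.writerow([ts] + list(values))
        self.rows += 1

logger = DataLogger(["temperature_C", "pressure_bar"])
logger.log([60.4, 2.05], timestamp=1.0)
logger.log([61.0, 2.00], timestamp=2.0)
print(logger.buffer.getvalue())
```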


4. Exploratory Data Analysis and Selection of the Right Representation

In the fourth step, exploratory data analysis is performed, and the right representations are selected. In offline analyses, dependencies and relationships are extracted, interpreted and visualized. Rare events are revealed. In close cooperation between the data scientist and the domain expert, the first successes are achieved in uncovering potential for optimization. Complex algorithms are often unnecessary. However, the exploratory data analysis also involves evaluating machine learning and AI algorithms for different use cases in offline analyses. If the desired use case cannot be represented with the data from the existing database, either new sensors are installed, or the test plans are adapted.
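Two typical exploratory questions, uncovering a dependency and revealing rare events, often need nothing more than basic statistics. The sketch below uses hypothetical logged values and stdlib Python only; it computes the Pearson correlation between a manipulated variable and a quality figure, and flags samples far off the mean.

```python
from math import sqrt

# Hypothetical logged signals: a manipulated variable and a quality figure.
speed = [100, 105, 110, 115, 120, 125, 130, 135]
defects = [2.0, 2.1, 2.3, 2.2, 2.6, 2.7, 2.9, 3.1]

def pearson(x, y):
    """Pearson correlation coefficient, stdlib only."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

r = pearson(speed, defects)
print(f"correlation(speed, defects) = {r:.3f}")

# Reveal rare events: samples more than 2 standard deviations off the mean.
mean = sum(defects) / len(defects)
std = sqrt(sum((d - mean) ** 2 for d in defects) / len(defects))
rare = [d for d in defects if abs(d - mean) > 2 * std]
print("rare events:", rare)
```

A strong correlation like this one is a candidate finding to review with the domain expert, which is exactly the cooperation the step describes; it is not yet proof of a causal relationship.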


5. Integration into the Operating Process

In the fifth step, the analyses and visualizations that have been optimized for the machine or system are integrated into the operating process. Once again, the automation engineer is consulted about the integration into the control system.

6. Leveraging Correlations and Optimization Potential

In the sixth step, the customer exploits the interrelationships and potential for optimization, benefiting from the advantages offered by a tailored analytics solution. If necessary, the analytics solution can be expanded in a further iteration for the next use case.


Integrated Analytics for Industry: From Data Acquisition to Data Analysis

FAQ – Analytics

General Questions

The potential offered by analytics is immense. Common applications include optimization of manipulated variables, reduction of downtime and minimization of quality and process fluctuations. With tailored approaches, we help you exploit potential for optimization and increase the effectiveness of your processes. Data analysis can also be used to implement new business models.

Working together, we develop a solution to capture the existing database. All relevant protocols and interfaces can be used. If necessary, WAGO offers a comprehensive portfolio for installing additional sensors.

Data acquisition and recording are part of an analytics project. No database needs to be available when starting the project. If you already have data, it will be integrated into the evaluation. If the solution to your problem requires precise detection of rare events, longer data collection periods are usually needed. The aim is to generate a useful variety of data. As the amount of data increases, the precision of data analysis increases.

Depending on the application, a local (decentralized) solution may be possible.

We provide individual dashboards and reporting functions to help you keep an eye on your analytics application and perform analyses independently.

The range of proven methods is wide. Depending on the use case, supervised or unsupervised learning algorithms are used. Model-based methods that integrate expert knowledge are also used. For some use cases, visualizations of the live data also add value.

In the first step, it makes sense to focus on a part of your system where you see potential for optimization. After the results have been successfully integrated into your process, the next use case can then be identified and processed.

Processes and Methods

Data science, machine learning and artificial intelligence all intersect. WAGO Analytics allows you to use methods that lie in this intersection for automation technology.

The computing time needed for data analysis grows with the amount of data being processed. Not every analysis method is suitable for live applications.
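One way to keep a key figure live-capable regardless of how much data has accumulated is to compute it incrementally instead of re-scanning the history. As an illustrative stdlib sketch (not a statement about WAGO's implementation), Welford's online algorithm updates mean and variance in constant time per new sample:

```python
class RunningStats:
    """Welford's online algorithm: mean and variance updated per sample
    in O(1), so the computation stays suitable for live applications."""

    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # sum of squared deviations from the running mean

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    @property
    def variance(self):
        """Population variance of all samples seen so far."""
        return self.m2 / self.n if self.n else 0.0

stats = RunningStats()
for x in [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]:
    stats.update(x)
print(stats.mean, stats.variance)
```

A batch method that must reprocess the full history on every update would eventually fall behind the data rate; an incremental formulation like this one does not.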

The range of proven methods is wide. At present, neural networks are receiving a lot of attention. However, they are only suitable for certain applications, such as event recognition. On their own, they do not give you a complete picture of your machine or system.

WAGO Analytics allows integration of your analyses into live operation. In contrast, statistics concerns the investigation of historical data.

“Big data” refers to large data volumes. Data science involves the analysis of data, which may be big data, but can also be the live data from your machine or system. Machine learning is a class of methods for generating new information from data and is part of data science.
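In the sense used above, even a simple least-squares fit is machine learning: it generates new information (a predictive model) from recorded data. The sketch below uses hypothetical values and stdlib Python only.

```python
# Hypothetical historical samples from a machine.
xs = [10.0, 20.0, 30.0, 40.0]  # e.g., a feed rate
ys = [21.0, 41.0, 61.0, 81.0]  # e.g., a measured cycle time

# Ordinary least-squares fit of a line y = slope * x + intercept.
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
    (x - mx) ** 2 for x in xs
)
intercept = my - slope * mx

def predict(x):
    """New information generated from the data: a prediction for unseen inputs."""
    return slope * x + intercept

print(predict(50.0))
```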

Approach and Procedure during the Analysis

First, the use case is discussed and evaluated. Data is recorded, and your understanding of the process is quantified. After that, further steps are planned and re-evaluated together with your process experts.

After a discussion and evaluation of the application, a data logger for the various data sources is put into operation, and the data is pre-processed. In offline analyses, the application options are developed and evaluated in coordination with your process experts. If you are happy with the results, the solution is then integrated into your operating process. We remain available as your partner throughout subsequent steps.

Not all data is relevant and consistent. Data cleansing is designed to transform the data into a highly consistent format with high information density. As a result, no unnecessary data is transported, visualized or forwarded to the data analysis algorithm, so your analytics solution operates efficiently.
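Data cleansing of this kind can be sketched in a few lines. The rows and thresholds below are hypothetical; the pattern is what matters: drop incomplete readings, sensor error codes outside the physical range, and duplicate transmissions before anything is visualized or analyzed.

```python
# Raw rows as they might arrive from a logger (hypothetical values):
# some incomplete, some duplicated, some physically implausible.
raw = [
    {"ts": 1.0, "temp": 60.2},
    {"ts": 1.0, "temp": 60.2},    # duplicate transmission
    {"ts": 2.0, "temp": None},    # missing reading
    {"ts": 3.0, "temp": -999.0},  # sensor error code
    {"ts": 4.0, "temp": 60.9},
]

def cleanse(rows, low=-40.0, high=150.0):
    """Drop incomplete, implausible and duplicated rows."""
    seen, out = set(), []
    for row in rows:
        key = (row["ts"], row["temp"])
        if row["temp"] is None or not (low <= row["temp"] <= high) or key in seen:
            continue
        seen.add(key)
        out.append(row)
    return out

clean = cleanse(raw)
print(clean)
```

Only the two consistent rows survive, which is precisely the high-information-density result the cleansing step is after.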

Focus on the smallest possible section of your system that you want to investigate. Formulate a specific question, and use your understanding of the process to determine what data provides important information.
