Q&A

The knowledge of experts will not become obsolete in Industry 4.0

But big isn’t always beautiful: acquiring 100 per cent of the data from the work of a machining centre in real time (35 process parameters per millisecond), for instance, already adds up to an annual data volume of 5.8 terabytes. How do you filter out the interesting facts?

Michael Königs: To extract interesting information and process parameters, we use both statistical methods (machine learning) and purpose-built algorithms that allow expert and domain knowledge to be integrated. Quite generally, though, it’s true that continuous data acquisition entails a huge quantity of data. Some current approaches therefore do not acquire all data continuously at the maximum sampling rate, but only at particular times, after defined events – for example when a threshold value is exceeded – or for certain processes. Other approaches compress the data volume by condensing the raw signals into characteristic process parameters. Still others deliberately exploit the large quantities of data in order to identify patterns with suitable mathematical algorithms. Which approach is most likely to succeed depends very much on the application involved.
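
One of the approaches Königs describes – acquiring data only around defined events such as a threshold violation instead of storing the full continuous stream – can be illustrated with a small sketch. The Python code below is an assumption-laden illustration (the window size, the simulated torque trace and the function name are invented for this example), not WZL software.

    import numpy as np

    def event_triggered_windows(signal, threshold, window=1000):
        """Keep only windows around samples that violate a threshold,
        instead of storing the full continuous stream."""
        hits = np.flatnonzero(np.abs(signal) > threshold)
        windows = []
        for i in hits:
            start, stop = max(0, i - window), min(len(signal), i + window)
            if windows and start <= windows[-1][1]:
                windows[-1] = (windows[-1][0], stop)   # merge overlapping windows
            else:
                windows.append((start, stop))
        return [signal[a:b] for a, b in windows]

    # Example: a noisy signal (e.g. a spindle-torque trace) with one overload event
    rng = np.random.default_rng(0)
    trace = rng.normal(0.0, 0.1, 100_000)
    trace[50_000:50_050] += 3.0                        # simulated threshold violation
    kept = event_triggered_windows(trace, threshold=1.0)
    print(sum(len(w) for w in kept), "of", len(trace), "samples retained")

At 35 process parameters per millisecond, continuous acquisition reaches several terabytes per year; event-triggered storage of this kind keeps only a small fraction of the samples while preserving the situations of interest.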

What is Big Data?

One way to define what constitutes big data is the “three Vs”: volume, variety and velocity. This and many other definitions contrast big data with smaller data sets that can still be processed using traditional data-processing applications. Another criterion often cited is the combination of external data with internally collected data. In industrial applications, this internal data is gathered by sensors that measure the movements and conditions in the machine as well as the parameters affecting it.

Moreover, big data sets are typically hard to visualise, and their storage and processing raise further problems. Today, a common solution is to store big data in the cloud. On top of the management problem, the amount of collected data is growing constantly, while analysis methods and tools are still not widely established in companies.

The hope placed in big data is that, by applying complex algorithms and analysis methods, manufacturers will gain valuable insights that make production more efficient and predictable.

Can this quantity of data still be handled with customary hardware, or is a supercomputer needed – perhaps even a quantum computer?

Michael Königs: Present-day technology (if used correctly) will mostly suffice in our disciplines as long as individual or partial models are developed by experts. The broad, interdisciplinary interconnection and use of these models, however, quickly pushes currently available hardware to its limits. A cloud environment geared to these needs, in which both statistically and physically motivated individual models can be interlinked and executed with demand-responsive, fit-for-purpose computing resources, can provide the requisite connectivity and computing power. But I don’t believe that such an environment can only be achieved with quantum computers.
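
What interlinking statistically and physically motivated partial models can look like in principle is sketched below. The thermal-drift example, the model forms and all names are assumptions chosen for illustration, not the models actually used at the WZL: a data-driven correction is simply fitted to the residual of a simple physics-based estimate.

    import numpy as np

    def physical_model(spindle_speed_rpm):
        """Hypothetical first-principles estimate: thermal drift (in µm) grows with speed."""
        return 1e-4 * spindle_speed_rpm

    # Simulated measurements: physics plus effects the physical model does not capture
    rng = np.random.default_rng(1)
    speed = rng.uniform(1_000, 12_000, size=200)
    measured = physical_model(speed) + 0.05 * np.sqrt(speed) + rng.normal(0.0, 0.1, size=200)

    # The statistical partial model learns only the residual of the physical model
    residual = measured - physical_model(speed)
    coeffs = np.polyfit(speed, residual, deg=2)

    def combined_model(s):
        """Interlinked model: physical estimate plus data-driven correction."""
        return physical_model(s) + np.polyval(coeffs, s)

    print(f"Predicted drift at 8,000 rpm: {combined_model(8_000):.2f} µm")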

Academics at the Fraunhofer Institute for Production Systems and Design Technology (IPK) have expressed the view that it would be more sensible to make an intelligent preselection close to the machine before storage, and only then transfer a downsized data set (“smart data”) to the cloud. What do you think of this approach?

Alexander Epple: This is an idea I can basically go along with. At the WZL, too, we are examining local data processing and interpretation, refining the data into smart data in the immediate vicinity of the machinery system concerned. The advantages of this local pre-processing are obvious. However, there are also firms that store and process all raw data unfiltered in a central system such as a cloud.
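
A minimal sketch, assuming an arbitrary feature set and block length, of what such refinement into smart data near the machine could look like: instead of forwarding every raw sample, only a handful of descriptive features per block leave the machine.

    import numpy as np

    def to_smart_data(raw_block):
        """Condense one block of raw high-rate samples (e.g. one second at 1 kHz)
        into a few descriptive features before forwarding them to a central system."""
        x = np.asarray(raw_block, dtype=float)
        return {
            "mean": float(x.mean()),
            "rms": float(np.sqrt(np.mean(x ** 2))),
            "peak": float(np.abs(x).max()),
        }

    raw = np.random.default_rng(2).normal(0.0, 0.2, size=1_000)  # one second of raw samples
    print(to_smart_data(raw))  # three values leave the machine instead of 1,000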

What is your conception of a cloud, and how can it be utilised?

Alexander Epple: By “cloud”, we mean a model for location-independent, on-demand network access to a shared pool of configurable IT resources that can be used as needed and released again. These include not only network bandwidth and computing hardware but also services and applications. In the context of big data, cloud platforms – precisely because of this scalability and the broad availability of analytical algorithms – offer good preconditions for the downstream analysis of data sets that are too large, too complex, too weakly structured or too heterogeneous to be evaluated manually or with classical data-processing methods. Upstream data transmission, however, can be technically challenging. A local data acquisition system at the machine is comparatively simple to design if the data only have to be forwarded; that has substantial advantages for maintenance and roll-out, but places tough demands on the bandwidth required to transmit the data. Local pre-processing and compression can reduce the required bandwidth. However, every compression step entails a loss of information that may be irrelevant for current analyses but crucial for future scenarios. Sometimes you only realise afterwards that the information no longer available would have been helpful for interpreting a particular phenomenon.

Both approaches have their advantages, and which one an application partner decides to pursue will depend on its strategy. Quite generally, we are observing a certain amount of scepticism when it comes to storing data centrally in a cloud system, but there are also options for a local “company cloud”. Even data evaluation directly at a single local machinery system can already offer major potential for raising productivity.
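
To put a rough number on the bandwidth challenge Epple describes, the following back-of-envelope sketch uses the acquisition rate quoted earlier in the interview and assumes a hypothetical 4-byte value per parameter and an arbitrary number of machines.

    # Rough estimate of the bandwidth needed to forward raw data unfiltered
    PARAMS_PER_MS = 35        # acquisition rate quoted in the interview
    BYTES_PER_VALUE = 4       # assumption: one 32-bit value per parameter
    MACHINES = 50             # hypothetical shop floor

    bytes_per_s = PARAMS_PER_MS * 1_000 * BYTES_PER_VALUE
    mbit_per_s = bytes_per_s * 8 / 1e6
    print(f"{mbit_per_s:.1f} Mbit/s per machine, "
          f"{mbit_per_s * MACHINES:.0f} Mbit/s for {MACHINES} machines")
    # Roughly 1.1 Mbit/s per machine around the clock; a whole shop floor adds up
    # quickly, which is why local pre-processing and compression help.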
