One of the most fascinating aspects of industry is the sheer amount of data and information it generates, from which real value can be extracted.
Knowing how to extract, transform, and move this data pays off in several respects:
- Deeper domain knowledge;
- Improved logistics;
- Cost and consumption efficiencies.
It’s no secret that the technologies that can help build a value extraction process are numerous, so choosing the right architecture can be difficult, expensive, and time-consuming. Imagine being able to use a tool that encapsulates several technologies in one place, codeless and with a clear, intuitive graphical interface. Imagine being able to transport your data, transform it, and feed it to AI models, while managing the ML model lifecycle, all in real time.
Well, this tool exists and it is called Helicon: the ultimate codeless platform that simplifies Machine Learning tasks on top of event-based environments. The core of Radicalbit’s Helicon is the plan editor, which is backed by Kafka Streams topologies. Designed to combine streaming event analysis and AI, Helicon simplifies and accelerates development in Advanced Analytics projects and Machine Learning enabled Decision Support Systems.
Helicon’s chemical industry best practice: the case of Alcoplast
Thanks to Radicalbit’s long-time partner C.Si.Co., it is possible to support manufacturing companies in their digital transformation and, in particular, Industry 4.0 processes.
C.Si.Co. is a company that has been working since 2004 on software engineering processes for the industrial sector. Its technological know-how and domain knowledge facilitate integration at diverse levels of production: field, machine, cell and department control, supervision, MES and ERP integration. Food/Enological, Tobacco, Pharmaceutical, Chemical, Machinery, and Building Automation are just a few of the sectors in which C.Si.Co. works.
Radicalbit joined forces with the C.Si.Co. team to give full support to Alcoplast, in a compelling and effective partnership. Alcoplast is an Italian company that processes plastic materials using chemical processes running in several plants.
Sensors installed in each plant generate a constant flow of data on values such as temperature, pressure, material load, and the volume of steam flowing through the pipes. Data is exchanged across all these nodes through the OPC UA (Open Platform Communications Unified Architecture) protocol, which allows data sharing between programmable logic controllers (PLCs), human-machine interfaces (HMIs), servers, and clients.
The need was for a solution that identifies anomalies the moment they happen, making it possible to understand what went wrong and take timely action.
Based on these needs, it was decided to implement two AI solutions, one for each plant:
- Advance prediction of the time remaining until the end of a process (an early warning system);
- Detection of anomalies in the incoming data streams.
How to receive data in Helicon
In order to establish the communication between OPC UA and Helicon, the platform provides gateway software, so the user only has to set the appropriate configuration, such as:
- Connector type (read or write);
- Protocol (OPC UA in this case, but other protocols such as S7, BACnet, and MODBUS are also supported);
- Endpoint to reach.
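As a sketch of what such a configuration might look like (the keys and values below are hypothetical, not Helicon’s actual configuration format):

```yaml
# Hypothetical gateway configuration — key names are illustrative only.
connector:
  type: read            # read or write
  protocol: opc-ua      # alternatives: s7, bacnet, modbus
  endpoint: opc.tcp://plant-server:4840
```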
One of the main advantages of a software gateway is that it gets the solution up and running very quickly, avoiding an installation on the plant that is expensive in both time and cost. However, if needed, the Helicon gateway can run on dedicated hardware at the edge.
Where a physical third-party external gateway is necessary and the built-in solution cannot be used, Helicon offers different options:
- use our clients (several languages and protocols are supported) to publish/subscribe messages to/from a topic stream.
Generally, this feature allows you to read from anywhere and write to anywhere. As said before, Helicon uses Kafka as one of its key technology enablers, so once the communication is up, the sensor data is pushed into a topic stream.
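To make this concrete, a sensor reading published to a topic stream could be serialized as a small JSON message. The field names below are illustrative only, not Helicon’s actual message schema:

```python
import json
from datetime import datetime, timezone

# Hypothetical sensor reading as it might be published to a topic stream.
# Field names are illustrative, not Helicon's actual schema.
reading = {
    "plant": "alcoplast-plant-1",
    "sensor": "reactor.pressure",
    "value": 2.35,  # bar
    "timestamp": datetime(2022, 9, 1, 12, 0, tzinfo=timezone.utc).isoformat(),
}

# Bytes ready to hand to any Kafka producer client.
payload = json.dumps(reading).encode("utf-8")
print(payload.decode("utf-8"))
```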
How data is transported and transformed
Once the data is flowing inside Helicon, we can handle it through the operators that build the pipeline. The idea is to start from the incoming raw data, process it in a streaming fashion, and then feed it to our Machine Learning models.
The model output is itself a flow: for each input record, an output is calculated. Finally, this output is pushed into a new topic stream, which is used for two different purposes:
- Populating a real-time and interactive dashboard;
- Writing the predictions back to OPC UA through the internal gateway.
In detail, a pipeline is a Directed Acyclic Graph (DAG), in which the nodes are connected without loops. It is defined inside the Editor, where the user can design all the operations required to transform the data. The baseline is to have at least one Source and one Sink, with all the other operators in between. Four different types can be defined:
- Source: the operator used to connect to the topic filled with the incoming raw data coming from OPC UA;
- Simple Transformations: operators used for basic transformations of the data, such as filtering, name conversion, or unnesting JSON fields;
- Complex Transformations: operators used for advanced transformations, such as the Models Predictor, which sends the data to a served model and returns the computed result;
- Sink: the operator with which the user creates the output topic stream used to populate the dashboard and write the outcomes back to OPC UA.
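To make the operator types concrete, here is a minimal sketch of the same source → transform → predict → sink flow in plain Python. The operator logic and the model stub are hypothetical; in Helicon these steps are configured graphically, not coded:

```python
import json

# Hypothetical raw messages as they might arrive from the OPC UA source topic.
raw_messages = [
    '{"sensor": "temperature", "payload": {"value": 81.2}}',
    '{"sensor": "pressure", "payload": {"value": 2.4}}',
    '{"sensor": "temperature", "payload": {"value": 79.8}}',
]

def unnest(record):
    """Simple transformation: flatten the nested JSON payload."""
    return {"sensor": record["sensor"], "value": record["payload"]["value"]}

def predict(record):
    """Complex-transformation stub standing in for the Models Predictor."""
    record["anomaly_score"] = abs(record["value"] - 80.0)  # placeholder logic
    return record

# Source -> filter (simple) -> unnest (simple) -> predict (complex) -> sink
sink = [
    predict(unnest(json.loads(msg)))
    for msg in raw_messages
    if json.loads(msg)["sensor"] == "temperature"  # filtering
]
print(sink)
```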
For the sake of completeness, we show below the pipeline created for one of the two plants:
We start with the source operator, then compute simple transformations to make the data ready to be sent to the model, and finally sink the outcomes to the output stream.
How the application of AI takes place
Two different models have been implemented, one for each plant, each trained and validated on a large batch of historical data.
In the first plant, using sensor values such as pressure, temperature, and amperometry, the goal is to predict in advance the time remaining until the end of the chemical process. The chosen model is an XGBoost Regressor, which reached a Mean Absolute Error (MAE) of 30 minutes in the first four hours of the process.
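For reference, MAE is simply the mean of the absolute prediction errors. A toy computation with made-up remaining-time values, not the actual Alcoplast data, looks like this:

```python
# Toy illustration of Mean Absolute Error with made-up values (minutes).
# The actual Alcoplast data and model outputs are not shown here.
actual_remaining = [240, 180, 120, 60]
predicted_remaining = [210, 200, 100, 75]

mae = sum(abs(a - p) for a, p in zip(actual_remaining, predicted_remaining)) / len(actual_remaining)
print(mae)  # 21.25
```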
In the second plant, starting from sensors measuring pressure, temperature, and material load, the goal is to identify an anomalous trend within the last two available hours. A Deep Learning model, a Convolutional Autoencoder, has been applied: it detects anomalous trends that diverge from the expected paths seen in the training set.
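The anomaly-scoring logic behind an autoencoder can be sketched independently of any deep-learning framework: the model reconstructs the input window, and a reconstruction error above a threshold flags an anomaly. The stub reconstruction and the threshold below are purely illustrative:

```python
def reconstruction_error(window, reconstruction):
    """Mean squared error between a sensor window and its reconstruction."""
    return sum((x - r) ** 2 for x, r in zip(window, reconstruction)) / len(window)

def fake_autoencoder(window):
    """Stub standing in for the trained Convolutional Autoencoder: a real
    model reconstructs normal trends well and anomalous ones poorly."""
    return [2.0] * len(window)  # pretends "normal" pressure is a flat 2.0 bar

THRESHOLD = 0.05  # illustrative; tuned on validation data in practice

normal_window = [2.0, 2.01, 1.99, 2.02]
anomalous_window = [2.0, 2.4, 2.8, 3.1]

for window in (normal_window, anomalous_window):
    err = reconstruction_error(window, fake_autoencoder(window))
    print(err, err > THRESHOLD)
```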
As said before, Helicon provides an operator called Models Predictor, in which the user sets the input features and the output fields. Only these fields are used to create the output flow stored in the topic stream that is written to OPC UA and to the dashboard.
Once the model is trained and validated on a local machine, it is ready to be uploaded into Helicon, where shortly afterwards it is served on a Kubernetes node. All the technical aspects are hidden from the user, who does not have to worry about configuration files or deploying containers. All the MLOps operations happen behind the scenes.
A graphical interface informs the user that the model has loaded correctly and shows its current status (for example, whether it is running).
The user can also upload different versions of the same model and then ask the Models Predictor operator to use only the latest available one.
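As a small illustration of this versioning behaviour (the version labels are hypothetical), selecting the latest available version boils down to:

```python
# Hypothetical list of uploaded versions of the same model; the
# Models Predictor would pick the latest one automatically.
versions = ["v1", "v2", "v3"]
latest = max(versions, key=lambda v: int(v.lstrip("v")))
print(latest)  # v3
```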
Real-time data monitoring
A powerful feature in Helicon is Real-Time Data Monitoring, which enables the visualization of the data flowing into the model, directly inside the platform. How can this help? If the models identify anomalies or processes lasting longer than usual, the user can visualize the trend line over a specific time range to see for themselves where the problem lies.
Real-Time Data Monitoring is available after the pipeline deployment. To access it, all that is needed is to click on the “plus” button above the operators (shown in figure 1) and choose the sensor to visualize in real time (shown in figure 2). The values then appear in a time-series plot (shown in figure 3).
The industrial sector covers an important part of the economic fabric of many countries. It generates an impressive amount of data, often from sensors or machinery, which, if used properly, can bring a significant advantage in terms of economic savings, energy efficiency, and logistics optimization. Given its ability to manage data flows in real time, Helicon is a valid and competitive tool not only for managing sensor data but also for applying Artificial Intelligence solutions.
As described in the use case created for Alcoplast, Helicon provides monitoring that runs simultaneously across different systems and sensors; supervising these with human operators would require a much greater effort and far more resources.
How to reduce energy waste with AI
Applying AI to real-time IoT data brings prompt advantages for companies, such as considerable reductions in energy and raw material waste. The alarming state of the current energy market, worldwide and in Europe in particular, is clear to everyone. According to a report by Allianz,...