Flink-JPMML – Streaming Machine Learning Model Serving on Flink (Part 1)

    Make crucial predictions as the data arrives

    Walk down the hottest IT streets these days and you've likely heard about Online Machine Learning: moving AI towards streaming scenarios and exploiting real-time capabilities alongside new Artificial Intelligence techniques. You will also hear about the lack of research in this area, even as its audience expands rapidly.

    If we dig a little deeper, we realize that an earlier step is missing: today's well-known streaming applications still don't handle the concept of Model Serving properly, and industries still lean on the lambda architecture to achieve this goal.

    Suppose a bank has a frequently updated, batch-trained Machine Learning model (e.g. optimized Gradient Descent applied to past buffer-overflow attack attempts) and wants to deploy that model directly to its own canary Distributed IDS, backed by the streaming system, in order to get real-time feedback on the model's quality.
