Enable Adaptive AI and Decision Intelligence with Helicon

Dec 21, 2022 | by Radicalbit, Helicon


Gartner has recently included Adaptive AI among its Top Strategic Technology Trends for 2023. Adaptive AI describes systems that adapt to changing real-world situations and continuously evolve based on real-time data. It leverages event stream processing to retrain learning models and thus adjust for unforeseeable circumstances.

This is the approach we have been pursuing in Radicalbit with our Helicon platform. By offering an out-of-the-box solution for enriching AI implementations with data in motion, we lay the groundwork for online machine learning systems that can enhance decision making, save costs, and increase efficiency at a company-wide level. 


In this regard, Helicon, as an Adaptive AI enabler, is instrumental in implementing Decision Intelligence: a conceptual and technological framework that evolves organizational decision-making by applying machine learning at scale and integrating self-learning models with data streaming. Thanks to Decision Intelligence, organizations can rely on adaptable data-driven practices and self-learning analytical techniques to respond not only to ever-changing business needs, but also to concept and data drift. This in turn accelerates value from real-time data and drastically reduces time to market.


To maintain flexibility and responsiveness to real-world scenarios, it is also important to ensure integration with different data sources. Helicon offers two different API sets: on the one hand, high-level, easy-to-use APIs catering to non-engineering professionals such as data scientists; on the other hand, low-level APIs designed for data and software engineers, allowing them to leverage industry-standard data integration and ETL/ELT tools.
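As a concrete sketch of what the low-level integration route can look like, the snippet below serializes a telemetry event into JSON bytes ready for any industry-standard Kafka producer. The topic name, field names, and the client shown in the comments are illustrative assumptions, not Helicon specifics.

```python
import json
from datetime import datetime, timezone

def serialize_event(sensor_id, value, ts=None):
    """Encode a telemetry reading as UTF-8 JSON bytes, ready to hand to a
    Kafka producer. Field names are illustrative, not a Helicon schema."""
    event = {
        "sensor_id": sensor_id,
        "value": value,
        "ts": (ts or datetime.now(timezone.utc)).isoformat(),
    }
    return json.dumps(event).encode("utf-8")

# With a broker available, the payload can be sent via any standard client, e.g.:
#   from kafka import KafkaProducer            # kafka-python, an industry-standard client
#   producer = KafkaProducer(bootstrap_servers="localhost:9092")
#   producer.send("sensor-readings", serialize_event("sensor-1", 20.5))
```

Because the event is plain JSON on a standard topic, the same stream can equally be fed by ETL/ELT tools that speak Kafka natively.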


Drawing on the notions of Adaptive AI and Decision Intelligence, we set out to drive sustainable innovation by combining simplification, automation and artificial intelligence. This mission is reflected in Helicon’s latest developments, which can be summarized as follows:

  • Streams
    • Native, standard, production-ready, cloud-based Kafka Consumer & Producer APIs that simplify and accelerate data integration
    • Improved Avro schema management, including real-time data inspection, advanced schema merging strategies, and schema inference from an uploaded tabular (CSV) or JSON example dataset
    • Stream metrics, tracking KPIs such as storage, production, consumption, and throughput for each partition
  • MLOps
    • Inference on deployed models through a secure, easy-to-use HTTP API
    • Improved model serving based on the brand-new Seldon Core V2
    • Model signature (schema) management for both input and output
    • Real-time data exploration on any pipeline topology node, for both input and output schemas; this is particularly useful for monitoring deployed models
  • Pipelines
    • Manual scaling of workloads on Helicon’s internal Kubernetes cluster, up or down, by adjusting the number of pipeline instances per job
    • An ad-hoc Python operator for writing single-message transformation functions, ideal for use cases such as pre-processing and post-processing complex model features on top of streaming data
    • Improved pipeline metrics for KPIs such as throughput, latency, CPU usage, and memory usage
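To make two of these features more tangible, here is a minimal sketch of a single-message pre-processing function of the kind the Python operator hosts, together with the shape an HTTP inference call on a deployed model might take. The feature names, payload layout, and endpoint URL are assumptions for illustration, not Helicon’s documented API.

```python
import json

def preprocess(message: dict) -> dict:
    """Single-message transformation: derive model features from one raw event.
    Feature names are illustrative."""
    return {
        "temperature_c": (message["temperature_f"] - 32) * 5 / 9,
        "is_peak_hour": 8 <= message["hour"] < 20,
    }

def build_inference_request(features: dict) -> bytes:
    """Wrap features into a JSON body for an HTTP inference call.
    The payload shape is an assumption, not Helicon's actual contract."""
    return json.dumps({"inputs": [features]}).encode("utf-8")

# Against a deployed model, the request could be sent with any HTTP client, e.g.:
#   import urllib.request
#   req = urllib.request.Request(
#       "https://<helicon-host>/models/<model-id>/infer",  # hypothetical endpoint
#       data=build_inference_request(preprocess(raw_event)),
#       headers={"Content-Type": "application/json"},
#   )
#   urllib.request.urlopen(req)
```

Keeping the transformation a pure function of a single message is what lets such operators slot into a streaming pipeline and scale horizontally with the number of pipeline instances.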

In the coming months, we’ll continue to evolve Helicon in order to efficiently respond to ever-changing business and technological challenges. If you want to see for yourself how it can increase productivity and collaboration among data teams, visit our website to learn more and start your free Helicon trial!
