About This Webinar
Audience: Architects, Data Engineers
Technical level: Introductory
The Lightbend project family has grown: Introducing Cloudflow, the fastest way to build streaming data pipelines with Akka Streams, Flink, and Spark.
We built Cloudflow to mitigate common failure patterns in projects moving from batch to stream processing: processing data and serving machine learning models in real time requires different ingestion and processing capabilities, as well as different scalability and availability characteristics, especially when deployed to cloud-native architectures. Cloudflow spans all aspects of the application lifecycle to dramatically accelerate time to market for streaming data applications.
In this talk by Craig Blitz, Senior Product Director at Lightbend, we take a look at Cloudflow from the builder perspective, including:
- Why and what? How to get started with Cloudflow in its various modes: in a Kubernetes environment, in a local non-Kubernetes developer sandbox, and as part of the Lightbend Platform for data flow visualization and enterprise requirements.
- How does it help developers? Cloudflow extends build tools and offers a streamlet abstraction API that lets developers focus on business logic, while choosing the appropriate tool for each stage of processing (e.g. Akka Streams, Spark, or Flink).
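To make the streamlet abstraction concrete, here is a minimal sketch of an Akka Streams streamlet based on Cloudflow's public API. The domain type (`SensorData`), the streamlet name, and the filtering logic are hypothetical, invented for illustration; in a real project `SensorData` would be a case class generated from an Avro schema.

```scala
// Sketch of a Cloudflow Akka Streams streamlet. SensorData and the
// filter condition are hypothetical examples, not part of Cloudflow.
import cloudflow.akkastream._
import cloudflow.akkastream.scaladsl._
import cloudflow.streamlets._
import cloudflow.streamlets.avro._

class SensorFilter extends AkkaStreamlet {
  // Typed, schema-checked inlet and outlet; Cloudflow wires these
  // to the connections declared in the application blueprint.
  val in  = AvroInlet[SensorData]("in")
  val out = AvroOutlet[SensorData]("out")
  val shape = StreamletShape(in).withOutlets(out)

  // Only the business logic lives here; serialization, connections,
  // and deployment are handled by the framework and its operators.
  override def createLogic = new RunnableGraphStreamletLogic() {
    def runnableGraph =
      plainSource(in)
        .filter(_.reading >= 0.0) // drop malformed readings
        .to(plainSink(out))
  }
}
```

The same shape-plus-logic pattern applies to Spark and Flink streamlets, so a pipeline can mix engines while keeping one programming model.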
- How does it help DevOps? Cloudflow federates Kubernetes operators and extends kubectl to automate operational tasks such as deployment, managing connections between processing stages, serialization, schema type checking, scaling, and application evolution.
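As a rough illustration of the DevOps workflow, the session below uses the kubectl plugin that Cloudflow installs. The application and streamlet names are made up, and exact arguments vary by Cloudflow version, so treat this as a sketch of the command shapes rather than a reference.

```shell
# Hypothetical session with Cloudflow's kubectl plugin;
# "sensor-pipeline" and "sensor-filter" are illustrative names.
kubectl cloudflow deploy sensor-pipeline.json            # deploy the application
kubectl cloudflow list                                   # show deployed applications
kubectl cloudflow scale sensor-pipeline sensor-filter 3  # scale one streamlet
kubectl cloudflow undeploy sensor-pipeline               # tear it all down
```

Because the plugin drives the federated operators, a single `deploy` brings up every streamlet, its connections, and their Kafka-backed topics, rather than requiring per-component manifests.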