Flink Concept of Time

VerticalServe Blogs
7 min read · Jun 3, 2024


In this blog, we’ll explore the different types of time in Flink, including event time, ingestion time, and processing time. Understanding these concepts is essential for implementing accurate and efficient time-bound stream processing applications.

Time-Bound Stream Processing

Let’s start by understanding time-bound stream processing. There are several types of time-based processing in streaming, including:

1. Time series analytics
2. Window-based aggregations
3. Time-bound alerting
4. Complex event processing

Each of these use cases relies heavily on accurately handling time within your stream processing applications.

Choosing the right notion of time — whether it’s event time, storage time, ingestion time, or processing time — is crucial for the correctness and efficiency of your application. For example, in time series analytics, event time is often the best choice as it reflects the actual occurrence of events, providing precise historical insights.

Window-based aggregations, such as tumbling or sliding windows, allow for grouping events into logical chunks based on time, enabling effective data summarization and analysis. Time-bound alerting helps in generating timely notifications based on event occurrences within specified time frames, ensuring prompt responses to critical situations.
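The bucketing arithmetic behind tumbling and sliding windows can be sketched in plain Python (a conceptual illustration of the assignment logic, not Flink’s actual API):

```python
# Conceptual sketch of window assignment (plain Python, not Flink's API).
# A tumbling window of size W assigns each event to exactly one window;
# a sliding window of size W and slide S assigns it to W / S windows.

def tumbling_window(ts_ms: int, size_ms: int) -> tuple[int, int]:
    """Return the [start, end) tumbling window containing timestamp ts_ms."""
    start = ts_ms - (ts_ms % size_ms)
    return (start, start + size_ms)

def sliding_windows(ts_ms: int, size_ms: int, slide_ms: int) -> list[tuple[int, int]]:
    """Return every [start, end) sliding window containing timestamp ts_ms."""
    windows = []
    start = ts_ms - (ts_ms % slide_ms)      # most recent window start
    while start > ts_ms - size_ms:          # walk back through overlapping windows
        windows.append((start, start + size_ms))
        start -= slide_ms
    return windows

# An event at t = 7s falls into exactly one 5s tumbling window,
# but into two overlapping 10s windows that slide every 5s.
print(tumbling_window(7_000, 5_000))
print(sliding_windows(7_000, 10_000, 5_000))
```

Note how an event belongs to exactly one tumbling window, but to `size / slide` overlapping sliding windows; this is the same assignment logic a window operator applies per event, regardless of which notion of time supplies the timestamp.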

Complex event processing involves detecting patterns and sequences of events over time, which requires a sophisticated understanding of temporal relationships and timing accuracy.

In this post, we’ll delve into each of these time-based processing types, discussing best practices and strategies for implementing them effectively in your Flink applications.

Next, let’s go over the different types of time in a Flink pipeline. As a running example, consider a pipeline that performs window-based aggregation of incoming sensor events from a Kafka topic and pushes the counted results to another Kafka sink topic.

Let’s understand event time. Event time is the time when the event producer created the event, and it is generally embedded in the event itself.

For example, a sensor generates events and pushes them to a real-time message bus like Kafka. The sensor creates an event payload with fields such as customer ID, tower ID, and an event timestamp. This event timestamp field represents the exact time the sensor event was generated, and this is what we call event time.

Event time allows for precise time-bound processing. For instance, if we want to determine how many events were generated by the sensor in one hour, we can use the event timestamp from the event payload. This ensures accurate and reliable analysis.
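As a conceptual sketch (plain Python, with illustrative field names like `event_ts` rather than any real schema), counting events per hour from the embedded event timestamps looks like this:

```python
from collections import Counter

# Sketch: count sensor events per hour using the event-time stamp embedded
# in each payload. Field names ("event_ts", "customer_id", "tower_id") are
# illustrative, not a real schema.
HOUR_MS = 3600 * 1000

def hourly_counts(events):
    """Group events into one-hour tumbling buckets keyed by hour start (epoch ms)."""
    counts = Counter()
    for e in events:
        ts = e["event_ts"]                 # embedded event time, epoch millis
        counts[ts - (ts % HOUR_MS)] += 1   # bucket = start of the hour
    return dict(counts)

events = [
    {"customer_id": "c1", "tower_id": "t9", "event_ts": 1_000_000},
    {"customer_id": "c2", "tower_id": "t9", "event_ts": 2_500_000},
    {"customer_id": "c1", "tower_id": "t3", "event_ts": 4_000_000},
]
print(hourly_counts(events))
```

Because the bucket is derived from the timestamp carried inside the event, the result is the same no matter when or in what order the events are actually processed; that reproducibility is the core appeal of event time.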

A key aspect of event time is that the progression of time depends on the event record timestamps, not on the wall clock. This allows us to handle out-of-order events and maintain the correct sequence based on when the events actually occurred.

One of the pros of using event time is its deterministic nature, ensuring that events are processed based on their actual occurrence time. However, a con is that events can arrive out of order, which can add latency to event processing as we wait for late arrivals to ensure accurate ordering.

Flink uses the concept of watermarks to handle event time; we will go over that with an example shortly.

Let’s understand the other notions of time: Storage Time and Ingestion Time.

First, storage time. Storage time is the time when an event gets stored in an intermediate data store. For example, when sensor events are written to Kafka partitions, the broker can assign a timestamp at the moment the event is appended to the log (Kafka’s LogAppendTime mode) and store it in the record’s metadata. This timestamp is what we refer to as storage time.

Next, let’s talk about Ingestion Time. Ingestion time is the time when an event is ingested by the Flink source, such as when a Kafka source reads events from the partitions. This ingestion time is essentially the clock time at the source operator level when the event is read into the Flink pipeline.

Let’s understand Processing Time.

Processing time is the time when an operator processes the event. For example, a Window Operator will open a window based on the system clock and perform all time operations accordingly. This is the simplest form of time in stream processing.

The pros of using processing time are that it is fast and straightforward, as it relies solely on the system clock and not on the events themselves. This simplicity can make it an attractive option for certain applications.

However, there are also cons to using processing time. It is not deterministic, meaning that if queues are backed up, the event’s processing time will be delayed. This lack of determinism can lead to inconsistencies in time-bound processing, especially under heavy load or network latency.

Consider an example of using processing time in Flink: a job that processes events in 5-second tumbling windows based on the system’s processing time, aggregating the events within each window.
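A minimal plain-Python sketch of the idea follows, with the clock injected as a parameter so the behavior is reproducible (a real Flink job would use the `TumblingProcessingTimeWindows` assigner instead):

```python
# Sketch of processing-time windowing: the window an event lands in depends
# on the wall clock *when it is processed*, not on any timestamp it carries.
# The clock is injected here so the demonstration is deterministic.

def assign_processing_window(clock_ms: int, size_ms: int = 5_000) -> tuple[int, int]:
    """Return the [start, end) processing-time window for the current clock."""
    start = clock_ms - (clock_ms % size_ms)
    return (start, start + size_ms)

# The same event, processed at two different moments, lands in two different
# windows -- this is why processing time is fast but non-deterministic.
event = {"event_ts": 1_000}                       # embedded event time is ignored
w1 = assign_processing_window(clock_ms=12_300)    # processed promptly
w2 = assign_processing_window(clock_ms=31_700)    # processed after a backlog
print(w1, w2)
```

The sketch makes the trade-off concrete: no buffering or waiting is needed, but rerunning the same input stream under different load produces different window contents.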

Event Time Implementation

Finally, let’s briefly look at event-time processing in Flink. As discussed earlier, Flink uses watermarks for event-time processing; these are special markers injected into the stream.

Watermarks in Apache Flink are a mechanism to handle event-time processing in the presence of out-of-order events. They allow Flink to progress the event time, even when events are arriving out of order, by providing a notion of “time progress.” Watermarks help to determine when an event is late and when to trigger the processing of windows.
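The core idea can be sketched in a few lines of plain Python, modeled loosely on Flink’s `WatermarkStrategy.forBoundedOutOfOrderness` (ignoring implementation details such as periodic emission):

```python
# Sketch of a bounded-out-of-orderness watermark: the watermark trails the
# largest event timestamp seen so far by a fixed allowed out-of-orderness,
# and never moves backwards even when late events arrive.

class BoundedOutOfOrdernessWatermark:
    def __init__(self, max_out_of_orderness_ms: int):
        self.delay = max_out_of_orderness_ms
        self.max_ts = 0

    def on_event(self, event_ts_ms: int) -> int:
        """Observe an event's timestamp and return the current watermark."""
        self.max_ts = max(self.max_ts, event_ts_ms)
        return self.max_ts - self.delay

wm = BoundedOutOfOrdernessWatermark(max_out_of_orderness_ms=2_000)
```

With a 2-second bound, an event at t=10s advances the watermark to 8s; a late event at t=9s does not move it backwards; an event at t=13s advances it to 11s. A window covering [0s, 10s) can fire once the watermark reaches 10s, because no earlier event is expected after that point.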

We will do a deep dive on watermarks in a separate post and cover best practices around watermark strategies.

In code, this means extracting the event time from the event payload and using it for windowed aggregations, with a watermark strategy to handle out-of-order events.
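Putting the pieces together, here is a plain-Python sketch of watermark-driven event-time windowing (illustrative only; a real Flink job would pair a `WatermarkStrategy` with a timestamp assigner and the `TumblingEventTimeWindows` assigner):

```python
# Sketch of event-time tumbling-window counting driven by watermarks.
# A window fires once the watermark passes its end; events arriving for an
# already-fired window are dropped as late (no allowed-lateness handling).

def windowed_counts(event_timestamps, size_ms, max_delay_ms):
    counts, fired, results = {}, set(), []
    max_ts = 0
    for ts in event_timestamps:
        max_ts = max(max_ts, ts)
        watermark = max_ts - max_delay_ms     # bounded-out-of-orderness watermark
        start = ts - (ts % size_ms)           # tumbling window for this event
        if start in fired:
            continue                          # late event: window already fired
        counts[start] = counts.get(start, 0) + 1
        for s in sorted(counts):              # fire windows the watermark passed
            if s not in fired and s + size_ms <= watermark:
                fired.add(s)
                results.append((s, counts[s]))
    return results

# Out-of-order stream: the event at t=3s still lands in window [0s, 5s)
# because that window has not fired yet; the event at t=4s arrives after
# the watermark closed the window, so it is dropped.
print(windowed_counts([1_000, 2_000, 6_000, 3_000, 12_000, 4_000],
                      size_ms=5_000, max_delay_ms=2_000))
```

Even in this toy form, the two defining behaviors are visible: moderately out-of-order events are still counted correctly, and progress (firing) is deliberately delayed by the watermark bound, which is the latency cost of event time.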

Comparison: Event Time vs. Processing Time

So finally, let’s compare Event Time versus Processing Time across several key aspects.

Time Basis: Event Time is based on the timestamp embedded within the event, reflecting when the event was actually created by the producer. Processing Time, however, is based on the system clock at the moment the event is processed by an operator.

Determinism: Event Time is deterministic, ensuring events are processed in the correct order based on their actual occurrence. Processing Time is non-deterministic, meaning the order and timing of event processing can vary, especially under heavy load or delays.

Handling Out-of-Order Events: Event Time is designed to handle out-of-order events, making it suitable for applications that require precise event sequencing. Processing Time does not inherently manage out-of-order events, which can lead to inconsistencies in the data processing.

Accuracy: Event Time provides high accuracy for time-bound processing, such as historical analysis and event pattern detection. Processing Time offers lower accuracy because it relies on the system clock rather than the event’s actual timestamp.

Dependency: Event Time depends on the event timestamps embedded within each event, requiring accurate event creation times. Processing Time is independent of the events’ timestamps, relying solely on the system clock.

Latency: Event Time can introduce higher latency as it waits for late-arriving events to ensure accurate ordering. Processing Time typically has lower latency since it processes events as they arrive, without waiting for potential delays.

Complexity: Event Time is more complex to implement due to the need to manage late and out-of-order events. Processing Time is simpler and easier to implement, as it doesn’t require handling these complexities.

Use Cases:

  • Event Time: Ideal for time series analysis, event pattern detection, and precise historical reporting.
  • Processing Time: Suitable for real-time alerting and simple stream processing tasks where timing precision is not critical.

By understanding these differences, you can choose the right time concept for your Flink applications, ensuring your stream processing is both robust and efficient.


About — Flink POD — Flink Committers & Experts

FlinkPOD — https://www.FlinkPOD.com

FlinkPOD is a specialized consulting team of VerticalServe, helping clients with Flink architecture, production health checks, implementations, and more.

VerticalServe Inc — a niche cloud, data, and AI/ML premier consulting company, partnered with Google Cloud, Confluent, AWS, and Azure, with 50+ customers and many success stories.

Website: http://www.VerticalServe.com

Contact: contact@verticalserve.com
