Handling "state refresh" in Flink ConnectedStream - apache-flink

We're building an application which has two streams:
A high-volume messages stream
A large static stream (originating from some parquet files we have lying around) which we feed into Flink just to get that Dataset into a saved state
We want to connect the two streams in order to get shared state, so that the 1st stream can use the 2nd stream's state for enrichment.
Every day or so, the parquet files (the 2nd stream's source) are updated, and that will require us to clear the state of the 2nd stream and rebuild it (which will probably take about 2 minutes).
The question is, can we block/delay messages from the 1st stream while this process is running?
Thanks.

There's currently no direct/easy way to block one stream on another stream, unfortunately. The typical solution is to buffer the ingest stream while you load (or re-load) the enrichment stream.
One approach you could try is to wrap your ingest stream in a custom SourceFunction that knows when to not generate data, based on some external trigger (which is the same signal you'd use to know that you have Parquet data to re-load).
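For illustration, here is a minimal sketch of that idea, assuming the ingest records are plain strings and that the reload signal is exposed as a simple in-JVM flag; the GatedSource name, the flag, and the readNextRecord() helper are placeholders, not anything from the question:

import org.apache.flink.streaming.api.functions.source.SourceFunction;
import java.util.concurrent.atomic.AtomicBoolean;

public class GatedSource implements SourceFunction<String> {

    // hypothetical external trigger: set to true while the Parquet-backed state is being rebuilt
    public static final AtomicBoolean RELOAD_IN_PROGRESS = new AtomicBoolean(false);

    private volatile boolean running = true;

    @Override
    public void run(SourceContext<String> ctx) throws Exception {
        while (running) {
            if (RELOAD_IN_PROGRESS.get()) {
                Thread.sleep(500);                    // hold the ingest stream back during the rebuild
                continue;
            }
            String record = readNextRecord();         // placeholder for the real ingest (Kafka, a queue, etc.)
            synchronized (ctx.getCheckpointLock()) {
                ctx.collect(record);
            }
        }
    }

    @Override
    public void cancel() {
        running = false;
    }

    private String readNextRecord() throws InterruptedException {
        Thread.sleep(10);                             // simulate waiting for input
        return "message";
    }
}

A static flag only coordinates within a single JVM; in practice the trigger would be something external (a file, a database flag, a control topic) that every parallel source instance can see.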

It sounds like your case is similar to FLIP-23, which explores model serving in Apache Flink.
I think it all boils down to how (and if) your static stream is keyed:
if it is keyed in a similar way to your fast data, then you can key both streams, connect them, and then have access to the keyed context (see the sketch below).
if the static stream events are not keyed in a similar fashion, maybe you should consider emitting control events which will trigger a refresh of those static files from an external source (e.g. S3). That's easier said than done, as there is no trivial way to guarantee that all parallel instances of your fast stream will get the control event.
You can use ListState as a buffer; how you access it, though, depends on the shape of your data.
It might help if you shared a bit more info about the shape of your data (e.g. are you joining on a key? are you simply serving a model? something else?).
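For the keyed case, a minimal sketch of the connect-and-buffer pattern could look like the following; Event, Reference, Enriched and their fields are assumptions standing in for your own types:

import org.apache.flink.api.common.state.ListState;
import org.apache.flink.api.common.state.ListStateDescriptor;
import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.co.KeyedCoProcessFunction;
import org.apache.flink.util.Collector;

// wired up as: fastStream.keyBy(Event::getKey)
//     .connect(staticStream.keyBy(Reference::getKey))
//     .process(new EnrichmentFunction())
public class EnrichmentFunction
        extends KeyedCoProcessFunction<String, Event, Reference, Enriched> {

    private ValueState<Reference> reference;    // latest static record for this key
    private ListState<Event> buffer;            // fast events that arrived before the static data

    @Override
    public void open(Configuration parameters) {
        reference = getRuntimeContext().getState(
                new ValueStateDescriptor<>("reference", Reference.class));
        buffer = getRuntimeContext().getListState(
                new ListStateDescriptor<>("buffer", Event.class));
    }

    @Override
    public void processElement1(Event event, Context ctx, Collector<Enriched> out) throws Exception {
        Reference ref = reference.value();
        if (ref == null) {
            buffer.add(event);                  // nothing to enrich with yet, so buffer the event
        } else {
            out.collect(new Enriched(event, ref));
        }
    }

    @Override
    public void processElement2(Reference ref, Context ctx, Collector<Enriched> out) throws Exception {
        reference.update(ref);
        for (Event event : buffer.get()) {      // drain anything that was waiting for this key
            out.collect(new Enriched(event, ref));
        }
        buffer.clear();
    }
}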

Related

Reading two streams (main and configs) sequentially in Flink

I have two streams. One is the main stream; let's say, in an example of fraud detection, I have a transactions stream. Then I have a second stream which is configs, in our example the rules. I connect the main stream to the config stream in order to do the processing. But when Flink starts for the first time and we add the job, it starts consuming from the transactions and configs streams in parallel, and when it wants to process a transaction it sometimes sees that there is no config yet, and we have to send the transaction to a dead letter queue. However, what I want to achieve is: if there is a potential config which I could get a bit later, I want to get that config first and then get the transaction in order to process it, rather than sending it to the dead letter queue. I have the same key for transactions and configs.
Long story short, is there a way of telling Flink, when the job starts for the first time, to consume one stream until there are no new values and only then start processing the main stream? How can I make them kind of sequential?
The recommended way to approach this is to connect the 2 streams and apply a RichCoFlatMap that will allow you to buffer events from main while you're waiting to receive the config events.
Check out this useful section of the Flink tutorials. The very last paragraph actually describes your problem.
It is important to recognize that you have no control over the order in which the flatMap1 and flatMap2 callbacks are called. These two input streams are racing against each other, and the Flink runtime will do what it wants to regarding consuming events from one stream or the other. In cases where timing and/or ordering matter, you may find it necessary to buffer events in managed Flink state until your application is ready to process them. (Note: if you are truly desperate, it is possible to exert some limited control over the order in which a two-input operator consumes its inputs by using a custom Operator that implements the InputSelectable interface.)
So in a nutshell you should connect your 2 streams and have some kind of ListState where you can "buffer" your main elements while waiting to receive the rules. When you receive an element from the config stream, you check whether you had some pending elements "waiting" for that config in your ListState (your buffer). If you do, you can then process these elements and emit them through the collector of your flatmap.
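A minimal sketch of that pattern, with hypothetical Transaction, Rule and Result types (and an apply() helper) standing in for your own:

import org.apache.flink.api.common.state.ListState;
import org.apache.flink.api.common.state.ListStateDescriptor;
import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.co.RichCoFlatMapFunction;
import org.apache.flink.util.Collector;

// wired up as: transactions.keyBy(...).connect(rules.keyBy(...)).flatMap(new RuleEnricher())
public class RuleEnricher extends RichCoFlatMapFunction<Transaction, Rule, Result> {

    private ValueState<Rule> rule;              // the config for this key, once it arrives
    private ListState<Transaction> pending;     // transactions waiting for their config

    @Override
    public void open(Configuration conf) {
        rule = getRuntimeContext().getState(
                new ValueStateDescriptor<>("rule", Rule.class));
        pending = getRuntimeContext().getListState(
                new ListStateDescriptor<>("pending", Transaction.class));
    }

    @Override
    public void flatMap1(Transaction tx, Collector<Result> out) throws Exception {
        Rule r = rule.value();
        if (r == null) {
            pending.add(tx);                    // no config yet: buffer instead of dead-lettering
        } else {
            out.collect(apply(r, tx));
        }
    }

    @Override
    public void flatMap2(Rule r, Collector<Result> out) throws Exception {
        rule.update(r);
        for (Transaction tx : pending.get()) {  // drain the buffer now that the config is here
            out.collect(apply(r, tx));
        }
        pending.clear();
    }

    private Result apply(Rule r, Transaction tx) {
        return new Result(tx, r);               // placeholder for the real processing
    }
}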
Starting with version 1.16, you can use the hybrid source support in Flink to read all of one source (configs, in your case) before reading the second source. Though I imagine you'd have to map the events to an Either<config, transaction> so that the data stream has consistent record types.
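If you go the hybrid source route, the wiring looks roughly like this; the bucket path, topic and broker address are made up, and both sources are given plain String elements here to sidestep the Either mapping:

import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.base.source.hybrid.HybridSource;
import org.apache.flink.connector.file.src.FileSource;
import org.apache.flink.connector.file.src.reader.TextLineInputFormat;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.core.fs.Path;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class HybridSourceSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // bounded source, read to completion first: the configs
        FileSource<String> configs = FileSource
                .forRecordStreamFormat(new TextLineInputFormat(), new Path("s3://my-bucket/configs/"))
                .build();

        // unbounded source, read afterwards: the transactions
        KafkaSource<String> transactions = KafkaSource.<String>builder()
                .setBootstrapServers("broker:9092")
                .setTopics("transactions")
                .setValueOnlyDeserializer(new SimpleStringSchema())
                .build();

        HybridSource<String> configsThenTransactions = HybridSource.builder(configs)
                .addSource(transactions)
                .build();

        DataStream<String> stream = env.fromSource(
                configsThenTransactions, WatermarkStrategy.noWatermarks(), "configs-then-transactions");

        stream.print();
        env.execute("hybrid source sketch");
    }
}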

Flink windowAll aggregate vs window process?

We are aggregating some data for 1 minute which we then flush onto a file. The data itself is like a map where key is an object and value is also an object.
Since we need to flush the data together, we are not doing any keyBy and are therefore using windowAll.
The problem we are facing is that we get better throughput when we use the window function with a ProcessAllWindowFunction and do the aggregation in the process call than when we use aggregate with the window function. We are also seeing timeouts in state checkpointing when we use aggregate.
I tried to go through the code base, and the only hypothesis I could come up with is that it is probably easier to checkpoint the ListState that process will use than the aggregating state that aggregate will use.
Is the hypothesis correct? Are we doing something wrong? If not, is there a way to improve the performance on aggregate?
Based on what you've said, I'm going to jump to some conclusions.
I assume you are using the RocksDB state backend, and are aggregating each incoming event into some sort of collection. In that case, the RocksDB state backend is having to deserialize that collection, add the new event to it, and then re-serialize it -- for every event. This is very expensive.
When you use a ProcessAllWindowFunction, each incoming event is appended to a ListState object, which has a very efficient implementation -- the serialized bytes for the new event are simply appended (the list doesn't have to be deserialized and re-serialized).
Checkpoints are timing out because the throughput is so poor.
Switching to the FsStateBackend would help. Or use a ProcessAllWindowFunction. Or implement your own windowing with a KeyedProcessFunction, and then use ListState or MapState for the aggregation.
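For reference, a bare-bones version of the ProcessAllWindowFunction variant; Event, Summary, and the counting logic are placeholders for the actual map-shaped aggregation:

import java.util.HashMap;
import java.util.Map;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.functions.windowing.ProcessAllWindowFunction;
import org.apache.flink.streaming.api.windowing.assigners.TumblingProcessingTimeWindows;
import org.apache.flink.streaming.api.windowing.time.Time;
import org.apache.flink.streaming.api.windowing.windows.TimeWindow;
import org.apache.flink.util.Collector;

public class WindowAllSketch {
    public static DataStream<Summary> aggregatePerMinute(DataStream<Event> events) {
        return events
                .windowAll(TumblingProcessingTimeWindows.of(Time.minutes(1)))
                // elements are appended to the window's ListState and only iterated once, when the window fires
                .process(new ProcessAllWindowFunction<Event, Summary, TimeWindow>() {
                    @Override
                    public void process(Context ctx, Iterable<Event> elements, Collector<Summary> out) {
                        Map<String, Long> counts = new HashMap<>();
                        for (Event e : elements) {
                            counts.merge(e.getKey(), 1L, Long::sum);
                        }
                        out.collect(new Summary(ctx.window().getEnd(), counts));
                    }
                });
    }
}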

Using Broadcast State To Force Window Closure Using Fake Messages

Description:
Currently I am working on using Flink with an IoT setup. Essentially, devices are sending data such as (device_id, device_type, event_timestamp, etc.) and I don't have any control over when the messages get sent. I then key the stream by device_id and device_type to perform aggregations. I would like to use event time, given that it ensures the timers which are set fire deterministically in the event of a failure. However, given that this isn't always a high-throughput stream, a window could be opened for a 10 minute aggregation period but not receive its next point until approximately 40 minutes later. Although the aggregation would eventually be completed, it would output my desired result extremely late.
So my workaround for this is to create an additional external source that does nothing other than pump out fake messages. By having these fake messages pumped out in alignment with my 10 minute aggregation period, even if a device hadn't sent any data, the event-time windows would have something to force them closed. The critical part here is to make it possible for all parallel instances / operators to have access to this fake message, because I need to close all the windows with this single fake message. I was thinking that broadcast state might be the most appropriate way to accomplish this goal given: "Broadcast state is replicated across all parallel instances of a function, and might typically be used where you have two streams, a regular data stream alongside a control stream that serves rules, patterns, or other configuration messages." Quote Source
Questions:
Is broadcast state the best method for ensuring all parallel instances (e.g. windows) receive my fake messages?
Once the operators have access to this fake message via the broadcast state can this fake message then be used to advance the event time watermark?
You can make this work with broadcast state, along the lines you propose, but I'm not convinced it's the best solution.
In an ideal world I'd suggest you arrange for the devices to send occasional keepalive messages, but assuming that's not possible, I think a custom Trigger would work well here. You can extend the EventTimeTrigger so that in addition to the event time timer it creates via
ctx.registerEventTimeTimer(window.maxTimestamp());
you also create a processing time timer, as a fallback, and you FIRE the window if the window still exists when that processing time timer fires.
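A minimal sketch of such a trigger (EventTimeTrigger itself has a private constructor, so this copies its behavior into a standalone Trigger rather than subclassing; the fallback delay is an arbitrary example parameter):

import org.apache.flink.streaming.api.windowing.triggers.Trigger;
import org.apache.flink.streaming.api.windowing.triggers.TriggerResult;
import org.apache.flink.streaming.api.windowing.windows.TimeWindow;

public class EventTimeTriggerWithProcessingTimeFallback extends Trigger<Object, TimeWindow> {

    private final long fallbackDelayMillis;

    public EventTimeTriggerWithProcessingTimeFallback(long fallbackDelayMillis) {
        this.fallbackDelayMillis = fallbackDelayMillis;
    }

    @Override
    public TriggerResult onElement(Object element, long timestamp, TimeWindow window, TriggerContext ctx) {
        if (window.maxTimestamp() <= ctx.getCurrentWatermark()) {
            return TriggerResult.FIRE;                        // watermark is already past the window
        }
        ctx.registerEventTimeTimer(window.maxTimestamp());    // the normal event-time behavior
        ctx.registerProcessingTimeTimer(                      // fallback in case the watermark stalls
                ctx.getCurrentProcessingTime() + fallbackDelayMillis);
        return TriggerResult.CONTINUE;
    }

    @Override
    public TriggerResult onEventTime(long time, TimeWindow window, TriggerContext ctx) {
        return time == window.maxTimestamp() ? TriggerResult.FIRE : TriggerResult.CONTINUE;
    }

    @Override
    public TriggerResult onProcessingTime(long time, TimeWindow window, TriggerContext ctx) {
        return TriggerResult.FIRE;                            // the window still exists, so force it to fire
    }

    @Override
    public void clear(TimeWindow window, TriggerContext ctx) {
        ctx.deleteEventTimeTimer(window.maxTimestamp());
        // a fuller version would also track and delete the processing-time timer here
    }
}

It would be attached to the windowed stream with .trigger(new EventTimeTriggerWithProcessingTimeFallback(...)).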
I'm recommending this approach because it's simpler and more directly addresses the specific need. With the broadcast state approach you'll have to introduce a source for these messages, add a broadcast state descriptor and stream, add special fake watermarks for the non-broadcast stream (set to Watermark.MAX_WATERMARK), connect the broadcast and non-broadcast streams and implement a BroadcastProcessFunction (that probably doesn't really do anything), etc. It's a lot of moving parts spread across several different operators.

Process elements after sinking to Destination

I am setting up a flink pipeline that reads from Kafka and sinks to HDFS. I want to process the elements after the addSink() step. This is because I want to setup trigger files indicating that writing data (to the sink) for a certain partition/hour is complete. How can this be achieved? Currently I am using the Bucketing sink.
DataStream messageStream = env
    .addSource(flinkKafkaConsumer011);   // 1. read from Kafka
// 2. some aggregations to convert messageStream to keyedStream
keyedStream.addSink(sink);               // 3. sink to HDFS
// How to process elements after 3.?
The Flink APIs do not support extending the job graph beyond the sink(s). (You can, however, fork the stream and do additional processing in parallel with writing to the sink.)
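As a sketch of the forking idea (MarkerFunction and triggerFileSink are hypothetical follow-up operators, not part of the Bucketing sink API):

// the same keyedStream feeds two branches: the sink and some extra processing
keyedStream.addSink(sink);               // branch 1: write to HDFS
keyedStream
    .process(new MarkerFunction())       // branch 2: e.g. track what has been seen per partition/hour
    .addSink(triggerFileSink);           // hypothetical sink that writes the trigger files

Note that branch 2 runs in parallel with the HDFS sink, so by itself it does not know when the part files for an hour are finished; that is where the approaches below come in.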
With the Streaming File Sink you can observe the part files transition to the finished state when they complete. See the JavaDoc for more information.
State lives within a single operator -- only that operator (e.g., a ProcessFunction) can modify it. If you want to modify the keyed value state after the sink has completed, there's no straightforward way to do that. One idea would be to add a processing time timer in the ProcessFunction that has the keyed state that wakes up periodically and checks for newly finished part files, and based on their existence, modifies the state. Or if that's the wrong granularity, write a custom source that does something similar and streams or broadcasts information into the ProcessFunction (which will then have to be a CoProcessFunction or a KeyedBroadcastProcessFunction) that it can use to do the necessary state updates.

Apache Flink: Window checkpoint

I want to know how to checkpoint a window. For example, windowed wordcount:
DataStream<Tuple3<String, Long, Long>> counts =
    // split up the lines in pairs (2-tuples) containing: (word,1)
    text
        .flatMap(new Tokenizer())
        .assignTimestampsAndWatermarks(new timestamp())
        .keyBy(0)
        .timeWindow(Time.seconds(2))
        .process(new CountFunction());
Q1: What state should I save in CountFunction()? Do I need to save the buffer element of the window? Should I use ListState to store the buffered data in the window and use ValueState to store the current sum value?
Q2: When the fault occurs, how are the elements in the window handled? What happens when the window is restored?
Thank you for the help.
All of the state needed for Flink's windowing APIs is managed by Flink -- so you don't need to do anything. So long as checkpointing is enabled, the window buffer will be checkpointed and restored as needed.
Normally the CountFunction won't have any state that needs to be checkpointed. If the job fails while CountFunction is in the middle of iterating over the window's contents, the job will be rewound, and CountFunction will be called again with the same inputs.
If you do need to keep state in your CountFunction, then see Using per-window state in ProcessWindowFunction for information on how to go about that. It sounds like you will want to use globalState() (state that endures across all time), which you can access via the Context object passed to your process window function.
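A small sketch of what that could look like for the job above, keeping a per-key running total in globalState(); the Tuple2 input type and the state name are assumptions:

import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.api.java.tuple.Tuple;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.api.java.tuple.Tuple3;
import org.apache.flink.streaming.api.functions.windowing.ProcessWindowFunction;
import org.apache.flink.streaming.api.windowing.windows.TimeWindow;
import org.apache.flink.util.Collector;

public class CountFunction
        extends ProcessWindowFunction<Tuple2<String, Long>, Tuple3<String, Long, Long>, Tuple, TimeWindow> {

    private final ValueStateDescriptor<Long> totalDesc =
            new ValueStateDescriptor<>("total-count", Long.class);

    @Override
    public void process(Tuple key, Context ctx, Iterable<Tuple2<String, Long>> elements,
                        Collector<Tuple3<String, Long, Long>> out) throws Exception {
        long windowCount = 0;
        for (Tuple2<String, Long> element : elements) {
            windowCount += element.f1;
        }

        // globalState() is keyed state that survives beyond this window;
        // Flink checkpoints and restores it along with everything else
        ValueState<Long> total = ctx.globalState().getState(totalDesc);
        long newTotal = (total.value() == null ? 0L : total.value()) + windowCount;
        total.update(newTotal);

        String word = key.getField(0);
        out.collect(Tuple3.of(word, windowCount, newTotal));
    }
}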
If you don't have a keyed stream, I suggest you still use the keyed state mechanism described above. You can transform your non-keyed stream into a keyed stream by using keyBy with a constant key.
