I am having quite a hard time understanding Flink windowing principles and would be very pleased if you could point me in the right direction.
My purpose is to count the number of recurring events for a time interval and generate alert events if the number of recurring events is greater than a threshold.
As I understand, windowing is a perfect match for this scenario.
An additional requirement is to generate an early alert if the recurring-events count in a window reaches 2 (i.e. the alert should be generated without waiting for the window to end).
I thought that a ProcessWindowFunction generating alert events could be used to aggregate the windowed events, and a custom trigger could be used to emit early results from the window based on the recurring-events count (before the watermark reaches the window's end timestamp).
I am using event-time semantics and have problems/questions about the custom trigger.
You can find the actual implementation in the gist: https://gist.github.com/simpleusr/7c56d4384f6fc9f0a61860a680bb5f36
I am using keyed state to keep track of the element count in the window (encounteredElementsCountState).
Upon receiving the first element I register an event-time timer for the window end. This is supposed to trigger FIRE_AND_PURGE when the window closes, and it works as expected.
If the count exceeds the threshold, I try to trigger an early fire. This also seems to be successful: the ProcessWindowFunction is called immediately after this firing.
The problem is that I had to insert the check below into the code without understanding the reason, because the previously collected elements were supplied to the onElement method again:
if (ctx.getCurrentWatermark() < 0) {
    logger.debug(String.format("onElement processing skipped for eventId : %s for watermark: %s ", element.getEventId(), ctx.getCurrentWatermark()));
    return TriggerResult.CONTINUE;
}
I could not figure out the reason. What I see is that when this happens, the watermark value (ctx.getCurrentWatermark()) is Long.MIN_VALUE (which led to the above check). How can this happen?
This check seems to avoid duplicate early-event generation, but I do not know why this happens, or whether this workaround is appropriate.
Could you please advise why the same elements are processed twice in the window?
Another question is about the keyed state usage. Does this implementation leak any state after the window is disposed? I am trying to clear all used state in the clear method of the trigger, but would that be enough?
Regards.
Each task has currentWatermark initialized to Long.MIN_VALUE, and this remains the local value of currentWatermark until larger watermarks have arrived from all of that task's input streams. Hopefully knowing that will help you better understand what's going on.
For what it's worth, often it's more straightforward to implement this kind of logic with a ProcessFunction than with the Window API.
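For illustration, here is a minimal sketch of that ProcessFunction approach. Everything here is an assumption rather than code from the gist: a hypothetical Event type keyed by event id with a getTimestamp() accessor, a hypothetical Alert output type, a 5-minute interval, and an early alert at a count of 2.

import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.KeyedProcessFunction;
import org.apache.flink.util.Collector;

public class EarlyAlertFunction extends KeyedProcessFunction<String, Event, Alert> {

    private static final long WINDOW_MS = 5 * 60 * 1000; // assumed interval length
    private static final int EARLY_ALERT_COUNT = 2;

    private transient ValueState<Integer> countState;
    private transient ValueState<Long> windowEndState;

    @Override
    public void open(Configuration parameters) {
        countState = getRuntimeContext().getState(
                new ValueStateDescriptor<>("count", Integer.class));
        windowEndState = getRuntimeContext().getState(
                new ValueStateDescriptor<>("windowEnd", Long.class));
    }

    @Override
    public void processElement(Event event, Context ctx, Collector<Alert> out) throws Exception {
        if (windowEndState.value() == null) {
            // first event for this key: open an interval and register its end timer
            long windowEnd = event.getTimestamp() + WINDOW_MS;
            windowEndState.update(windowEnd);
            ctx.timerService().registerEventTimeTimer(windowEnd);
        }
        int count = countState.value() == null ? 1 : countState.value() + 1;
        countState.update(count);
        if (count == EARLY_ALERT_COUNT) {
            out.collect(new Alert(event.getEventId(), count, true)); // early alert; Alert is hypothetical
        }
    }

    @Override
    public void onTimer(long timestamp, OnTimerContext ctx, Collector<Alert> out) throws Exception {
        Integer count = countState.value();
        if (count != null) {
            out.collect(new Alert(ctx.getCurrentKey(), count, false)); // final result at interval end
        }
        // clearing both states here means nothing leaks once the interval is over
        countState.clear();
        windowEndState.clear();
    }
}

Because all state is cleared in onTimer, this version also sidesteps the state-leak concern from the question.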
Related
My Flink job has to compute a certain aggregation after each working shift. Shifts are configurable and look something like:
1st shift: 00:00 - 06:00
2nd shift: 06:00 - 12:00
3rd shift: 12:00 - 18:00
Shifts are the same every day for operational purposes; there is no distinction between days of the week or year. The shift configuration can vary over time and can be non-uniform, which rules out a trivial event-time window like:
TumblingEventTimeWindows.of(Time.of(6, HOURS)), since some of the shifts might be shrunk or stretched over time, or a couple-hour break might be inserted in between...
I have come up with something based on a GlobalWindow and a custom Trigger:
LinkedList<Shift> shifts;

datastream.windowAll(GlobalWindows.create())
    .trigger(ShiftTrigger.create(shifts))
    .aggregate(myAggregateFunction)
where in my custom trigger I attempt to discern if an incoming event passes the end time of the on-going working shift, and fire the window for the shift:
@Override
public TriggerResult onElement(T element, long timestamp, GlobalWindow window, TriggerContext ctx) throws Exception {
    // compute the end time of the on-going shift
    final Instant currentShiftEnd = ...
    // fire window for the shift if the event passes the end line
    if (ShiftPredicate.of(currentShiftEnd).test(element)) {
        return TriggerResult.FIRE_AND_PURGE;
    }
    return TriggerResult.CONTINUE;
}
Omitting the code for state management and some memoization optimizations, this seems to be working fine in a streaming use case: the first event coming in after a shift's end time triggers the firing and the aggregation for the last shift.
However, the job can be run bounded by date parameters (e.g. for reprocessing past periods), or be shut down prematurely for a set of expected reasons. When this happens, I observe that the last window is not fired/flushed. That is: the last shift of the day ends at midnight, and right after it the 1st shift of the next day should start. An event comes in at 23:59 and the shift is about to end. However, the job is running just for the current day, and at 00:00 it finishes. Since no new element arrived at the custom trigger to cross the line and fire the window, the aggregation for the last shift is never calculated; yet some partial results are still expected, even if nothing happens in the next shift or the job terminates in the middle of an on-going shift.
I've read that the reason for this is:
Flink guarantees removal only for time-based windows and not for other types, e.g. global windows (see Window Assigners)
I have taken a look inside the org.apache.flink.streaming.api.windowing package for something like a TumblingEventTimeWindows or DynamicEventTimeSessionWindows that I could use or extend with an end hour of the day, so that I could rely on their default event-time trigger firing when the watermark passes the window limit, but I'm not sure how to do it. Intuitively, I'd wish for something like:
shifts.forEach(shift -> {
    datastream.windowAll(EventTimeWindow.fromTo(DAILY, shift.startTime, shift.endTime))
        .aggregate(myAggregateFunction);
});
I know that for use cases of arbitrary complexity, some people ditch the Window API in favor of low-level process functions, "manually" computing the window by holding elements in the operator's managed state, while fitting and extracting results from a defined aggregate function or accumulator at given rules or conditions. Also, in a process function it is possible to pick up any pending calculations by tapping into the onClose hook.
Would there be a way to get this concept of recurrent event time windows for certain hours of a day every day by extending any of the objects in the Windows API?
If I understand correctly, there are two separate questions/issues here to resolve:
How to handle not having uniform window boundaries.
How to terminate the job without losing the results of the last window.
For (1), your approach of using GlobalWindows with a custom ShiftTrigger is one way to go. If you'd like to explore an alternative that uses a process function, I've written an example that you will find in the Flink docs.
For a more fluent API, you could create a custom WindowAssigner, which could then leverage the built-in EventTimeTrigger as its default trigger. To do this, you'll need to implement the WindowAssigner interface.
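As a rough sketch of what such an assigner could look like, targeting the Flink 1.x WindowAssigner signatures (the Shift type and its start/end lookup helpers are assumptions, not an existing API):

import java.util.Collection;
import java.util.Collections;
import java.util.List;
import org.apache.flink.api.common.ExecutionConfig;
import org.apache.flink.api.common.typeutils.TypeSerializer;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.windowing.assigners.WindowAssigner;
import org.apache.flink.streaming.api.windowing.triggers.EventTimeTrigger;
import org.apache.flink.streaming.api.windowing.triggers.Trigger;
import org.apache.flink.streaming.api.windowing.windows.TimeWindow;

public class ShiftWindowAssigner extends WindowAssigner<Object, TimeWindow> {

    private final List<Shift> shifts; // hypothetical Shift: resolves its start/end for the day of a timestamp

    public ShiftWindowAssigner(List<Shift> shifts) {
        this.shifts = shifts;
    }

    @Override
    public Collection<TimeWindow> assignWindows(Object element, long timestamp, WindowAssignerContext context) {
        for (Shift shift : shifts) {
            long start = shift.startMillisOnDayOf(timestamp); // hypothetical helpers
            long end = shift.endMillisOnDayOf(timestamp);
            if (timestamp >= start && timestamp < end) {
                return Collections.singletonList(new TimeWindow(start, end));
            }
        }
        return Collections.emptyList(); // event falls into a break between shifts
    }

    @Override
    public Trigger<Object, TimeWindow> getDefaultTrigger(StreamExecutionEnvironment env) {
        // the built-in trigger fires each window once the watermark passes its end
        return EventTimeTrigger.create();
    }

    @Override
    public TypeSerializer<TimeWindow> getWindowSerializer(ExecutionConfig executionConfig) {
        return new TimeWindow.Serializer();
    }

    @Override
    public boolean isEventTime() {
        return true;
    }
}

With that in place, the pipeline reduces to something like datastream.windowAll(new ShiftWindowAssigner(shifts)).aggregate(myAggregateFunction), and window disposal is handled by the framework as for any time window.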
For (2), so long as you are relying on event time processing, the last set of windows won't be triggered unless a Watermark large enough to close them arrives before the job is terminated. This normally requires that you have an event whose timestamp is sufficiently after the window's end that a Watermark large enough to trigger the window is created (and that the job stays running long enough for that to happen).
However, when Flink is aware that a streaming job is coming to a natural end, it will automatically inject a Watermark with its timestamp set to MAX_WATERMARK, which has the effect of triggering all event time timers, and closing all event time windows. This happens automatically for any bounded sources. With Kafka (for example), you can also arrange for this by having your deserializer return true from isEndOfStream.
Another way to handle this is to avoid canceling such jobs when they are done, and instead use ./bin/flink stop --drain [-p savepointPath] <jobID> to cleanly stop the job (with a savepoint) while draining all remaining window results, which it does by injecting one last big watermark (MAX_WATERMARK).
In the documentation of FlinkCEP, I found that I can enforce that a particular event doesn't occur between two other events using notFollowedBy or notNext.
However, I was wondering if I could detect the absence of a certain event within some time X.
For example: if an event A is not followed by another event A within 10 seconds, fire an alert or do something.
Would it be possible to define a FlinkCEP pattern to capture that situation?
Thanks in advance,
Humberto
Although Flink CEP does not support notFollowedBy at the end of a Pattern, there is a way to implement this by exploiting the timeout feature.
The Flink training includes an exercise where the objective is to identify taxi rides with a START event that is not followed by an END event within two hours. You'll find a solution to this exercise that uses CEP here.
The main idea would be to define a Pattern of A followed by A within 10 seconds, and then capture the case where this times out.
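A minimal sketch of that idea, assuming a hypothetical Event POJO with getId() and getType() accessors and a hypothetical Alert type (none of this comes from the training exercise):

import org.apache.flink.cep.CEP;
import org.apache.flink.cep.PatternFlatSelectFunction;
import org.apache.flink.cep.PatternFlatTimeoutFunction;
import org.apache.flink.cep.pattern.Pattern;
import org.apache.flink.cep.pattern.conditions.SimpleCondition;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
import org.apache.flink.streaming.api.windowing.time.Time;
import org.apache.flink.util.OutputTag;

// A followed by A within 10 seconds; the timed-out partial matches are exactly
// the cases where no second A arrived in time.
Pattern<Event, ?> pattern = Pattern.<Event>begin("first")
        .where(new SimpleCondition<Event>() {
            @Override
            public boolean filter(Event e) {
                return "A".equals(e.getType());
            }
        })
        .followedBy("second")
        .where(new SimpleCondition<Event>() {
            @Override
            public boolean filter(Event e) {
                return "A".equals(e.getType());
            }
        })
        .within(Time.seconds(10));

OutputTag<Alert> timedOutTag = new OutputTag<Alert>("timed-out") {};

SingleOutputStreamOperator<String> matched = CEP
        .pattern(events.keyBy(Event::getId), pattern) // events: the input DataStream<Event>
        .flatSelect(
                timedOutTag,
                (PatternFlatTimeoutFunction<Event, Alert>) (match, timeoutTs, out) ->
                        out.collect(new Alert(match.get("first").get(0))),
                (PatternFlatSelectFunction<Event, String>) (match, out) -> {
                    // complete matches (a second A arrived in time) need no alert
                });

DataStream<Alert> alerts = matched.getSideOutput(timedOutTag);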
Use case: using event time, with timestamps extracted from the Kafka records.
myConsumer.assignTimestampsAndWatermarks(new MyTimestampEmitter());
...
stream
    .keyBy("platform")
    .window(TumblingEventTimeWindows.of(Time.minutes(5)))
    .aggregate(new AggFunc(), new WindowFunc())
    .countWindowAll(size)
    .apply(someFunc)
    .addSink(someSink);
What I want: Flink extracts the timestamp and emits a watermark for each record during an initial interval (e.g. 20 seconds); after that, it can emit watermarks periodically (e.g. every 10s).
Reason: if I use a periodic watermark assigner, at the beginning Flink emits a watermark only after some interval, and the count in my first 5-minute window is wrong: much larger than the count in the subsequent windows. I had a workaround of setting setAutoWatermarkInterval to 100ms, but this is more than necessary.
Currently, I must use either AssignerWithPeriodicWatermarks or AssignerWithPunctuatedWatermarks. How can I implement this kind of combined strategy? Thanks.
Before doing something unusual with your watermark generator, I would double-check that you've correctly diagnosed the situation. By and large, event-time windows should behave deterministically, and always produce the same results if presented with the same input. If you are getting results for the first window that vary depending on how often watermarks are being produced, that indicates that you probably have late events that are being dropped when the watermarks arrive more frequently, and are able to be included when the watermarks are less frequent. Perhaps your watermarks aren't correctly accounting for the actual degree of out-of-orderness your events are experiencing? Or perhaps your watermarks are based on System.currentTimeMillis(), rather than the event timestamps?
Also, it's normal for the first time window to be different than the others, because time windows are aligned to the epoch, rather than the first event. Of course, this has the effect that the first window covers a shorter period of time than all of the others, so you should expect it to contain fewer events, not more.
Setting setAutoWatermarkInterval to 100ms is a perfectly normal thing to do. But if you really want to avoid this, you might consider an AssignerWithPunctuatedWatermarks that initially returns a watermark for every event, and then after a suitable interval, returns watermarks less often.
In a punctuated watermark assigner, both the extractTimestamp and checkAndGetNextWatermark methods are called for every event. You can use some transient (non-flink) state in the assigner to keep track of either the time of the first event, or to count events, and use that information in checkAndGetNextWatermark to eventually back off and stop producing watermarks for every event (by sometimes returning null from checkAndGetNextWatermark, rather than a Watermark). Your application will always revert back to generating watermarks for every event whenever it is restarted.
This will not yield an assigner with all of the characteristics of both periodic and punctuated assigners; it's simply an adaptive punctuated assigner.
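Here is a minimal sketch of such an adaptive punctuated assigner, using the AssignerWithPunctuatedWatermarks interface from the question; the event type, the 20-second warm-up, and the 10-second back-off are assumptions:

import org.apache.flink.streaming.api.functions.AssignerWithPunctuatedWatermarks;
import org.apache.flink.streaming.api.watermark.Watermark;

public class AdaptiveWatermarkAssigner implements AssignerWithPunctuatedWatermarks<MyEvent> {

    private static final long WARM_UP_MS = 20_000;  // per-event watermarks during warm-up
    private static final long BACK_OFF_MS = 10_000; // afterwards, at most one watermark per 10s

    // transient, non-Flink state: resets to 0 whenever the job restarts
    private transient long firstEventWallClock;
    private transient long lastWatermarkWallClock;

    @Override
    public long extractTimestamp(MyEvent element, long previousElementTimestamp) {
        return element.getTimestamp(); // hypothetical accessor
    }

    @Override
    public Watermark checkAndGetNextWatermark(MyEvent lastElement, long extractedTimestamp) {
        long now = System.currentTimeMillis();
        if (firstEventWallClock == 0) {
            firstEventWallClock = now;
        }
        boolean warmingUp = now - firstEventWallClock < WARM_UP_MS;
        if (warmingUp || now - lastWatermarkWallClock >= BACK_OFF_MS) {
            lastWatermarkWallClock = now;
            return new Watermark(extractedTimestamp - 1); // no out-of-orderness allowance here
        }
        return null; // returning null suppresses the watermark for this event
    }
}

Note that wall-clock time is used only to decide how often to emit; the watermark value itself is still derived from the event timestamp.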
I have an always-on application listening to a Kafka stream and processing events. Events are part of a session, and I need to do calculations based on a session's data. I am running into a problem trying to run my calculations correctly due to the length of my sessions: 90% of my sessions are done after 5 minutes, and 99% are done after 1 hour. Sessions may last more than a day, and because this is a real-time system there is no determined end. Sessions are unique and should never collide.
I am looking for a way to process a window multiple times, either with an initial wait period and processing any later events after that, or with a pure process-per-event structure. I will need to keep all previous events around (ListState), as well as previously processed values (ValueState).
I previously thought allowedLateness would let me do this, but it seems the lateness only determines when an event should still be processed; it does not extend the actual window. GlobalWindows may also work, but I am unsure whether there is a way to process a window multiple times. I believe I could use an evictor with GlobalWindows to purge the windows after a period of inactivity (although, admittedly, I have not researched this yet, because I was unsure how to trigger a GlobalWindow multiple times).
Any suggestions on how to achieve what I am looking to do would be greatly appreciated, I would also be happy to clarify any points needed.
If SessionWindows won't do the job, then you can use GlobalWindows with a custom Trigger and Evictor. The Trigger interface has onElement and timer-based callbacks that can fire whenever and as often as you like. If you go down this route, then yes, you'll also need to implement an Evictor to dispose of elements when they are no longer needed.
The documentation and the source code are helpful when trying to understand how this all fits together.
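As a concrete (but hedged) sketch of that route, here is a trigger that re-evaluates the window on every element and closes the session after 60 seconds of processing-time inactivity; the Event type, the inactivity gap, and the choice of FIRE_AND_PURGE at session end are all assumptions:

import org.apache.flink.api.common.state.ReducingState;
import org.apache.flink.api.common.state.ReducingStateDescriptor;
import org.apache.flink.streaming.api.windowing.triggers.Trigger;
import org.apache.flink.streaming.api.windowing.triggers.TriggerResult;
import org.apache.flink.streaming.api.windowing.windows.GlobalWindow;

public class SessionTrigger extends Trigger<Event, GlobalWindow> {

    private static final long INACTIVITY_MS = 60_000;

    // remembers the most recently registered inactivity timer per key/window
    private final ReducingStateDescriptor<Long> lastTimerDesc =
            new ReducingStateDescriptor<>("lastTimer", (a, b) -> b, Long.class);

    @Override
    public TriggerResult onElement(Event element, long timestamp, GlobalWindow window, TriggerContext ctx) throws Exception {
        long newTimer = ctx.getCurrentProcessingTime() + INACTIVITY_MS;
        ctx.registerProcessingTimeTimer(newTimer);
        ctx.getPartitionedState(lastTimerDesc).add(newTimer);
        return TriggerResult.FIRE; // re-process the session contents on every element
    }

    @Override
    public TriggerResult onProcessingTime(long time, GlobalWindow window, TriggerContext ctx) throws Exception {
        // stale timers from earlier elements fire too; only the latest one ends the session
        Long last = ctx.getPartitionedState(lastTimerDesc).get();
        return (last != null && last == time) ? TriggerResult.FIRE_AND_PURGE : TriggerResult.CONTINUE;
    }

    @Override
    public TriggerResult onEventTime(long time, GlobalWindow window, TriggerContext ctx) {
        return TriggerResult.CONTINUE;
    }

    @Override
    public void clear(GlobalWindow window, TriggerContext ctx) throws Exception {
        ctx.getPartitionedState(lastTimerDesc).clear();
    }
}

Since FIRE keeps the window contents, the list of buffered elements keeps growing until the session closes; if intermediate firings only need recent elements, an Evictor can trim the window between firings.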
I am currently working on a streaming program which aggregates data from a number of messages (8). The aggregation requires all 8 messages, so I am using a count window. All 8 messages share the same unique key. However, there is no guarantee that all 8 messages will arrive. So my question is two-fold:
First, what happens to a Flink count window that never closes? I am assuming such windows simply accumulate over time, consuming more and more RAM.
Second, can I close a count window if it does not receive all of its messages within a given time? I am looking for a solution that is as real-time as possible; I already tried using a time window, but the time-of-flight of the messages varies between a few milliseconds and 40 seconds.
So essentially: is there a way to define a window that triggers at 8 messages and evicts all messages from the window after a given time (in this case, after 60 seconds)?
The answer to your question regarding never-closing windows is that the state reserved for them will never be freed.
The behaviour you describe could be implemented with a custom trigger and evictor on a GlobalWindow. The trigger could wait either for the expected time or for the expected number of elements before emitting the window, while the evictor would evict all messages if there are fewer than 8. For reference implementations, have a look at CountTrigger (fires on count) and EventTimeTrigger (fires on time); for the evictor, have a look at CountEvictor.
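For example, here is a hedged sketch of a single trigger that combines both conditions (the Message type is assumed; the 8-element count and 60-second timeout are taken from the question). It fires-and-purges at 8 elements and silently purges an incomplete batch on timeout, which collapses the trigger/evictor pair into one trigger; keeping the separate evictor variant instead is equally valid:

import org.apache.flink.api.common.state.ReducingState;
import org.apache.flink.api.common.state.ReducingStateDescriptor;
import org.apache.flink.streaming.api.windowing.triggers.Trigger;
import org.apache.flink.streaming.api.windowing.triggers.TriggerResult;
import org.apache.flink.streaming.api.windowing.windows.GlobalWindow;

public class CountOrTimeoutTrigger extends Trigger<Message, GlobalWindow> {

    private static final int MAX_COUNT = 8;
    private static final long TIMEOUT_MS = 60_000;

    private final ReducingStateDescriptor<Long> countDesc =
            new ReducingStateDescriptor<>("count", Long::sum, Long.class);

    @Override
    public TriggerResult onElement(Message element, long timestamp, GlobalWindow window, TriggerContext ctx) throws Exception {
        ReducingState<Long> count = ctx.getPartitionedState(countDesc);
        if (count.get() == null) {
            // first message of the batch: start the timeout clock
            ctx.registerProcessingTimeTimer(ctx.getCurrentProcessingTime() + TIMEOUT_MS);
        }
        count.add(1L);
        if (count.get() >= MAX_COUNT) {
            count.clear();
            return TriggerResult.FIRE_AND_PURGE; // all 8 messages arrived
        }
        return TriggerResult.CONTINUE;
    }

    @Override
    public TriggerResult onProcessingTime(long time, GlobalWindow window, TriggerContext ctx) throws Exception {
        // timeout: drop whatever arrived; a timer firing after a completed batch
        // just purges an already-empty window, which is harmless
        ctx.getPartitionedState(countDesc).clear();
        return TriggerResult.PURGE;
    }

    @Override
    public TriggerResult onEventTime(long time, GlobalWindow window, TriggerContext ctx) {
        return TriggerResult.CONTINUE;
    }

    @Override
    public void clear(GlobalWindow window, TriggerContext ctx) throws Exception {
        ctx.getPartitionedState(countDesc).clear();
    }
}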
For cases like this where you need to combine stateful stream processing with timers, ProcessFunction can be a good choice. See https://ci.apache.org/projects/flink/flink-docs-release-1.2/dev/stream/process_function.html.