Flink event-time session window: why does it not emit? - apache-flink

I'm losing my mind. I've spent 10 hours on this and it's still not working!
I'm using a Flink session window join on two streams, with event time, joining the two streams on a common value.
The code is as follows:
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);
env.setParallelism(1);

// current time
long currentTime = System.currentTimeMillis();
// start-consume time (in milliseconds) for the Datahub topic
final long START_TIME_MS = 0L;
// session window gap
final Time WIN_GAP_TIME = Time.seconds(10L);
// source function maxOutOfOrderness (in milliseconds)
final Time maxOutOfOrderness = Time.milliseconds(5L);

// init source functions
DatahubSourceFunction oldTableASourceFun =
    new DatahubSourceFunction(
        endPoint,
        projectName,
        topicOldTableA,
        accessId,
        accessKey,
        // currentTime,
        START_TIME_MS,
        Long.MAX_VALUE,
        20,
        1000,
        20);

DatahubSourceFunction tableBSourceFun =
    new DatahubSourceFunction(
        endPoint,
        projectName,
        topicTableB,
        accessId,
        accessKey,
        START_TIME_MS,
        Long.MAX_VALUE,
        20,
        1000,
        20);
// init sources
DataStream<OldTableA> oldTableADataStream =
    env.addSource(oldTableASourceFun)
        .flatMap(
            new FlatMapFunction<List<RecordEntry>, OldTableA>() {
                @Override
                public void flatMap(List<RecordEntry> list, Collector<OldTableA> out)
                        throws Exception {
                    for (RecordEntry recordEntry : list) {
                        out.collect(CommonUtils.convertToOldTableA(recordEntry));
                    }
                }
            })
        .uid("oldTableADataSource")
        .setParallelism(1)
        .returns(new TypeHint<OldTableA>() {})
        .assignTimestampsAndWatermarks(new ExtractorWM<OldTableA>(maxOutOfOrderness));

DataStream<TableB> tableBDataStream =
    env.addSource(tableBSourceFun)
        .flatMap(
            new FlatMapFunction<List<RecordEntry>, TableB>() {
                @Override
                public void flatMap(java.util.List<RecordEntry> list, Collector<TableB> out)
                        throws Exception {
                    for (RecordEntry recordEntry : list) {
                        out.collect(CommonUtils.convertToTableB(recordEntry));
                    }
                }
            })
        .uid("tableBDataSource")
        .setParallelism(1)
        .returns(new TypeHint<TableB>() {})
        .assignTimestampsAndWatermarks(new ExtractorWM<TableB>(maxOutOfOrderness));
And the ExtractorWM code:
public class ExtractorWM<T extends CommonPOJO> extends BoundedOutOfOrdernessTimestampExtractor<T> {

    public ExtractorWM(Time maxOutOfOrderness) {
        super(maxOutOfOrderness);
    }

    @Override
    public long extractTimestamp(T element) {
        /* this prints correctly:
           System.out.println(element + "-" + CommonUtils.getSimpleDateFormat().format(getCurrentWatermark().getTimestamp())); */
        return System.currentTimeMillis();
    }
}
I tested the above code and the output looks correct; the events and watermark timestamps are right:
// oldTableADataStream events and watermark timestamps
OldTableA{PA1=1, a2='a20', **fa3=20**, fa4=30} 1596092987721
OldTableA{PA1=2, a2='a20', **fa3=20**, fa4=31} 1596092987721
OldTableA{PA1=3, a2='a20', **fa3=20**, fa4=32} 1596092987721
OldTableA{PA1=4, a2='a20', **fa3=20**, fa4=33} 1596092987721
OldTableA{PA1=5, a2='a20', **fa3=20**, fa4=34} 1596092987722

// tableBDataStream events and watermark timestamps
TableB{**PB1=20**, B2='b20', B3='b30'} 1596092987721
TableB{PB1=21, B2='b21', B3='b31'} 1596092987721
TableB{PB1=22, B2='b22', B3='b32'} 1596092987721
TableB{PB1=23, B2='b23', B3='b33'} 1596092987722
TableB{PB1=24, B2='b24', B3='b34'} 1596092987722
I expect the result to be:
1 a20 20 b20 b30 30
2 a20 20 b20 b30 31
3 a20 20 b20 b30 32
4 a20 20 b20 b30 33
5 a20 20 b20 b30 34
6 a20 20 b20 b30 35
but the join operator does not work:
DataStream<NewTableA> join1 =
    oldTableADataStream
        .join(tableBDataStream)
        .where(t1 -> t1.getFa3()) // printing the elements here shows the expected values
        .equalTo(t2 -> t2.getPb1()) // printing the elements here shows the expected values
        .window(EventTimeSessionWindows.withGap(WIN_GAP_TIME))
        // .trigger(new TestTrigger())
        // .allowedLateness(Time.seconds(2))
        .apply(new oldTableAJoinTableBFunc()); // the join method is never called

join1.print(); // prints nothing
And the oldTableAJoinTableBFunc code:
public class oldTableAJoinTableBFunc implements JoinFunction<OldTableA, TableB, NewTableA> {

    @Override
    public NewTableA join(OldTableA oldTableA, TableB tableB) throws Exception {
        // never executed: I set a breakpoint on this line and it is never hit
        System.out.println(
            oldTableA
                + " - "
                + tableB
                + " - "
                + CommonUtils.getSimpleDateFormat().format(System.currentTimeMillis()));
        NewTableA newTableA = new NewTableA();
        newTableA.setPA1(oldTableA.getPa1());
        newTableA.setA2(oldTableA.getA2());
        newTableA.setFA3(oldTableA.getFa3());
        newTableA.setFA4(oldTableA.getFa4());
        newTableA.setB2(tableB.getB2());
        newTableA.setB3(tableB.getB3());
        return newTableA;
    }
}
The problem I see is that in apply(new oldTableAJoinTableBFunc()) the join method is never called; I set a breakpoint on it and it never breaks. From the source code I understand that join is called when a matching pair occurs. I printed t1.getFa3() and t2.getPb1(), and at least one pair of values (20) is equal, so why is join never called?

Your approach to handling time and watermarking is why this isn't working. For a more in-depth introduction to the topic, see the section of the Flink training course that covers event time and watermarking.
The motivation behind event time processing is to be able to implement consistent, deterministic streaming analytics despite events arriving out-of-order and being processed at some unknown rate. This depends on the events carrying timestamps, and on those timestamps being extracted from the events by a timestamp assigner. Your timestamp assigner is returning System.currentTimeMillis(), which effectively disables all of the event time machinery. Moreover, because you are using System.currentTimeMillis() as the source of timing information, your events cannot be out-of-order, yet you are specifying a watermarking delay of 5 msec.
I doubt your job even runs for 5 msec, so it may not be generating any watermarks at all. Plus, it will actually take 200 msec before Flink sends the first watermark (see below).
For a session to end, there will have to be a 10 second interval during which no events are processed. (If you switch to using proper event time timestamps, then those timestamps will need a gap of 10+ seconds, but since you are using System.currentTimeMillis as the source of timing info, your job needs a gap of 10 real-time seconds to close a session.)
A BoundedOutOfOrdernessTimestampExtractor generates watermarks by observing the timestamps in the stream, and every 200 msec (by default) it injects a Watermark into the stream whose value is computed by taking the largest timestamp seen so far in the event stream and subtracting from it the bounded delay (5 msec). A 10-second-long event time session window will only close when a Watermark arrives that is at least 10 seconds later than the timestamp of the latest event currently in the session. For such a watermark to be created, a suitable event with a sufficiently large timestamp has to be processed.
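To illustrate, here is a minimal sketch of an event-time assigner for this job, assuming the POJOs carry an event-time field; the getEventTimestamp() accessor is hypothetical:
public class EventTimeExtractorWM<T extends CommonPOJO> extends BoundedOutOfOrdernessTimestampExtractor<T> {

    public EventTimeExtractorWM(Time maxOutOfOrderness) {
        super(maxOutOfOrderness);
    }

    @Override
    public long extractTimestamp(T element) {
        // use the timestamp carried by the event itself, not the wall clock
        return element.getEventTimestamp(); // hypothetical accessor on CommonPOJO
    }
}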

I found the answer!
The reason is DatahubSourceFunction, which consumes a Datahub topic (Datahub is an Aliyun service similar to Kafka) and emits the records into Flink. When no records are consumed, the watermark timestamp stays the same, over and over again.
I use BoundedOutOfOrdernessTimestampExtractor to generate watermarks, and it extracts the timestamp for the watermark from the events. So a watermark is only generated when there is an event, and while there are no events the watermark keeps the same value.
Once DatahubSourceFunction has consumed the last record and emitted the last event, no more events are emitted.
From then on, the timestamp of the watermark stays equal to the timestamp of the last event (the System.currentTimeMillis() captured when it was processed). The session window therefore never ends, because every watermark's timestamp stays below the timestamp of the last event plus the window gap.
And since the session window never ends, the join function is never called.
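Given that diagnosis, one workaround is a periodic assigner whose watermark keeps advancing with the wall clock even while the source is idle, so the session gap can eventually elapse. A minimal sketch, under the assumption that wall-clock timestamps are acceptable here (as in the job above); the class name is illustrative:
public class WallClockWM<T> implements AssignerWithPeriodicWatermarks<T> {

    private final long maxOutOfOrdernessMs;

    public WallClockWM(long maxOutOfOrdernessMs) {
        this.maxOutOfOrdernessMs = maxOutOfOrdernessMs;
    }

    @Override
    public long extractTimestamp(T element, long previousElementTimestamp) {
        return System.currentTimeMillis();
    }

    @Override
    public Watermark getCurrentWatermark() {
        // called every autoWatermarkInterval (200 ms by default), so the
        // watermark advances even when no new events arrive
        return new Watermark(System.currentTimeMillis() - maxOutOfOrdernessMs);
    }
}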

Related

Huge checkpoint size using ValueState leading to event processing lag

I have a Flink application which deduplicates multiple streams.
It keys by one string field and dedupes using value state in a RichFilterFunction.
public class DedupeWithState extends RichFilterFunction<Tuple2<String, Message>> {

    private ValueState<Boolean> seen;
    private final ValueStateDescriptor<Boolean> desc;

    public DedupeWithState(long cacheExpirationTimeMs) {
        StateTtlConfig ttlConfig = StateTtlConfig
            .newBuilder(Time.milliseconds(cacheExpirationTimeMs))
            .setUpdateType(StateTtlConfig.UpdateType.OnCreateAndWrite)
            .setStateVisibility(StateTtlConfig.StateVisibility.NeverReturnExpired)
            .build();
        desc = new ValueStateDescriptor<>("seen", Types.BOOLEAN);
        desc.enableTimeToLive(ttlConfig);
    }

    @Override
    public void open(Configuration conf) {
        seen = getRuntimeContext().getState(desc);
    }

    @Override
    public boolean filter(Tuple2<String, Message> stringMessageTuple2) throws Exception {
        if (seen.value() == null) {
            seen.update(true);
            return true;
        }
        return false;
    }
}
The application consumes 3 streams from kafka, and each stream has its own dedupe function with ttl of 4hours.
DataStream<Tuple2<String, Message>> event1 = event1Input
.keyBy(x->x.f0)
.filter(new DedupeWithState(14400000));
DataStream<Tuple2<String, Message>> event2 = event2Input
.keyBy(x->x.f0)
.filter(new DedupeWithState(14400000));
DataStream<Tuple2<String, Message>> event3 = event3Input
.keyBy(x->x.f0)
.filter(new DedupeWithState(14400000));
Backend properties are:
state.backend: rocksdb
state.backend.incremental: true
state.checkpoints.dir: <azure blob store>
We are using Flink 1.13.6.
The QPS of each stream is event1 - 7k, event2 - 6k, event3 - 200
Key size is ~110 bytes
Checkpoint interval is 5 mins and incremental checkpoint is enabled.
As per the above configuration (given that incremental checkpointing is enabled), each stream should have roughly the following checkpoint size:
event1 -> (7000 events/sec * 60 * 5 sec) * 110 bytes = ~220 MB
The issue is that the checkpoint size is huge. It starts at 400 MB (as expected) but grows to 2-3 GB per checkpoint. This results in heavy backpressure in the dedupe function and overall lag in the system.
Maybe the state is not being cleaned up, since cleanup happens lazily (on read). From the initial release post (a bit old, but it may still apply):
When a state object is accessed in a read operation, Flink will check its timestamp and clear the state if it is expired (depending on the configured state visibility, the expired state is returned or not). Due to this lazy removal, expired state that is never accessed again will forever occupy storage space unless it is garbage collected.
I would try a MapState per stream (without keying by) instead of a ValueState per key as you have now, so that the same state is continuously accessed. Alternatively, you may be able to set up a timer in DedupeWithState that accesses the state and forces the cleanup (you may need a ProcessFunction to be able to register timers), or that simply clears it.
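As another option, a hedged sketch: StateTtlConfig also supports more active cleanup strategies that do not depend on the expired entry ever being read again (the parameter values below are illustrative):
// Assumes Flink 1.13 with the RocksDB state backend, as in the question.
StateTtlConfig ttlConfig = StateTtlConfig
    .newBuilder(Time.milliseconds(14400000))
    .setUpdateType(StateTtlConfig.UpdateType.OnCreateAndWrite)
    .setStateVisibility(StateTtlConfig.StateVisibility.NeverReturnExpired)
    // drop expired entries whenever a full (non-incremental) snapshot is taken
    .cleanupFullSnapshot()
    // with RocksDB: filter out expired entries during compaction, re-reading
    // the current timestamp after every 1000 processed state entries
    .cleanupInRocksdbCompactFilter(1000)
    .build();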
Try something like this -
/**
 * @author sucheth.shivakumar
 */
public class Check extends KeyedProcessFunction<String, Tuple2<String, Message>, Tuple2<String, Message>> {

    private ValueState<Boolean> seen;

    @Override
    public void open(Configuration parameters) throws Exception {
        ValueStateDescriptor<Boolean> desc = new ValueStateDescriptor<>("seen", Types.BOOLEAN);
        seen = getRuntimeContext().getState(desc);
    }

    @Override
    public void processElement(Tuple2<String, Message> value, Context ctx,
                               Collector<Tuple2<String, Message>> out) throws Exception {
        if (seen.value() == null) {
            seen.update(true);
            // emit the record
            out.collect(value);
            // schedule state cleanup for 4 hours from now
            ctx.timerService().registerProcessingTimeTimer(ctx.timestamp() + 14400000);
        }
    }

    @Override
    // fires after the 4 hours have passed and clears the state
    public void onTimer(long timestamp, OnTimerContext ctx,
                        Collector<Tuple2<String, Message>> out) throws Exception {
        if (seen.value() != null) {
            seen.clear();
        }
    }
}

Absence of event in Apache Flink CEP

I'm new to Apache Flink CEP and I'm struggling to detect a simple absence of an event.
What I'm trying to detect is whether an event of type CurrencyEvent with a certain id does not occur within a certain amount of time. I would like to detect the absence of such an event every time it has not occurred within 3000 ms.
My pattern code looks as follows:
Pattern<Event, ?> myPattern = Pattern.<Event>begin("CurrencyEvent")
    .subtype(CurrencyEvent.class)
    .where(new SimpleCondition<CurrencyEvent>() {
        @Override
        public boolean filter(CurrencyEvent currencyEvent) throws Exception {
            return currencyEvent.getId().equalsIgnoreCase("usd");
        }
    })
    .within(Time.milliseconds(3000L));
So now my idea is to use timeout functions in order to detect timeout events:
DataStreamSource<Event> events = env.addSource(new TestSource(
Arrays.asList(
basicCurrencyWithMivLevelEvent("EUR", 100L, Arrays.asList("1", "2"), 200D),
basicCurrencyWithMivLevelEvent("USD", 100L, Arrays.asList("1", "2"), 200D),
basicCurrencyWithMivLevelEvent("EUR", 100L, Arrays.asList("1", "2"), 200D)
),
1636040364820L, // initial timestamp for the first element
7000 // 7 seconds between each event
));
PatternStream<Event> patternStream = CEP.pattern(
events,
(Pattern<Event, ?>) myPattern
);
OutputTag<Alarm> tag = new OutputTag<Alarm>("currency-timeout"){};
PatternFlatTimeoutFunction<Event, Alarm> eventAlarmTimeoutPatternFunction = (patterns, timestamp, ctx) -> {
System.out.println("New alarm, since after 3 seconds an event with id=usd is not detected");
//TODO: call collect
};
PatternFlatSelectFunction<Event, Alarm> eventAlarmPatternSelectFunction = (patterns, ctx) -> {
System.out.println("Select! (we can ignore it) " + patterns);
// ignore matched events
};
return patternStream.flatSelect(
tag,
eventAlarmTimeoutPatternFunction,
TypeInformation.of(Alarm.class),
eventAlarmPatternSelectFunction
);
My Test source is using event timestamps and watermarks, as shown as follows:
public class TestSource implements SourceFunction<Event> {

    private final List<Event> events;
    private final long initialTimestamp;
    private final long timeBetweenInMillis;

    public TestSource(List<Event> events, long initialTimestamp, long timeBetweenInMillis) {
        this.events = events;
        this.initialTimestamp = initialTimestamp;
        this.timeBetweenInMillis = timeBetweenInMillis;
    }

    @Override
    public void run(SourceContext<Event> sourceContext) throws InterruptedException {
        long timestamp = this.initialTimestamp;
        for (Event event : this.events) {
            sourceContext.collectWithTimestamp(event, timestamp);
            sourceContext.emitWatermark(new Watermark(timestamp));
            timestamp += this.timeBetweenInMillis;
        }
    }

    @Override
    public void cancel() {
    }
}
I'm using TimeCharacteristic.EventTime.
Since the window time (3 seconds) is lower than the event-time difference between consecutive events (7 seconds), I expect to get some timeout events, but I'm getting 0.
A CEP Pattern matches a sequence of one or more events; the within(interval) clause adds an additional constraint that all of the events in the sequence must occur within the specified interval. When partial matches time out, this can be captured in a TimedOutPartialMatchHandler.
In your case, since a successfully matched Pattern consists of a single event, there can be no partial matches, and a match can never time out. (Your matching sequences are always less than 3 seconds long.)
What you can do is to extend the pattern definition to include a second event, so that to match there must be a start event followed by another event within 3 seconds. When that second event is missing, then you will have a partial match that times out.
For more flexibility than what CEP offers for implementing use cases involving missing events, you can use a KeyedProcessFunction with timers.
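A minimal sketch of the extended pattern described above, assuming any follow-up event will do (the second condition and the step name "next" are illustrative):
Pattern<Event, ?> absencePattern = Pattern.<Event>begin("CurrencyEvent")
    .subtype(CurrencyEvent.class)
    .where(new SimpleCondition<CurrencyEvent>() {
        @Override
        public boolean filter(CurrencyEvent currencyEvent) throws Exception {
            return currencyEvent.getId().equalsIgnoreCase("usd");
        }
    })
    .next("next") // any follow-up event; its absence is what times out
    .where(new SimpleCondition<Event>() {
        @Override
        public boolean filter(Event event) throws Exception {
            return true;
        }
    })
    .within(Time.milliseconds(3000L));
With this two-event pattern, a lone "usd" event becomes a partial match, and the PatternFlatTimeoutFunction fires when no second event arrives within 3 seconds.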

Flink watermark not advancing at all? Stuck at -9223372036854775808

I'm encountering a similar issue to "Flink EventTime Processing Watermark is always coming as -9223372036854725808". However, the suggested solutions (set parallelism to 1 and disable checkpointing) do not have any effect. In this example, I'm simply streaming 1000 events 1 second apart, and then comparing the event timestamp to ctx.timerService().currentWatermark():
>>> v=(61538659200000,0), watermark=-9223372036854775808
>>> v=(61538659201000,1), watermark=-9223372036854775808
>>> v=(61538660198000,998), watermark=-9223372036854775808
>>> v=(61538660199000,999), watermark=-9223372036854775808
public void watermarks() throws Exception {
    final var env = StreamExecutionEnvironment.createLocalEnvironment();
    env.setRuntimeMode(RuntimeExecutionMode.STREAMING);
    env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);
    env.setMaxParallelism(1);

    final long startMs = new Date(2020, 1, 1).getTime();
    final var events = new ArrayList<Tuple2<Long, Integer>>();
    for (var ii = 0; ii < 1000; ++ii) {
        events.add(new Tuple2<Long, Integer>(startMs + ii * 1000, ii));
    }

    env.fromCollection(events)
        .assignTimestampsAndWatermarks(
            WatermarkStrategy.<Tuple2<Long, Integer>>forMonotonousTimestamps()
                .withTimestampAssigner((event, ts) -> event.f0))
        .setParallelism(1)
        .keyBy(row -> row.f1 % 2)
        .process(new ProcessFunction<Tuple2<Long, Integer>, String>() {
            @Override
            public void processElement(
                    final Tuple2<Long, Integer> value,
                    final Context ctx,
                    final Collector<String> out) throws Exception {
                out.collect("v=" + value + ", watermark=" + ctx.timerService().currentWatermark());
            }
        })
        .setParallelism(1)
        .print()
        .setParallelism(1);

    final var result = env.execute();
    System.out.println(result);
}
forMonotonousTimestamps is a periodic watermark generator that only generates watermarks when triggered by a timer. By default this timer fires every 200 msec (this is the autoWatermarkInterval). Your job doesn't run long enough for this timer to fire.
Bounded sources do generate a watermark with its timestamp set to MAX_WATERMARK when they reach the end of their input -- just before shutting down the job. You're not seeing this watermark in the output from your job because there are no events that follow it.
If you want to generate watermarks with every event, you can implement a custom watermark strategy that emits a watermark in the onEvent method of the WatermarkGenerator (docs). This is usually a bad idea in production, as you'll waste CPU cycles and network bandwidth on the extra watermarks, but it can be helpful for testing.
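A hedged sketch of such a per-event strategy for this job's Tuple2<Long, Integer> elements (testing only, per the caveat above):
// Uses the org.apache.flink.api.common.eventtime.* watermark API.
WatermarkStrategy<Tuple2<Long, Integer>> perEventStrategy =
    WatermarkStrategy.<Tuple2<Long, Integer>>forGenerator(
            ctx -> new WatermarkGenerator<Tuple2<Long, Integer>>() {
                @Override
                public void onEvent(Tuple2<Long, Integer> event, long eventTimestamp, WatermarkOutput output) {
                    // emit a watermark for every single event
                    output.emitWatermark(new Watermark(eventTimestamp));
                }

                @Override
                public void onPeriodicEmit(WatermarkOutput output) {
                    // nothing to do; watermarks are emitted per event in onEvent
                }
            })
        .withTimestampAssigner((event, ts) -> event.f0);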
According to the source code comments:
/**
 * Creates a new enriched {@link WatermarkStrategy} that also does idleness detection in the
 * created {@link WatermarkGenerator}.
 *
 * <p>Add an idle timeout to the watermark strategy. If no records flow in a partition of a
 * stream for that amount of time, then that partition is considered "idle" and will not hold
 * back the progress of watermarks in downstream operators.
 *
 * <p>Idleness can be important if some partitions have little data and might not have events
 * during some periods. Without idleness, these streams can stall the overall event time
 * progress of the application.
 */
default WatermarkStrategy<T> withIdleness(Duration idleTimeout) ...
So you can try WatermarkStrategy.forMonotonousTimestamps().withIdleness(...).
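As a sketch of how that wires into the job above (the one-second idle timeout is an illustrative value):
// the idle timeout is a java.time.Duration
WatermarkStrategy<Tuple2<Long, Integer>> strategy =
    WatermarkStrategy.<Tuple2<Long, Integer>>forMonotonousTimestamps()
        .withIdleness(Duration.ofSeconds(1))
        .withTimestampAssigner((event, ts) -> event.f0);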

Flink counter with timestamp

I was reading the Flink example CountWithTimestamp; below is a code snippet from the example:
@Override
public void processElement(Tuple2<String, String> value, Context ctx, Collector<Tuple2<String, Long>> out)
        throws Exception {
    // retrieve the current count
    CountWithTimestamp current = state.value();
    if (current == null) {
        current = new CountWithTimestamp();
        current.key = value.f0;
    }

    // update the state's count
    current.count++;

    // set the state's timestamp to the record's assigned event time timestamp
    current.lastModified = ctx.timestamp();

    // write the state back
    state.update(current);

    // schedule the next timer 60 seconds from the current event time
    ctx.timerService().registerEventTimeTimer(current.lastModified + 60000);
}

@Override
public void onTimer(long timestamp, OnTimerContext ctx, Collector<Tuple2<String, Long>> out)
        throws Exception {
    // get the state for the key that scheduled the timer
    CountWithTimestamp result = state.value();

    // check if this is an outdated timer or the latest timer
    if (timestamp == result.lastModified + 60000) {
        // emit the state on timeout
        out.collect(new Tuple2<String, Long>(result.key, result.count));
    }
}
My question is: if I remove the if statement timestamp == result.lastModified + 60000 in onTimer (leaving the collect statement untouched), and instead add if (ctx.timestamp() < current.lastModified + 60000) { deleteEventTimeTimer(current.lastModified + 60000) } at the beginning of processElement, would the semantics of the program be the same? And is there any preference for one version over the other in case of the same semantics?
You are correct to think that the implementation that deletes the timer has the same semantics. And in fact I recently changed the example used in our training materials to do just that, as I prefer this approach. The reason I find it preferable is that all of the complex business logic is then in one place (in processElement), and whenever onTimer is called, you know exactly what to do, no questions asked. Plus, it's more performant, as there are fewer timers to checkpoint and eventually trigger.
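A minimal sketch of that delete-the-timer variant, reusing the state and constants from the question (it may differ in detail from the reworked training example):
@Override
public void processElement(Tuple2<String, String> value, Context ctx, Collector<Tuple2<String, Long>> out)
        throws Exception {
    CountWithTimestamp current = state.value();
    if (current == null) {
        current = new CountWithTimestamp();
        current.key = value.f0;
    } else {
        // cancel the timer registered by the previous event for this key
        ctx.timerService().deleteEventTimeTimer(current.lastModified + 60000);
    }
    current.count++;
    current.lastModified = ctx.timestamp();
    state.update(current);
    ctx.timerService().registerEventTimeTimer(current.lastModified + 60000);
}

@Override
public void onTimer(long timestamp, OnTimerContext ctx, Collector<Tuple2<String, Long>> out)
        throws Exception {
    // only the latest timer can still fire, so emit unconditionally
    CountWithTimestamp result = state.value();
    out.collect(new Tuple2<>(result.key, result.count));
}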
This example was written for the docs back before timers could be deleted, and hasn't been updated.
You can find the reworked example I mentioned in these slides -- https://training.ververica.com/decks/process-function/ -- once you get past the registration page.
FWIW, I also recently reworked the reference solution to the corresponding training exercise along the same lines: https://github.com/apache/flink-training/tree/master/long-ride-alerts.

Looks like a bug in flink-training-exercises for the CEP example

I found an example for CEP at the following URL:
https://github.com/dataArtisans/flink-training-exercises/blob/master/src/main/java/com/dataartisans/flinktraining/exercises/datastream_java/cep/LongRides.java
The goal of the exercise is "to emit START events for taxi rides that have not been matched by an END event during the first 2 hours of the ride."
However, from the code below, it seems the pattern finds rides that HAVE been completed within 2 hours, not rides that have NOT been completed.
The pattern first finds the START event, then the END event (!ride.isStart), within 2 hours. So doesn't it read as a pattern that finds rides completed within 2 hours?
Pattern<TaxiRide, TaxiRide> completedRides =
    Pattern.<TaxiRide>begin("start")
        .where(new SimpleCondition<TaxiRide>() {
            @Override
            public boolean filter(TaxiRide ride) throws Exception {
                return ride.isStart;
            }
        })
        .next("end")
        .where(new SimpleCondition<TaxiRide>() {
            @Override
            public boolean filter(TaxiRide ride) throws Exception {
                return !ride.isStart;
            }
        });
// We want to find rides that have NOT been completed within 120 minutes
PatternStream<TaxiRide> patternStream = CEP.pattern(keyedRides, completedRides.within(Time.minutes(120)));
I've improved the comment in the sample solution to make this clearer.
// We want to find rides that have NOT been completed within 120 minutes.
// This pattern matches rides that ARE completed.
// Below we will ignore rides that match this pattern, and emit those that timeout.
PatternStream<TaxiRide> patternStream = CEP.pattern(keyedRides, completedRides.within(Time.minutes(120)));
OutputTag<TaxiRide> timedout = new OutputTag<TaxiRide>("timedout"){};
SingleOutputStreamOperator<TaxiRide> longRides = patternStream.flatSelect(
timedout,
new TaxiRideTimedOut<TaxiRide>(),
new FlatSelectNothing<TaxiRide>()
);
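For completeness, a hedged sketch of what the two helper functions might look like; the actual implementations in flink-training-exercises may differ:
public static class TaxiRideTimedOut<T> implements PatternFlatTimeoutFunction<T, T> {
    @Override
    public void timeout(Map<String, List<T>> match, long timeoutTimestamp, Collector<T> out) {
        // a partial match timed out: the ride started but no END arrived within
        // 2 hours, so emit the START event to the side output
        out.collect(match.get("start").get(0));
    }
}

public static class FlatSelectNothing<T> implements PatternFlatSelectFunction<T, T> {
    @Override
    public void flatSelect(Map<String, List<T>> match, Collector<T> out) {
        // fully matched (completed) rides are deliberately ignored
    }
}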
