We are reading from Kinesis and writing to Parquet, and we use StateSpec<ValueState<Boolean>> to avoid reprocessing records after gracefully stopping and relaunching our pipeline from the last savepoint.
We saw that some records were duplicated because they end up on a different task manager after a relaunch, so we use this state to record which records have already been processed and to skip the duplicates.
We are now trying to work out how to clear that state periodically without risking the loss of the most recently processed records, which we would still need after an upcoming stop (i.e., we need something like a TTL on that state).
We thought about a timer that clears the state periodically, but that doesn't meet our requirements because we need to keep the most recently processed records.
We read here that using event time processing automatically clears state information after a window expires, and we would like to know whether that fits our requirement when using the StateSpec class.
Otherwise, is there a state class with some kind of TTL that we could use to implement this?
What we have right now is this piece of code, which checks whether an element has already been processed, plus a method that clears the state periodically:
#StateId("keyPreserved")
private final StateSpec<ValueState<Boolean>> keyPreserved = StateSpecs.value(BooleanCoder.of());
#TimerId("resetStateTimer")
private final TimerSpec resetStateTimer = TimerSpecs.timer(TimeDomain.PROCESSING_TIME);
public void processElement(ProcessContext context,
#TimerId("resetStateTimer") Timer resetStateTimer,
#StateId("keyPreserved") ValueState<Boolean> keyPreservedState) {
if (!firstNonNull(keyPreservedState.read(), false)) {
T message = context.element().getValue();
//Process element here
keyPreservedState.write(true);
}
}
#OnTimer("resetStateTimer")
public void onResetStateTimer(OnTimerContext context,
#StateId("keyPreserved") ValueState<Boolean> keyPreservedState) {
keyPreservedState.clear();
}
Setting the timer every time we call keyPreservedState.write(true) was enough. When the timer expires, keyPreservedState.clear() only clears the state for the key in context, not the whole state.
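For reference, a minimal sketch of what that looks like inside processElement, assuming Beam's Timer.offset(...).setRelative() API and an illustrative one-hour reset interval:

// (Re)arm the processing-time timer so it fires one hour after the most
// recent write; the interval is illustrative, not prescriptive.
resetStateTimer.offset(Duration.standardHours(1)).setRelative();
keyPreservedState.write(true);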
In addition to the question above, I'm still not clear on why the checkpoints of my Flink job keep growing over time; after about 7 days running, the checkpoint size has still not reached a plateau.
I'm using Flink 1.10 at the moment, with the FsStateBackend, as my job cannot afford the latency cost of using RocksDB.
See how the checkpoint size evolves over 7 days:
Let's say I have this TTL configuration for the state in all my stateful operators, with an expiration of one hour in most cases (more in some, and a day in one case):
public static final StateTtlConfig ttlConfig = StateTtlConfig.newBuilder(Time.hours(1))
.setUpdateType(StateTtlConfig.UpdateType.OnCreateAndWrite)
.setStateVisibility(StateTtlConfig.StateVisibility.NeverReturnExpired)
.cleanupFullSnapshot().build();
My understanding is that all the objects in the states will be cleaned up once they expire, and therefore the checkpoint size should shrink, given that we expect more or less the same amount of data every day.
On the other hand, our traffic follows a daily curve: more data comes in during certain hours of the day, but late at night traffic drops and all the expired objects in the states should be cleaned up, so the checkpoint size should shrink rather than stay flat until traffic picks up again.
Here is a code sample from one of the use cases:
DataStream<Event> stream = addSource(source);

KeyedStream<Event, String> keyedStream = stream
        .filter((FilterFunction<Event>) event -> /* apply filters here */ true)
        .name("Events filtered")
        .keyBy(k -> k.rType.equals("something") ? k.id1 : k.id2);

keyedStream.flatMap(new MyFlatMapFunction());
public class MyFlatMapFunction extends RichFlatMapFunction<Event, Event> {

    private final MapStateDescriptor<String, Event> descriptor =
            new MapStateDescriptor<>("prev_state", String.class, Event.class);
    private MapState<String, Event> previousState;

    @Override
    public void open(Configuration parameters) {
        /* ttlConfig described above */
        descriptor.enableTimeToLive(ttlConfig);
        previousState = getRuntimeContext().getMapState(descriptor);
    }

    @Override
    public void flatMap(Event event, Collector<Event> collector) throws Exception {
        final String key = event.rType.equals("something") ? event.id1 : event.id2;
        Event previous = previousState.get(key);
        if (previous != null) {
            /* something done here */
        } else {
            /* something done here */
            previousState.put(key, previous);
        }
        collector.collect(previous);
    }
}
More or less this is the structure of the use cases; some others use windows (time windows or session windows).
Questions:
What am I doing wrong here?
Are the states cleaned up when they expire, in this scenario and in the other use cases, which are similar?
What can help me fix the checkpoint size if something is working wrong?
Is this behaviour normal?
Kind regards!
In this stretch of code it appears that you are simply writing back the state that was already there, which only serves to reset the TTL timer. This might explain why the state isn't being expired.
Event previous = previousState.get(key);
if (previous != null) {
    /* something done here */
} else
    previousState.put(key, previous);
It also appears that you should be using ValueState rather than MapState. ValueState effectively provides a sharded key/value store, where the keys are the keys used to partition the stream in the keyBy. MapState gives you a nested map for each key, rather than a single value. But since you are using the same key inside the flatMap that you used to key the stream originally, key-partitioned ValueState would appear to be all that you need.
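For illustration, here is a hedged sketch of that flatMap rewritten with key-partitioned ValueState; the branch bodies are placeholders, since the originals were elided, and only the first event for a key is written so that the TTL clock isn't reset on every access:

public class MyFlatMapFunction extends RichFlatMapFunction<Event, Event> {

    private transient ValueState<Event> previousState;

    @Override
    public void open(Configuration parameters) {
        ValueStateDescriptor<Event> descriptor =
                new ValueStateDescriptor<>("prev_state", Event.class);
        descriptor.enableTimeToLive(ttlConfig); // the ttlConfig described above
        previousState = getRuntimeContext().getState(descriptor);
    }

    @Override
    public void flatMap(Event event, Collector<Event> collector) throws Exception {
        // The stream is already keyed by id1/id2, so no explicit map key is needed.
        Event previous = previousState.value();
        if (previous != null) {
            /* something done here */
        } else {
            // Only write when there was no previous value, so the TTL
            // timer is not reset on every incoming event.
            previousState.update(event);
        }
        collector.collect(event);
    }
}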
I have the following scenario: suppose there are 20 sensors which are sending me streaming feeds. I apply a keyBy (sensorID) against the stream and perform some operations such as averaging. This is implemented and running well (using the Flink Java API).
Initially all goes well and all the sensors send me their feeds. After a certain time, a couple of sensors may start misbehaving and I start getting irregular feeds from them, e.g. I receive feeds from 18 sensors, but 2 don't send me anything for long durations.
We can assume that I already know the fixed list of sensorIds (possibly hard-coded, or in a database). How do I identify which two are not sending a feed? Where can I get the list of keys to compare against the list in the database?
I want to raise an alarm if I don't get a feed for some duration (e.g. 2 mins, 5 mins, 10 mins, with increasing priority).
Has anyone implemented such a scenario using Flink streaming / patterns? Any suggestions, please.
You could technically use a ProcessFunction and timers.
You could simply register a timer for each key and reset it whenever you receive data. If you schedule the timer to run 5 minutes of processing time in the future, then if you haven't received data in the meantime the onTimer callback will fire, from which you can emit an alert. You could also re-register timers for already-fired alerts, to allow emitting alerts with increasing severity.
Note that this only works if, initially, all sensors are working correctly; specifically, it will only emit alerts for keys that have been seen at least once. But from your description it seems that it would solve your problem.
I just happen to have an example of this pattern lying around. It'll need some adjustment to fit your use case, but should get you started.
public class TimeoutFunction extends KeyedProcessFunction<String, Event, String> {

    private ValueState<Long> lastModifiedState;
    static final int TIMEOUT = 2 * 60 * 1000; // 2 minutes

    @Override
    public void open(Configuration parameters) throws Exception {
        // register our state with the state backend
        lastModifiedState = getRuntimeContext()
                .getState(new ValueStateDescriptor<>("myState", Long.class));
    }

    @Override
    public void processElement(Event event, Context ctx, Collector<String> out) throws Exception {
        // update our state and timer
        Long current = lastModifiedState.value();
        if (current != null) {
            ctx.timerService().deleteEventTimeTimer(current + TIMEOUT);
        }
        current = (current == null) ? event.timestamp() : Math.max(current, event.timestamp());
        lastModifiedState.update(current);
        ctx.timerService().registerEventTimeTimer(current + TIMEOUT);
    }

    @Override
    public void onTimer(long timestamp, OnTimerContext ctx, Collector<String> out) throws Exception {
        // emit an alert for the key whose timer fired
        String deviceId = ctx.getCurrentKey();
        out.collect(deviceId);
    }
}
This assumes a main program that does something like this:
DataStream<String> result = stream
    .assignTimestampsAndWatermarks(new MyBoundedOutOfOrdernessAssigner(...))
    .keyBy(e -> e.deviceId)
    .process(new TimeoutFunction());
As @Dominik said, this only emits alerts for keys that have been seen at least once. You could fix that by introducing a secondary source of events that creates an artificial event for every source that should exist, and union that stream with the primary source.
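For example, here is a rough sketch of that union approach; PerDeviceHeartbeatSource and knownDeviceIds are hypothetical names for a source that periodically emits one artificial event per expected device:

// Hypothetical heartbeat source: emits one artificial Event per known deviceId
// at some interval, so every expected key is seen at least once.
DataStream<Event> heartbeats = env.addSource(new PerDeviceHeartbeatSource(knownDeviceIds));

DataStream<String> result = stream
    .union(heartbeats)
    .assignTimestampsAndWatermarks(new MyBoundedOutOfOrdernessAssigner(...))
    .keyBy(e -> e.deviceId)
    .process(new TimeoutFunction());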
The pattern is very clear to me now. I've implemented the solution and it works like a charm.
If anyone needs the code, I'll be happy to share it.
Unfortunately, the original Ververica training has changed and now redirects to another page, which means I can no longer review the intro to this example. I did find some other examples, but I failed to find this specific one. Here is something I have been struggling with recently.
The core part of the code is the following snippet.
The first method, which handles the ride stream:
@Override
public void processElement1(TaxiRide ride, Context context, Collector<Tuple2<TaxiRide, TaxiFare>> out) throws Exception {
    TaxiFare fare = fareState.value();
    TimerService service = context.timerService();
    System.out.println("ride time service current watermark ===> " + service.currentWatermark() + "; timestamp ===>" + context.timestamp());
    System.out.println("ride state ===> " + fare);
    if (fare != null) {
        System.out.println("fare is not null ===>" + fare.rideId);
        fareState.clear();
        context.timerService().deleteEventTimeTimer(fare.getEventTime());
        out.collect(new Tuple2(ride, fare));
    } else {
        System.out.println("update ride state ===> " + ride.rideId + "===>" + context.timestamp());
        rideState.update(ride);
        System.out.println(rideState.value());
        // as soon as the watermark arrives, we can stop waiting for the corresponding fare
        context.timerService().registerEventTimeTimer(ride.getEventTime());
    }
}
The second method, which handles the fare stream:
@Override
public void processElement2(TaxiFare fare, Context context, Collector<Tuple2<TaxiRide, TaxiFare>> out) throws Exception {
    TimerService service = context.timerService();
    System.out.println("fare time service current watermark ===> " + service.currentWatermark() + "; timestamp ===>" + context.timestamp());
    TaxiRide ride = rideState.value();
    System.out.println("fare state ===> " + ride);
    if (ride != null) {
        System.out.println("ride is not null ===> " + ride.rideId);
        rideState.clear();
        context.timerService().deleteEventTimeTimer(ride.getEventTime());
        out.collect(new Tuple2(ride, fare));
    } else {
        System.out.println("update fare state ===> " + fare.rideId + "===>" + context.timestamp());
        fareState.update(fare);
        System.out.println(fareState.value() + "===>" + fareState.value().getEventTime());
        // as soon as the watermark arrives, we can stop waiting for the corresponding ride
        context.timerService().registerEventTimeTimer(fare.getEventTime());
    }
}
processElement1 is obviously for the TaxiRide stream and processElement2 for the TaxiFare stream.
The first issue is that the job runs processElement2 for a while before ever executing processElement1, and I haven't found the reason yet. Here is the printed output:
fare time service current watermark ===> -9223372036854775808; timestamp ===>1356998400000
fare time service current watermark ===> -9223372036854775808; timestamp ===>1356998400000
fare state ===> null
fare state ===> null
update fare state ===> 26===>1356998400000
update fare state ===> 58===>1356998400000
58,2013000058,2013000058,2013-01-01 00:00:00,CRD,2.0,0.0,27.0===>1356998400000
26,2013000026,2013000026,2013-01-01 00:00:00,CRD,2.0,0.0,12.5===>1356998400000
fare time service current watermark ===> -9223372036854775808; timestamp ===>1356998400000
fare state ===> null
update fare state ===> 9===>1356998400000
fare time service current watermark ===> -9223372036854775808; timestamp ===>1356998400000
fare state ===> null
update fare state ===> 47===>1356998400000
9,2013000009,2013000009,2013-01-01 00:00:00,CRD,1.0,0.0,6.0===>1356998400000
47,2013000047,2013000047,2013-01-01 00:00:00,CRD,0.9,0.0,5.9===>1356998400000
fare time service current watermark ===> -9223372036854775808; timestamp ===>1356998400000
fare state ===> null
update fare state ===> 54===>1356998400000
fare time service current watermark ===> -9223372036854775808; timestamp ===>1356998400000
54,2013000054,2013000054,2013-01-01 00:00:00,CSH,0.0,0.0,31.0===>1356998400000
The second issue: since ValueState holds a single value rather than a list of many values, each call to processElement2 where ride is null falls into the else branch and calls fareState.update(), which changes the value held in the ValueState. From my perspective, that means the previous value in the ValueState gets overwritten, right? This is my biggest puzzle.
Thanks for answering; I do appreciate your help!
The new tutorials on state and connected streams should help you with your questions. But briefly:
You have no control over the order in which the processElement1 and processElement2 callbacks will be called. These two input streams are racing against each other, and the Flink runtime will do what it wants to regarding consuming events from one stream or the other. In cases where timing and/or ordering matter, you may find it necessary to buffer events in managed Flink state until your application is ready to process them.
ValueState is a kind of keyed state, which means that whenever the state is accessed or updated, the entry in the state backend for the key in context is read or written. The "key in context" is the key for the stream element being processed (in the case of a processElement callback), or for the key that created the timer (in the case of an onTimer callback).
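To make that concrete, here is a small illustrative sketch (the class and state names are made up) showing that a single ValueState field holds an independent entry per key:

public class LastFarePerRide extends KeyedProcessFunction<Long, TaxiFare, String> {

    private transient ValueState<TaxiFare> fareState;

    @Override
    public void open(Configuration parameters) {
        fareState = getRuntimeContext().getState(
                new ValueStateDescriptor<>("saved fare", TaxiFare.class));
    }

    @Override
    public void processElement(TaxiFare fare, Context ctx, Collector<String> out) throws Exception {
        // This read and update only touch the entry for ctx.getCurrentKey()
        // (the key of this fare); every other key keeps its own value.
        TaxiFare previous = fareState.value();
        fareState.update(fare);
        out.collect("key " + ctx.getCurrentKey() + " previously held: " + previous);
    }
}

So fareState.update(fare) does not overwrite some single global value; it only replaces the entry for the key currently being processed.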
Also, keep in mind that in this exercise, there is at most one TaxiRide and one TaxiFare for each key.
The reference solutions for this exercise illustrate one way of thinking about how to manage state that might otherwise leak, but this is not a situation where there is one, obviously correct answer. The purpose of this exercise is to stimulate some thinking about how to work with state and timers, and to bring some of the issues involved to the surface.
What might be our goals for a good solution? It should
produce correct results
not leak state
be easy to understand
have good performance
Now let's examine the proposed solution with these goals in mind. We find this code in processElement1 (and by the way, processElement2 is the same, just with the roles reversed between the ride and fare):
public void processElement1(TaxiRide ride, Context context, Collector<Tuple2<TaxiRide, TaxiFare>> out) throws Exception {
    TaxiFare fare = fareState.value();
    if (fare != null) {
        fareState.clear();
        context.timerService().deleteEventTimeTimer(fare.getEventTime());
        out.collect(new Tuple2(ride, fare));
    } else {
        rideState.update(ride);
        // as soon as the watermark arrives, we can stop waiting for the corresponding fare
        context.timerService().registerEventTimeTimer(ride.getEventTime());
    }
}
This means that
whenever an event arrives that does not complete a pair, we store it in state and create a timer
whenever an event arrives that does complete a pair, we clear the state and delete the timer for the matching event (which was stored earlier)
So it's clear that nothing can leak if both events arrive. But what if one is missing?
In that case the timer will fire at some point, and run this code, which will clearly clean up any state that might exist:
public void onTimer(long timestamp, OnTimerContext ctx, Collector<Tuple2<TaxiRide, TaxiFare>> out) throws Exception {
    if (fareState.value() != null) {
        ctx.output(unmatchedFares, fareState.value());
        fareState.clear();
    }
    if (rideState.value() != null) {
        ctx.output(unmatchedRides, rideState.value());
        rideState.clear();
    }
}
Ok, but how did we decide how long to wait? Is it enough to wait until ride.getEventTime()?
The effect of setting an event time timer for ride.getEventTime() is to wait until any out-of-orderness in the ride and fare streams has been resolved. All earlier ride and fare events will have arrived by the time the watermark reaches ride.getEventTime(), assuming the watermarking is perfect.
In these exercises, the watermarking is, in fact, perfect -- there can be no late events. But in a real-world setting, you should expect some late events, and we should expect that our implementation behaves correctly in this situation. What this reference solution will do is this:
one of the events in a matching pair will arrive first, and create a timer to arrange for its eventual deletion
that timer will fire, and the event will be cleared
the matching event arrives late, and creates another timer, in this case for a time that has already passed
the next arriving watermark triggers that timer, and the state is cleared
In other words, when an event is late, no state will leak, but the resulting join will not be produced. So, in cases where you want to still produce results despite late arriving data, you should create timers that will accommodate some lateness by retaining the necessary state for some additional period of time, e.g.,
context.timerService().registerEventTimeTimer(ride.getEventTime() + ALLOWED_LATENESS);
It's not a good idea to try to accommodate arbitrarily late events, because doing so requires keeping some state for each late event indefinitely.
What about using processing time timers instead?
Sure, that will work, but it might be more awkward to test.
Why not use State Time-To-Live instead?
That's a fine idea. In general you may want to think in terms of using State TTL for GDPR compliance (for example), and use timers to implement business logic.
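As a sketch of that alternative (the one-hour duration and descriptor names are illustrative, not part of the reference solution), the ride state could be given a TTL instead of an explicit cleanup timer:

// Illustrative: let the state backend drop unmatched rides after one hour,
// rather than clearing them in onTimer.
StateTtlConfig ttlConfig = StateTtlConfig.newBuilder(Time.hours(1))
        .setUpdateType(StateTtlConfig.UpdateType.OnCreateAndWrite)
        .setStateVisibility(StateTtlConfig.StateVisibility.NeverReturnExpired)
        .build();

ValueStateDescriptor<TaxiRide> rideDescriptor =
        new ValueStateDescriptor<>("saved ride", TaxiRide.class);
rideDescriptor.enableTimeToLive(ttlConfig);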
I am building a streaming app using Flink 1.3.2 with Scala. My Flink app monitors a folder and streams new files into the pipeline. Each record in a file has an associated timestamp. I want to use this timestamp as the event time and build watermarks using AssignerWithPeriodicWatermarks[T]. My watermark generator looks like this:
class TimeLagWatermarkGenerator extends AssignerWithPeriodicWatermarks[Activity] {
  val maxTimeLag = 6 * 3600000L // 6 hours

  override def extractTimestamp(element: Activity, previousElementTimestamp: Long): Long = {
    val format = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ssXXX")
    format.parse(element.getTimestamp).getTime
  }

  override def getCurrentWatermark(): Watermark = {
    new Watermark(System.currentTimeMillis() - maxTimeLag)
  }
}
env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime)
env.getConfig.setAutoWatermarkInterval(10000L)

val stream = env.readFile(inputFormat, path, FileProcessingMode.PROCESS_CONTINUOUSLY, 100)

val activity = stream
  .assignTimestampsAndWatermarks(new TimeLagWatermarkGenerator())
  .map { line => new tuple.Tuple2(line.id, line.count) }
  .keyBy(0)
  .addSink(...)
However, since my folder contains some old data, I don't want to process those files. The timestamps of the records in the older files are more than 6 hours old, which should make them older than the watermark. However, when I start the job, I can still see some initial output being created. I was wondering how the initial value of the watermark is set: is it before the first interval or after? I may be misunderstanding something here, and I need some advice.
There are no operators in the pipeline you've shown that care about time -- no windowing, no ProcessFunction timers -- so every stream element will pass thru unimpeded and be processed. If your goal is to skip elements that are late you'll need to introduce something that (somehow) compares event timestamps to the current watermark.
You could do this by introducing a step between the keyBy and sink, like this:
...
.keyBy(0)
.process(new DropLateEvents())
.addSink(...)
public static class DropLateEvents extends ProcessFunction<...> {
    @Override
    public void processElement(... event, Context context, Collector<...> out) throws Exception {
        TimerService timerService = context.timerService();
        if (context.timestamp() > timerService.currentWatermark()) {
            out.collect(event);
        }
    }
}
Having done this, your question about the initial watermark becomes relevant. With periodic watermarks, the initial watermark is Long.MIN_VALUE, so nothing will be considered late until the first watermark is emitted, which will happen after 10 seconds of operation (given how you've set the auto-watermarking interval).
The relevant code is here if you want to see how periodic watermarks are generated in more detail.
If you want to avoid processing late elements during the first 10 seconds, you could forget about using event time and watermarking entirely, and simply modify the processElement method shown above to compare the event timestamps to System.currentTimeMillis() - maxTimeLag rather than to the current watermark. Another solution would be to use punctuated watermarking, and emit a watermark with the very first event.
Or even more simply, you could detect and drop late events in a flatMap or filter, since you are defining lateness relative to System.currentTimeMillis() rather than to the watermarks.
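Here is a minimal sketch of that simpler approach, reusing the Activity type and the timestamp format from the question (the parsing details are an assumption about what getTimestamp returns):

public static class DropOldEvents implements FilterFunction<Activity> {
    private static final long MAX_TIME_LAG = 6 * 3600000L; // 6 hours, as above

    @Override
    public boolean filter(Activity activity) throws Exception {
        // Parse the event's timestamp string using the same format as the assigner.
        SimpleDateFormat format = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ssXXX");
        long eventTime = format.parse(activity.getTimestamp()).getTime();
        // Keep only events newer than (now - maxTimeLag).
        return eventTime > System.currentTimeMillis() - MAX_TIME_LAG;
    }
}

This could then be applied with stream.filter(new DropOldEvents()) before the keyBy.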
My goal is to have a Flink streaming program that keeps the last N ids, where the id is extracted from an event. The sink is a Cassandra store so that the list of ids can be fetched at any time. It is important that Cassandra is updated immediately upon every event.
This can be implemented easily with mapWithState (see the code below). However, there is an important problem with this code: the state is keyed by userId, and some users might be active for a while and then never again. What I am worried about is that the state storage will grow forever.
How does one cleanup state for inactive keys?
case class MyEvent(userId: Int, id: String)

env
  .addSource(new FlinkKafkaConsumer010[MyEvent]("vips", new MyJsonDeserializationSchema(), kafkaConsumerProperties))
  .keyBy(_.userId)
  .mapWithState[(Int, Seq[String]), Seq[String]] { (in: MyEvent, currentIds: Option[Seq[String]]) =>
    val keepNIds = currentIds match {
      case None => Seq(in.id)
      case Some(cids) => (cids :+ in.id).takeRight(100)
    }
    ((in.userId, keepNIds), Some(keepNIds))
  }
  .addSink { in: (Int, Seq[String]) =>
    CassandraSink.appDatabase.idsTable.store(...)
  }
The growing state is an important and correct observation. This will definitely happen if your keyspace keeps evolving.
Flink 1.2.0 added the ProcessFunction, which addresses this problem. A ProcessFunction is similar to a FlatMapFunction but has access to timer services. You can register timers that invoke the onTimer() callback function when they expire, and the callback can be used to clean up the state.
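For example, here is a hedged Java sketch of that approach; the 100-id limit mirrors the question, while the 24-hour inactivity window, the names, and the assumption that MyEvent is available as a Java POJO with public userId and id fields are all illustrative:

public static class LastNIds extends KeyedProcessFunction<Integer, MyEvent, Tuple2<Integer, List<String>>> {

    private static final long INACTIVITY = 24 * 3600 * 1000L; // 24 hours (illustrative)

    private transient ValueState<List<String>> idsState;
    private transient ValueState<Long> timerState;

    @Override
    public void open(Configuration parameters) {
        idsState = getRuntimeContext().getState(
                new ValueStateDescriptor<>("ids", Types.LIST(Types.STRING)));
        timerState = getRuntimeContext().getState(
                new ValueStateDescriptor<>("timer", Long.class));
    }

    @Override
    public void processElement(MyEvent event, Context ctx, Collector<Tuple2<Integer, List<String>>> out) throws Exception {
        // Keep the last 100 ids for this user and emit them downstream.
        List<String> ids = idsState.value();
        if (ids == null) {
            ids = new ArrayList<>();
        }
        ids.add(event.id);
        if (ids.size() > 100) {
            ids = new ArrayList<>(ids.subList(ids.size() - 100, ids.size()));
        }
        idsState.update(ids);
        out.collect(Tuple2.of(event.userId, ids));

        // Slide the cleanup timer forward: delete the old one, register a new one.
        Long oldTimer = timerState.value();
        if (oldTimer != null) {
            ctx.timerService().deleteProcessingTimeTimer(oldTimer);
        }
        long newTimer = ctx.timerService().currentProcessingTime() + INACTIVITY;
        ctx.timerService().registerProcessingTimeTimer(newTimer);
        timerState.update(newTimer);
    }

    @Override
    public void onTimer(long timestamp, OnTimerContext ctx, Collector<Tuple2<Integer, List<String>>> out) {
        // The key has been inactive for the whole window; drop its state.
        idsState.clear();
        timerState.clear();
    }
}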