I am reading the book Stream Processing with Apache Flink, and it states: “As of version 0.10.0, Kafka supports message timestamps. When reading from Kafka version 0.10 or later, the consumer will automatically extract the message timestamp as an event-time timestamp if the application runs in event-time mode.”
So inside a processElement function, will a call to context.timestamp() return the Kafka message timestamp by default?
Could you please provide a simple example of how to implement AssignerWithPeriodicWatermarks/AssignerWithPunctuatedWatermarks that extracts timestamps (and builds watermarks) based on the consumed Kafka message timestamp?
If I am using TimeCharacteristic.ProcessingTime, would ctx.timestamp() return the processing time, and in that case would it be equivalent to context.timerService().currentProcessingTime()?
Thank you.
The Flink Kafka consumer takes care of this for you, and puts the timestamp where it needs to be. In Flink 1.11 you can simply rely on this, though you still need to take care of providing a WatermarkStrategy that specifies the out-of-orderness (or asserts that the timestamps are in order):
FlinkKafkaConsumer<String> myConsumer = new FlinkKafkaConsumer<>(...);
myConsumer.assignTimestampsAndWatermarks(
    WatermarkStrategy
        .forBoundedOutOfOrderness(Duration.ofSeconds(20)));
In earlier versions of Flink you had to provide an implementation of a timestamp assigner, which would look like this:
@Override
public long extractTimestamp(Long element, long previousElementTimestamp) {
    return previousElementTimestamp;
}
This version of the extractTimestamp method is passed the current value of the timestamp present in the StreamRecord as previousElementTimestamp, which in this case will be the timestamp put there by the Flink Kafka consumer.
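For reference, a complete periodic assigner in the pre-1.11 API might look like the following. This is a minimal sketch, assuming String-valued records and a 20-second bound on out-of-orderness; the class name KafkaTimestampAssigner is made up for illustration.

import org.apache.flink.streaming.api.functions.AssignerWithPeriodicWatermarks;
import org.apache.flink.streaming.api.watermark.Watermark;

public class KafkaTimestampAssigner implements AssignerWithPeriodicWatermarks<String> {

    private static final long MAX_OUT_OF_ORDERNESS = 20_000L; // 20 seconds (assumed bound)
    private long currentMaxTimestamp = Long.MIN_VALUE + MAX_OUT_OF_ORDERNESS;

    @Override
    public long extractTimestamp(String element, long previousElementTimestamp) {
        // previousElementTimestamp is the Kafka message timestamp set by the Flink Kafka consumer
        currentMaxTimestamp = Math.max(currentMaxTimestamp, previousElementTimestamp);
        return previousElementTimestamp;
    }

    @Override
    public Watermark getCurrentWatermark() {
        // called periodically; trails the largest timestamp seen by the allowed out-of-orderness
        return new Watermark(currentMaxTimestamp - MAX_OUT_OF_ORDERNESS);
    }
}

It would be attached to the consumer with myConsumer.assignTimestampsAndWatermarks(new KafkaTimestampAssigner());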
Flink 1.11 docs
Flink 1.10 docs
As for what is returned by ctx.timestamp() when using TimeCharacteristic.ProcessingTime: this method returns null in that case. (Semantically, yes, it is as though the timestamp is the current processing time, but that's not how it's implemented.)
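A tiny sketch of checking this inside a process function (MyEvent is a placeholder type, assuming a KeyedProcessFunction):

@Override
public void processElement(MyEvent event, Context ctx, Collector<MyEvent> out) {
    Long eventTime = ctx.timestamp();                                  // null in processing-time mode
    long processingTime = ctx.timerService().currentProcessingTime();  // always available
    out.collect(event);
}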
Related
I have a simple Flink stream processing application (Flink version 1.13). The Flink app reads from Kafka, does stateful processing of the record, then writes the result back to Kafka.
After reading from the Kafka topic, I chose to use reinterpretAsKeyedStream() and not keyBy() to avoid a shuffle, since the records are already partitioned in Kafka. The key used to partition in Kafka is a String field of the record (using the default Kafka partitioner). The Kafka topic has 24 partitions.
The mapping class is defined as follows. It keeps track of the state of the record.
public class EnvelopeMapper extends
KeyedProcessFunction<String, Envelope, Envelope> {
...
}
The processing of the record is as follows:
DataStream<Envelope> messageStream =
    env.addSource(kafkaSource);

DataStreamUtils.reinterpretAsKeyedStream(messageStream, Envelope::getId)
    .process(new EnvelopeMapper(parameters))
    .addSink(kafkaSink);
With parallelism of 1, the code runs fine. With parallelism greater than 1 (e.g. 4), I am running into the following error:
2022-06-12 21:06:30,720 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph [] - Source: Custom Source -> Map -> Flat Map -> KeyedProcess -> Map -> Sink: Unnamed (4/4) (7ca12ec043a45e1436f45d4b20976bd7) switched from RUNNING to FAILED on 100.101.231.222:44685-bd10d5 # 100.101.231.222 (dataPort=37839).
java.lang.IllegalArgumentException: KeyGroupRange{startKeyGroup=96, endKeyGroup=127} does not contain key group 85
Based on the stack trace, it seems the exception happens when the EnvelopeMapper class validates that the record was sent to the right replica of the mapper object.
When reinterpretAsKeyedStream() is used, how are the records distributed among the different replicas of the EnvelopeMapper?
Thank you in advance,
Ahmed.
Update
After feedback from @David Anderson, I replaced reinterpretAsKeyedStream() with keyBy(). The processing of the record is now as follows:
DataStream<Envelope> messageStream =
    env.addSource(kafkaSource) // Line x
        .map(statelessMapper1)
        .flatMap(statelessMapper2);

messageStream.keyBy(Envelope::getId)
    .process(new EnvelopeMapper(parameters))
    .addSink(kafkaSink);
Is there any difference in performance if keyBy() is done right after reading from Kafka (marked with "Line x") vs. right before the stateful mapper (EnvelopeMapper)?
With
reinterpretAsKeyedStream(
DataStream<T> stream,
KeySelector<T, K> keySelector,
TypeInformation<K> typeInfo)
you are asserting that the records are already distributed exactly as they would be if you had instead used keyBy(keySelector). This will not normally be the case with records coming straight out of Kafka. Even if they are partitioned by key in Kafka, the Kafka partitions won't be correctly associated with Flink's key groups.
reinterpretAsKeyedStream is only straightforwardly useful in cases such as handling the output of a window or process function, where you know that the output records are key-partitioned in a particular way. Using it successfully with Kafka can be very difficult: you must either be very careful in how the data is written to Kafka in the first place, or do something tricky with the keySelector so that the key groups it computes line up with how the keys are mapped to Kafka partitions.
One case where this isn't difficult is if the data is written to Kafka by a Flink job running with the same configuration as the downstream job that is reading the data and using reinterpretAsKeyedStream.
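As a concrete illustration, here is a small sketch that uses Flink's KeyGroupRangeAssignment utility to show which key group and subtask Flink expects a given key to land on; the key value, maxParallelism, and parallelism below are made-up examples:

import org.apache.flink.runtime.state.KeyGroupRangeAssignment;

public class KeyGroupCheck {
    public static void main(String[] args) {
        int maxParallelism = 128;           // Flink's default maxParallelism
        int parallelism = 4;                // the job parallelism from the question
        String key = "some-envelope-id";    // hypothetical key value

        // The key group Flink computes for this key, and the subtask that owns it.
        int keyGroup = KeyGroupRangeAssignment.assignToKeyGroup(key, maxParallelism);
        int subtask = KeyGroupRangeAssignment.assignKeyToParallelOperator(key, maxParallelism, parallelism);

        // Kafka's default partitioner uses murmur2(serializedKey) % numPartitions,
        // which in general does not agree with the mapping above -- hence the
        // "KeyGroupRange ... does not contain key group ..." error.
        System.out.printf("key group = %d, expected subtask = %d%n", keyGroup, subtask);
    }
}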
I am new to Flink. I am replacing the Kafka Streams API with Flink, because Kafka Streams internally creates multiple internal topics, which adds overhead.
However, in the Flink job, all I am doing is
Dedupe the records in a given window (1 hour). (Window(TumblingEventTimeWindows(3600000), EventTimeTrigger, Job$$Lambda$1097/1241473750, PassThroughWindowFunction))
deDupedStream = deserializedStream
.keyBy(msg -> new StringBuilder()
.append("XXX").append("YYY"))
.timeWindow(Time.milliseconds(3600000)) // 1 hour
.reduce((event1, event2) -> {
event2.setEventTimeStamp(Math.max(event1.getEventTimeStamp(), event2.getEventTimeStamp()));
return event2;
})
.setParallelism(mapParallelism > 0 ? mapParallelism : defaultMapParallelism);
After deduping, I do another level of windowing and count the records before producing to the Kafka topic. (Window(TumblingEventTimeWindows(3600000), EventTimeTrigger, Job$$Lambda$1101/2132463744, PassThroughWindowFunction) -> Map)
SingleOutputStreamOperator<PlImaItemInterimMessage> countedStream = deDupedStream
.filter(event -> event.getXXX() != null)
.map(this::buildXXXObject)
.returns(XXXObject.class)
.setParallelism(deDupMapParallelism > 0 ? deDupMapParallelism : defaultDeDupMapParallelism)
.keyBy(itemInterimMsg -> String.valueOf("key1") + "key2" + "key3")
.timeWindow(Time.milliseconds(3600000))
.reduce((existingMsg, currentMsg) -> { // Aggregate
currentMsg.setCount(existingMsg.getCount() + currentMsg.getCount());
return currentMsg;
})
.setParallelism(deDupMapParallelism > 0 ? deDupMapParallelism : defaultDeDupMapParallelism);
countedStream.addSink(kafkaProducerSinkFunction);
With the above setup, my assumption is that the destination Kafka topic will get the aggregated results every 3600000 ms (1 hour). But the Grafana graph shows that results are emitted roughly every 30 minutes. I do not understand why, when the window is still a 1-hour range. Any suggestions?
Attached below is the Kafka destination topic emit range.
While I can't fully diagnose this without seeing more of the project, here are some points that you may have overlooked:
When the Flink Kafka producer is used in exactly once mode, it only commits its output when checkpointing. Consumers of your job's output, if set to read committed, will only see results when checkpoints complete. How often is your job checkpointing?
When the Flink Kafka producer is used in at least once mode, it can produce duplicated output. Is your job restarting at regular intervals?
Flink's event time window assigners use the timestamps in the stream record metadata to determine the timing of each event. These metadata timestamps are set when you call assignTimestampsAndWatermarks. Calling setEventTimeStamp in the window reduce function has no effect on these timestamps in the metadata.
The stream record metadata timestamps on events emitted by a time window are set to the end time of the window, and those are the timestamps considered by the window assigner of any subsequent window.
keyBy(msg -> new StringBuilder().append("XXX").append("YYY")) is partitioning the stream by a constant, and will assign every record to the same partition (see the sketch after these points for keying by actual event fields).
The second keyBy (right before the second window) is replacing the first keyBy (rather than imposing further partitioning).
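As a sketch of the last few points, keying by real event fields and assigning the event-time timestamps before the first window might look like this, assuming Flink 1.11+ with the WatermarkStrategy API. Event stands in for the record type, and getUserId()/getItemId() are hypothetical key fields; getEventTimeStamp() is taken from the code in the question.

import java.time.Duration;

import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.windowing.time.Time;

// Assign event-time timestamps and watermarks once, before windowing.
DataStream<Event> withTimestamps = deserializedStream
        .assignTimestampsAndWatermarks(
                WatermarkStrategy.<Event>forBoundedOutOfOrderness(Duration.ofMinutes(5))
                        .withTimestampAssigner((event, recordTs) -> event.getEventTimeStamp()));

// Key by actual fields of the event, not by a constant string.
DataStream<Event> deDupedStream = withTimestamps
        .keyBy(event -> event.getUserId() + "|" + event.getItemId())
        .timeWindow(Time.hours(1))
        .reduce((e1, e2) -> {
            e2.setEventTimeStamp(Math.max(e1.getEventTimeStamp(), e2.getEventTimeStamp()));
            return e2;
        });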
I'm reading from a Kafka cluster in a Flink streaming app. After getting the source stream, I want to aggregate events by a composite key and an event-time tumbling window, and then write the result to a table.
The problem is that after applying my aggregate function, which just counts the number of clicks by clientId, I can't find a way to get the key of each output record, since the API returns an instance of the accumulated result but not the corresponding key.
DataStream<Event> stream = environment.addSource(mySource);

stream.keyBy(new KeySelector<Event, Integer>() {
        @Override
        public Integer getKey(Event event) { return event.getClientId(); }
    })
    .window(TumblingEventTimeWindows.of(Time.minutes(1)))
    .aggregate(new MyAggregateFunction());
How do I get the key that I specified before? I did not inject the key of the input events into the accumulator, as I felt it wouldn't be nice.
Rather than
.aggregate(new MyAggregateFunction())
you can use
.aggregate(new MyAggregateFunction(), new MyProcessWindowFunction())
and in this case the process method of your ProcessWindowFunction will be passed the key, along with the pre-aggregated result of your AggregateFunction and a Context object with other potentially relevant info. See the section in the docs on ProcessWindowFunction with Incremental Aggregation for more details.
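A minimal sketch of what such a ProcessWindowFunction could look like, assuming the AggregateFunction pre-aggregates the clicks into a Long count and that a (clientId, count) tuple is the desired output (both assumptions, since MyAggregateFunction is not shown):

import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.functions.windowing.ProcessWindowFunction;
import org.apache.flink.streaming.api.windowing.windows.TimeWindow;
import org.apache.flink.util.Collector;

public class MyProcessWindowFunction
        extends ProcessWindowFunction<Long, Tuple2<Integer, Long>, Integer, TimeWindow> {

    @Override
    public void process(Integer clientId,              // the key selected in keyBy
                        Context context,               // gives access to the window, state, etc.
                        Iterable<Long> aggregates,     // exactly one element: the pre-aggregated count
                        Collector<Tuple2<Integer, Long>> out) {
        out.collect(Tuple2.of(clientId, aggregates.iterator().next()));
    }
}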
I am investigating the types of watermarks that can be inserted into the data stream.
While this may go outside of the purpose of watermarks, I'll ask it anyway.
Can you create a watermark that holds a timestamp and k/v pair(s) (this=that, that=this)?
Hence the watermark will hold {12DEC180500GMT,this=that, that=this}.
Or
{Timestamp, kvp1, kvp2, kvpN}
Is something like this possible? I have reviewed the user and API docs, but may have overlooked something.
No, the Watermark class in Flink
(found in
flink/flink-streaming/java/src/main/java/org/apache/flink/streaming/api/watermark/Watermark.java)
has only one instance variable besides MAX_WATERMARK, which is
/** The timestamp of the watermark in milliseconds. */
private final long timestamp;
So watermarks cannot carry any information besides a timestamp, which must be a long value.
Using Apache Flink I want to create a streaming window sorted by the timestamp that is stored in the Kafka event. According to the following article this is not implemented.
https://cwiki.apache.org/confluence/display/FLINK/Time+and+Order+in+Streams
However, the article is dated July 2015, and it is now almost a year later. Is this functionality implemented, and can somebody point me to any relevant documentation and/or an example?
Apache Flink supports stream windows based on event timestamps.
In Flink, this concept is called event-time.
In order to support event-time, you have to extract a timestamp (long value) from each event. In addition, you need to support so-called watermarks which are needed to deal with events with out-of-order timestamps.
Given a stream with extracted timestamps you can define a windowed sum as follows:
val stream: DataStream[(String, Int)] = ...
val windowCnt = stream
.keyBy(0) // partition stream on first field (String)
.timeWindow(Time.minutes(1)) // window in extracted timestamp by 1 minute
.sum(1) // sum the second field (Int)
Event-time and windows are explained in detail in the documentation (here and here) and in several blog posts (here, here, here, and here).
Sorting by timestamps is still not supported out of the box, but you can do windowing based on the timestamps in elements. We call this event-time windowing. Please have a look here: https://ci.apache.org/projects/flink/flink-docs-master/apis/streaming/windows.html.