Let's say Kafka messages contain the Flink window size configuration.
I want to read these messages from Kafka and create a global window in Flink.
Problem statement:
Can we handle the above scenario using a BroadcastStream?
Or is there any other approach that would support the above case?
Flink's window API does not support dynamically changing window sizes.
What you'll need to do is implement your own windowing using a process function, in this case a KeyedBroadcastProcessFunction, where the window configuration is broadcast.
You can examine the Flink training for an example of how to implement time windows with a KeyedProcessFunction (copied below):
public class PseudoWindow extends KeyedProcessFunction<String, KeyedDataPoint<Double>, KeyedDataPoint<Integer>> {

    // Keyed, managed state, with an entry for each window.
    // There is a separate MapState object for each sensor.
    private MapState<Long, Integer> countInWindow;

    boolean eventTimeProcessing;
    int durationMsec;

    /**
     * Create the KeyedProcessFunction.
     *
     * @param eventTime whether or not to use event time processing
     * @param durationMsec window length
     */
    public PseudoWindow(boolean eventTime, int durationMsec) {
        this.eventTimeProcessing = eventTime;
        this.durationMsec = durationMsec;
    }

    @Override
    public void open(Configuration config) {
        MapStateDescriptor<Long, Integer> countDesc =
            new MapStateDescriptor<>("countInWindow", Long.class, Integer.class);
        countInWindow = getRuntimeContext().getMapState(countDesc);
    }

    @Override
    public void processElement(
            KeyedDataPoint<Double> dataPoint,
            Context ctx,
            Collector<KeyedDataPoint<Integer>> out) throws Exception {

        long endOfWindow = setTimer(dataPoint, ctx.timerService());

        Integer count = countInWindow.get(endOfWindow);
        if (count == null) {
            count = 0;
        }
        count += 1;
        countInWindow.put(endOfWindow, count);
    }

    public long setTimer(KeyedDataPoint<Double> dataPoint, TimerService timerService) {
        long time;

        if (eventTimeProcessing) {
            time = dataPoint.getTimeStampMs();
        } else {
            time = System.currentTimeMillis();
        }
        long endOfWindow = (time - (time % durationMsec) + durationMsec - 1);

        if (eventTimeProcessing) {
            timerService.registerEventTimeTimer(endOfWindow);
        } else {
            timerService.registerProcessingTimeTimer(endOfWindow);
        }
        return endOfWindow;
    }

    @Override
    public void onTimer(long timestamp, OnTimerContext context, Collector<KeyedDataPoint<Integer>> out) throws Exception {
        // Get the timestamp for this timer and use it to look up the count for that window
        long ts = context.timestamp();
        KeyedDataPoint<Integer> result = new KeyedDataPoint<>(context.getCurrentKey(), ts, countInWindow.get(ts));
        out.collect(result);
        countInWindow.remove(timestamp);
    }
}
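To make the window size dynamic, the same logic can be lifted into a KeyedBroadcastProcessFunction. Here is a hedged sketch, not from the training: the WindowConfig type, the "windowConfig" descriptor name, and getDurationMsec() are hypothetical; the counting and timer logic mirrors the event-time version of the example above.

// Sketch only: the window duration arrives on a broadcast stream and is read
// from broadcast state instead of a constructor argument.
public class DynamicPseudoWindow extends
        KeyedBroadcastProcessFunction<String, KeyedDataPoint<Double>, WindowConfig, KeyedDataPoint<Integer>> {

    public static final MapStateDescriptor<String, Long> CONFIG_DESCRIPTOR =
        new MapStateDescriptor<>("windowConfig", String.class, Long.class);

    private MapState<Long, Integer> countInWindow;

    @Override
    public void open(Configuration config) {
        countInWindow = getRuntimeContext().getMapState(
            new MapStateDescriptor<>("countInWindow", Long.class, Integer.class));
    }

    @Override
    public void processBroadcastElement(WindowConfig config, Context ctx,
            Collector<KeyedDataPoint<Integer>> out) throws Exception {
        // Store the latest window size; every parallel instance receives it.
        ctx.getBroadcastState(CONFIG_DESCRIPTOR).put("durationMsec", config.getDurationMsec());
    }

    @Override
    public void processElement(KeyedDataPoint<Double> dataPoint, ReadOnlyContext ctx,
            Collector<KeyedDataPoint<Integer>> out) throws Exception {
        Long durationMsec = ctx.getBroadcastState(CONFIG_DESCRIPTOR).get("durationMsec");
        if (durationMsec == null) {
            return; // or buffer/apply a default until the config arrives
        }
        long time = dataPoint.getTimeStampMs();
        long endOfWindow = time - (time % durationMsec) + durationMsec - 1;
        ctx.timerService().registerEventTimeTimer(endOfWindow);

        Integer count = countInWindow.get(endOfWindow);
        countInWindow.put(endOfWindow, count == null ? 1 : count + 1);
    }

    // onTimer as in the example above
}

It would be wired up with something like keyedStream.connect(configStream.broadcast(DynamicPseudoWindow.CONFIG_DESCRIPTOR)).process(new DynamicPseudoWindow()). Note that windows already in flight when the size changes will still fire at the timers registered under the old size.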
Finally, after a month of research, I found the main cause.
The main cause was IP2Location. I am using the IP2Location Java library to look up IP address locations in the BIN files. At peak time, this causes a problem. At the very least, I can avoid the problem by passing the IP2Proxy.IOModes.IP2PROXY_MEMORY_MAPPED parameter before reading the BIN files.
I also found that a few state objects don't conform to the POJO standard, which causes high load.
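For context, Flink only uses its fast POJO serializer when a type follows the POJO rules; anything else falls back to Kryo, which is much slower under load. A minimal sketch of a compliant state type (the field names are just for illustration):

// Flink treats this as a POJO: public standalone class, public no-arg
// constructor, and fields that are either public or have public getters/setters.
public class RequestState {
    public int totalRequest;
    public int currentScore;

    public RequestState() {} // required no-arg constructor

    public RequestState(int totalRequest, int currentScore) {
        this.totalRequest = totalRequest;
        this.currentScore = currentScore;
    }
}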
I am using Flink v1.13; there are 4 task managers (16 CPUs each) with 3800 tasks (the default application parallelism is 28).
In my application, one operator always has a high busy time (around 80%-90%).
If I restart the Flink application, the busy time decreases, but after 5-10 hours of running it increases again.
In Grafana, I can see that the busy time for ProcessStream increases. Here is the Prometheus query: avg((avg_over_time(flink_taskmanager_job_task_busyTimeMsPerSecond[1m]))) by (task_name)
There is no backpressure in the ProcessStream task. To calculate backpressure time, I am using: flink_taskmanager_job_task_backPressuredTimeMsPerSecond
But I couldn't find any reason for the high busy time.
Here is the code:
private void processOne(DataStream<KafkaObject> kafkaLog) {
    kafkaLog
        .filter(new FilterRequest())
        .name(FilterRequest.class.getSimpleName())
        .map(new MapToUserIdAndTimeStampMs())
        .name(MapToUserIdAndTimeStampMs.class.getSimpleName())
        .keyBy(UserObject::getUserId) // returns of type int
        .process(new ProcessStream())
        .name(ProcessStream.class.getSimpleName())
        .addSink(...);
}
// ...
// ...
public class ProcessStream extends KeyedProcessFunction<Integer, UserObject, Output> {

    private static final long STATE_TIMER = 5 * 60 * 1000; // 5 min in milliseconds
    private static final int AVERAGE_REQUEST = 74;
    private static final int STANDARD_DEVIATION = 32;
    private static final int MINIMUM_REQUEST = 50;
    private static final int THRESHOLD = 70;

    private transient ValueState<Tuple2<Integer, Integer>> state;

    @Override
    public void open(Configuration parameters) throws Exception {
        ValueStateDescriptor<Tuple2<Integer, Integer>> stateDescriptor = new ValueStateDescriptor<>(
            ProcessStream.class.getSimpleName(),
            TypeInformation.of(new TypeHint<Tuple2<Integer, Integer>>() {}));
        state = getRuntimeContext().getState(stateDescriptor);
    }

    @Override
    public void processElement(UserObject value, KeyedProcessFunction<Integer, UserObject, Output>.Context ctx, Collector<Output> out) throws Exception {
        Tuple2<Integer, Integer> stateValue = state.value();

        if (Objects.isNull(stateValue)) {
            stateValue = Tuple2.of(1, 0);
            ctx.timerService().registerProcessingTimeTimer(value.getTimestampMs() + STATE_TIMER);
        }

        int totalRequest = stateValue.f0;
        int currentScore = stateValue.f1;

        if (totalRequest >= MINIMUM_REQUEST && currentScore >= THRESHOLD) {
            out.collect({convert_to_output});
            state.clear();
        } else {
            stateValue.f0 = totalRequest + 1;
            stateValue.f1 = calculateNextScore(stateValue.f0);
            state.update(stateValue);
        }
    }

    private int calculateNextScore(int totalRequest) {
        return (totalRequest - AVERAGE_REQUEST) / STANDARD_DEVIATION;
    }

    @Override
    public void onTimer(long timestamp, KeyedProcessFunction<Integer, UserObject, Output>.OnTimerContext ctx, Collector<Output> out) throws Exception {
        state.clear();
    }
}
Since you're using a timestamp value from your incoming record (value.getTimestampMs() + STATE_TIMER), you want to be running with event time, and setting watermarks based on that incoming record's timestamp. Otherwise you have no idea when the timer actually fires, as the record's timestamp might be completely different from your current processing time.
This means you also want to use .registerEventTimeTimer().
Without these changes you might be filling up TM heap with uncleared state, which can lead to high CPU load.
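A minimal sketch of those changes, assuming the WatermarkStrategy API of Flink 1.12+ and reusing the names from the question; userStream is a hypothetical name for the stream feeding the keyBy, and the five-second out-of-orderness bound is an arbitrary choice:

// Upstream: derive watermarks from the record timestamps.
DataStream<UserObject> withTimestamps = userStream
    .assignTimestampsAndWatermarks(
        WatermarkStrategy.<UserObject>forBoundedOutOfOrderness(Duration.ofSeconds(5))
            .withTimestampAssigner((user, ts) -> user.getTimestampMs()));

// In processElement: register an event-time timer, so it fires relative
// to the record's own timeline rather than the wall clock.
ctx.timerService().registerEventTimeTimer(value.getTimestampMs() + STATE_TIMER);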
I am attempting to write a KeyedProcessFunction; the code looks like this:
DataStream<Tuple2<Long, Integer>> busyMachinesPerWindow = busyMachines
    // group by timestamp (window end)
    .keyBy(event -> event.getField(1))
    .process(new KeyedProcessFunction<Tuple1<Long>, Tuple3<Long, Long, Long>, Tuple2<Long, Integer>>() {

        private ValueState<Integer> state;

        @Override
        public void open(Configuration config) throws IOException {
            // initialize the state descriptors here
            state = getRuntimeContext().getState(new ValueStateDescriptor<>("machine-counts", Integer.class));
            if (state.value() == null) {
                state.update(0);
            }
        }

        @Override
        public void processElement(Tuple3<Long, Long, Long> inWindow, Context ctx, Collector<Tuple2<Long, Integer>> out) throws Exception {
            if (state.value() != null) {
                state.update(state.value() + 1);
            } else {
                state.update(1);
            }
            ctx.timerService().registerEventTimeTimer(inWindow.f1);
        }

        @Override
        public void onTimer(long timestamp, OnTimerContext ctx, Collector<Tuple2<Long, Integer>> out) throws Exception {
            int counter = state.value();
            state.clear();
            // we can now output the window and the machine count
            out.collect(new Tuple2<>(((Tuple1<Long>) ctx.getCurrentKey()).f0, counter));
        }
    });
However, this pops up an error saying it cannot derive the anonymous method. I don't see what the problem is with this code. Is there some type ambiguity that I am not handling right?
One problem with this code is that you are calling state.value() and state.update(0) in the open method. This is not allowed. These methods can only be used in processElement and in onTimer, because only then is there a specific event being processed whose key can be used to access/update the appropriate entry in the state backend.
An instance of a KeyedProcessFunction is multiplexed across all of the keys assigned to a given task slot. The open method is called just once, at a time when there is no specific key in the runtime context, so the state cannot be accessed or updated at this time.
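A sketch of the corrected shape: the descriptor is still created in open, but all value access is deferred to the keyed contexts, with the null case defaulted in processElement instead.

private ValueState<Integer> state;

@Override
public void open(Configuration config) {
    // Only obtain the state handle here; there is no current key yet.
    state = getRuntimeContext().getState(new ValueStateDescriptor<>("machine-counts", Integer.class));
}

@Override
public void processElement(Tuple3<Long, Long, Long> inWindow, Context ctx,
        Collector<Tuple2<Long, Integer>> out) throws Exception {
    Integer current = state.value(); // now scoped to the current key
    state.update(current == null ? 1 : current + 1);
    ctx.timerService().registerEventTimeTimer(inWindow.f1);
}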
Earlier I asked about a simple hello world example for Flink, and that gave me some good examples!
However, I would like to ask for a more ‘streaming’ example, where we generate an input value every second. Ideally this would be random, but even the same value each time would be fine.
The objective is to get a stream that ‘moves’ with no/minimal external touch.
Hence my question:
How do you show Flink actually streaming data, without external dependencies?
I found how to do this by generating data externally and writing to Kafka, or by listening to a public source, but I am trying to solve it with minimal dependencies (like starting with GenerateFlowFile in NiFi).
Here's an example. This was constructed as an example of how to make your sources and sinks pluggable. The idea is that in development you might use a random source and print the results, for tests you might use a hardwired list of input events and collect the results in a list, and in production you'd use the real sources and sinks.
Here's the job:
/*
 * Example showing how to make sources and sinks pluggable in your application code so
 * you can inject special test sources and test sinks in your tests.
 */
public class TestableStreamingJob {
    private SourceFunction<Long> source;
    private SinkFunction<Long> sink;

    public TestableStreamingJob(SourceFunction<Long> source, SinkFunction<Long> sink) {
        this.source = source;
        this.sink = sink;
    }

    public void execute() throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        DataStream<Long> longStream =
            env.addSource(source)
               .returns(TypeInformation.of(Long.class));

        longStream
            .map(new IncrementMapFunction())
            .addSink(sink);

        env.execute();
    }

    public static void main(String[] args) throws Exception {
        TestableStreamingJob job = new TestableStreamingJob(new RandomLongSource(), new PrintSinkFunction<>());
        job.execute();
    }

    // While it's tempting for something this simple, avoid using anonymous classes or lambdas
    // for any business logic you might want to unit test.
    // (Declared static so instances don't drag the enclosing job into serialization.)
    public static class IncrementMapFunction implements MapFunction<Long, Long> {
        @Override
        public Long map(Long record) throws Exception {
            return record + 1;
        }
    }
}
Here's the RandomLongSource:
public class RandomLongSource extends RichParallelSourceFunction<Long> {

    private volatile boolean cancelled = false;
    private Random random;

    @Override
    public void open(Configuration parameters) throws Exception {
        super.open(parameters);
        random = new Random();
    }

    @Override
    public void run(SourceContext<Long> ctx) throws Exception {
        while (!cancelled) {
            Long nextLong = random.nextLong();
            synchronized (ctx.getCheckpointLock()) {
                ctx.collect(nextLong);
            }
        }
    }

    @Override
    public void cancel() {
        cancelled = true;
    }
}
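To illustrate the pluggability, here is a hedged sketch of a test that swaps in a fixed source and a collecting sink. The FixedSource and ListSink classes are made up for this example; they are not part of the job above.

import org.apache.flink.streaming.api.functions.sink.SinkFunction;
import org.apache.flink.streaming.api.functions.source.SourceFunction;
import org.junit.Test;

import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

import static org.junit.Assert.assertTrue;

public class TestableStreamingJobTest {

    // Emits a fixed list of values, then finishes.
    static class FixedSource implements SourceFunction<Long> {
        @Override
        public void run(SourceContext<Long> ctx) {
            for (long v : new long[] {1L, 10L, 100L}) {
                ctx.collect(v);
            }
        }
        @Override
        public void cancel() {}
    }

    // Collects results into a static list (static so all parallel instances share it).
    static class ListSink implements SinkFunction<Long> {
        static final List<Long> VALUES = Collections.synchronizedList(new ArrayList<>());
        @Override
        public void invoke(Long value, Context ctx) {
            VALUES.add(value);
        }
    }

    @Test
    public void testIncrement() throws Exception {
        new TestableStreamingJob(new FixedSource(), new ListSink()).execute();
        assertTrue(ListSink.VALUES.containsAll(Arrays.asList(2L, 11L, 101L)));
    }
}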
I am trying to use event time in my Flink job, using BoundedOutOfOrdernessTimestampExtractor to extract timestamps and generate watermarks.
But some of my input Kafka topics have sparse streams: they can have no data for a long time, which means getResult in my AggregateFunction is never called at all. I can see data going into the add function.
I have set getEnv().getConfig().setAutoWatermarkInterval(1000L);
I tried
eventsWithKey
    .keyBy(entry -> (String) entry.get(key))
    .window(TumblingEventTimeWindows.of(Time.minutes(windowInMinutes)))
    .allowedLateness(WINDOW_LATENESS)
    .aggregate(new CountTask(basicMetricTags, windowInMinutes))
and also a session window:
eventsWithKey
    .keyBy(entry -> (String) entry.get(key))
    .window(EventTimeSessionWindows.withGap(Time.seconds(30)))
    .aggregate(new CountTask(basicMetricTags, windowInMinutes))
All the watermark metrics show "No Watermark".
How can I get Flink to ignore this "no watermark" situation?
FYI, this is commonly referred to as the "idle source" problem. This occurs because whenever a Flink operator has two or more inputs, its watermark is the minimum of the watermarks from its inputs. If one of those inputs stalls, its watermark no longer advances.
Note that Flink does not have per-key watermarking -- a given operator is typically multiplexed across events for many keys. So long as some events are flowing through a given task's input streams, its watermark will advance, and event time timers for idle keys will still fire. For this "idle source" problem to occur, a task has to have an input stream that has become completely idle.
If you can arrange for it, the best solution is to have your data sources include keepalive events. This will allow you to advance your watermarks with confidence, knowing that the source is simply idle, rather than, for example, offline.
If that's not possible, and if you have some sources that aren't idle, then you could put a rebalance() in front of the BoundedOutOfOrdernessTimestampExtractor (and before the keyBy), so that every instance continues to receive some events and can advance its watermark. This comes at the expense of an extra network shuffle.
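A rough sketch of that arrangement, reusing the names from the snippets above; MyBoundedOutOfOrdernessExtractor stands in for whatever extractor is already in use:

// Sketch: rebalance before assigning timestamps so every parallel instance
// of the extractor sees events and keeps its watermark advancing.
eventsWithKey
    .rebalance()
    .assignTimestampsAndWatermarks(new MyBoundedOutOfOrdernessExtractor()) // hypothetical extractor
    .keyBy(entry -> (String) entry.get(key))
    .window(TumblingEventTimeWindows.of(Time.minutes(windowInMinutes)))
    .aggregate(new CountTask(basicMetricTags, windowInMinutes));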
Perhaps the most commonly used solution is to use a watermark generator that detects idleness and artificially advances the watermark based on a processing time timer. ProcessingTimeTrailingBoundedOutOfOrdernessTimestampExtractor is an example of that.
A new watermark generator with idleness support has been introduced. Flink will exclude idle streams when calculating the minimum watermark, so the single partition that does have data will still be considered.
https://ci.apache.org/projects/flink/flink-docs-release-1.11/api/java/org/apache/flink/api/common/eventtime/WatermarksWithIdleness.html
I have the same issue: a source that may be inactive for a long time.
The solution below is based on WatermarksWithIdleness.
It is a standalone Flink job that demonstrates the concept.
package com.demo.playground.flink.sleepysrc;
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.eventtime.WatermarksWithIdleness;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.datastream.KeyedStream;
import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
import org.apache.flink.streaming.api.datastream.WindowedStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.source.SourceFunction;
import org.apache.flink.streaming.api.functions.windowing.ProcessWindowFunction;
import org.apache.flink.streaming.api.windowing.assigners.EventTimeSessionWindows;
import org.apache.flink.streaming.api.windowing.time.Time;
import org.apache.flink.streaming.api.windowing.windows.TimeWindow;
import org.apache.flink.util.Collector;
import java.time.Duration;
public class SleepyJob {

    public static void main(String[] args) throws Exception {
        final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        final EventGenerator eventGenerator = new EventGenerator();

        WatermarkStrategy<Event> strategy = WatermarkStrategy
            .<Event>forBoundedOutOfOrderness(Duration.ofSeconds(5))
            .withIdleness(Duration.ofSeconds(Constants.IDLE_TIME_SEC))
            .withTimestampAssigner((event, timestamp) -> event.timestamp);

        final DataStream<Event> events = env.addSource(eventGenerator).assignTimestampsAndWatermarks(strategy);

        KeyedStream<Event, String> eventStringKeyedStream = events.keyBy((Event event) -> event.id);
        WindowedStream<Event, String, TimeWindow> windowedStream =
            eventStringKeyedStream.window(EventTimeSessionWindows.withGap(Time.milliseconds(Constants.SESSION_WINDOW_GAP)));
        windowedStream.allowedLateness(Time.milliseconds(1000));

        SingleOutputStreamOperator<Object> result = windowedStream.process(new ProcessWindowFunction<Event, Object, String, TimeWindow>() {
            @Override
            public void process(String s, Context context, Iterable<Event> events, Collector<Object> collector) {
                int counter = 0;
                for (Event e : events) {
                    Utils.print(++counter + ") inside process: " + e);
                }
                Utils.print("--- Process Done ----");
            }
        });

        result.print();
        env.execute("Sleepy flink src demo");
    }

    private static class Event {
        public Event(String id) {
            this.timestamp = System.currentTimeMillis();
            this.eventData = "not_important_" + this.timestamp;
            this.id = id;
        }

        @Override
        public String toString() {
            return "Event{" +
                "id=" + id +
                ", timestamp=" + timestamp +
                ", eventData='" + eventData + '\'' +
                '}';
        }

        public String id;
        public long timestamp;
        public String eventData;
    }

    private static class EventGenerator implements SourceFunction<Event> {

        @Override
        public void run(SourceContext<Event> ctx) throws Exception {
            /**
             * Here is the sleepy src: after NUM_OF_EVENTS events are collected, the code goes to a SHORT_SLEEP_TIME sleep.
             * We would like to detect this inactivity and FIRE the window.
             */
            int counter = 0;
            while (running) {
                String id = Long.toString(System.currentTimeMillis());
                Utils.print(String.format("Generating %d events with id %s", 2 * Constants.NUM_OF_EVENTS, id));
                while (counter < Constants.NUM_OF_EVENTS) {
                    Event event = new Event(id);
                    ctx.collect(event);
                    counter++;
                    Thread.sleep(Constants.VERY_SHORT_SLEEP_TIME);
                }

                // here we create a delay:
                // a time of inactivity where
                // we would like to FIRE the window
                Thread.sleep(Constants.SHORT_SLEEP_TIME);

                counter = 0;
                while (counter < Constants.NUM_OF_EVENTS) {
                    Event event = new Event(id);
                    ctx.collect(event);
                    counter++;
                    Thread.sleep(Constants.VERY_SHORT_SLEEP_TIME);
                }
                Thread.sleep(Constants.LONG_SLEEP_TIME);
            }
        }

        @Override
        public void cancel() {
            this.running = false;
        }

        private volatile boolean running = true;
    }

    private static final class Constants {
        public static final int VERY_SHORT_SLEEP_TIME = 300;
        public static final int SHORT_SLEEP_TIME = 8000;
        public static final int IDLE_TIME_SEC = 5;
        public static final int LONG_SLEEP_TIME = SHORT_SLEEP_TIME * 5;
        public static final long SESSION_WINDOW_GAP = 60 * 1000;
        public static final int NUM_OF_EVENTS = 4;
    }

    private static final class Utils {
        public static void print(Object obj) {
            System.out.println(new java.util.Date() + " > " + obj);
        }
    }
}
For others: if you're using Kafka, make sure there's data coming out of all your topics' partitions.
I know it sounds dumb, but in my case I had a single source and the problem still happened, because I was testing with very little data in a single Kafka topic (a single source) that had 10 partitions. The dataset was so small that some of the topic's partitions had nothing to emit, and although I had only one source (the one topic), Flink did not advance the watermark.
The moment I switched my source to a topic with a single partition, the watermark started to advance.
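On Flink 1.11+ you can also guard against this directly, by attaching an idleness-aware watermark strategy to the Kafka consumer so that empty partitions stop holding back the watermark. A hedged sketch; the Event type, its getTimestamp() accessor, and the topic/schema names are assumptions:

FlinkKafkaConsumer<Event> consumer =
    new FlinkKafkaConsumer<>("events", new EventDeserializationSchema(), properties);

// Per-partition watermarks: partitions with no data are marked idle after 30s
// and excluded from the minimum-watermark calculation.
consumer.assignTimestampsAndWatermarks(
    WatermarkStrategy.<Event>forBoundedOutOfOrderness(Duration.ofSeconds(5))
        .withTimestampAssigner((event, ts) -> event.getTimestamp())
        .withIdleness(Duration.ofSeconds(30)));

DataStream<Event> events = env.addSource(consumer);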
I have tried to migrate a simple task to Flink 1.0.0, but it fails with the following exception:
java.lang.RuntimeException: Record has Long.MIN_VALUE timestamp (= no timestamp marker). Is the time characteristic set to 'ProcessingTime', or did you forget to call 'DataStream.assignTimestampsAndWatermarks(...)'?
The code consists of two separate tasks connected via a Kafka topic, where one task is a simple message generator and the other is a simple message consumer which uses timeWindowAll to calculate the per-minute message arrival rate.
Similar code worked with version 0.10.2 without any problems, but now it looks like the system wrongly interprets some event timestamps as Long.MIN_VALUE, which causes the task to fail.
The question is: am I doing something wrong, or is this a bug that will be fixed in future releases?
The main task:
public class Test1_0_0 {

    // Max time lag between event time and current system time
    static final Time maxTimeLag = Time.of(3, TimeUnit.SECONDS);

    public static void main(String[] args) throws Exception {
        // set up the execution environment
        final StreamExecutionEnvironment env = StreamExecutionEnvironment
            .getExecutionEnvironment();

        // Setting Event Time usage
        env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);
        env.setBufferTimeout(1);

        // Properties initialization
        Properties properties = new Properties();
        properties.setProperty("bootstrap.servers", "localhost:9092");
        properties.setProperty("zookeeper.connect", "localhost:2181");
        properties.setProperty("group.id", "test");

        // Overwrites the default properties with those provided on the command line
        ParameterTool parameterTool = ParameterTool.fromArgs(args);
        for (Map.Entry<String, String> e : parameterTool.toMap().entrySet()) {
            properties.setProperty(e.getKey(), e.getValue());
        }
        //properties.setProperty("auto.offset.reset", "smallest");
        System.out.println("Properties: " + properties);

        DataStream<Message> stream = env
            .addSource(new FlinkKafkaConsumer09<Message>("test", new MessageSDSchema(), properties))
            .filter(message -> message != null);

        // The call to rebalance() causes data to be re-partitioned so that all machines receive messages
        // (for example, when the number of Kafka partitions is fewer than the number of Flink parallel instances).
        stream.rebalance()
            .assignTimestampsAndWatermarks(new MessageTimestampExtractor(maxTimeLag));

        // Counts messages
        stream.timeWindowAll(Time.minutes(1)).apply(new AllWindowFunction<Message, String, TimeWindow>() {
            @Override
            public void apply(TimeWindow timeWindow, Iterable<Message> values, Collector<String> collector) throws Exception {
                Integer count = 0;
                if (values.iterator().hasNext()) {
                    for (Message value : values) {
                        count++;
                    }
                    collector.collect("Arrived last minute: " + count);
                }
            }
        }).print();

        // execute program
        env.execute("Messages Counting");
    }
}
The timestamp extractor:
public class MessageTimestampExtractor implements AssignerWithPeriodicWatermarks<Message>, Serializable {
    private static final long serialVersionUID = 7526472295622776147L;

    // Maximum lag between the current processing time and the timestamp of an event
    long maxTimeLag = 0L;
    long currentWatermarkTimestamp = 0L;

    public MessageTimestampExtractor() {
    }

    public MessageTimestampExtractor(Time maxTimeLag) {
        this.maxTimeLag = maxTimeLag.toMilliseconds();
    }

    /**
     * Assigns a timestamp to an element, in milliseconds since the Epoch.
     *
     * <p>The method is passed the previously assigned timestamp of the element.
     * That previous timestamp may have been assigned from a previous assigner,
     * by ingestion time. If the element did not carry a timestamp before, this value is
     * {@code Long.MIN_VALUE}.
     *
     * @param message The element that the timestamp will be assigned to.
     * @param previousElementTimestamp The previous internal timestamp of the element,
     *                                 or a negative value, if no timestamp has been assigned, yet.
     * @return The new timestamp.
     */
    @Override
    public long extractTimestamp(Message message, long previousElementTimestamp) {
        long timestamp = message.getTimestamp();
        currentWatermarkTimestamp = Math.max(timestamp, currentWatermarkTimestamp);
        return timestamp;
    }

    /**
     * Returns the current watermark. This method is periodically called by the
     * system to retrieve the current watermark. The method may return null to
     * indicate that no new Watermark is available.
     *
     * <p>The returned watermark will be emitted only if it is non-null and larger
     * than the previously emitted watermark. If the current watermark is still
     * identical to the previous one, no progress in event time has happened since
     * the previous call to this method.
     *
     * <p>If this method returns a value that is smaller than the previously returned watermark,
     * then the implementation does not properly handle the event stream timestamps.
     * In that case, the returned watermark will not be emitted (to preserve the contract of
     * ascending watermarks), and the violation will be logged and registered in the metrics.
     *
     * <p>The interval in which this method is called and Watermarks are generated
     * depends on {@link ExecutionConfig#getAutoWatermarkInterval()}.
     *
     * @see org.apache.flink.streaming.api.watermark.Watermark
     * @see ExecutionConfig#getAutoWatermarkInterval()
     */
    @Override
    public Watermark getCurrentWatermark() {
        if (currentWatermarkTimestamp <= 0) {
            return new Watermark(Long.MIN_VALUE);
        }
        return new Watermark(currentWatermarkTimestamp - maxTimeLag);
    }

    public long getMaxTimeLag() {
        return maxTimeLag;
    }

    public void setMaxTimeLag(long maxTimeLag) {
        this.maxTimeLag = maxTimeLag;
    }
}
The problem is that calling assignTimestampsAndWatermarks returns a new DataStream which uses the timestamp extractor. Thus, you have to use the returned DataStream to perform the subsequent operations on it.
DataStream<Message> timestampStream = stream.rebalance()
    .assignTimestampsAndWatermarks(new MessageTimestampExtractor(maxTimeLag));

// Counts Strings
timestampStream.timeWindowAll(Time.minutes(1)).apply(new AllWindowFunction<Message, String, TimeWindow>() {
    @Override
    public void apply(TimeWindow timeWindow, Iterable<Message> values, Collector<String> collector) throws Exception {
        Integer count = 0;
        if (values.iterator().hasNext()) {
            for (Message value : values) {
                count++;
            }
            collector.collect("Arrived last minute: " + count);
        }
    }
}).print();