Why is data not processed in RichFlatMapFunction - apache-flink

In order to improve the performance of data processing, we store events in a map and do not process them until the event count reaches 100.
In the meantime, a timer is started in the open method, so the data is processed every 60 seconds.
This works with Flink 1.11.3.
After upgrading to Flink 1.13.0, I found that sometimes events were consumed from Kafka continuously but were not processed in the RichFlatMapFunction, which means data was missing.
After restarting the service it works well, but several hours later the same thing happens again.
Is there any known issue with this Flink version? Any suggestions are appreciated.
public class MyJob {
    public static void main(String[] args) throws Exception {
        ...
        DataStream<String> rawEventSource = env.addSource(flinkKafkaConsumer);
        ...
    }
}

public class MyMapFunction extends RichFlatMapFunction<String, String> implements Serializable {
    @Override
    public void open(Configuration parameters) {
        ...
        long periodTimeout = 60;
        pool.scheduleAtFixedRate(() -> {
            // processing data
        }, periodTimeout, periodTimeout, TimeUnit.SECONDS);
    }

    @Override
    public void flatMap(String message, Collector<String> out) {
        // store event to map
        // count events
        // when count = 100, start data processing
    }
}

You should avoid doing things with user threads and timers in Flink functions. The supported mechanism for this is to use a KeyedProcessFunction with processing time timers.
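For reference, here is a minimal sketch of that approach, assuming the stream has been keyed by some field of the event; the BufferingFunction class name, the state names, and the flush logic are illustrative, while the 100-element threshold and 60-second interval are taken from the question:

import org.apache.flink.api.common.state.ListState;
import org.apache.flink.api.common.state.ListStateDescriptor;
import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.KeyedProcessFunction;
import org.apache.flink.util.Collector;

// Sketch only: buffer elements in keyed state and flush either when 100 elements
// have accumulated or when a 60-second processing-time timer fires.
public class BufferingFunction extends KeyedProcessFunction<String, String, String> {

    private transient ListState<String> buffer;
    private transient ValueState<Integer> count;

    @Override
    public void open(Configuration parameters) {
        buffer = getRuntimeContext().getListState(
                new ListStateDescriptor<>("buffer", String.class));
        count = getRuntimeContext().getState(
                new ValueStateDescriptor<>("count", Integer.class));
    }

    @Override
    public void processElement(String value, Context ctx, Collector<String> out) throws Exception {
        buffer.add(value);
        int c = count.value() == null ? 0 : count.value();
        if (c == 0) {
            // first element for this key: schedule a flush 60 seconds from now
            ctx.timerService().registerProcessingTimeTimer(
                    ctx.timerService().currentProcessingTime() + 60_000L);
        }
        count.update(c + 1);
        if (c + 1 >= 100) {
            // threshold reached; a production version would also delete the pending timer
            flush(out);
        }
    }

    @Override
    public void onTimer(long timestamp, OnTimerContext ctx, Collector<String> out) throws Exception {
        flush(out);
    }

    private void flush(Collector<String> out) throws Exception {
        for (String s : buffer.get()) {
            out.collect(s); // the actual processing logic would go here
        }
        buffer.clear();
        count.clear();
    }
}

It would be wired in roughly as rawEventSource.keyBy(...).process(new BufferingFunction()), where the key selector is whatever grouping makes sense for the job.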

Related

Watermark with TumblingWindow in Apache Flink

I am trying to understand the dependency between windows and watermark generation in Apache Flink. I have a problem with the example below:
public static void main(String[] args) throws Exception {
    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
    env.getConfig().setAutoWatermarkInterval(10000);

    FlinkKafkaConsumer<String> kafkaSource =
            new FlinkKafkaConsumer<String>("watermarkFlink", new SimpleStringSchema(), props);
    DataStream<String> orderStream = env.addSource(kafkaSource);

    DataStream<Order> dataStream = orderStream.map(str -> {
        String[] order = str.split(",");
        return new Order(order[0], Long.parseLong(order[1]), null);
    });

    WatermarkStrategy<Order> orderWatermarkStrategy =
            CustomWatermarkStrategy.<Order>forBoundedOutOfOrderness(Duration.ofSeconds(1))
                    .withTimestampAssigner((element, timestamp) -> element.getTimestamp());

    dataStream
            .assignTimestampsAndWatermarks(orderWatermarkStrategy)
            .map(new OrderKeyValue())
            .keyBy(new KeySelector<Tuple2<String, Integer>, String>() {
                @Override
                public String getKey(Tuple2<String, Integer> src) throws Exception {
                    return src.f0;
                }
            })
            .window(SlidingEventTimeWindows.of(Time.seconds(20), Time.seconds(5)))
            .sum(1)
            .print("Windows");

    dataStream.print("Data");
    env.execute();
}

public static class OrderKeyValue implements MapFunction<Order, Tuple2<String, Integer>> {
    @Override
    public Tuple2<String, Integer> map(Order order) {
        return new Tuple2<>(order.getCategory(), 1);
    }
}
The timestamp here is a long that we retrieve from the Kafka source; messages look like A,4 or C,8, where C is the category and 8 is the timestamp.
Whenever I send an event, the data stream is printed, but nothing is printed for the window (print("Windows")).
Also, if for example I receive an event A,12, a watermark is then generated (after 10 seconds), and afterwards C,2 arrives after the first window has been closed, will it be processed in the window or will it just be ignored?
There's a tutorial in the Flink documentation that should help clarify these concepts: https://nightlies.apache.org/flink/flink-docs-stable/docs/learn-flink/streaming_analytics/
But to summarize the situation:
If you have an event stream like (A,4) (C,8) (A,12), then those integers will be interpreted as milliseconds.
Your first window will wait for a watermark of 20000 before being triggered.
To generate a watermark that large, you'll need an event with a timestamp of at least 21000 (since the bounded out-of-orderness is set to 1 second).
And since you have configured the auto-watermarking interval to 10 seconds, your application will have to run that long before the first watermark will be generated. (I can't think of any situation where setting the watermarking interval this large is helpful.)
If an event arrives after its window has been closed, then it will be ignored (by default). You can configure allowed lateness to arrange for late events to trigger additional window firings.
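As a rough sketch of that last point, reusing the names from the question (the 5-second lateness value is just an example):

// Sketch only: allow events that are up to 5 seconds late (relative to the watermark)
// to trigger additional firings of a window that has already been evaluated.
dataStream
        .assignTimestampsAndWatermarks(orderWatermarkStrategy)
        .map(new OrderKeyValue())
        .keyBy(value -> value.f0)
        .window(SlidingEventTimeWindows.of(Time.seconds(20), Time.seconds(5)))
        .allowedLateness(Time.seconds(5))
        .sum(1)
        .print("Windows");

Events arriving later than the watermark plus the allowed lateness are still dropped; if you want to capture them, they can be redirected with sideOutputLateData on the windowed stream.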

Use Cases of Flink CheckpointedFunction

While going through the Flink official documentation, I came across CheckpointedFunction.
I am wondering why and when you would use this function. I am currently working on a stateful Flink job that relies heavily on ProcessFunction to save state in RocksDB. I am just wondering whether CheckpointedFunction is better than ProcessFunction.
CheckpointedFunction is for cases where you need to work with state that should be managed by Flink and included in checkpoints, but where you aren't working with a KeyedStream and so you cannot use keyed state like you would in a KeyedProcessFunction.
The most common use cases of CheckpointedFunction are in sources and sinks.
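For illustration, here is a minimal buffering sink along the lines of the BufferingSink example in the Flink documentation; the class name, the element type, and the threshold are placeholders:

import java.util.ArrayList;
import java.util.List;
import org.apache.flink.api.common.state.ListState;
import org.apache.flink.api.common.state.ListStateDescriptor;
import org.apache.flink.api.common.typeinfo.TypeHint;
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.runtime.state.FunctionInitializationContext;
import org.apache.flink.runtime.state.FunctionSnapshotContext;
import org.apache.flink.streaming.api.checkpoint.CheckpointedFunction;
import org.apache.flink.streaming.api.functions.sink.SinkFunction;

// Sketch of a sink that buffers elements and keeps the buffer in operator state
// so it survives failures; loosely based on the documentation example.
public class BufferingSink implements SinkFunction<Tuple2<String, Integer>>, CheckpointedFunction {

    private final int threshold;
    private transient ListState<Tuple2<String, Integer>> checkpointedState;
    private final List<Tuple2<String, Integer>> bufferedElements = new ArrayList<>();

    public BufferingSink(int threshold) {
        this.threshold = threshold;
    }

    @Override
    public void invoke(Tuple2<String, Integer> value, Context context) {
        bufferedElements.add(value);
        if (bufferedElements.size() >= threshold) {
            // send the buffered elements to the external system, then clear the buffer
            bufferedElements.clear();
        }
    }

    @Override
    public void snapshotState(FunctionSnapshotContext context) throws Exception {
        checkpointedState.clear();
        checkpointedState.addAll(bufferedElements);
    }

    @Override
    public void initializeState(FunctionInitializationContext context) throws Exception {
        checkpointedState = context.getOperatorStateStore().getListState(
                new ListStateDescriptor<>("buffered-elements",
                        TypeInformation.of(new TypeHint<Tuple2<String, Integer>>() {})));
        if (context.isRestored()) {
            for (Tuple2<String, Integer> element : checkpointedState.get()) {
                bufferedElements.add(element);
            }
        }
    }
}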
In addition to the answer of @David, I have another use case in which I don't use CheckpointedFunction with a source or sink. I use it in a ProcessFunction where I want to count (programmatically) how many times my job has restarted. I implement both ProcessFunction and CheckpointedFunction and update the ListState<Long> restarts when the job restarts. I use this state in integration tests to ensure that the job was restarted after a failure. I based my example on the Flink checkpoint example for sinks.
public class MyProcessFunction<V> extends ProcessFunction<V, V> implements CheckpointedFunction {
    ...
    private transient ListState<Long> restarts;

    @Override
    public void snapshotState(FunctionSnapshotContext context) throws Exception { ... }

    @Override
    public void initializeState(FunctionInitializationContext context) throws Exception {
        restarts = context.getOperatorStateStore()
                .getListState(new ListStateDescriptor<Long>("restarts", Long.class));
        if (context.isRestored()) {
            List<Long> restoreList = Lists.newArrayList(restarts.get());
            if (restoreList == null || restoreList.isEmpty()) {
                restarts.add(1L);
                System.out.println("restarts: 1");
            } else {
                Long max = Collections.max(restoreList);
                System.out.println("restarts: " + max);
                restarts.add(max + 1);
            }
        } else {
            System.out.println("restarts: never restored");
        }
    }

    @Override
    public void open(Configuration parameters) throws Exception { ... }

    @Override
    public void processElement(V value, Context ctx, Collector<V> out) throws Exception { ... }

    @Override
    public void onTimer(long timestamp, OnTimerContext ctx, Collector<V> out) throws Exception { ... }
}

How to get the Ingestion time of an event when the time characteristic is event-time?

I want to measure the time elapsed between an event entering the system and its processing being finished, and I think the ingestion time will help with that, but how do I get it?
You probably want to use latency tracking. Alternatively, you can add the processing time directly after the source in a chained process function (with Context->TimerService#currentProcessingTime()).
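For the latency-tracking option, a small sketch of how it can be switched on; the 1000 ms interval is arbitrary, and since latency markers add overhead this is usually reserved for debugging:

// Emit latency markers periodically so Flink reports per-operator latency metrics.
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.getConfig().setLatencyTrackingInterval(1000L);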
Based on the reply from David: to get the ingestion time we can chain a process function to the source.
The code below shows one way to get the ingestion time; since I also needed the difference between ingestion time and event time as a metric, I used a histogram in the metric group for that.
DataStream<EventDataMapping> text = env
        .fromSource(source, WatermarkStrategy.forBoundedOutOfOrderness(Duration.ofSeconds(5)), "Kafka Source")
        .process(new ProcessFunction<EventDataMapping, EventDataMapping>() {

            private transient DescriptiveStatisticsHistogram eventVsIngestionTimeLag;
            private static final int EVENT_TIME_LAG_WINDOW_SIZE = 10_000;

            @Override
            public void open(Configuration parameters) throws Exception {
                super.open(parameters);
                eventVsIngestionTimeLag = getRuntimeContext().getMetricGroup().histogram(
                        "eventVsIngestionTimeLag",
                        new DescriptiveStatisticsHistogram(EVENT_TIME_LAG_WINDOW_SIZE));
            }

            @Override
            public void processElement(EventDataMapping eventDataMapping, Context context,
                                       Collector<EventDataMapping> collector) throws Exception {
                LOG.info("process element event time " + context.timestamp()
                        + " current ingestTime " + context.timerService().currentProcessingTime());
                eventVsIngestionTimeLag.update(
                        context.timerService().currentProcessingTime() - context.timestamp());
            }
        }).returns(EventDataMapping.class);

Apache Flink - Counter value displayed but meter values not displayed

We are using Flink 1.8.0 and running it on EMR - Yarn and would like to measure the throughput.
Because our operators are chained, we have added meters and counters in our code, essentially in an async operator that makes API calls, with Kinesis as both source and sink. In the Application Master, i.e. Flink's web UI, we are able to see the values for the counters but not for the meters.
public class AsyncClass extends RichAsyncFunction<String, String> {

    private transient Counter counter;
    private transient Meter meter;

    @Override
    public void open(Configuration parameters) throws Exception {
        super.open(parameters);
        this.counter = getRuntimeContext()
                .getMetricGroup()
                .counter("myCounter");
        this.meter = getRuntimeContext()
                .getMetricGroup()
                .meter("myMeter", new DropwizardMeterWrapper(new com.codahale.metrics.Meter()));
    }

    @Override
    public void close() throws Exception {
        super.close();
        ExecutorUtils.gracefulShutdown(20000, TimeUnit.MILLISECONDS, executorService);
    }

    @Override
    public void asyncInvoke(String key, final ResultFuture<String> resultFuture) throws Exception {
        // ResultFuture expects a collection of results
        resultFuture.complete(Collections.singleton(key));
        this.meter.markEvent();
        this.counter.inc();
    }
}
To measure the complete throughput of the application, we obviously need the throughput of all the task managers together. Using meters, we are able to get the metrics for individual task managers. Is there any way to measure it at the operator level?
It turns out the meter displays whole-number values while the rate is measured in decimals. When my load was a constant 1 event per second, it was actually measured as 0.9xxx and hence displayed as 0 events per second.

Flink streaming job is not scaling as expected

We are in the middle of testing the scaling ability of Flink, but we found that scaling does not work: no matter whether we add more slots or increase the number of Task Managers, throughput does not improve. We would expect linear, or at least close-to-linear, scaling, but the results even show degradation. Any comments are appreciated.
Test details:
- VMware vSphere
- Just a simple pass-through test:
- auto-generated source of 3 million records, each 1 KB in size, parallelism = 1
- the source passes into the next map operator, which just returns the same record and sends a counter to statsD; parallelism is 2, 4, or 6 depending on the case
- 3 TMs, 6 slots in total (2 per TM); each JM/TM has 32 vCPUs and 100 GB memory
Results:
- 2 slots: 26 seconds, 3M / 26 = 115k TPS
- 4 slots: 23 seconds, 3M / 23 = 130k TPS
- 6 slots: 22 seconds, 3M / 22 = 136k TPS
As shown, there is almost no scaling. Any clue? Thanks.
You really should be using a RichParallelSourceFunction. If you care about making the records from different instances of the source distinct, you can get ahold of each instance's index from the RuntimeContext, which is available via the getRuntimeContext() method in the RichFunction interface.
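A rough sketch of what that could look like for the generator in this question; the class name, the record content, and the per-instance record count are placeholders:

import org.apache.flink.streaming.api.functions.source.RichParallelSourceFunction;

// Sketch: each parallel instance generates its own share of records and tags them
// with the subtask index, so records from different instances are distinct.
public class ParallelDataGen extends RichParallelSourceFunction<String> {

    private final int recordsPerInstance;
    private volatile boolean running = true;

    public ParallelDataGen(int recordsPerInstance) {
        this.recordsPerInstance = recordsPerInstance;
    }

    @Override
    public void run(SourceContext<String> ctx) throws Exception {
        int subtask = getRuntimeContext().getIndexOfThisSubtask();
        for (int i = 0; i < recordsPerInstance && running; i++) {
            String idx = String.format("%09d", i);
            ctx.collect("{\"subtask\":" + subtask + ",\"idx\":\"" + idx + "\"}");
        }
    }

    @Override
    public void cancel() {
        running = false;
    }
}

The source parallelism is then set on the job, e.g. env.addSource(new ParallelDataGen(recordsPerInstance)).setParallelism(n), instead of relying on a single non-parallel source.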
Also, Flink has a built-in statsd metrics reporter that you should be using instead of rolling your own. Moreover, numRecordsIn, numRecordsOut, numRecordsInPerSecond, and numRecordsOutPerSecond are already being computed for you, so no need to create this instrumentation yourself. You can also access these metrics via Flink's web interface, or the REST API.
As for why you might be experiencing poor scalability with the Kafka consumer, there are many things that could cause this. If you are using event time processing, then idle partitions could be holding things up (see https://issues.apache.org/jira/browse/FLINK-5479). If the stream is keyed, then data skew could be an issue. If you are connecting to an external database or service, then it could easily be a bottleneck. If checkpointing is misconfigured it could cause this. Or insufficient network capacity.
I would start to debug this by looking at some key metrics in the Flink web UI. Is the load well balanced across the sub-tasks, or is it skewed? You could turn on latency tracking and see if one of the kafka partitions is misbehaving (by inspecting the latency at the sink(s), which will be reported on a per-partition basis). And you could look for back pressure.
Please refer to the sample code:
public class passthru extends RichMapFunction<String, String> {
    public void open(Configuration configuration) throws Exception {
        ... ...
        stats = new NonBlockingStatsDClient();
    }

    public String map(String value) throws Exception {
        ... ...
        stats.increment();
        return value;
    }
}

public class datagen extends RichSourceFunction<String> {
    ... ...
    public void run(SourceContext<String> ctx) throws Exception {
        int i = 0;
        while (run) {
            String idx = String.format("%09d", i);
            ctx.collect("{\"<a 1kb json content with idx in certain json field>\"}");
            i++;
            if (i == loop)
                run = false;
        }
    }
    ... ...
}

public class Job {
    public static void main(String[] args) throws Exception {
        ... ...
        DataStream<String> stream = env.addSource(new datagen(loop)).rebalance();
        DataStream<String> convert = stream.map(new passthru(statsdUrl));
        env.execute("Flink");
    }
}
The reducing-state code:
dataStream.flatMap(xxx).keyBy(new KeySelector<xxx, AggregationKey>() {
    public AggregationKey getKey(rec r) throws Exception {
        ... ...
    }
}).process(new Aggr());

public class Aggr extends ProcessFunction<rec, rec> {
    private ReducingState<rec> store;

    public void open(Configuration parameters) throws Exception {
        store = getRuntimeContext().getReducingState(new ReducingStateDescriptor<>(
                "reduction store", new ReduceFunction<rec>() {
                    ... ...
                }));
    }

    public void processElement(rec r, Context ctx, Collector<rec> out) throws Exception {
        ... ...
        store.add(r);
    }
}
