Talking about counters with respect to StatsD: the way it works is that the app posts a counter increment (e.g. numOfRequests:1|c) to the StatsD daemon whenever it gets a request. The daemon has a flush interval; at each flush it pushes the aggregate of that counter for the period to an external backend and resets the counter to 0.
I'm trying to map this to Flink counters.
Flink counters only have inc and dec methods, so until reporting time comes the app can call inc or dec to change the value of a counter.
At reporting time the latest value of the counter is reported to the StatsD daemon, but the Flink counter value is never reset (I was not able to find any code that does this).
So should the Flink counter be reported to StatsD as a gauge value, or does Flink reset the counters?
Flink counters are essentially gauge values. The counters are never reset, so numRecordsIn/numRecordsOut and all other counter metrics keep increasing over the lifetime of a job. If you want to visualise the count over a duration, you either need to calculate the delta yourself in the report method and send that to the external backend, or use the backend's own capabilities to graph the delta.
We use Datadog and used the following to graph the delta over a duration:
diff(sum:numRecordsIn{$app_name,$env}.rollup(max))
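Alternatively, here is a minimal sketch of computing the delta yourself inside a reporter's report method. The DeltaTracker class and its wiring into a custom reporter are hypothetical; Counter.getCount() is part of Flink's metrics API.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import org.apache.flink.metrics.Counter;

// Hypothetical helper: remembers the last reported value per counter so the
// reporter can emit only the increase since the previous flush.
public class DeltaTracker {
    private final Map<String, Long> lastSeen = new ConcurrentHashMap<>();

    // Returns the increase since the previous report for this counter.
    public long delta(String name, Counter counter) {
        long current = counter.getCount();
        long previous = lastSeen.getOrDefault(name, 0L);
        lastSeen.put(name, current);
        return current - previous;
    }
}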
I'm using KDA with a Flink job which should analyse messages emitted by different IoT device sources. There is a Kinesis stream with 4 shards, each of which contains more or less the same amount of data (there are no hot shards). The Kinesis stream gets filled by AWS Greengrass Streammanager, which uses an increasing sequence number as the partition key. Each message contains a single value (something like temperature = 5).
With this setup the stream read by the Kinesis consumer in Flink is unordered, but I need to preserve the order of the messages. To do so I have written a small buffer function, which is more or less the logic from CepOperator, to buffer messages and restore the order. The stream is keyed by the id of a message; let's say a temperature message always has a unique id, so the stream is keyed by this id.
To create the respective watermarks I'm using the FlinkKinesisConsumer and register a BoundedOutOfOrdernessTimestampExtractor on it. If I use an out-of-orderness time of 10 seconds, everything works fine except that almost 50% of the messages arrive late, which is not the desired behaviour. But if I increase the time to 60 seconds, the iterator of the Kinesis stream falls significantly behind (growing linearly over time). The documentation of the Kinesis consumer says little about these settings. I have also tried to register a JobManagerWatermarkTracker, but it does not seem to change the behaviour.
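For reference, the registration described above looks roughly like this; TemperatureEvent, TemperatureEventSchema, and consumerConfig are hypothetical placeholders for my actual types and configuration:

// Sketch: Kinesis source with a bounded out-of-orderness timestamp extractor
// (classes from flink-connector-kinesis and the DataStream API).
FlinkKinesisConsumer<TemperatureEvent> consumer =
        new FlinkKinesisConsumer<>("sensor-stream", new TemperatureEventSchema(), consumerConfig);

// Tolerate events up to 60 seconds out of order before the watermark passes them.
consumer.setPeriodicWatermarkAssigner(
        new BoundedOutOfOrdernessTimestampExtractor<TemperatureEvent>(Time.seconds(60)) {
            @Override
            public long extractTimestamp(TemperatureEvent event) {
                return event.getTimestampMillis(); // event time in epoch millis
            }
        });

DataStream<TemperatureEvent> events = env.addSource(consumer);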
I do not understand why a higher out-of-orderness causes the iterator to fall increasingly behind, while a smaller setting drops a significant number of messages. What measures do I need to take to find the proper settings, or is my implementation wrong?
UPDATE:
While investigating the issue I have found that if the JobManagerWatermarkTracker isn't properly configured (I still don't understand how to configure it), the alignment to the global watermark stops subtasks from reading from the Kinesis stream, which causes the iterator to fall behind. I calculated a delta of how much "latency" a dropped event has and set this as the out-of-orderness (in this case 60 seconds). After deactivating the JobManagerWatermarkTracker, everything works as expected.
Furthermore, it seems that AWS Greengrass Streammanager isn't optimal for such use cases, as it distributes the load evenly across shards; with an increasing number of shards this isn't ideal, since one temperature datapoint might be spread across all shards of a stream. That introduces a lot of unnecessary latency. I would appreciate any input on how to configure the JobManagerWatermarkTracker.
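For what it's worth, the basic registration of the tracker (per the Kinesis connector documentation, if I read it correctly) is just the following; how to tune its synchronization for this workload is exactly what I'm unsure about:

// Sketch: attach a JobManagerWatermarkTracker so shard consumers align on a global watermark.
JobManagerWatermarkTracker watermarkTracker = new JobManagerWatermarkTracker("myKinesisSource");
consumer.setWatermarkTracker(watermarkTracker);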
I am using Flink CEP SQL with the Blink planner.
Here is my SQL:
select * from test_table match_recognize (
  partition by agent_id, room_id, call_type
  order by row_time -- processing time
  measures
    last(BF.create_time) as create_time,
    last(AF.connect_time) as connect_time
  one row per match
  after match SKIP PAST LAST ROW
  pattern (BF+ AF) WITHIN INTERVAL '10' SECOND
  define
    BF as BF.connect_time = 0,
    AF as AF.connect_time > 0
) as T;
test_table is a Kafka table.
I set table.exec.state.ttl=10000, ran my program, and kept sending messages.
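For context, a minimal sketch of how I set the TTL via the TableConfig (the EnvironmentSettings wiring is the standard Blink-planner setup; the value string is the one from above):

// Sketch: set table.exec.state.ttl on the Blink planner's table environment.
EnvironmentSettings settings =
        EnvironmentSettings.newInstance().useBlinkPlanner().inStreamingMode().build();
TableEnvironment tableEnv = TableEnvironment.create(settings);
tableEnv.getConfig().getConfiguration().setString("table.exec.state.ttl", "10000");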
Since I set both the state TTL and the CEP interval to 10 seconds, the state size should settle at a roughly fixed number 10 seconds after the job starts.
But in fact the state keeps growing for at least 15 minutes, and the JVM triggered two full GCs.
Are there any configurations I have missed?
You cannot use checkpoint sizes to estimate state size -- they are not related in any straightforward way. Checkpoints can include unpredictable amounts of in-flight, expired, or uncompacted data -- none of which would be counted as active state.
I'm afraid there isn't any good tooling available for measuring exactly how much state you actually have. But if you are using RocksDB, then you can enable these metrics:
state.backend.rocksdb.metrics.estimate-live-data-size
state.backend.rocksdb.metrics.estimate-num-keys
which will give you a reasonably accurate estimate (but you may pay a performance penalty for turning them on).
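For example, in flink-conf.yaml (both options are booleans that default to false):

# Enable RocksDB native metrics for estimating live state size (may cost performance).
state.backend.rocksdb.metrics.estimate-live-data-size: true
state.backend.rocksdb.metrics.estimate-num-keys: true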
As for your concern about CEP state -- you should be fine. Anytime you have a pattern that uses WITHIN, CEP should be able to clean the state automatically.
Do I need to set assignTimestampsAndWatermarks if I set my time characteristic to IngestionTime?
Say I set the time characteristic of the stream execution environment to ingestion time, as follows:
streamExecutionEnvironment.setStreamTimeCharacteristic(TimeCharacteristic.IngestionTime);
Do I need to call datastream.assignTimestampsAndWatermarks(AscendingTimestampExtractor)?
I thought datastream.assignTimestampsAndWatermarks is mandatory only if the time characteristic is event time. No? If not, I am wondering how I can set an AscendingTimestampExtractor in a distributed environment. Is there any way to produce a monotonically increasing long (for the AscendingTimestampExtractor) without any distributed locks?
No, there is no need to call assignTimestampsAndWatermarks when using ingestion time. With ingestion time, Flink assigns timestamps and watermarks automatically.
Also, there is never any need to worry about distributed locking when doing watermarking. Each local instance assigns watermarks locally, based on its knowledge of the local streams. For an AscendingTimestampExtractor it's enough that the timestamps are monotonically increasing in each instance.
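To illustrate the event-time case, here is a hedged sketch; Event and its getTimestampMillis() accessor are hypothetical placeholders. Each parallel instance runs its own extractor locally, so no coordination or locking is involved:

// Sketch: per-instance ascending-timestamp extraction for event time.
DataStream<Event> withTimestamps = stream.assignTimestampsAndWatermarks(
        new AscendingTimestampExtractor<Event>() {
            @Override
            public long extractAscendingTimestamp(Event element) {
                return element.getTimestampMillis();
            }
        });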
I have a DataStream and need to compute a window aggregation on it. When I perform a regular window aggregation, the network IO is very high.
So, I'd like to perform local pre-aggregation to decrease the network IO.
I wonder if it is possible to pre-aggregate locally on the task managers (i.e., before shuffling the records) and then perform the full aggregate. Is this possible with Flink's DataStream API?
My code is:
DataStream<String> dataIn = ....
dataIn
.map().filter().assignTimestampsAndWatermarks()
.keyBy().window().fold()
The current release of Flink (Flink 1.4.0, Dec 2017) does not feature built-in support for pre-aggregations. However, there are efforts on the way to add this for the next release (1.5.0), see FLINK-7561.
You can implement a pre-aggregation operation based on a ProcessFunction. The ProcessFunction could keep the pre-aggregates in a HashMap (of fixed size) in memory and register timers (event-time and processing-time) to periodically emit the pre-aggregates. The state (i.e., the content of the HashMap) should be persisted in managed operator state to prevent data loss in case of a failure. When setting the timers, you need to respect the window boundaries.
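A minimal sketch of the buffering idea, using a RichFlatMapFunction that flushes when the map fills up instead of a full ProcessFunction with timers; operator-state snapshotting is omitted, and Tuple2<key, count> is an assumed input shape:

import java.util.HashMap;
import java.util.Map;
import org.apache.flink.api.common.functions.RichFlatMapFunction;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.util.Collector;

// Buffers partial counts per key and emits them downstream once the map is full,
// reducing the number of records sent over the network before the keyBy shuffle.
public class PreAggregateFlatMap
        extends RichFlatMapFunction<Tuple2<String, Long>, Tuple2<String, Long>> {

    private static final int MAX_ENTRIES = 10_000;
    private transient Map<String, Long> pending;

    @Override
    public void open(Configuration parameters) {
        pending = new HashMap<>();
    }

    @Override
    public void flatMap(Tuple2<String, Long> value, Collector<Tuple2<String, Long>> out) {
        pending.merge(value.f0, value.f1, Long::sum);
        if (pending.size() >= MAX_ENTRIES) {
            for (Map.Entry<String, Long> e : pending.entrySet()) {
                out.collect(Tuple2.of(e.getKey(), e.getValue()));
            }
            pending.clear();
        }
    }
}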
Please note that FoldFunction has been deprecated and should be replaced by AggregateFunction.
I am currently writing an aggregation use case using Flink 1.0. As part of the use case, I need to get the count of APIs that were logged in the last 10 minutes.
This I can easily do using keyBy("api"), then applying a window of 10 minutes and a sum(count) operation.
But the problem is that my data might come out of order, so I need some way to get the count of APIs across the 10-minute window.
For example: if the same API log comes in 2 different windows, I should get a global count, i.e. 2 for it, and not two separate records displaying a count of 1 each per window.
I also don't want incremental counts, i.e. each record with the same key displayed many times with the count equal to the incremental value.
I want the record to be displayed once with a global count, something like updateStateByKey() in Spark.
Can we do that?
You should have a look at Flink's event-time feature, which produces consistent results for out-of-order streams. Event time means that Flink will process data depending on timestamps that are part of the events and not depending on the machine's wall-clock time.
If you use event time (with appropriate watermarks), Flink will automatically handle events that arrive out of order.
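A hedged sketch of what that could look like, written against a newer DataStream API than the Flink 1.0 mentioned in the question; ApiLog and its accessors are hypothetical placeholders. Each window then emits exactly one count per API, even for out-of-order records:

// Sketch: event-time tumbling windows with bounded out-of-orderness watermarks.
env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);

DataStream<Tuple2<String, Long>> counts = logs
        .assignTimestampsAndWatermarks(
                new BoundedOutOfOrdernessTimestampExtractor<ApiLog>(Time.minutes(1)) {
                    @Override
                    public long extractTimestamp(ApiLog log) {
                        return log.getEventTimeMillis();
                    }
                })
        .map(log -> Tuple2.of(log.getApi(), 1L))
        .returns(Types.TUPLE(Types.STRING, Types.LONG))
        .keyBy(t -> t.f0)
        .window(TumblingEventTimeWindows.of(Time.minutes(10)))
        .sum(1); // one record per API per window, containing the full count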