Flink message retries like Storm - apache-flink

I am trying to build a Flink job that would read data from a Kafka source, do a bunch of processing including a few REST calls, and then finally sink into another Kafka topic.
The problem I am trying to address is that of message retries. What if there are transient errors in the REST API? How can I do exponential backoff-based retries of these messages, the way Storm supports?
I have two approaches that I can think of:
Use TimerService, but then in case of failures the state will start to grow uncontrollably.
Write failed messages to a different Kafka topic and process them with a delay of sorts, but a problem can arise here if the sink itself is down for a few minutes.
Is there a better, more robust, and simpler way to achieve this?

I would use Flink's AsyncFunction to make the REST calls. If needed, it will backpressure the source(s) rather than use more than a configured amount of state. For retries, see AsyncFunction retries.
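For illustration, a minimal sketch of such an AsyncFunction, assuming a REST enrichment call made with Java 11's HttpClient (the class name, URL, timeout and capacity are placeholders, not anything prescribed by Flink):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Collections;
import java.util.concurrent.TimeUnit;

import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.datastream.AsyncDataStream;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.functions.async.ResultFuture;
import org.apache.flink.streaming.api.functions.async.RichAsyncFunction;

// Hypothetical enrichment: one REST call per incoming record.
public class RestEnrichment extends RichAsyncFunction<String, String> {

    private transient HttpClient httpClient;

    @Override
    public void open(Configuration parameters) {
        httpClient = HttpClient.newHttpClient();
    }

    @Override
    public void asyncInvoke(String input, ResultFuture<String> resultFuture) {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://example.com/enrich?key=" + input)) // placeholder URL
                .build();

        httpClient.sendAsync(request, HttpResponse.BodyHandlers.ofString())
                .whenComplete((response, error) -> {
                    if (error != null) {
                        // Failing the future restarts the job; alternatively complete
                        // with a fallback value, or retry here before giving up.
                        resultFuture.completeExceptionally(error);
                    } else {
                        resultFuture.complete(Collections.singleton(response.body()));
                    }
                });
    }
}

// Wiring it in; the capacity (100 here) bounds the number of in-flight requests,
// which is what keeps the operator's state bounded and backpressures the source:
// DataStream<String> enriched = AsyncDataStream.unorderedWait(
//         source, new RestEnrichment(), 30, TimeUnit.SECONDS, 100);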

Related

Canonical way of retrying in Flink operators

I have a couple of Flink jobs that receive data from a series of Kafka topics, do some aggregation, and publish the result into a Kafka topic.
The aggregation part is what gets somewhat difficult. I have to retrieve some information from several HTTP endpoints and put the responses together in a particular format. The problem is that some of those outbound HTTP calls time out now and then, so I need a way to retry them.
I was wondering if there is a canonical way to do such a task within Flink operators, without doing something entirely manual. If not, what could be a recommended approach?
In a bit more than a month you'll have Flink 1.16 available with retry support in AsyncIO:
https://nightlies.apache.org/flink/flink-docs-master/docs/dev/datastream/operators/asyncio/#retry-support
That is probably your best option. In the meantime, using AsyncIO, configuring it with long timeouts, and handling the retries yourself inside asyncInvoke may be an option.
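Once 1.16 is out, the built-in retry support should look roughly like the example on the linked page (a sketch; the attempt count, delay, and the HttpEnrichmentFunction name are placeholders):

import java.util.concurrent.TimeUnit;

import org.apache.flink.streaming.api.datastream.AsyncDataStream;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.functions.async.AsyncRetryStrategy;
import org.apache.flink.streaming.util.retryable.AsyncRetryStrategies;
import org.apache.flink.streaming.util.retryable.RetryPredicates;

// Retry up to 3 times with a fixed 100 ms delay whenever the async call
// throws an exception or returns an empty result.
AsyncRetryStrategy<String> retryStrategy =
        new AsyncRetryStrategies.FixedDelayRetryStrategyBuilder<String>(3, 100L)
                .ifResult(RetryPredicates.EMPTY_RESULT_PREDICATE)
                .ifException(RetryPredicates.HAS_EXCEPTION_PREDICATE)
                .build();

DataStream<String> results = AsyncDataStream.unorderedWaitWithRetry(
        source,                          // the input DataStream<String>
        new HttpEnrichmentFunction(),    // your AsyncFunction (placeholder name)
        30, TimeUnit.SECONDS,            // overall timeout, including retries
        100,                             // capacity of in-flight requests
        retryStrategy);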

Need advice on migrating from Flink DataStream Job to Flink Stateful Functions 3.1

I have a working Flink job built on the Flink DataStream API. I want to REWRITE the entire job based on Flink Stateful Functions 3.1.
The functions of my current Flink Job are:
Read message from Kafka
Each message is a slice of a data packet, e.g. (s for slice):
s-0, s-1 are for packet 0
s-4, s-5, s-6 are for packet 1
The job merges slices into several data packets and then sinks the packets to HBase
Window functions are applied to deal with out-of-order slice arrival
My Objectives
Currently I already have a Flink Stateful Functions demo running on my k8s cluster. I want to rewrite my entire job on top of Stateful Functions.
Save data into MinIO instead of HBase
My current plan
I have read the doc and got some ideas. My plans are:
There's no need to deal with Kafka anymore; the Kafka Ingress (https://nightlies.apache.org/flink/flink-statefun-docs-release-3.0/docs/io-module/apache-kafka/) handles it
Rewrite my job based on the Java SDK. Merging is straightforward, but how about window functions?
Maybe I should use persistent state with TTL to mimic window function behaviors
An egress for MinIO is not in the list of default Flink I/O Connectors, therefore I need to write a custom Flink I/O Connector for MinIO myself, according to https://nightlies.apache.org/flink/flink-statefun-docs-release-3.0/docs/io-module/flink-connectors/
I want to avoid the Embedded module because it prevents scaling. Auto-scaling is the key reason why I want to migrate to Flink Stateful Functions
My Questions
I don't feel confident about my plan. Is there anything wrong with my understanding/plan?
Are there any best practices I should refer to?
Update:
windows were used to assemble results
get a slice, inspect its metadata, and learn that it is the last one of the packet
the metadata also says the packet should contain 10 slices
if there are already 10 slices, merge them
if there are not enough slices yet, wait for some time (e.g. 10 minutes) and then either merge what's there or record a packet error
I want to get rid of windows during the rewrite, but I don't know how
Background: Use KeyedProcessFunctions Rather than Windows to Assemble Related Events
With the DataStream API, windows are not a good building block for assembling together related events. The problem is that windows begin and end at times that are aligned to the clock, rather than being aligned to the events. So even if two related events are only a few milliseconds apart they might be assigned to different windows.
In general, it's more straightforward to implement this sort of use case with keyed process functions, and use timers as needed to deal with missing or late events.
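A rough sketch of that pattern, assuming hypothetical Slice and Packet types where a slice carries its packet id (the key), the expected slice count, and a payload:

import java.util.ArrayList;
import java.util.List;

import org.apache.flink.api.common.state.ListState;
import org.apache.flink.api.common.state.ListStateDescriptor;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.KeyedProcessFunction;
import org.apache.flink.util.Collector;

// Keyed by packet id; buffers slices until the packet is complete or a timeout fires.
public class PacketAssembler extends KeyedProcessFunction<String, Slice, Packet> {

    private transient ListState<Slice> slices;

    @Override
    public void open(Configuration parameters) {
        slices = getRuntimeContext().getListState(
                new ListStateDescriptor<>("slices", Slice.class));
    }

    @Override
    public void processElement(Slice slice, Context ctx, Collector<Packet> out) throws Exception {
        List<Slice> buffered = new ArrayList<>();
        for (Slice s : slices.get()) {
            buffered.add(s);
        }
        if (buffered.isEmpty()) {
            // First slice of this packet: register a 10-minute timeout.
            ctx.timerService().registerProcessingTimeTimer(
                    ctx.timerService().currentProcessingTime() + 10 * 60 * 1000L);
        }
        buffered.add(slice);

        if (buffered.size() >= slice.expectedSlices) {        // hypothetical field
            out.collect(Packet.merge(buffered));              // hypothetical helper
            slices.clear();
        } else {
            slices.add(slice);
        }
    }

    @Override
    public void onTimer(long timestamp, OnTimerContext ctx, Collector<Packet> out) throws Exception {
        List<Slice> buffered = new ArrayList<>();
        for (Slice s : slices.get()) {
            buffered.add(s);
        }
        if (!buffered.isEmpty()) {
            // Timed out with an incomplete packet: emit a partial packet or record an error.
            out.collect(Packet.mergePartial(buffered));       // hypothetical helper
            slices.clear();
        }
    }
}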
Doing this with the Statefun API
You can use the same pattern mentioned above. The function id will play the same role as the key, and you can use a delayed message instead of a timer:
as each slice arrives, add it to the packet that's being assembled
if it is the first slice, send a delayed message that will act as a timeout
when all the slices have arrived, merge them and send the packet
if the delayed message arrives before the packet is complete, do whatever is appropriate (e.g., go ahead and send the partial packet)
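A condensed sketch of that with the Java SDK, assuming hypothetical Slice, SliceList and Timeout types registered with the SDK's type system (the value spec and the methods on those types are placeholders):

import java.time.Duration;
import java.util.concurrent.CompletableFuture;

import org.apache.flink.statefun.sdk.java.Context;
import org.apache.flink.statefun.sdk.java.StatefulFunction;
import org.apache.flink.statefun.sdk.java.ValueSpec;
import org.apache.flink.statefun.sdk.java.message.Message;
import org.apache.flink.statefun.sdk.java.message.MessageBuilder;

public class PacketAssemblerFn implements StatefulFunction {

    // Buffered slices for this function id (one id per packet).
    static final ValueSpec<SliceList> SLICES =
            ValueSpec.named("slices").withCustomType(SliceList.TYPE);

    @Override
    public CompletableFuture<Void> apply(Context context, Message message) {
        if (message.is(Slice.TYPE)) {
            Slice slice = message.as(Slice.TYPE);
            SliceList buffered = context.storage().get(SLICES).orElse(SliceList.empty());

            if (buffered.isEmpty()) {
                // First slice: a delayed message to ourselves acts as the timeout.
                context.sendAfter(
                        Duration.ofMinutes(10),
                        MessageBuilder.forAddress(context.self())
                                .withCustomType(Timeout.TYPE, new Timeout())
                                .build());
            }
            buffered = buffered.append(slice);

            if (buffered.size() >= slice.expectedSlices()) {
                // Packet complete: merge and send to the egress (e.g. the MinIO egress),
                // then drop the buffered state.
                context.storage().remove(SLICES);
            } else {
                context.storage().set(SLICES, buffered);
            }
        } else if (message.is(Timeout.TYPE)) {
            // Timeout fired before the packet completed: emit a partial packet
            // or record a packet error, then clean up.
            context.storage().remove(SLICES);
        }
        return context.done();
    }
}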

Chaining Flink Sinks

Background
I am new to Flink and come from an Apache Storm background
Working on developing a lossless gRPC sink
Crux
A finite number of retries will be made based on the error codes returned by the gRPC endpoint
After that, the data will be flushed to a Kafka queue for offline processing
The decision to retry will be based on the returned error code.
Problem
Is it possible to chain another sink so that the response (successful or error) is also available downstream for any customized processing?
The answer is as per the comment by Dominik Wosiński:
It's not possible in general. You will have to work around that, either by providing both functionalities in a single sink or by using some existing functions like AsyncIO to write to gRPC and then sink the failures to Kafka, but that may be harder if you need any strong guarantees.
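One way to implement that workaround, sketched with placeholders (GrpcAsyncFunction, GrpcCallResult and the Kafka sink are hypothetical, and sinkTo/KafkaSink assumes a recent Flink version): make the gRPC call in an AsyncFunction that emits a result record carrying the status, then route failures to Kafka downstream.

// The async function performs its own bounded retries and always emits a
// GrpcCallResult, so both successes and failures stay in the stream.
DataStream<GrpcCallResult> results = AsyncDataStream.unorderedWait(
        input, new GrpcAsyncFunction(), 10, TimeUnit.SECONDS, 100);

// Exhausted failures go to Kafka for offline processing.
results.filter(r -> !r.isSuccess())
        .map(GrpcCallResult::originalPayload)
        .sinkTo(failedRecordsKafkaSink);   // a KafkaSink<String> built elsewhere

// Successful responses remain available for any further customized processing.
results.filter(GrpcCallResult::isSuccess)
        .map(GrpcCallResult::response)
        .print();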

Using Akka Streams Kafka Producer in parallel

I'd like to parallelize the writes to Kafka, that is, have multiple producers sending data to Kafka, all from within an akka-stream. In other words, my stream would have several initial stages from the source, and then, when it comes to sending the data, I would like to have about 16 workers sending the data at the same time.
I wonder if I need to embed Akka Streams Kafka in an akka-stream Graph DSL and use a balancer for that, or if there is an easier solution. Also, if someone has done anything like this in general, that would be great to hear about.
Akka Streams Kafka (aka Reactive Kafka) has a specific setting for producer parallelism (see the docs):
akka.kafka.producer {
  # Tuning parameter of how many sends that can run in parallel.
  parallelism = 100
  ...
}
Have you tried tuning it to solve your problem?
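If you'd rather set it in code than in configuration, ProducerSettings exposes the same knob (a sketch with the Java API; the bootstrap servers and serializers are placeholders, and withParallelism assumes a reasonably recent Alpakka Kafka / Akka Streams Kafka version):

import akka.actor.ActorSystem;
import akka.kafka.ProducerSettings;
import org.apache.kafka.common.serialization.StringSerializer;

ActorSystem system = ActorSystem.create("kafka-producer");

// Equivalent to akka.kafka.producer.parallelism, set programmatically.
ProducerSettings<String, String> producerSettings =
        ProducerSettings.create(system, new StringSerializer(), new StringSerializer())
                .withBootstrapServers("localhost:9092")   // placeholder
                .withParallelism(16);                     // ~16 in-flight sends per producer stage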

Flume agent does not stop retrying for unrecoverable solr error

I am using the Morphline Solr Sink to store information in Solr. The problem I am facing is that the Flume agent never stops retrying the failed requests, which can pile up over time. This results in the Flume warning about MaxIO workers being used, and the system suffers performance issues. Is there any way, other than writing my own sink, to make Flume stop retrying or back off exponentially, for better system performance? My source is an avroSource.
Thanks.
You should fix the reason for the failed requests.
Flume is doing exactly what it's designed to do. It's transactionally trying to store the batch of events in your store. If it can't store those events then, yes, it keeps on trying.
You haven't explained what the problem is that is causing these failures. I would recommend thinking about an interceptor to fix whatever is wrong in the data, or to drop events you don't want to store.
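If the failures are caused by malformed events, a custom interceptor that drops them before they reach the channel is usually only a few lines (a sketch; the isValid check is a placeholder for whatever rule matches your data):

import java.util.ArrayList;
import java.util.List;

import org.apache.flume.Context;
import org.apache.flume.Event;
import org.apache.flume.interceptor.Interceptor;

public class DropInvalidEventsInterceptor implements Interceptor {

    @Override
    public void initialize() {}

    @Override
    public Event intercept(Event event) {
        // Returning null tells Flume to drop the event.
        return isValid(event) ? event : null;
    }

    @Override
    public List<Event> intercept(List<Event> events) {
        List<Event> kept = new ArrayList<>();
        for (Event event : events) {
            Event intercepted = intercept(event);
            if (intercepted != null) {
                kept.add(intercepted);
            }
        }
        return kept;
    }

    @Override
    public void close() {}

    private boolean isValid(Event event) {
        // Placeholder validation: drop empty events that Solr would reject.
        return event.getBody() != null && event.getBody().length > 0;
    }

    public static class Builder implements Interceptor.Builder {
        @Override
        public Interceptor build() {
            return new DropInvalidEventsInterceptor();
        }

        @Override
        public void configure(Context context) {}
    }
}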
