Flink JDBC sink consistency guarantees - apache-flink

I have a Flink application (v1.13.2) that reads from multiple Kafka topics as a source. A filter operator removes unwanted records from the source stream, and finally a JDBC sink persists the data into Postgres tables. The SQL query performs upserts, so the same data being processed again is not a problem. Checkpointing is enabled.
According to the documentation, the JDBC sink provides an at-least-once guarantee. Also:
A JDBC batch is executed as soon as one of the following conditions is true:
the configured batch interval time is elapsed
the maximum batch size is reached
a Flink checkpoint has started
And from the Kafka source documentation:
Kafka source commits the current consuming offset when checkpoints are
completed, for ensuring the consistency between Flink’s checkpoint
state and committed offsets on Kafka brokers.
With Flink’s checkpointing enabled, the Flink Kafka Consumer will
consume records from a topic and periodically checkpoint all its Kafka
offsets, together with the state of other operations. In case of a job
failure, Flink will restore the streaming program to the state of the
latest checkpoint and re-consume the records from Kafka, starting from
the offsets that were stored in the checkpoint.
Is it safe to say that in my scenario, whatever record offsets get committed back to Kafka will always be present in the database? Flink stores offsets as part of the checkpoints and commits them back only if the checkpoint completes successfully. And if the JDBC query fails for some reason, the checkpoint itself will fail. I want to ensure there is no data loss in this use case.
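The reasoning above rests on the upsert being idempotent: if the checkpoint fails and Flink re-consumes from the last committed offsets, the same batch is simply written again with no net effect. A minimal sketch of that property, using an in-memory map to stand in for the Postgres table (the map key plays the role of the primary key the upsert conflicts on; class and method names are illustrative, not Flink API):

```java
import java.util.HashMap;
import java.util.Map;

public class UpsertReplayDemo {

    // Stand-in for an "INSERT ... ON CONFLICT (id) DO UPDATE" statement:
    // the last write for a given key wins, so replays are harmless.
    static void upsert(Map<String, String> table, String key, String value) {
        table.put(key, value);
    }

    // Write the same batch twice, simulating re-consumption from the
    // last committed Kafka offsets after a failed checkpoint.
    static Map<String, String> replayTwice() {
        Map<String, String> table = new HashMap<>();
        String[][] batch = {{"k1", "v1"}, {"k2", "v2"}};
        for (int attempt = 0; attempt < 2; attempt++) {
            for (String[] record : batch) {
                upsert(table, record[0], record[1]);
            }
        }
        return table;
    }

    public static void main(String[] args) {
        // The table ends up identical to a single successful pass.
        System.out.println(replayTwice());
    }
}
```

This is why at-least-once delivery plus an idempotent sink gives effectively-exactly-once results in the database, even though individual records may be written more than once.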

Related

What is the restoring mechanism of the Flink on K8S when rollingUpdate is executed for update strategy?

I am wondering about the procedure for restoring a checkpoint or savepoint in Flink when a job is restarted by a rolling update on k8s.
Let me explain with a simple example.
Assume that I have 4 pods in my Flink k8s job and the following simple dataflow with parallelism 1.
source -> filter -> map -> sink
Each pod is responsible for one operator, and data is consumed through the source function. Since I don't want to lose my data, I set up my dataflow with at-least-once or exactly-once mode in Flink.
When a rolling update occurs, each pod gets restarted sequentially. Suppose that the filter is managed by pod1, the map by pod2, the sink by pod3, and the source by pod4. When pod1 (the filter) is restarted by the rolling update, are the records in the source task (another task) immediately saved to external storage for a checkpoint, so that they can be restored without data loss after the restart?
I am also wondering whether the data in the map task (pod2) is kept persistent in external storage when a rolling update happens even though the checkpoint has not finished.
That is, suppose the rolling update happens while Flink is processing records and the checkpoint is not yet complete. Is the data currently being processed in the tasks lost in this case?
I need more clarification on how data is restored when we use checkpoints and run Flink on k8s with a rolling update strategy.
Flink doesn't support rolling upgrades. If one of the pods where your Flink application is currently running becomes unavailable, the Flink application will usually restart.
The answer from David at Is the whole job restarted if one task fails explains this in more detail.
I would also recommend looking at the current documentation for Task Failure Recovery at https://nightlies.apache.org/flink/flink-docs-stable/docs/ops/state/task_failure_recovery/ and the checkpointing/savepointing documentation that's also listed there.

How to have flink check point/savepoint backup on multiple data center

I have a Flink application that will run on a node in DC-1 (Data Center 1). We are planning to back up savepoint and checkpoint state with HDFS or Amazon S3. The catch is that, in my org, neither HDFS nor S3 replicates data written to DC-1 over to DC-2 (they are working on it, but the timeline is long). With this in mind, is there a way to have Flink itself write checkpoints/savepoints to both DCs somehow?
Thanks
As far as I know there is no such mechanism in Flink. Usually it's not the data processing pipeline's responsibility to ensure that data gets backed up. The easiest workaround would be a CRON job that periodically copies checkpoints to DC-2.
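Assuming the checkpoints land in S3 in DC-1, such a cron job could be a crontab entry that mirrors the checkpoint bucket into its DC-2 counterpart. Bucket names and the schedule below are placeholders; `aws s3 sync` copies only objects that changed since the last run:

```shell
# Hypothetical crontab entry: every 15 minutes, mirror the DC-1
# checkpoint bucket (placeholder name) to the DC-2 bucket.
*/15 * * * *  aws s3 sync s3://dc1-flink-checkpoints s3://dc2-flink-checkpoints
```

Note that a copy taken mid-checkpoint may be incomplete, so on recovery in DC-2 you would restore from the newest checkpoint that was fully mirrored.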

What happens if Flink job restarts between two checkpoints?

This question is basically similar to the one asked here: Apache Flink fault tolerance.
i.e. what happens if a job restarts between two checkpoints? Will it reprocess the records that were already processed after the last checkpoint?
Take for example: I have two jobs, job1 and job2. Job1 consumes records from Kafka, processes them, and produces them to a second Kafka topic. Job2 consumes from this second topic and processes the records (in my case it's updating values in Aerospike using AerospikeClient).
Now from the answer to this question, Apache Flink fault tolerance, I can somewhat believe that if job1 restarts, it will not produce duplicate records in the sink. I am using FlinkKafkaProducer011, which extends TwoPhaseCommitSinkFunction (https://flink.apache.org/features/2018/03/01/end-to-end-exactly-once-apache-flink.html). Please explain how it prevents reprocessing (i.e. duplicate production of records to Kafka).
According to the Flink docs, Flink restarts a job from the last successful checkpoint. So if job2 restarts before completing a checkpoint, it will restart from the last checkpoint, and the records that were already processed after that checkpoint will be reprocessed (i.e. multiple updates in Aerospike).
Am I right or is there something else in Flink (& aerospike) that prevents this reprocessing in job2?
In such a scenario, Flink will indeed reprocess some events. During recovery the input partitions will have their offsets reset to the offsets in the most recent checkpoint, and events that had been read after that checkpoint will be re-ingested.
However, the FlinkKafkaProducer uses Kafka transactions that are committed when checkpoints are completed. When a job fails, whatever output it has produced since the last checkpoint is protected by transactions that are never committed. As long as that job's consumers are configured to use read_committed as their isolation.level, they won't see any duplicates.
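For job2 that means its Kafka consumer configuration needs one extra property. A minimal sketch of such a configuration (broker address and group id are placeholders; `isolation.level` is a standard Kafka consumer property whose default is `read_uncommitted`):

```java
import java.util.Properties;

public class ReadCommittedProps {

    // Consumer properties for a job that must not see records from
    // aborted or still-open transactions written by an upstream
    // transactional producer such as FlinkKafkaProducer011.
    static Properties consumerProps() {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "broker:9092"); // placeholder
        props.setProperty("group.id", "job2-consumer");        // placeholder
        props.setProperty("isolation.level", "read_committed");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(consumerProps().getProperty("isolation.level"));
        // prints read_committed
    }
}
```

These properties would then be passed to the Kafka consumer (or Flink's Kafka source) when job2 is constructed.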
For more details, see Best Practices for Using Kafka Sources/Sinks in Flink Jobs.

Apache Flink - How Checkpoint/Savepoint works If we run duplicate jobs (Multi Tenancy)

I have multiple Kafka topics (multi-tenancy) and I run the same job multiple times, based on the number of topics, with each job consuming messages from one topic. I have configured the file system as the state backend.
Assume there are 3 jobs running. How do checkpoints work here? Do all 3 jobs store their checkpoint information in the same path? If any of the jobs fails, how does it know where to recover the checkpoint information from? We give a job name while submitting a job to the Flink cluster. Does that have anything to do with it? In general, how does Flink differentiate between jobs and their checkpoint information when restoring after failures or manual restarts (whether for the same or different jobs)?
Case1: What happens in case of job failure?
Case2: What happens If we manually restart the job?
Thank you
To follow on to what @ShemTov was saying:
Each job will write its checkpoints in a sub-dir named with its jobId.
If you manually cancel a job, the checkpoints are deleted (since they are no longer needed for recovery), unless they have been configured to be retained:
CheckpointConfig config = env.getCheckpointConfig();
config.enableExternalizedCheckpoints(ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION);
Retained checkpoints can be used for manually restarting, and for rescaling.
Docs on retained checkpoints.
If you have high availability configured, the job manager's metadata about checkpoints will be stored in the HA store, so that recovery does not depend on the job manager's survival.
The JobManager is aware of each job's checkpoints and keeps that metadata. Checkpoints are saved to the checkpoint directory (configured via flink-conf.yaml); under this directory, a randomly named (job id) subdirectory is created for each job.
Case 1: The job will restart (depending on your fallback strategy...), and if checkpointing is enabled it'll read the last checkpoint.
Case 2: I'm not 100% sure, but I think if you cancel the job manually and then submit it again, it won't read the checkpoint; you'll need to use a savepoint. (You can stop your job with a savepoint, and then submit your job again with that same savepoint.) Just be sure that every operator has a UID. You can read more about savepoints here: https://ci.apache.org/projects/flink/flink-docs-stable/ops/state/savepoints.html
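For reference, the stop-with-savepoint and resume cycle described above looks roughly like this with the flink CLI (the job id, savepoint paths, and jar name are placeholders):

```shell
# Stop the job, taking a savepoint into the given target directory
flink stop --savepointPath s3://bucket/savepoints <jobId>

# Later, resubmit the same job, restoring from that savepoint
flink run -s s3://bucket/savepoints/savepoint-<hash> my-job.jar
```

Setting a UID on every stateful operator matters here because the savepoint maps state to operators by UID; without explicit UIDs, a changed job graph may no longer match the saved state.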

Flink's failure recovery process

I want to know the detailed failure recovery process of Flink. In standalone mode, I can guess at some of the steps: for example, on a TaskManager failure, the failure is first detected, all tasks stop processing, and then the tasks are redeployed. Then the checkpoint is downloaded from HDFS, and each operator loads its state. After the loading is complete, the source continues to send data. Am I right? Does anyone know the correct and detailed recovery process?
Flink recovers from failure through checkpoints. Checkpoints can be stored locally, in S3, or in HDFS. When a checkpoint is restored, the state of all the different operators is revived.
The detailed recovery process really depends on your state backend. If you are using RocksDB:
your checkpoints can be incremental
you can use the checkpoint data as a savepoint if you do not need to change the backend. This means you can change the parallelism while restoring from the checkpoint.
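As a sketch, incremental RocksDB checkpoints can be switched on in flink-conf.yaml (these keys exist in Flink 1.13; the checkpoint path is a placeholder):

```yaml
state.backend: rocksdb
state.backend.incremental: true
state.checkpoints.dir: hdfs:///flink/checkpoints   # placeholder path
```

With incremental checkpoints, each checkpoint uploads only the RocksDB SST files created since the previous one, which shortens checkpoint times for large state.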
