How to deploy a new job without downtime - apache-flink

I have an Apache Flink application that reads from a single Kafka topic.
I would like to update the application from time to time without experiencing downtime. For now the Flink application executes some simple operators such as map, plus some synchronous IO to external systems via HTTP REST APIs.
I have tried to use the stop command, but I get "Job termination (STOP) failed: This job is not stoppable." I understand that the Kafka connector does not support the stop behavior - a link!
A simple solution would be to cancel with a savepoint and redeploy the new jar from that savepoint, but then we get downtime.
Another solution would be to control the deployment from the outside, for example, by switching to a new topic.
What would be a good practice?

If you don't need exactly-once output (i.e., you can tolerate some duplicates), you can take a savepoint without cancelling the running job. Once the savepoint is completed, you start a second job. The second job could write to a different topic, but doesn't have to. When the second job is up, you can cancel the first job.
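With the Flink CLI, that handoff could look roughly like this (the job ID, savepoint directory, and jar name are placeholders):

```
# 1. Take a savepoint while the old job keeps running (no cancel)
flink savepoint <jobId> [targetDirectory]

# 2. Start the upgraded job from that savepoint, in parallel with the old one
flink run -d -s <savepointPath> upgraded-job.jar

# 3. Once the new job has caught up, cancel the original job
flink cancel <jobId>
```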

Related

How to fail the whole Flink application if one job fails?

There are two jobs running in Flink. If one fails, I need to fail the whole Flink application. How can I do that? Suppose the job with parallelism 1 fails due to some exception; how do I fail the job with parallelism 4?
The details of how you should go about this depend a bit on the type of infrastructure you are using to run Flink, and how you are submitting the jobs. But if you look at ClusterClient and JobClient and the associated classes, you should be able to find a way forward.
If you aren't already, you may want to take advantage of application mode, which was added in Flink 1.11. It makes it possible for a single main() method to launch multiple jobs, and that release also added env.executeAsync() for non-blocking job submission.
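As a rough sketch of that pattern (assuming Flink 1.12+ for the no-argument JobClient.getJobExecutionResult(); the pipelines and job names below are made up for illustration), the main() method could submit both jobs with executeAsync() and cancel the surviving job whenever the other one terminates exceptionally:

```java
import org.apache.flink.core.execution.JobClient;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class TwoJobsApplication {

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // First pipeline (stands in for the parallelism-1 job); submitted without blocking.
        env.fromElements(1, 2, 3).map(i -> i * 2).setParallelism(1).print();
        JobClient jobA = env.executeAsync("job-a");

        // Second pipeline (stands in for the parallelism-4 job); executeAsync() cleared
        // the previous transformations, so this becomes a separate job.
        env.fromElements("x", "y", "z").map(s -> s.toUpperCase()).setParallelism(4).print();
        JobClient jobB = env.executeAsync("job-b");

        // If either job terminates exceptionally, cancel the other one, so the
        // whole application goes down together.
        jobA.getJobExecutionResult().whenComplete((result, error) -> {
            if (error != null) {
                jobB.cancel();
            }
        });
        jobB.getJobExecutionResult().whenComplete((result, error) -> {
            if (error != null) {
                jobA.cancel();
            }
        });

        // Keep main() alive while both jobs run; a failure of either job
        // surfaces here as an exception.
        jobA.getJobExecutionResult().get();
        jobB.getJobExecutionResult().get();
    }
}
```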

Avoid running initialization code in Apache Flink job when resuming from savepoint

I have an Apache Flink job, implemented with the DataStream API, which contains some initialization code before the definition and submission of the job graph. The initialization code should only run the first time the job is submitted, and not when resuming the job from a checkpoint or when updating it using a savepoint.
It seems that when restarting the job during a failover from a checkpoint, the job is restarted from a job graph stored in the checkpoint - in particular, the initialization code is not run a second time (which is what I want).
Is the same possible when running a job from a savepoint? In other words, is there a way to execute code only when the job is not started from a savepoint?
If you implement the CheckpointedFunction interface, then initializeState(FunctionInitializationContext context) will be called during initialization. Then you can use context.isRestored() to determine whether the job is being started for the first time, or not.
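A minimal sketch of that approach (the class name, state name, and initialization logic below are made up for illustration); registering a small piece of operator state also keeps the operator from being completely stateless, so that isRestored() is meaningful on later restores:

```java
import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.api.common.state.ListState;
import org.apache.flink.api.common.state.ListStateDescriptor;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.runtime.state.FunctionInitializationContext;
import org.apache.flink.runtime.state.FunctionSnapshotContext;
import org.apache.flink.streaming.api.checkpoint.CheckpointedFunction;

public class InitOnceMapper implements MapFunction<String, String>, CheckpointedFunction {

    // Tiny piece of operator state so the operator participates in snapshots.
    private transient ListState<Boolean> initializedMarker;

    @Override
    public void initializeState(FunctionInitializationContext context) throws Exception {
        initializedMarker = context.getOperatorStateStore()
                .getListState(new ListStateDescriptor<>("init-marker", Types.BOOLEAN));

        if (!context.isRestored()) {
            // First submission of the job: run the one-time initialization here.
            runOneTimeInitialization();
            initializedMarker.add(true);
        }
        // When resuming from a checkpoint or savepoint, isRestored() is true
        // and the initialization is skipped.
    }

    @Override
    public void snapshotState(FunctionSnapshotContext context) {
        // The registered list state is snapshotted automatically; nothing else to do.
    }

    @Override
    public String map(String value) {
        return value; // the actual per-record logic of the job
    }

    private void runOneTimeInitialization() {
        // e.g. create external resources, emit marker records, ...
    }
}
```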

Difference between savepoint and checkpoint in Flink

I know there are similar questions on Stack Overflow, but after investigating several of them I know that a savepoint is triggered manually while a checkpoint is triggered automatically, and that they use different storage formats.
But these are not the confusing points; I have no idea when to use one or when to use the other.
Consider the following two scenarios:
If I need to shut down or restart the whole application for some reason (e.g. a bug fix or an unexpected crash), will I have to use a savepoint to restore the whole application?
I thought that checkpoints are only used internally in Flink for fault tolerance while the application is running, that is, the application itself keeps running but tasks or other components may fail, and Flink uses a checkpoint for state recovery?
There are also externalized checkpoints; I think they are functionally the same as savepoints, that is, an externalized checkpoint can also be used to recover a restarted application?
Does Flink use checkpoints for state recovery?
Basically you're right. As you said, checkpoints are usually used internally in Flink for fault tolerance and are more of a concept inside the framework. When your application fails, the program will try to restart from the latest checkpoint. That's how checkpoints work in Flink, without any manual intervention.
Should I use a savepoint to restore the whole application for a bug fix?
Yes. In these cases you don't want to restore from the checkpoint, because the latest checkpoint may have been taken several minutes ago. Instead, you'd like to snapshot the current state of the whole application and restart it from the latest savepoint, which may be the quickest way to restore the application without too much delay.
Externalized checkpoints.
It's still a checkpoint, but it is persisted externally based on your configuration. It can be used to restore the application, but the state it captures may lag behind the present because there is an interval between checkpoints.
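A minimal sketch of enabling retained (externalized) checkpoints, assuming a Flink version where CheckpointConfig.enableExternalizedCheckpoints is still available (newer releases expose the same behaviour via setExternalizedCheckpointCleanup or the execution.checkpointing.externalized-checkpoint-retention option):

```java
import org.apache.flink.streaming.api.environment.CheckpointConfig.ExternalizedCheckpointCleanup;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

// Take a checkpoint every 60 seconds.
env.enableCheckpointing(60_000);

// Retain the latest completed checkpoint even when the job is cancelled,
// so it can be used later to restart the application (like a savepoint,
// but produced automatically on the checkpoint interval).
env.getCheckpointConfig()
        .enableExternalizedCheckpoints(ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION);
```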
For more information, take a look at this blog article: https://data-artisans.com/blog/differences-between-savepoints-and-checkpoints-in-flink.

How to do rolling upgrade with zero downtime

Is it possible to do a job version update with zero downtime?
Maybe with an HA configuration? I.e., replacing the standby job with the updated one, then cancelling the master, which would cause the (updated) standby to become the master, and then uploading a new updated job in place of the master we cancelled in the previous step, in order to maintain HA.
Is this scenario possible? Are there other scenarios that can achieve zero downtime on a job version update?
I don't think Flink HA mode is actually appropriate for zero-downtime job upgrades. HA mode ensures that a failing JobManager can be replaced without losing state information, but it isn't HA in the sense that "unavailability" still occurs between the time the primary JobManager fails and the secondary JobManager takes over (or, in the case of systems like Kubernetes, while the lone JobManager fails a healthcheck and is replaced).
For some types of jobs, zero-downtime upgrades are possible but not supported by Flink itself. For example, if your job outputs to an Elasticsearch index, you could bring up the upgraded job from a savepoint in parallel with the original but writing to a new index, and when it has caught up, switch your clients (or an Elasticsearch index alias) to reference the new index.
Another technique I've considered but never tried would be to build into your application a configurable flag that says when to start or stop emitting data. That way, you could update the configuration of the original job to drop (not forward to a sink) any windowed data starting at some timestamp in the near future, then run the upgraded job and configure it to emit its first window at that time.
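To make that idea a little more concrete, here is a rough, untested sketch of such a handover filter; the CutoverFilter class, the Tuple2<windowEnd, value> result layout, and the role flag are all made up for illustration. In a real setup the cutover timestamp would come from whatever dynamic configuration mechanism the jobs already use; plain constructor arguments are used here only to keep the sketch short.

```java
import org.apache.flink.api.common.functions.FilterFunction;
import org.apache.flink.api.java.tuple.Tuple2;

/**
 * Hypothetical handover filter, applied to windowed results of the form
 * Tuple2<windowEndTimestamp, aggregatedValue>. Both jobs run the same filter,
 * configured with the same cutover timestamp but opposite roles, so each
 * window's result is emitted by exactly one of them.
 */
public class CutoverFilter implements FilterFunction<Tuple2<Long, Long>> {

    private final long cutoverMillis;   // agreed handover time, identical in both jobs
    private final boolean isOldJob;     // true in the running job, false in the upgraded one

    public CutoverFilter(long cutoverMillis, boolean isOldJob) {
        this.cutoverMillis = cutoverMillis;
        this.isOldJob = isOldJob;
    }

    @Override
    public boolean filter(Tuple2<Long, Long> windowEndAndValue) {
        boolean endsBeforeCutover = windowEndAndValue.f0 < cutoverMillis;
        // The old job emits only windows that end before the cutover;
        // the upgraded job emits only windows from the cutover onwards.
        return isOldJob ? endsBeforeCutover : !endsBeforeCutover;
    }
}
```

The original job would then run something like .filter(new CutoverFilter(cutoverMillis, true)) after its window operator, while the upgraded job starts with .filter(new CutoverFilter(cutoverMillis, false)).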
Built-in support for zero-downtime "handoffs" is a feature that would be pretty nice to have in Flink for many use-cases.

How to schedule a job in apache-flink

I want to write a task that is triggered by Apache Flink every 24 hours and then processed by Flink. What is a possible way to do this? Does Flink provide any job scheduling functionality?
Apache Flink is not a job scheduler but an event processing engine, which is a different paradigm: Flink jobs are supposed to run continuously instead of being triggered by a schedule.
That said, you could achieve the functionality by simply using an off-the-shelf scheduler (e.g. cron) that starts a job on your Flink cluster and then stops it after you receive some sort of notification that the job is done (e.g. through a Kafka topic), or simply use a timeout after which you assume the job is finished and stop it. But again, especially because Flink is not designed for this kind of use case, you would most likely run into edge cases that Flink does not support.
Alternatively, you can simply use a 24-hour tumbling window and run your task in the corresponding trigger function. See https://flink.apache.org/news/2015/12/04/Introducing-windows.html for details on that matter.
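For example, a minimal sketch (the socket source and the per-window counting are placeholders for your actual source and task):

```java
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.windowing.ProcessAllWindowFunction;
import org.apache.flink.streaming.api.windowing.assigners.TumblingProcessingTimeWindows;
import org.apache.flink.streaming.api.windowing.time.Time;
import org.apache.flink.streaming.api.windowing.windows.TimeWindow;
import org.apache.flink.util.Collector;

public class DailyWindowJob {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        DataStream<String> events = env.socketTextStream("localhost", 9999); // placeholder source

        events
            // Collect 24 hours worth of events into one window...
            .windowAll(TumblingProcessingTimeWindows.of(Time.days(1)))
            // ...and run the "daily task" once per window when it fires.
            .process(new ProcessAllWindowFunction<String, String, TimeWindow>() {
                @Override
                public void process(Context context, Iterable<String> elements, Collector<String> out) {
                    // Runs once every 24 hours with all elements of that day.
                    long count = 0;
                    for (String ignored : elements) {
                        count++;
                    }
                    out.collect("processed " + count + " events for window ending at "
                            + context.window().getEnd());
                }
            })
            .print();

        env.execute("daily-window-job");
    }
}
```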
