I am attempting to recover my jobs and state when my job manager goes down and I haven't been able to restart my jobs successfully.
From my understanding, TaskManager recovery is aided by the JobManager (this works as expected) and JobManager recovery is completed through Zookeeper.
I am wondering if there is a way to recover the JobManager without ZooKeeper?
I am using docker for my setup and all checkpoints & savepoints are persisted to mapped volumes.
Is Flink able to recover when all JobManagers go down? I can afford to wait for the single JobManager to restart.
When I restart the jobmanager I get the following exception: org.apache.flink.runtime.rest.NotFoundException: Job 446f4392adc32f8e7ba405a474b49e32 not found
I have set the following in my flink-conf.yaml
state.backend: filesystem
state.checkpoints.dir: file:///opt/flink/checkpoints
state.savepoints.dir: file:///opt/flink/savepoints
I think my issue may be that the JAR gets deleted when the JobManager is restarted, but I am not sure how to solve this.
At the moment, Flink only supports recovering from a JobManager fault if you are using ZooKeeper. However, theoretically, you can also make it work without ZooKeeper if you can guarantee that only a single JobManager is ever running. See this answer for more information.
You can check out running your cluster as a "Flink Job Cluster". This will automatically start the job that you baked into the Docker image when the container comes up. You can read more here.
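As a rough sketch of what "baking the job in" can look like (the Flink version, JAR path, and main class below are all placeholders, not taken from the question), a Job Cluster image copies the job JAR into the official Flink image so the job-cluster entrypoint can find and submit it on startup:

```dockerfile
# Hypothetical job-cluster image; the JAR name and path are placeholders.
FROM flink:1.9
# The standalone job entrypoint picks up user JARs from /opt/flink/usrlib
COPY target/my-flink-job.jar /opt/flink/usrlib/my-flink-job.jar
```

The container is then started with the job-cluster entrypoint (e.g. `standalone-job --job-classname com.example.MyJob`, where the class name is again a placeholder), so the job is submitted automatically whenever the container comes up.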
I have an Apache Flink cluster (1.14.5) running on Kubernetes (AKS). The cluster is running in session mode, and I trigger batch jobs on it periodically.
Earlier the JobManager JVM metaspace was configured to 256 MB, and it was later increased to 512 MB. Whenever I run the batch pipeline 3 to 4 times, it completely fills up the metaspace, and then no new batch job will load until I restart the JobManager.
I do not see any classloader leaks in the application code. Also, the triggered batch pipelines run to completion and are marked as FINISHED.
I have never seen the JobManager's metaspace come down since the cluster was restarted; it keeps growing with every new batch pipeline run and eventually new batch jobs start being rejected.
I need to know how Flink manages/cleans up this JobManager metaspace periodically, or whether it does not do so at all. Please suggest/help.
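For reference, the metaspace limit being described is controlled by a single setting in flink-conf.yaml; the value shown is just the 512 MB figure from the question, not a recommendation:

```yaml
# Cap on the JobManager's JVM metaspace (Flink 1.10+ memory model).
# Classes loaded per submitted job that are never unloaded will
# eventually exhaust this limit, producing the behavior described above.
jobmanager.memory.jvm-metaspace.size: 512m
```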
I have a job that has stateful operators and has checkpointing enabled. One of the tasks of the stateful operator fails for some reason, and has to be restarted and recover the checkpointed state.
I would ask which of the following is the restart behavior:
only the failed task is restarted and restored
all of the operator(contain failed task)'s tasks are restarted and restored
the whole job is restarted and restored
Is the whole job restarted if one task fails?
tldr: For streaming jobs the answer is usually yes, but not necessarily.
Recovery of a Flink streaming job involves rewinding the sources to the offsets recorded in a checkpoint, and resetting the state back to what it had been after having consumed only the data up to those offsets.
Restarting only the failed task would result in inconsistencies, and make it impossible to provide exactly-once semantics, unless the failed task had no dependencies on any upstream tasks, and no downstream tasks depended on it.
What Flink can do then is to restore the state and restart processing on the basis of failover regions, which take into account these dependencies within the job graph. In the case of a streaming job, only if the job is embarrassingly parallel is it possible to do less than a restore and restart of the entire job. So in the case of an embarrassingly parallel job, only the failed region is restored and restarted (which includes all of its subtasks from source to sink), while the other regions continue running.
This approach is used if jobmanager.execution.failover-strategy is set to region, which has been the default since Flink 1.10.
To learn more about this, see FLIP-1: Fine Grained Recovery from Task Failures and the Apache Flink 1.9.0 Release Announcement, where this feature was introduced.
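As a concrete sketch, the region strategy is a one-line setting in flink-conf.yaml; automatic recovery also needs a restart strategy to be in effect, and the fixed-delay values below are purely illustrative:

```yaml
# Restart only the affected failover region(s); default since Flink 1.10.
jobmanager.execution.failover-strategy: region
# A restart strategy must also be configured for automatic recovery;
# the attempts/delay values here are just an example.
restart-strategy: fixed-delay
restart-strategy.fixed-delay.attempts: 3
restart-strategy.fixed-delay.delay: 10 s
```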
I'm reading the Flink official doc about Task Failure Recovery: https://ci.apache.org/projects/flink/flink-docs-stable/dev/task_failure_recovery.html
As I understand it, this doc tells us that if a task fails for some reason, Flink is able to recover it with the help of the checkpointing mechanism.
So now I have two more questions:
What if a TaskManager fails? As I understand it, a task is assigned to one or more slots, and slots are located on one or more TaskManagers. After reading the doc above, I know that Flink can recover a failed task, but if a TaskManager fails, what happens? Can Flink recover it too? And if a failed TaskManager can be recovered, will the tasks assigned to it continue running automatically after it has recovered?
What if the JobManager fails? Will all of the TaskManagers fail too? If so, when I recover the JobManager with the help of ZooKeeper, will all of the TaskManagers and their tasks continue running automatically?
In a purely standalone cluster, if a Task Manager dies, then if you had a standby task manager running, it will be used. Otherwise the Job Manager will wait for a new Task Manager to magically appear. Making that happen is up to you. On the other hand, if you are using YARN, Mesos, or Kubernetes, the cluster management framework will take care of making sure there are enough TMs.
As for Job Manager failures, in a standalone cluster you should run standby Job Managers, and configure Zookeeper to do leader election. With YARN, Mesos, and Kubernetes, you can let the cluster framework handle restarting the Job Manager, or run standbys, as you prefer, but in either case you will still need Zookeeper to provide HA storage for the Job Manager's metadata.
Task Managers can survive a Job Manager failure/recovery situation. The jobs don't have to be restarted.
https://ci.apache.org/projects/flink/flink-docs-stable/ops/jobmanager_high_availability.html
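The ZooKeeper-based setup described above is configured in flink-conf.yaml roughly as follows; the quorum address, storage path, and cluster id are placeholders, not values from the question:

```yaml
# ZooKeeper-based HA: leader election for standby JobManagers plus
# HA storage for the JobManager's metadata.
high-availability: zookeeper
high-availability.zookeeper.quorum: zk-host:2181   # placeholder address
high-availability.storageDir: hdfs:///flink/ha     # placeholder path
high-availability.cluster-id: /my-flink-cluster    # placeholder id
```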
I have multiple Kafka topics (multi-tenancy) and I run the same job multiple times, based on the number of topics, with each job consuming messages from one topic. I have configured the file system as the state backend.
Assume there are 3 jobs running. How do checkpoints work here? Do all 3 jobs store their checkpoint information in the same path? If any of the jobs fails, how does the job know where to recover the checkpoint information from? We give a job name while submitting a job to the Flink cluster. Does that have anything to do with it? In general, how does Flink differentiate the jobs and their checkpoint information to restore in case of failures or manual restarts of the jobs (irrespective of whether they are the same or different jobs)?
Case 1: What happens in case of job failure?
Case 2: What happens if we manually restart the job?
Thank you
To follow on to what @ShemTov was saying:
Each job will write its checkpoints in a sub-dir named with its jobId.
If you manually cancel a job, the checkpoints are deleted (since they are no longer needed for recovery), unless they have been configured to be retained:
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
CheckpointConfig config = env.getCheckpointConfig();
// Keep checkpoints on cancellation instead of deleting them
config.enableExternalizedCheckpoints(ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION);
Retained checkpoints can be used for manually restarting, and for rescaling.
Docs on retained checkpoints.
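Resuming from a retained checkpoint then looks the same as resuming from a savepoint; with placeholder job ID, checkpoint number, and JAR name (only the base directory comes from the question's config), it is roughly:

```shell
# Resume a job from a retained checkpoint; <jobId>, <n>, and the JAR are placeholders.
# Checkpoint metadata lives under <state.checkpoints.dir>/<jobId>/chk-<n>.
flink run -s file:///opt/flink/checkpoints/<jobId>/chk-<n> my-job.jar
```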
If you have high availability configured, the job manager's metadata about checkpoints will be stored in the HA store, so that recovery does not depend on the job manager's survival.
The JobManager is aware of each job's checkpoints and keeps that metadata. Checkpoints are saved to the checkpoint directory (configured via flink-conf.yaml); under this directory, a randomly-hashed subdirectory is created per job.
Case 1: The job will restart (depending on your restart strategy...), and if checkpointing is enabled it will restore from the last checkpoint.
Case 2: I'm not 100% sure, but I think if you cancel the job manually and then submit it again, it won't read the checkpoint. You'll need to use a savepoint. (You can cancel your job with a savepoint, and then submit your job again with the same savepoint.) Just be sure that every operator has a UID. You can read more about savepoints here: https://ci.apache.org/projects/flink/flink-docs-stable/ops/state/savepoints.html
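The savepoint workflow just described, with placeholder job IDs, paths, and JAR name, looks roughly like this on the CLI:

```shell
# Cancel the job while triggering a savepoint (jobId and target dir are placeholders).
flink cancel -s file:///opt/flink/savepoints <jobId>
# Resubmit the same job, restoring from that savepoint.
flink run -s file:///opt/flink/savepoints/savepoint-<jobId>-<hash> my-job.jar
```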
JobManager recovery is achieved using ZooKeeper, but what if a TaskManager fails? How do you recover from this? Does the JobManager automatically recover TaskManagers?
In general, the JobManager takes care to recover from TaskManager failures. How this is done depends on your setup.
If you run Flink on YARN, the JobManager will start a new TaskManager when it realizes that a TaskManager has died and reassign tasks.
If you run Flink stand-alone on a cluster, you have to make sure you have one (or more) stand-by TaskManager(s) running. The JobManager will assign the tasks of the failed TM to a stand-by TM. This also means that you have to ensure that enough stand-by TMs are up and running.
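In a stand-alone setup, bringing up such a standby TaskManager is just a matter of starting another TM process on a spare machine pointed at the same cluster; a minimal sketch:

```shell
# On a spare machine whose flink-conf.yaml points at the same JobManager,
# start an additional TaskManager that will register itself as spare capacity.
./bin/taskmanager.sh start
```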