Flink on YARN: how do I specify the number of Task Managers

In earlier versions of Flink (e.g., 1.6), I could specify the number of Task Managers for session mode with -n and for per-job mode with -yn, but these flags no longer exist in later versions of Flink (e.g., 1.12).
How should I set the number of Task Managers on YARN in newer versions of Flink? Or what are the related properties I can use to control the resources used by Flink?

In newer versions of Flink, the resource manager dynamically launches task managers as needed to provide the number of slots requested by the job(s) that are submitted. Each task manager will take its configuration either from flink-conf.yaml, or from the parameters provided when the cluster is started via yarn-session.sh.
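For example (a minimal sketch, not from the original answer; the numbers are illustrative), with 4 slots per task manager and a default parallelism of 8, YARN will end up launching 2 task managers. In flink-conf.yaml:

taskmanager.numberOfTaskSlots: 4
taskmanager.memory.process.size: 4096m
parallelism.default: 8

Roughly the same can be passed when starting a session, e.g. ./bin/yarn-session.sh -s 4 -tm 4096m -jm 1600m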

Related

Specify slot sharing group for a specific task manager in Flink 1.14.0

I am trying the fine-grained resource management feature in Flink 1.14, hoping it can enable assigning certain operators to certain TaskManagers. Following the sample code in https://nightlies.apache.org/flink/flink-docs-release-1.14/docs/deployment/finegrained_resource/, I can now define the slot sharing groups I would like (using the setExternalResource method), but I do not see any option to "assign" a TaskManager worker instance with the capabilities of this "external resource".
So, to the question: following the GPU-based example in the documentation linked above, how can I ensure that Flink "knows" which task manager actually has the required GPU?
With help from the excellent Flink mailing list, I now have the solution. Basically, add lines to flink-conf.yaml for the specific task manager, as per the external resource documentation. For a resource named 'example', these are the two lines that must be added:
external-resources: example
external-resource.example.amount: 1
.. which will match a slot sharing group with the added external resource:
.setExternalResource("example", 1.0)
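On the job side, putting it together looks roughly like this (a sketch based on the fine-grained resource docs; the group name "ssg-example" and MyMapper are illustrative):

SlotSharingGroup ssg = SlotSharingGroup.newBuilder("ssg-example")
        .setCpuCores(1.0)
        .setTaskHeapMemoryMB(256)
        .setExternalResource("example", 1.0)  // matches the flink-conf.yaml entries above
        .build();
env.registerSlotSharingGroup(ssg);
stream.map(new MyMapper()).slotSharingGroup("ssg-example");

With the two flink-conf.yaml lines in place, only task managers that declare the "example" resource can host slots from this group.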

Is it possible to add a new embedded worker while the cluster is running on Statefun?

Here is the deal:
I'm trying to add a new (embedded) worker to a running cluster (Flink Statefun 2.2.1).
As you can see, the new task manager registers with the cluster:
[Screenshot of the newly deployed task manager]
But it doesn't initialize (it isn't deploying its sources).
What am I missing here? (Do the master and workers have to have the same jar files, or should deploying the task manager with the jar file be enough?)
Any help would be appreciated,
Thx.
Flink supports two different approaches to rescaling: active and reactive.
Reactive mode is new in Flink 1.13 (released just this week), and works as you expected: add (or remove) a task manager, and your application will adjust to the new parallelism. You can read about elastic scaling and reactive mode in the docs.
Reactive mode is currently a work in progress, but might meet your needs.
In broad strokes, for active mode rescaling you need to:
Do a stop with savepoint to bring down your current job while taking a snapshot of its state.
Relaunch with the new parallelism, using the savepoint as the starting point.
The exact details depend on how your cluster is deployed.
For a step-by-step tutorial, see Upgrading & Rescaling a Job in the Flink Operations Playground.
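With the CLI, that sequence looks roughly like this (a sketch; the job id, savepoint path, and parallelism are placeholders):

./bin/flink stop --savepointPath hdfs:///flink/savepoints <job-id>
./bin/flink run -s hdfs:///flink/savepoints/savepoint-xxxx -p 8 my-job.jar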
The above applies to rescaling statefun embedded functions. Being stateless, remote functions can be rescaled more straightforwardly.

Can't set parallelism using Flink's CLI or Web-UI when using Apache Beam

I am using Flink 1.2.1 running on Docker, with Task Managers distributed across different VMs as part of a Docker Swarm.
Uploading an Apache Beam application using the Flink Web UI and trying to set the parallelism at job submission doesn't work. Neither does submitting the job using the Flink CLI.
It seems the parallelism doesn't get picked up at the client level; it ends up defaulting to 1.
When I set the parallelism programmatically within the Apache Beam code, it works: flinkPipelineOptions.setParallelism(4);
I suspect the root of the problem may be in the org.apache.beam.runners.flink.DefaultParallelismFactory class, as it checks for Flink's GlobalConfiguration, which may not pick up runtime values passed to Flink.
Any ideas on how this could be fixed or worked around? I need to be able to change the parallelism dynamically, so the programmatic approach won't work, nor will setting the Flink configuration at system level.
I am using the following documentation:
https://ci.apache.org/projects/flink/flink-docs-release-1.2/dev/parallel.html
https://beam.apache.org/documentation/sdks/javadoc/2.0.0/org/apache/beam/runners/flink/DefaultParallelismFactory.html
This should probably be fixed in the Beam Flink runner, but as a workaround you can try setting the parallelism to -1 programmatically. This should make the translation pick up the parallelism that is specified when submitting the job.
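In code, the workaround would look something like this (a sketch using the standard Beam options factory):

FlinkPipelineOptions options =
        PipelineOptionsFactory.fromArgs(args).as(FlinkPipelineOptions.class);
options.setParallelism(-1);  // -1 defers to the parallelism given at submission time
Pipeline pipeline = Pipeline.create(options);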

How to install Flink on Mesos cluster without DC/OS?

I am a newbie with Apache Flink, and our team is trying to set up an Apache Flink cluster on Apache Mesos. We have already installed Apache Mesos & Marathon with 3 master nodes and 3 slaves, and now we are trying to install Apache Flink without DC/OS, as described here: https://ci.apache.org/projects/flink/flink-docs-release-1.3/setup/mesos.html#mesos-without-dcos.
I have a couple of questions here:
Do we need to download Flink on all the nodes (master and slaves) and configure mesos.master on all nodes?
Or shall we download Flink on only one master node and configure mesos.master there?
If Flink needs to be downloaded on all the nodes, what should be the location of the Flink directory, or is there a script where I can specify that?
Is running "mesos-appmaster.sh" on the master node also responsible for running Flink libraries and classes on the slaves?
Thanks
Do we need to download Flink on all the nodes (master and slaves) and configure mesos.master on all nodes?
No, you don't. Actually, it depends on the way you want to run Flink. In your setup, the most convenient way to run Flink would be to run it with Marathon and download the binaries during deployment. See this.
Or shall we download Flink on only one master node and configure mesos.master there?
It's up to you. You can run Flink on a dedicated server or let Marathon do it for you. If you already have Marathon, then it's easier to run Flink with Marathon. On the other hand, for debugging purposes and a proof of concept I'd recommend the standalone version, where you can quickly change the configuration on a local machine and see how it works. Creating Docker images or binaries, publishing them in a repository, and finally deploying Flink on Marathon adds overhead that will slow you down during development, but it will keep you safe in production. Flink does not come with support for High Availability (HA), so Marathon is required to provide basic HA support (launching a new instance of Flink when an agent crashes).
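For reference, a Marathon app definition for the Flink master could look roughly like this (a sketch in the spirit of the Flink 1.3 Mesos docs; the install path, ZooKeeper address, and memory values are placeholders):

{
  "id": "flink",
  "cmd": "/opt/flink/bin/mesos-appmaster.sh -Dmesos.master=zk://zk1:2181/mesos -Djobmanager.heap.mb=1024 -Dtaskmanager.heap.mb=1024",
  "cpus": 1.0,
  "mem": 1280,
  "instances": 1
}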
If Flink needs to be downloaded on all the nodes, what should be the location of the Flink directory, or is there a script where I can specify that?
Flink does not have to be downloaded on all nodes. It can be downloaded when needed at deployment time.
Is running "mesos-appmaster.sh" on master node also responsible for running flink libraries and classes on slaves?
Flink is a scheduler which means that it should start tasks and executors on Mesos when needed.
Even when not using DC/OS, feel free to look at the Apache Flink DC/OS package. At its core, it is a Marathon app definition you can deploy on pure Marathon/Mesos. The Flink package (as of today) does not require any DC/OS-specific features.
The DC/OS example might also provide useful information.

Flink dynamic scaling

I am currently studying scalability in Flink. Starting from version 1.2.0, dynamic rescaling has been available. I am looking at scaling a long-running job which reads data from a Kafka source.
My questions regarding dynamic rescaling:
To scale out my Flink application, for example by adding new task managers, must I restart the job / YARN session to use the newly added resources?
I think it's possible to write a YARN client that deploys new task managers and makes them talk to the job manager; is that already available in the existing Flink YARN client application?
Pardon me if these questions are too basic. I did go through the documents, but I have to admit I have not been able to put the concepts together with some test deployments on YARN recently.
Currently, dynamic rescaling means the capability to update an operator's parallelism (Flink 1.2), either for keyed state or for non-keyed state.
To scale out my Flink application, for example by adding new task managers, must I restart the job / YARN session to use the newly added resources? - Yes, the job has to be stopped first, the parallelism updated, and the job restarted. You do not have to worry about the state; Flink will handle it, including repartitioning.
I think it's possible to write a YARN client that deploys new task managers and makes them talk to the job manager; is that already available in the existing Flink YARN client application? - No, you cannot; this feature may be added in the future, but currently it is not possible.
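In Flink 1.2 terms, that stop-update-restart cycle looks roughly like this (a sketch; the job id, savepoint path, and numbers are placeholders):

./bin/flink savepoint <job-id>
./bin/flink cancel <job-id>
./bin/yarn-session.sh -n 4 -s 2     # Flink 1.2 still had -n for the number of task managers
./bin/flink run -s <savepoint-path> -p 8 my-job.jar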
