Running multiple Flink programs in a Flink Standalone Cluster (v1.4.2) - apache-flink

I have a Flink Standalone Cluster based on Flink 1.4.2 (1 job manager, 4 task slots) and want to submit two different Flink programs.
Not sure if this is possible at all, as some Flink mailing-list archives say that a job manager can only run one job. If this is true, any ideas on how I can get around this issue? There is only one machine available for the Flink cluster, and we don't want to use a resource manager such as Mesos or YARN.
Any hints?

Flink jobs (programs) run in task slots, which are located in a task manager. Assuming you have 4 task slots, you can run up to 4 Flink programs. Also, be careful with the parallelism of your Flink jobs: if you have set the parallelism to 2 in both jobs, then 2 is indeed the maximum number of jobs you can run on 4 task slots, because each parallel instance occupies one task slot.
Check this image: https://ci.apache.org/projects/flink/flink-docs-master/fig/slots_parallelism.svg
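The number of available slots comes from the task manager configuration. As a sketch, in flink-conf.yaml (the count of 4 here is chosen to match the cluster described above):

```yaml
# flink-conf.yaml -- number of slots offered by each TaskManager.
# With a single TaskManager, this gives the cluster 4 slots in total,
# enough for e.g. two jobs submitted with parallelism 2 each.
taskmanager.numberOfTaskSlots: 4
```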

Related

Flink on AWS EMR Task Nodes

Is it possible to run Flink task managers on the Task nodes of AWS EMR? If yes, how different is it from running Task Managers on a core node?
Yes, you should be able to run TMs on task nodes. The only difference I'd expect is that EMR won't schedule the Flink Job Manager (JM) on a task node ("Amazon EMR ... allows application master processes to run only on core nodes").
If your workflow has sources that read from HDFS and/or sinks that write to HDFS, then subtasks of these operators running on task nodes might take longer, as task nodes don't run the Hadoop Data Node daemon, and thus all reads/writes are over the network.

How does parallelism work in Apache Flink?

Consider a Flink cluster of 3 nodes: one node for the Job Manager and the other 2 nodes for task managers. Each task manager has 3 task slots. So, when I submit my job with parallelism equal to 2, Flink will assign two task slots. My question is, how will Flink assign these task slots?
Some scenarios:
Does Flink assign one task slot from each task manager?
Is there a possibility that both task slots get assigned from the same task manager? If yes, my job will stop running if that particular node goes down for some reason. How can I avoid downtime in this scenario?
Since Flink 1.10 you can use the configuration setting cluster.evenly-spread-out-slots: true to make the scheduler spread the slots evenly across all available task managers. Otherwise it will use all of the slots from one task manager before taking slots from another.
Yes, both task slots can be assigned from the same task manager, given that each TM has 3 slots. If a node running an active slot goes down, the job will fail and try to restart, at which point all of its slots will be assigned on the only remaining node.
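To enable the even spreading mentioned above, the setting goes into flink-conf.yaml (available since Flink 1.10):

```yaml
# flink-conf.yaml -- spread slot allocation evenly across all
# TaskManagers instead of filling one TaskManager before using the next.
cluster.evenly-spread-out-slots: true
```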

Use of flink/kubernetes to replace etl jobs (on ssis) : one flink cluster per jobtype or create and destroy flink cluster per job execution

I am trying to assess the feasibility of replacing the hundreds of feed-file ETL jobs created as SSIS packages with Apache Flink jobs (with Kubernetes as the underlying infrastructure). One recommendation I saw in an article is to "use one Flink cluster for one type of job".
Since I have a handful of jobs per day of each job type, this suggests the best approach for me is to create a Flink cluster on the fly when executing a job and destroy it afterwards to free up resources. Is that the correct way to do it? I am setting up the Flink cluster without a job manager.
Any suggestions on best practices for using Flink for batch ETL activities?
Perhaps the most important question: is Flink the correct solution for this problem, or should I look more into Talend and other classic ETL tools?
Flink is well suited for running ETL workloads. The two deployment modes give you the following properties:
Session cluster
A session cluster allows you to run several jobs on the same set of resources (TaskExecutors). You start the session cluster before submitting any jobs.
Benefits:
No additional cluster deployment time needed when submitting jobs => Faster job submissions
Better resource utilization if individual jobs don't need many resources
One place to control all your jobs
Downsides:
No strict isolation between jobs
Failures caused by job A can cause job B to restart
Job A runs in the same JVM as job B and hence can influence it if static fields are used
Per-job cluster
A per-job cluster starts a dedicated Flink cluster for every job.
Benefits:
Strict job isolation
More predictable resource consumption since only a single job runs on the TaskExecutors
Downsides:
Cluster deployment time is part of the job submission time, resulting in longer submission times
Not a single cluster which controls all your jobs
Recommendation
So if you have many short-lived ETL jobs that require a fast response, I would suggest using a session cluster, because you avoid the cluster start-up time for every job. If the ETL jobs have a long runtime, this additional time will be negligible, and I would choose the per-job mode, which gives you more predictable runtime behaviour because of strict job isolation.
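As a rough command-line sketch of the two modes (the JAR names and job class are placeholders; a Kubernetes deployment wraps these same entry points in container images):

```shell
# Session mode: start the cluster once, then submit several jobs to it.
bin/start-cluster.sh
bin/flink run etl-job-a.jar
bin/flink run etl-job-b.jar

# Per-job (standalone job) mode: the cluster entry point is started
# with one specific job, and the cluster lives only as long as that job.
bin/standalone-job.sh start --job-classname com.example.EtlJobA
bin/taskmanager.sh start
```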

How can i share state between my flink jobs?

I run multiple jobs from my .jar file. I want to share state between my jobs, but every job consumes all of the inputs (from Kafka) and generates duplicate output.
I see in my Flink panel that 'records sent' is 3 for all of the jobs. I think the records should be split across my jobs.
I create each job with this command
bin/flink run app.jar
How can I fix this?
Because of its focus on scalability and high performance, Flink state is local. Flink doesn't really provide a mechanism for sharing state between jobs.
However, Flink does support splitting up a large job among a fleet of workers. A Flink cluster is able to run a single job in parallel, using the resources of one or many multi-core CPUs. Some Flink jobs are running on thousands of cores, just to give an idea of its scalability.
When used with Kafka, each Kafka partition can be read by a different subtask in Flink, and processed by its own parallel instance of the pipeline.
You might begin by running a single parallel instance of your job via
bin/flink run --parallelism <parallelism> app.jar
For this to succeed, your cluster will have to have at least as many free slots as the parallelism you request. The parallelism should be less than or equal to the number of partitions in the Kafka topic(s) being consumed. The Flink Kafka consumers will coordinate amongst themselves -- with each of them reading from one or more partitions.
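To illustrate why the parallelism should not exceed the partition count, here is a toy sketch (plain Python, not Flink's actual assignment code) of a round-robin distribution of Kafka partitions across parallel subtasks:

```python
def assign_partitions(num_partitions: int, parallelism: int) -> dict:
    """Toy round-robin assignment of Kafka partitions to subtasks.

    This mimics the general idea (each subtask reads one or more
    partitions), not Flink's exact internal assignment logic.
    """
    assignment = {subtask: [] for subtask in range(parallelism)}
    for partition in range(num_partitions):
        assignment[partition % parallelism].append(partition)
    return assignment

# With 4 partitions and parallelism 2, each subtask reads 2 partitions.
print(assign_partitions(4, 2))   # {0: [0, 2], 1: [1, 3]}
# With parallelism 6 and only 4 partitions, two subtasks sit idle.
print(assign_partitions(4, 6))
```

The second call shows the waste: any subtask beyond the partition count receives no partitions and therefore no input.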

In Apache Flink, what is the difference between the Job Manager and the Job Master?

In Apache Flink (e.g. v1.8), what is the difference between the Job Manager and the Job Master?
Job Manager and Job Master seem to be used interchangeably in the logs.
What is the difference between the Job Manager and the Job Master?
Thanks!
The JobManager is mainly a composition of 3 components:
Dispatcher - accepts job submissions and starts a new JobMaster for each submitted job
ResourceManager - allocates the required resources (task slots) for the job
JobMaster - supervises and coordinates the tasks of a single Flink job
So, the JobMaster is part of the JobManager. As per the docs, a single JobManager is assigned to each individual Flink application, which can contain multiple Flink jobs.
For example, a Flink application with 2 jobs will instantiate 1 JobManager but will contain 2 JobMasters.
JobManager and JobMaster have different roles.
For the JobManager, according to the JobManager Data Structures section of the documentation:
During job execution, the JobManager keeps track of distributed tasks, decides when to schedule the next task (or set of tasks), and reacts to finished tasks or execution failures.
The JobManager receives the JobGraph, which is a representation of the data flow consisting of operators (JobVertex) and intermediate results (IntermediateDataSet). Each operator has properties, like the parallelism and the code that it executes. In addition, the JobGraph has a set of attached libraries, that are necessary to execute the code of the operators.
The role of the JobMaster is more limited according to the Javadoc:
JobMaster implementation. The job master is responsible for the execution of a single JobGraph.