How to balance Flink's TaskManager and slot counts on YARN

I use Flink on YARN in per-job mode. The YARN cluster has 500 vcores and 2000 GB of RAM, and the Flink app has a large state.
I would like to know how I should set the slot count: a large slot count with fewer TaskManagers, or a small slot count with more TaskManagers?
For example:
with 2 slots per TaskManager, YARN will run 250 TaskManagers;
with 50 slots per TaskManager, YARN will run 10 TaskManagers.
Which one will have better performance?

It depends. In part it depends on which state backend you are using, and on what "better performance" means for your application. Whether you are running batch or streaming workloads also makes a difference, and the job's topology can also be a factor.
If you are using RocksDB as the state backend, then having fewer, larger task managers is probably the way to go. With state on the heap, larger task managers are more likely to disrupt processing with significant GC pauses, which argues for having more, smaller TMs. But this mostly impacts worst-case latency for streaming jobs, so if you are running batch jobs, or only care about streaming throughput, then this might not be worth considering.
Communication between slots in the same TM can be optimized, but this isn't a factor if your job doesn't do any inter-slot communication.
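For instance, if you take the fewer-and-larger route with RocksDB, the per-TM settings might look roughly like the sketch below (the slot count and memory figures are purely illustrative, use key names from recent Flink versions, and would have to fit within your YARN container limits):
state.backend: rocksdb
taskmanager.numberOfTaskSlots: 10        # e.g. 50 TMs x 10 slots = 500 slots, matching the 500 vcores
taskmanager.memory.process.size: 40960m  # roughly 2000 GB spread across 50 TMs
With the heap-based state backend you would instead shrink both numbers (e.g. 2 slots and a few GB per TM) to keep GC pauses manageable.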

Related

How to scale up Flink job in AWS EMR

I have a Flink (version 1.8) job that runs in AWS EMR, and it's currently sitting on an m5.xlarge for both the job manager and the task managers. There is 1 job manager and 4 task managers. An m5.xlarge has 4 vCPUs and 16 GB RAM.
When the yarn session is created, I pass in these parameters: -n 4 -s 4 -jm 768 -tm 103144.
The worker nodes are set to a parallelism of 16.
Currently, the Flink job is running a little slow, so I want to make it faster. I was trying different configurations with an m5.2xlarge (8 vCPUs and 32 GB RAM), but I am getting issues when deploying. I assume it's because I don't have the right numbers to correctly use the new instance type. I tried playing around with the number of slots, the jm/tm memory allocation, and the parallelism numbers, but can't quite get it right. How would I adjust my Flink job parameters if I were to double the amount of resources it has?
I'd have to say "it depends". You'll want to double the parallelism. By default I would do this by doubling the number of task managers and configuring them the same as the existing TMs. But in some cases it might be better to double the slots per TM and give the TMs more memory.
At the scale you are running at, I wouldn't expect it to make much difference; either approach should work fine. At larger scale I would lean toward switching to RocksDB (if you aren't already using it), and running fewer, larger TMs. If you need to use the heap-based state backend, you're probably better off with more, smaller TMs.
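To make the first option concrete, keeping the per-TM sizing from the question (the figures are illustrative, not a recommendation):
-n 8 -s 4 -jm 768 -tm 103144   # 8 TMs x 4 slots = 32 slots
and then run the job with a parallelism of 32.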

Ideal Number of Task Slots

We have developed a Flink application on v1.13.0 and deployed it on Kubernetes, where each Task Manager instance runs in a Kubernetes pod. I am not sure how to determine the ideal number of task slots on each Task Manager instance. Should we configure one task slot per Task Manager/pod, two slots, or more? We currently configured two task slots per Task Manager instance and are wondering if that is the right choice. What are the pros and cons of running one task slot vs. two or more slots on a Task Manager/pod?
As a general rule, for containerized deployments (like yours), one slot per TM is a good default starting point. This tends to keep the configuration as straightforward as possible.
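A minimal sketch of that default, assuming the standard config key and a pod whose CPU/memory request is sized for a single slot:
taskmanager.numberOfTaskSlots: 1   # one slot per TM/pod, so the pod's CPU and memory map 1:1 to the slot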
It depends on your expected workload, input rate, and state size.
Is it a batch or a stream?
Batch: is the workload fast enough?
Stream: is the workload backpressuring?
For these throughput limitations, you might want to increase the number of TMs.
State size: how are you processing your data? Does it require a lot of state?
For example, this query:
SELECT
  user_id,
  count(*)
FROM user_logins
GROUP BY user_id
will need state proportional to the number of distinct users.
You can tune the TM memory in the configuration options.
Here is a useful link: https://www.ververica.com/blog/how-to-size-your-apache-flink-cluster-general-guidelines
Concurrent jobs: is this machine under-used, and do you need to keep a pool of unused task slots ready to execute a job?
A TM's memory will be sliced between its task slots (be sure each slice fits your state size), but the CPU is shared between slots (see the sketch below).
Other than that, if things are going fine with one TM on one pod, then there is nothing more to do.
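As a rough illustration of that slicing (key names as in recent Flink versions; the figures are arbitrary):
taskmanager.numberOfTaskSlots: 2
taskmanager.memory.process.size: 4096m   # managed memory within this budget is split evenly across the 2 slots; the CPUs are shared by both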

Checkpointing issues in Flink 1.10.1 using RocksDB state backend

We are experiencing a very difficult-to-observe problem with our Flink job.
The Job is reasonably simple, it:
Reads messages from Kinesis using the Flink Kinesis connector
Keys the messages and distributes them to ~30 different CEP operators, plus a couple of custom WindowFunctions
The messages emitted from the CEP/window operators are forwarded to a SinkFunction that writes messages to SQS
We are running Flink 1.10.1 on Fargate, using 2 containers with 4 vCPUs/8 GB. We are using the RocksDB state backend with the following configuration:
state.backend: rocksdb
state.backend.async: true
state.backend.incremental: false
state.backend.rocksdb.localdir: /opt/flink/rocksdb
state.backend.rocksdb.ttl.compaction.filter.enabled: true
state.backend.rocksdb.files.open: 130048
The job runs with a parallelism of 8.
When the job starts from cold, it uses very little CPU and checkpoints complete in 2 sec. Over time, the checkpoint sizes increase, but the times are still a very reasonable couple of seconds.
During this time we can observe the CPU usage of our TaskManagers gently growing for some reason.
Eventually, the checkpoint time will start spiking to a few minutes, and then will just start repeatedly timing out (10 minutes). At this time:
Checkpoint size (when it does complete) is around 60MB
CPU usage is high, but not 100% (usually around 60-80%)
Looking at in-progress checkpoints, usually 95%+ of operators complete the checkpoint within 30 seconds, but a handful will just stick and never complete. The SQS sink is always among these, but that SinkFunction is not rich and has no state.
Using the backpressure monitor on these operators reports a HIGH backpressure
Eventually this situation resolves itself in one of two ways:
Enough checkpoints fail that the failed-checkpoint proportion threshold is reached and the job fails
The checkpoints eventually start succeeding, but never go back down to the 5-10s they take initially (when the state size is more like 30MB vs. 60MB)
We are really at a loss as to how to debug this. Our state seems very small compared to the kind of state you see in some questions on here. Our volumes are also pretty low; we are very often under 100 records/sec.
We'd really appreciate any input on areas we could look into to debug this.
Thanks,
A few points:
It's not unusual for state to gradually grow over time. Perhaps your key space is growing, and you are keeping some state for each key. If you are relying on state TTL to expire stale state, perhaps it is not configured in a way that allows it to clean up expired state as quickly as you would expect. It's also relatively easy to inadvertently create CEP patterns that need to keep some state for a very long time before certain possible matches can be ruled out.
A good next step would be to identify the cause of the backpressure. The most common cause is that a job doesn't have adequate resources. Most jobs gradually come to need more resources over time, as the number of users (for example) being managed rises. For example, you might need to increase the parallelism, or give the instances more memory, or increase the capacity of the sink(s) (or the speed of the network to the sink(s)), or give RocksDB faster disks.
Besides being inadequately provisioned, other causes of backpressure include:
blocking i/o is being done in a user function
a large number of timers are firing simultaneously
event time skew between different sources is causing large amounts of state to be buffered
data skew (a hot key) is overwhelming one subtask or slot
lengthy GC pauses
contention for critical resources (e.g., using a NAS as the local disk for RocksDB)
Enabling RocksDB native metrics might provide some insight.
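For example, a few of the native metrics can be switched on in the configuration (they are off by default because they add some overhead; this particular selection is just illustrative):
state.backend.rocksdb.metrics.estimate-live-data-size: true
state.backend.rocksdb.metrics.estimate-num-keys: true
state.backend.rocksdb.metrics.mem-table-flush-pending: true
state.backend.rocksdb.metrics.num-running-compactions: true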
Add this property to your configuration:
state.backend.rocksdb.checkpoint.transfer.thread.num: {threadNumberAccordingYourProjectSize}
If you do not add this, it defaults to 1.
Link: https://github.com/apache/flink/blob/master/flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/contrib/streaming/state/RocksDBOptions.java#L62

Apache Flink Resource Planning best practices

I'm looking for recommendations/best practices in determining required optimal resources for deploying a streaming job on Flink Cluster.
Resources are
No. of task slots per TaskManager
Optimal Memory allocation for TaskManager
Max Parallelism
This blog post gives some ideas on how to size. It's meant for moving a Flink application under development to production.
I'm not aware of a resource that helps with sizing before that, as the topology of the job has a tremendous impact. So you'd usually start with a PoC and a low data volume, and then extrapolate your findings.
Memory settings are described in the Flink docs. I'd make sure to use the page appropriate for your Flink version, as the memory model changed recently.
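For Flink 1.10 and later, a minimal starting point might look like this (the figures are only placeholders to tune from):
taskmanager.memory.process.size: 4g       # total memory of the TM process, including JVM overhead
taskmanager.memory.managed.fraction: 0.4  # share of Flink memory reserved as managed memory (used by RocksDB); 0.4 is the default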
Number of task slots per Task Manager
One slot per TM is a rough rule of thumb as a starting point, but you probably want to keep the number of TMs under 100 or so. This is because the Checkpoint Coordinator will eventually struggle if it has to manage too many distinct TMs. Running with lots of slots per TM works better with RocksDB than with the heap-based state backends, because with RocksDB the state is off-heap -- with state on the heap, running with lots of slots increases the likelihood of significant GC pauses.
Max Parallelism
The default is 128. Changing this parameter is painful, as it is baked into each checkpoint and savepoint. But making it larger than necessary comes with some cost (in memory/performance). Make it large enough that you will never have to change it, but no larger.
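For example, pick a value with many divisors and set it once, up front. In recent Flink versions this can be done in the configuration (older versions set it via setMaxParallelism on the execution environment); the value here is only illustrative:
pipeline.max-parallelism: 720   # 720 divides evenly into many possible parallelisms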

Task distribution in Apache Flink

Consider a Flink cluster with several nodes, where each node has a multi-core processor. If we configure the number of slots based on the number of cores, with an equal share of memory per slot, how does Apache Flink distribute the tasks between the nodes and the free slots? Are they treated fairly?
Is there any way to make/configure Flink to treat the slots equally when the task slots are configured based on the number of cores available on a node?
For instance, assume that we partition the data equally and run the same task over the partitions. Flink uses all the slots from some nodes while, at the same time, other nodes are totally free. The node with fewer CPU cores involved outputs its result much faster than the node with more CPU cores involved in the process. Apart from that, the speedup is not proportional to the number of cores used in each node. In other words, if one core is occupied on one node and two cores are occupied on another, and each core is fairly treated as a slot, then each slot should produce the result for the same task in roughly the same amount of time, irrespective of which node it belongs to. But this is not the case here.
With this assumption, I would say that the nodes are not treated equally. This in turn produces result times that are not proportional to the number of nodes available. We cannot say that increasing the number of slots necessarily decreases the time cost.
I would appreciate any comment from the Apache Flink Community!!
Flink's default strategy as of version >= 1.5 considers every slot to be the same, resource-wise. Under this assumption it should not matter, with respect to resources, where you place the tasks, since all slots should be the same. Given this, the main objective when placing tasks is to colocate them with their inputs in order to minimize network I/O.
If we are now in a standalone setup where we have a fixed number of TaskManagers running, Flink will pick slots in an arbitrary fashion (no guarantee given) for the sources and then colocate their consumers in the same slots if possible.
When running Flink on Yarn or Mesos where Flink can start new TaskManagers, Flink will first use up all slots of an existing TaskManager before it requests a new one. In this case, you will see that all sources will end up on as few TaskManagers as possible.
Since CPUs are not isolated per slot (they are a shared resource), the above-mentioned assumption does not hold true in all cases. Hence, in some cases where you have a fixed set of TaskManagers, it is actually beneficial to spread the tasks out as much as possible to make use of the shared CPU resources.
In order to support this kind of scheduling strategy, the Flink community added the task spread-out strategy via FLINK-12122. To use a scheduling strategy that is closer to the pre-FLIP-6 behaviour, where Flink tries to spread the workload across all available TaskExecutors, set cluster.evenly-spread-out-slots: true in flink-conf.yaml.
Very old thread, but there is a newer thread that answers this question for current versions.
With Flink 1.5 we added resource elasticity. This means that Flink is now able to allocate new containers on a cluster management framework like YARN or Mesos. Due to these changes (which also apply to standalone mode), Flink no longer reasons about a fixed set of TaskManagers, because if needed it will start new containers (this does not work in standalone mode). Therefore, it is hard for the system to make any decisions about spreading the slots belonging to a single job across multiple TMs. It gets even harder when you consider that some jobs, like yours, might benefit from such a strategy, whereas others would benefit from co-locating their slots. It gets even more complicated if you want to do scheduling with respect to multiple jobs, which the system does not have full knowledge about because they are submitted sequentially. Therefore, Flink currently assumes that slot requests can be fulfilled by any TaskManager.
