We have a Flink streaming job on v1.14 running in native Kubernetes deployment mode. When we use AT_LEAST_ONCE checkpoint mode, managed memory usage hits 100% no matter how much memory we assign to it. Any ideas what might be the cause, or is this actually expected behavior given how Flink manages memory?
Related
I am trying to understand what is going on in a basic Flink streaming job.
The job reads strings from Kafka, performs some string operations in a flatMap, and writes the messages back to Kafka as strings. No windows or state.
1 Kafka source and 8 Kafka sinks.
The message rate is about 25K/second.
When I check the TaskManager in the Flink UI, I see that heap memory usage sometimes goes up to 10GB; watching it over time, it fluctuates between roughly 3GB and 10GB.
If I configure the heap to 500MB, the job seems to work without any problem, and heap usage stays around 200-400MB.
If I configure the heap to 10GB, the TaskManager's heap usage metric in the Flink UI varies between 3GB and 10GB.
I took a heap dump of the TaskManager and analyzed it with VisualVM. It shows only 23MB of heap usage. Why?
I enabled GC logging; here is a sample:
Memory usage stats: [HEAP: 3851/9208/9208 MB, NON HEAP: 84/89/744 MB (used/committed/max)]
So where are these 5-6GB going? Why is Flink using this much memory for such a basic streaming job?
The real problem is that when I deploy the other jobs, the TaskManager crashes after a while without any trace, so I assume I am hitting an OOM error.
I am new to Flink and I deployed my Flink application, which basically performs simple pattern matching. It runs in a Kubernetes cluster with 1 JobManager and 6 TaskManagers. I am sending messages of size 4.4KB, 200K messages every 10 minutes, to an Event Hub topic and performing load testing. I added a restart strategy and checkpointing as shown below, and I am not explicitly using any state in my code, as there is no requirement for it.
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
// start a checkpoint every 1000 ms
env.enableCheckpointing(interval, CheckpointingMode.EXACTLY_ONCE);
// advanced options:
// make sure 500 ms of progress happen between checkpoints
env.getCheckpointConfig().setMinPauseBetweenCheckpoints(1000);
// checkpoints have to complete within one minute, or are discarded
env.getCheckpointConfig().setCheckpointTimeout(120000);
// allow only one checkpoint to be in progress at the same time
env.getCheckpointConfig().setMaxConcurrentCheckpoints(1);
// enable externalized checkpoints which are retained after job cancellation
env.getCheckpointConfig().enableExternalizedCheckpoints(CheckpointConfig.ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION);
// allow job recovery fallback to checkpoint when there is a more recent savepoint
env.getCheckpointConfig().setPreferCheckpointForRecovery(true);
env.setRestartStrategy(RestartStrategies.fixedDelayRestart(
        5, // number of restart attempts
        Time.of(5, TimeUnit.MINUTES) // delay
));
Initially I was facing a Netty server issue with network buffers. I followed this link, https://ci.apache.org/projects/flink/flink-docs-release-1.11/ops/config.html#taskmanager-network-memory-floating-buffers-per-gate, for Flink network and heap memory optimizations, applied the settings below, and everything worked fine:
taskmanager.network.memory.min: 256mb
taskmanager.network.memory.max: 1024mb
taskmanager.network.memory.buffers-per-channel: 8
taskmanager.memory.segment-size: 2mb
taskmanager.network.memory.floating-buffers-per-gate: 16
cluster.evenly-spread-out-slots: true
taskmanager.heap.size: 1024m
taskmanager.memory.framework.heap.size: 64mb
taskmanager.memory.managed.fraction: 0.7
taskmanager.memory.framework.off-heap.size: 64mb
taskmanager.memory.network.fraction: 0.4
taskmanager.memory.jvm-overhead.min: 256mb
taskmanager.memory.jvm-overhead.max: 1gb
taskmanager.memory.jvm-overhead.fraction: 0.4
But I have a few questions:
If a TaskManager restarts because of a failure, it restarts successfully and registers with the JobManager, but afterwards the restarted TaskManager does not process any data; it just sits idle. Is this normal Flink behavior, or do I need to add a setting to make the TaskManager start processing again?
Sorry, and correct me if my understanding is wrong: Flink has a restart strategy, and in my code I set a limit of 5 restart attempts. What happens if my Flink job does not successfully recover from the task failure? Will the entire Flink job remain in an idle state and I have to restart it manually, or is there a mechanism I can add to restart my job even after it has exceeded the restart attempt limit?
Is there any documentation on how to calculate the number of cores and the amount of memory I should assign to a Flink job cluster, based on the data size and the rate at which my system receives data?
Is there any documentation on Flink CEP optimization techniques?
This is the error stack trace I am seeing in the JobManager logs before the pattern matching:
Caused by: org.apache.flink.runtime.io.network.netty.exception.RemoteTransportException: Connection unexpectedly closed by remote task manager '/10.244.9.163:46377'. This might indicate that the remote task manager was lost.
at org.apache.flink.runtime.io.network.netty.CreditBasedPartitionRequestClientHandler.channelInactive(CreditBasedPartitionRequestClientHandler.java:136)
at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:257)
at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:243)
at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.fireChannelInactive(AbstractChannelHandlerContext.java:236)
at org.apache.flink.shaded.netty4.io.netty.handler.codec.ByteToMessageDecoder.channelInputClosed(ByteToMessageDecoder.java:393)
at org.apache.flink.shaded.netty4.io.netty.handler.codec.ByteToMessageDecoder.channelInactive(ByteToMessageDecoder.java:358)
at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:257)
at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:243)
at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.fireChannelInactive(AbstractChannelHandlerContext.java:236)
at org.apache.flink.shaded.netty4.io.netty.channel.DefaultChannelPipeline$HeadContext.channelInactive(DefaultChannelPipeline.java:1416)
at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:257)
at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:243)
at org.apache.flink.shaded.netty4.io.netty.channel.DefaultChannelPipeline.fireChannelInactive(DefaultChannelPipeline.java:912)
at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannel$AbstractUnsafe$8.run(AbstractChannel.java:816)
at org.apache.flink.shaded.netty4.io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
at org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:416)
at org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:515)
at org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:918)
at org.apache.flink.shaded.netty4.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.lang.Thread.run(Thread.java:748)
Thanks in advance; please help me resolve my doubts.
Various points:
If your patterns involve matching temporal sequences (e.g., "A followed by B"), then you need state to do this. Most of Flink's sources and sinks also use state internally to record offsets, etc., and this state needs to be checkpointed if you care about exactly-once guarantees. If the patterns are being streamed in dynamically, then you'll want to store the patterns in Flink state as well.
Some of the comments in the code don't match the configuration parameters: e.g., "500 ms of progress" vs. 1000, "checkpoints have to complete within one minute" vs 120000. Also, keep in mind that the section of the documentation that you copied these settings from is not recommending best practices, but is instead illustrating how to make changes. In particular, env.getCheckpointConfig().setPreferCheckpointForRecovery(true); is a bad idea, and that config option should probably not exist.
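For illustration only, here is a minimal sketch of the same checkpoint setup with the comments and values brought into agreement, and without setPreferCheckpointForRecovery (the interval, pause, and timeout values are placeholders, not recommendations):
import org.apache.flink.streaming.api.CheckpointingMode;
import org.apache.flink.streaming.api.environment.CheckpointConfig;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
// start a checkpoint every 60 seconds
env.enableCheckpointing(60_000, CheckpointingMode.EXACTLY_ONCE);
// make sure at least 30 seconds of progress happen between checkpoints
env.getCheckpointConfig().setMinPauseBetweenCheckpoints(30_000);
// checkpoints have to complete within two minutes, or they are discarded
env.getCheckpointConfig().setCheckpointTimeout(120_000);
// allow only one checkpoint to be in progress at the same time
env.getCheckpointConfig().setMaxConcurrentCheckpoints(1);
// retain externalized checkpoints after job cancellation
env.getCheckpointConfig().enableExternalizedCheckpoints(
        CheckpointConfig.ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION);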
Some of your entries in flink-conf.yaml are concerning. taskmanager.memory.managed.fraction is rather large (0.7) -- this only makes sense if you are using RocksDB, since managed memory has no other purpose for streaming. And taskmanager.memory.network.fraction and taskmanager.memory.jvm-overhead.fraction are both very large, and the sum of these three fractions is 1.5, which doesn't make sense.
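As a hedged sketch only (the process size is a placeholder), a flink-conf.yaml fragment closer to what a streaming job without RocksDB would typically need, leaving the network and JVM-overhead fractions at their defaults:
taskmanager.memory.process.size: 4096m
# no RocksDB state backend, so managed memory can be kept small (the default fraction is 0.4)
taskmanager.memory.managed.fraction: 0.1
# taskmanager.memory.network.fraction and taskmanager.memory.jvm-overhead.fraction left at their 0.1 defaults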
In general the default network configuration works well across a wide range of deployment scenarios, and it is unusual to need to tune these settings, except in large clusters (which is not the case here). What sort of problems did you encounter?
As for your questions:
After a TM failure and recovery, the TMs should automatically resume processing from the most recent checkpoint. To diagnose why this isn't happening, we'll need more information. To gain experience with a deployment that handles this correctly, you can experiment with the Flink Operations Playground.
Once the configured restart strategy has played itself out, the job will FAIL, and Flink will no longer try to recover that job. You can, of course, build your own automation on top of Flink's REST API, if you want something more sophisticated.
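For example, a hedged sketch of such automation: it polls the JobManager's REST endpoint /jobs/overview (the JobManager address and port are assumptions for this sketch) and flags jobs that have reached the FAILED state, at which point your own tooling could resubmit them or raise an alert:
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class JobWatchdog {
    public static void main(String[] args) throws Exception {
        // The JobManager address below is an assumption for this sketch.
        URL url = new URL("http://flink-jobmanager:8081/jobs/overview");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        StringBuilder body = new StringBuilder();
        try (BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                body.append(line);
            }
        }
        // Crude check; a real implementation would parse the JSON and inspect each job's state field.
        if (body.toString().contains("FAILED")) {
            System.out.println("A job has reached the FAILED state -- resubmit or alert here.");
        }
    }
}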
Documentation on capacity planning? No, not really. This is generally figured out through trial and error. Different applications tend to have different requirements in ways that are difficult to anticipate. Things like your choice of serializer, state backend, number of keyBys, the sources and sinks, key skew, watermarking, and so on can all have significant impacts.
Documentation on optimizing CEP? No, sorry. The main points are:
do everything you can to constrain the matches; avoid patterns that must keep state indefinitely (see the sketch after these points)
getEventsForPattern can be expensive
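To illustrate constraining matches, here is a rough sketch of a pattern bounded by a time window so that partial matches cannot accumulate state indefinitely (the Event type and its getType() accessor are made up for this example):
import org.apache.flink.cep.pattern.Pattern;
import org.apache.flink.cep.pattern.conditions.SimpleCondition;
import org.apache.flink.streaming.api.windowing.time.Time;

Pattern<Event, ?> constrained =
        Pattern.<Event>begin("first")
                .where(new SimpleCondition<Event>() {
                    @Override
                    public boolean filter(Event e) {
                        return "A".equals(e.getType());
                    }
                })
                .followedBy("second")
                .where(new SimpleCondition<Event>() {
                    @Override
                    public boolean filter(Event e) {
                        return "B".equals(e.getType());
                    }
                })
                // without this, partial matches waiting for "B" would be kept forever
                .within(Time.minutes(10));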
I have a job which consumes from RabbitMQ. I was using the FsStateBackend, but the state sizes grew, so I decided to move my state to RocksDB.
The issue is that the job runs fine for the first hours, even longer if traffic slows down, but when traffic gets high again the consumer starts having issues (events pile up as unacked) and these issues are reflected in the rest of the app.
I have:
4 CPU core
Local disk
16GB RAM
Unix environment
Flink 1.11
Scala version 2.11
1 single job running with a few keyed streams, around 10 transformations, and a sink to Postgres
Some configurations:
flink.buffer_timeout=50
flink.maxparallelism=4
flink.memory=16
flink.cpu.cores=4
#checkpoints
flink.checkpointing_compression=true
flink.checkpointing_min_pause=30000
flink.checkpointing_timeout=120000
flink.checkpointing_enabled=true
flink.checkpointing_time=60000
flink.max_current_checkpoint=1
#RocksDB configuration
state.backend.rocksdb.localdir=home/username/checkpoints (this is not working, I don't know why)
state.backend.rocksdb.thread.numfactory=4
state.backend.rocksdb.block.blocksize=16kb
state.backend.rocksdb.block.cache-size=512mb
#rocksdb or heap
state.backend.rocksdb.timer-service.factory=heap (I have tested with rocksdb too and it is the same)
state.backend.rocksdb.predefined-options=SPINNING_DISK_OPTIMIZED
Let me know if more information is needed.
state.backend.rocksdb.localdir should be an absolute path, not a relative one. And this setting isn't for specifying where checkpoints go (which shouldn't be on the local disk), this setting is for specifying where the working state is kept (which should be on the local disk).
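For example (the paths and bucket are placeholders), the two concerns would be configured separately, roughly like this:
# where RocksDB keeps its working state -- fast local disk, absolute path
state.backend: rocksdb
state.backend.rocksdb.localdir: /home/username/rocksdb-working-dir
# where checkpoints are written -- durable, shared storage, not the local disk
state.checkpoints.dir: s3://my-bucket/flink-checkpoints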
Your job is experiencing backpressure, meaning that some part of the pipeline can't keep up. The most common causes of backpressure are (1) sinks that can't keep up, and (2) inadequate resources (e.g., the parallelism is too low).
You can test if postgres is the problem by running the job with a discarding sink.
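A minimal sketch of that test, assuming resultStream is the stream that currently feeds the Postgres sink:
import org.apache.flink.streaming.api.functions.sink.DiscardingSink;

// temporarily replace the Postgres sink; if backpressure disappears, the sink was the bottleneck
resultStream.addSink(new DiscardingSink<>()).name("discarding-sink-test");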
Looking at various metrics should give you an idea of what resources might be under-provisioned.
I was going through a lot of blogs and Stack Overflow answers, but I am not clear about Flink memory management. In a few blogs I found "Memory Manager Pool" and "RocksDB". I am using RocksDB and I assume all my state is stored in that DB.
Here are my doubts:
How is memory management handled in streaming?
What is the difference between memory management in streaming and batch?
What is the difference between the "Memory Manager Pool" and the state backend (RocksDB)?
In streaming, what is meant by "Flink managed memory"? Does it include the memory required by the RocksDB cache and buffers?
Streaming
When you use the RocksDBStateBackend, all keyed state (ValueState, MapState, ..., and timers) is stored in RocksDB. Operator state is kept on the heap. Operator state is usually very small and seldom used directly by a Flink developer.
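For example (a minimal sketch with made-up names), keyed state such as a ValueState declared in a KeyedProcessFunction is what ends up in RocksDB when the RocksDB state backend is configured:
import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.KeyedProcessFunction;
import org.apache.flink.util.Collector;

// Counts events per key; this ValueState lives in RocksDB when the RocksDBStateBackend is used.
public class CountPerKey extends KeyedProcessFunction<String, String, Long> {
    private transient ValueState<Long> count;

    @Override
    public void open(Configuration parameters) {
        count = getRuntimeContext().getState(
                new ValueStateDescriptor<>("count", Long.class));
    }

    @Override
    public void processElement(String value, Context ctx, Collector<Long> out) throws Exception {
        Long current = count.value();
        long updated = (current == null) ? 1L : current + 1L;
        count.update(updated);
        out.collect(updated);
    }
}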
For Flink 1.10+, managed memory includes all memory used by RocksDB. Flink makes sure that RocksDB's memory usage stays within the limits of the assigned managed memory. Use taskmanager.memory.managed.fraction to tune how much memory you give to RocksDB. Usually, you can give all memory but 500MB to RocksDB.
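Purely as a sketch (the sizes are placeholders), raising the managed fraction gives RocksDB a correspondingly larger slice of the TaskManager's memory:
taskmanager.memory.process.size: 4g
# fraction of total Flink memory reserved as managed memory (used by RocksDB); the default is 0.4
taskmanager.memory.managed.fraction: 0.6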
Batch
Batch Programs do not use a Statebackend. Managed memory is used for off-heap joins, sorting, etc. The memory configurations like taskmanager.memory.managed.fraction are the same for batch and streaming.
As per the Flink documentation, memory management in streaming and batch is handled differently.
I was running the Flink 1.8 WordCount example job on Kubernetes and noticed a behavior: sometimes a TaskManager pod gets OOMKilled and restarted (that is not a concern for now), but the whole job fails, and the JobManager log shows "The assigned slot XXX was removed."
My question is, why does the whole job fail? Is there a way that I can configure Flink to make the job more tolerant to transient TaskManager failures?
Apache Flink's fault tolerance mechanism is based on periodic checkpoints and can guarantee exactly-once state consistency, i.e., after recovering from a failure, the state is consistent and the same as if the failure never happened (assuming deterministic application logic of course).
In order to achieve this, Flink takes consistent snapshots of the application's state (so-called checkpoints) at regular intervals. In case of a failure, the whole application is reset to the latest completed checkpoint. For that, Flink (until Flink 1.8) always restarts the whole application. A failure is any reason that terminates a worker process, including application failure, JVM OOM, killed container, hardware failure, etc.
In Flink 1.9 (released a week ago, see the announcement), Flink adds so-called failover regions (see here), which can reduce the number of restarted tasks. For continuous streaming applications, this only applies if the application does not have a shuffle (keyBy, broadcast, partition, ...) operation. In that case, only the affected pipeline is restarted and all other pipelines continue processing data.
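For reference, the failover strategy and restart strategy can also be set in flink-conf.yaml; a hedged example (the attempt count and delay are just placeholders, and region-based failover is the default in later releases):
jobmanager.execution.failover-strategy: region
restart-strategy: fixed-delay
restart-strategy.fixed-delay.attempts: 5
restart-strategy.fixed-delay.delay: 30 s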
When running Flink jobs, you should do capacity planning beforehand; otherwise you will hit OOM problems frequently. In a Kubernetes environment, you should estimate how much memory your job will use and set resources.limits.memory of your deployment higher than that, as well as resources.requests.memory. If resources.requests.memory is much lower than what your job actually uses, your Pod will fall into the Evicted state, which will cause your job to fail as well.
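As a hedged illustration (the sizes are placeholders and should come from your own measurements), the relevant part of a TaskManager deployment might look roughly like this, with the limit at or above what the job actually needs:
resources:
  requests:
    memory: "4Gi"
    cpu: "1"
  limits:
    memory: "4Gi"
    cpu: "2"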
A container in a Pod may fail for a number of reasons, for example the process in it exited with a non-zero exit code, or the container was killed for exceeding a memory limit.
You can use the Job specification:
.spec.template.spec.restartPolicy = "OnFailure"
With this setting, the Pod will stay in the system and the container will be re-run.
For more information, also check the official Job documentation: https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/