I have 32GB of managed memory and 8 task slots.
Since state.backend.rocksdb.memory.managed is set to true, the RocksDB instance in each task slot uses 4GB of managed memory.
Some of my tasks do not require a RocksDB backend, so I want to increase this 4GB to 6GB by setting state.backend.rocksdb.memory.fixed-per-slot: 6000m
The problem is that when I set state.backend.rocksdb.memory.fixed-per-slot: 6000m, I can no longer see the allocated managed memory on the task manager page of the Flink UI.
As you can see, when state.backend.rocksdb.memory.fixed-per-slot is not set and state.backend.rocksdb.memory.managed: true, 4GB of usage appears under managed memory for each running task that uses the RocksDB backend.
But after setting state.backend.rocksdb.memory.fixed-per-slot: 6000m, Managed Memory always shows zero!
1- How can I watch the managed memory allocation after setting state.backend.rocksdb.memory.fixed-per-slot: 6000m?
2- Should state.backend.rocksdb.memory.managed still be set to true even if I set fixed-per-slot?
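For reference, the relevant configuration looks roughly like this (the two taskmanager.* keys below are only meant to illustrate the 32GB of managed memory and 8 slots mentioned above; the exact keys in my setup may differ):
taskmanager.numberOfTaskSlots: 8
taskmanager.memory.managed.size: 32g
state.backend.rocksdb.memory.managed: true
state.backend.rocksdb.memory.fixed-per-slot: 6000m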
Another reply we got from the hive:
"Fixed-per-slot overrides the managed memory setting, so managed showing zero is expected (it's either fixed-per-slot or managed). As Yuval wrote, you can see the memory instances by checking the LRU caches.
One more thing to check is the write_buffer_manager pointer in the RocksDB log file. It will be different for each operator if neither fixed-per-slot nor managed memory is used, and shared between instances otherwise."
Let us know if this is useful
Shared your question with the Speedb hive on Discord and here's the "honey" we got for you:
We don't have much experience with Flink setups regarding how to configure the memory limits and their different parameters. However, RocksDB uses a shared Block Cache to control the memory limits of your state. So for question 1, you could grep "block_cache:" and "capacity :" from all the LOG files of all the DBs (operators). The total memory limit allocated to RocksDB through the block cache would be the sum of the capacities for all the unique pointers; the same block cache (memory) can be shared across DBs.
Do note that RocksDB might use more memory than the block cache capacity.
Hope this helps. If you have follow-up questions or want more help with this, send us a message on Discord.
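In practice, the grep they suggest would look something like this (the path is just a placeholder; it depends on where your TaskManagers keep their RocksDB working directories, e.g. under state.backend.rocksdb.localdir):
grep -E "block_cache:|capacity :" /path/to/rocksdb-working-dirs/*/LOG
Summing the capacity values over the distinct block_cache pointers then gives the total memory RocksDB can use for its block caches.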
How can we control the window in RSS when mapping a large file? Let me explain what I mean.
For example, suppose we have a large file that exceeds RAM several times over, and we mmap it as shared memory into several processes. If we access some object whose virtual address lies in this mapped region, we take a page fault and the page is read from disk. The sub-question is: will the opposite happen once we no longer use that object? If this works like an LRU, what is the size of that LRU and how can we control it? How is the page cache involved in this case?
RSS graph
This is the RSS graph on a testing instance (2 threads, 8 GB RAM) for an 80 GB tar file. Where does this value of 3800 MB come from, and why does it stay stable while I run through the file after it has been mapped? How can I control it (or advise the kernel to control it)?
As long as you're not taking explicit action to lock the pages in memory, they should eventually be swapped back out automatically. The kernel basically uses a memory-pressure heuristic to decide how much physical memory to devote to swapped-in pages, and frequently rebalances as needed.
If you want to take a more active role in controlling this process, have a look at the madvise() system call.
This allows you to tweak the paging algorithm for your mmap, with actions like:
MADV_FREE (since Linux 4.5)
The application no longer requires the pages in the range specified by addr and len. The kernel can thus free these pages, but the freeing could be delayed until memory pressure occurs. ...
MADV_COLD (since Linux 5.4)
Deactivate a given range of pages. This will make the pages a more probable reclaim target should there be a memory pressure.
MADV_SEQUENTIAL
Expect page references in sequential order. (Hence, pages in the given range can be aggressively read ahead, and may be freed soon after they are accessed.)
MADV_WILLNEED
Expect access in the near future. (Hence, it might be a good idea to read some pages ahead.)
MADV_DONTNEED
Do not expect access in the near future. (For the time being, the application is finished with the given range, so the kernel can free resources associated with it.) ...
Issuing a madvise(MADV_SEQUENTIAL) call after creating the mmap might be sufficient to get acceptable behavior. If not, you could also intersperse some MADV_WILLNEED/MADV_DONTNEED access hints (and/or MADV_FREE/MADV_COLD) during the traversal as you pass groups of pages, as in the sketch below.
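A minimal C sketch of that pattern, assuming a sequential pass over a large read-only mapping (the file name and the 64 MB chunk size are placeholders, and error handling is kept short):

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
    int fd = open("big.tar", O_RDONLY);                /* placeholder path */
    struct stat st;
    if (fd < 0 || fstat(fd, &st) < 0) { perror("open/fstat"); return 1; }

    char *base = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
    if (base == MAP_FAILED) { perror("mmap"); return 1; }

    /* Tell the kernel the mapping will be read sequentially, so it can
       read ahead aggressively and drop pages soon after they are used. */
    madvise(base, st.st_size, MADV_SEQUENTIAL);

    const size_t chunk = 64UL * 1024 * 1024;           /* 64 MB window */
    unsigned long sum = 0;

    for (off_t off = 0; off < st.st_size; off += chunk) {
        size_t len = (st.st_size - off < (off_t)chunk)
                         ? (size_t)(st.st_size - off) : chunk;

        for (size_t i = 0; i < len; i++)               /* touch every byte */
            sum += (unsigned char)base[off + i];

        /* Done with this chunk: let the kernel reclaim these pages now
           instead of waiting for memory pressure. */
        madvise(base + off, len, MADV_DONTNEED);
    }

    printf("checksum: %lu\n", sum);
    munmap(base, st.st_size);
    close(fd);
    return 0;
}

On newer kernels you could use MADV_FREE or MADV_COLD instead of MADV_DONTNEED if you prefer a gentler hint that lets the kernel delay the actual reclaim.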
I found that vespa-proton-bin was already using 68GB of memory on my system. I've tried limiting memory at the Docker level and found that it will randomly kill the process, which can be a huge problem.
Is there any setting in Vespa to force vespa-proton-bin to use only a certain amount of memory? Thanks.
Great question!
There is no explicit way to tell Vespa to only use x GB of memory, but by default Vespa will block feeding if 80% of the memory is already in use; see https://docs.vespa.ai/documentation/writing-to-vespa.html#feed-block. Using Docker limits is only going to cause random OOM kills, which is not what you want.
I'm guessing that you have a lot of attribute fields, which are in-memory structures; see https://docs.vespa.ai/documentation/performance/attribute-memory-usage.html.
How is memory managed in YugaByte DB? I understand that there are two sets of processes, yb-tserver & yb-master, but I couldn't find many other details.
Specific questions:
How much RAM does each of these processes use by default?
Is there a way to explicitly control this?
Presumably, the memory is used for caching, memtables etc. How are these components sized?
Can specific tables be pinned in memory (or say given higher priority in caches)?
Thanks in advance.
How much RAM does each of these processes use by default?
By default:
The yb-tserver process assumes 85% of the node's RAM is available for its use, and the yb-master process assumes 10% of the node's RAM is available for its use.
These are determined by the default settings of the gflag --default_memory_limit_to_ram_ratio (0.85 and 0.1 respectively for yb-tserver/yb-master).
Is there a way to explicitly control this?
Yes, there are 2 different options for controlling how much memory is allocated to the processes yb-master and yb-tserver:
Option A) You can set --default_memory_limit_to_ram_ratio to control what percentage of the node's RAM the process should use.
Option B) You can also specify an absolute value, using --memory_limit_hard_bytes. For example, to give yb-tserver 32GB of RAM (32 × 1024³ = 34359738368 bytes), use:
--memory_limit_hard_bytes 34359738368
Since you start these two processes independently, you can use either option for yb-master or yb-tserver. Just make sure that you don't oversubscribe total machine memory since a yb-master and a yb-tserver process can be present on a single VM.
Presumably, the memory is used for caching, memtables etc. How are these components sized?
Yes, the primary consumers of memory are the block cache, memstores & memory needed for requests/RPCs in flight.
Block Cache:
--db_block_cache_size_percentage=50 (default)
Total memstore is the minimum of these two knobs:
--global_memstore_size_mb_max=2048
--global_memstore_size_percentage=10
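As an illustration (the 32GB figure is just an example), a yb-tserver given a 32GB memory limit with the defaults above would cap its block cache at roughly 16GB (50% of the limit), and its total memstore budget at min(2048 MB, 10% of 32GB ≈ 3.2GB) = 2048 MB.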
Can specific tables be pinned in memory (or say given higher priority in caches)?
We currently (as of 1.1) do not have per-table pinning hints yet. However, the block cache already does a great job by default of keeping hot blocks in cache. We have enhanced RocksDB's block cache to be scan resistant. The motivation was to prevent operations such as long-running scans (e.g., due to an occasional large query or background Spark jobs) from polluting the entire cache with poor-quality data and wiping out useful/hot data.
Tomcat memory keeps increasing every minute. Currently the max limit is set to 1024 MB. If I increase the max limit, Tomcat does not start. Please let me know if there is any way to reduce memory usage for Tomcat.
I assume that when you say max limit, you mean you have set the maximum heap size (i.e., -Xmx1024M) as one of your options in catalina.bat.
It may be that your machine does not have enough RAM, and hence with a higher heap size Tomcat fails to start (or other processes are running and hence Tomcat does not get enough memory to start up).
With 1GB (the value you currently have), set the flags below as well to get more information when the server runs out of memory and to enable concurrent garbage collection:
-XX:+UseConcMarkSweepGC -XX:+HeapDumpOnOutOfMemoryError
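If you start Tomcat via catalina.bat, one place to put these (just a sketch; adjust to how you actually launch Tomcat) is the CATALINA_OPTS environment variable, for example:
set CATALINA_OPTS=-Xmx1024m -XX:+UseConcMarkSweepGC -XX:+HeapDumpOnOutOfMemoryError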
Also, 'jvisualvm.exe' is the best way to start analyzing your server to find out where the memory leak actually is.
I am trying to read data from an Excel file (xlsx format) which is 100MB in size. While reading the Excel data I am getting an OutOfMemoryError. I tried increasing the JVM heap size to 1024MB, but it was of no use, and I can't increase the size beyond that. I also tried running garbage collection, but that did not help either. Can anyone help me resolve this issue?
Thanks
Pavan Kumar O V S.
By default a JVM places an upper limit on the amount of memory available to the current process in order to prevent runaway processes gobbling system resources and making the machine grind to a halt. When reading or writing large spreadsheets, the JVM may require more memory than has been allocated to the JVM by default - this normally manifests itself as a java.lang.OutOfMemoryError.
For command-line processes, you can allocate more memory to the JVM using the -Xms and -Xmx options, e.g. to allocate an initial heap of 10 MB with 100 MB as the upper bound, you can use:
java -Xms10m -Xmx100m -classpath jxl.jar spreadsheet.xls
You can refer to http://www.andykhan.com/jexcelapi/tutorial.html#introduction for further details