Tomcat 6 Memory Consumption

Tomcat's memory usage keeps increasing every minute. Currently the max limit is set to 1024 MB. If I increase the max limit, Tomcat does not start. Please let me know if there is any way to reduce Tomcat's memory usage.

I assume that when you say "max limit", you mean you have set the maximum heap size (i.e., -Xmx1024M) as one of your options in catalina.bat.
It may be that your machine does not have enough RAM, so with a higher heap size Tomcat fails to start (or other processes are running and Tomcat does not get enough memory to start up).
With the 1 GB you currently have, set the flags below as well to get more information when the server runs out of memory and to enable concurrent garbage collection:
-XX:+UseConcMarkSweepGC -XX:+HeapDumpOnOutOfMemoryError
Also, jvisualvm.exe is the best way to start analyzing your server to find where the memory leak actually is.
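For example, a minimal sketch of how those options could be set in catalina.bat on Windows (the -Xms value and the heap-dump path are illustrative assumptions, not required values):
set CATALINA_OPTS=%CATALINA_OPTS% -Xms512m -Xmx1024m -XX:+UseConcMarkSweepGC -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=C:\tomcat\dumps
The .hprof file written on an OutOfMemoryError can then be opened in jvisualvm to see which objects are accumulating.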

Related

Flink rocksdb per slot memory configuration issue

I have 32GB of managed memory and 8 task slots.
As state.backend.rocksdb.memory.managed is set to true, each RocksDB instance in each task slot uses 4GB of memory.
Some of my tasks do not require a RocksDB backend, so I want to increase this 4GB to 6GB by setting state.backend.rocksdb.memory.fixed-per-slot: 6000m.
The problem is that when I set state.backend.rocksdb.memory.fixed-per-slot: 6000m, I can no longer see the allocated managed memory on the Flink UI's task manager page.
As you can see, when state.backend.rocksdb.memory.fixed-per-slot is not set and state.backend.rocksdb.memory.managed: true, 4GB of usage appears under managed memory for each running task that uses the RocksDB backend.
But after setting state.backend.rocksdb.memory.fixed-per-slot: 6000m, Managed Memory always shows zero!
1- How can I watch the managed memory allocation after setting state.backend.rocksdb.memory.fixed-per-slot: 6000m?
2- Should state.backend.rocksdb.memory.managed be set to true even if I set fixed-per-slot?
Another reply we got from the hive:
"Fixed-per-slot overrides the managed memory setting, so managed showing zero is expected (it's either fixed-per-slot or managed). As Yuval wrote, you can see the memory instances by checking the LRU caches.
One more thing to check is the write_buffer_manager pointer in the RocksDB LOG file. It will be different for each operator if neither fixed-per-slot nor managed memory is used, and shared between instances otherwise."
Let us know if this is useful
Shared your question with the Speedb hive on Discord and here's the "honey" we got for you:
We don't have much experience with Flink setups regarding how to configure the memory limits and their different parameters. However, RocksDB uses a shared block cache to control the memory limits of your state. So for question 1, you could grep "block_cache:" and "capacity :" from all the LOG files of all the DBs (operators). The total memory limit allocated to RocksDB through the block cache would be the sum of the capacities for all the unique pointers; the same block cache (memory) can be shared across DBs.
Do note that RocksDB might use more memory than the block cache capacity.
Hope this helps. If you have follow-up questions or want more help with this, send us a message on Discord.
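As a rough sketch of that check (the directory below is a placeholder; the actual location depends on your state.backend.rocksdb.localdir / io.tmp.dirs settings):
grep -rE "block_cache:|capacity :" --include=LOG /path/to/rocksdb/local/dirs
Identical block_cache pointer values across the LOG files indicate a shared cache, so count each unique pointer's capacity only once when summing.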

High Paging file % Usage while memory is not full

I was handed a server hosting SQL Server and I was asked to find the causes of its bad performance problems.
While monitoring PerfMon I found that:
Paging file: % Usage = 25% average for 3 days.
Memory: Pages/sec > 1 average for 3 days.
What I know is that if % Usage is > 2%, then there is too much paging because of memory pressure and a lack of memory space. However, when I opened Resource Monitor's Memory tab, I found:
- 26 GB in use (out of 32 GB total RAM)
- 2 GB standby
- 4 GB free!
If there is 4 GB of free memory, why the paging? And most importantly, why is the paging file % usage so high?
Can someone please explain this situation and how the paging file % usage can be lowered to normal?
Note that SQL Server's max memory is set to 15 GB.
Page file usage on its own isn't a major red flag. The OS will tend to use a page file even when there's plenty of RAM available, because it lets it drop the relevant parts of memory from RAM when needed - don't think of page file usage as memory moved from RAM to HDD, it's just a copy. All the accesses will still use RAM; the OS is simply preparing for a contingency - if it didn't have the memory pre-written to the page file, memory requests would have to wait for "old" memory to be dumped before the RAM could be freed for other uses.
Also, it seems you're a bit confused about how paging works. All user-space memory is always paged; this has nothing to do with the page file itself - it simply means you're using virtual memory. The metric you're looking for is hard faults per second (EDIT: I misread which counter you're reading - Pages/sec is how many hard faults there are; the rest still applies), which tells you how often the OS actually had to read data from the page file. Even then, 1 per second is extremely low. You will rarely see any impact until that number goes above fifty per second or so, and much higher for SSDs (on my particular system, I can get thousands of hard faults with no noticeable memory lag - this varies a lot based on the actual HDD and your chipset and drivers).
Finally, there are far too many ways SQL Server performance can suffer. If you don't have a real DBA (or at least someone with plenty of DB experience), you're in trouble. Most of your lines of inquiry will lead you to dead ends - something as complex and optimized as a DB engine is always hard to diagnose properly. Identify signs - is there high CPU usage? Is there high RAM usage? Are there queries with high I/O usage? Are there specific queries that are giving you trouble, or does the whole DB suffer? Are your indices and tables properly maintained? Those are just the very basics. Once you have some extra information like this, try DBA.StackExchange.com - SO isn't really the right place to ask for DBA advice :)
Just some shots in the dark, really; they might be a little random, but I could hardly spot anything straight away:
might there be processes that select uselessly large data sets or run operations too frequently? (e.g. the awful app-developer practice of using SELECT * everywhere, or fetching all the data and then filtering it at the application level, or running DB queries in loops instead of getting record sets once, etc.)
is indexing proper? (e.g. are leaf elements employed where possible to reduce key lookup operations, are heavy queries backed by proper indices to avoid table & index scans, etc. - see the index sketch after this list)
how is data population managed? (e.g. is it possible that there are too many page splits due to improper clustered indices or parallel inserts, are there index rebuilds taking place, etc.)
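As a purely hypothetical illustration of the indexing point above (the table and column names are invented), a covering index can remove key lookups for a frequently run query:
CREATE NONCLUSTERED INDEX IX_Orders_CustomerId_OrderDate
    ON dbo.Orders (CustomerId, OrderDate)
    INCLUDE (TotalAmount);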

Linux C app runs out of memory

I am running word2phrase.c with a very large (45 GB) training set. My PC has 16 GB of physical RAM and 4 GB of swap. I left it training overnight (for the second time, to be honest) and came back in the morning to see it had been "killed" without further explanation. I sat and watched it die when my RAM ran out.
I set the following in my /etc/sysctl.conf:
vm.oom-kill = 0
vm.overcommit_memory = 2
The source code does not appear to write the data to the file, but rather keeps it in memory, which is creating the issue.
Is the total memory (RAM + SWAP) used to kill OOM? For example, if I increase my SWAP to 32Gb, will this stop happening?
Can I force this process to use SWAP instead of Physical RAM, at the expense of slower performance?
Q: Is the total memory (RAM + SWAP) used to kill OOM?
Yes.
Q: For example, if I increase my SWAP to 32Gb, will this stop happening?
Yes, if RAM and swap space combined (48 GB) are enough for the process.
Q: Can I force this process to use SWAP instead of Physical RAM, at the expense of slower performance?
This will be managed automatically by the operating system. All you have to do is to increase swap space.
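For example, a sketch of adding a 32 GB swap file on a typical Linux system (the /swapfile path is just a placeholder; use dd instead of fallocate on filesystems that don't support it):
sudo fallocate -l 32G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
Adding a matching entry to /etc/fstab makes the extra swap survive a reboot.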
To answer the first question, yes.
Second question:
Can I force this process to use SWAP instead of Physical RAM
Linux decides how the process runs and allocates memory for it appropriately. When the threshold is reached, Linux will use the swap space as a fallback.
Increasing swap space may work in this case. Then again, I do not know how Linux will cope with such a large swap; bear in mind that this could decrease performance dramatically.
The best alternative is to split the 45 GB training set into smaller chunks.
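For instance (assuming the training set is a single plain-text file called train.txt, which is a guess about your setup), GNU split can break it into roughly 5 GB pieces without cutting lines in half:
split -C 5G train.txt chunk_
The resulting chunk_* files can then be processed one at a time.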

Uploading Large(8GB) File Issue using Weka

I am trying to upload an 8 GB file to Weka in order to use the Apriori algorithm. The server configuration is as follows:
It is an 8-processor server with 4 cores each; the physical address space is 40 bits and the virtual address space is 48 bits. It is a 64-bit processor.
Physical memory = 26 GB and swap = 27 GB
JVM = 64-bit. We have allocated 32 GB for the JVM heap using the -Xmx option. Our concern is that loading such a huge file is taking a very long time (around 8 hours); Java is using 107% CPU and 91% memory, no out-of-memory exception has been thrown, and Weka shows that it is still reading from the file.
Please help me understand how to handle such a huge file and what exactly is happening here.
Regards,
Aniket
I can't speak to Weka; I don't know your data set or how many elements are in it. The number of elements matters because in a 64-bit JVM the pointers are huge, and they add up.
But do NOT create a JVM heap larger than physical RAM. Swap is simply not an option for Java: a swapping JVM is a dead JVM. Swap is for idle processes that are rarely used.
Also note that the Xmx value and the physical size of the JVM process are not the same; the physical size will always be larger than the Xmx value.
You should pre-allocate your JVM heap (Xms == Xmx) and try out various values until MOST of your physical RAM is consumed. This will limit full GCs and memory fragmentation. It also helps (a little) to do this on a fresh system if you're allocating such a large portion of the total memory space.
But whatever you do, do not let Java swap. Swapping and Garbage Collectors do not mix.
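A sketch of what that pre-allocation could look like on this machine, assuming roughly 20 GB can be left to the heap once the OS and other processes are accounted for (the exact figure and the weka.jar entry point are assumptions to tune for your setup):
java -Xms20g -Xmx20g -jar weka.jar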

outOfMemoryException while reading excel data

I am trying to read data from an Excel file (xlsx format) which is 100 MB in size. While reading the Excel data I am getting an OutOfMemoryError. I tried increasing the JVM heap size to 1024 MB, but it made no difference, and I can't increase the size beyond that. I also tried running garbage collection, but again with no effect. Can anyone help me resolve this issue?
Thanks
Pavan Kumar O V S.
By default a JVM places an upper limit on the amount of memory available to the current process in order to prevent runaway processes gobbling system resources and making the machine grind to a halt. When reading or writing large spreadsheets, the JVM may require more memory than has been allocated to the JVM by default - this normally manifests itself as a java.lang.OutOfMemoryError.
For command line processes, you can allocate more memory to the JVM using the -Xms and -Xmx options, e.g. to allocate an initial heap of 10 MB with 100 MB as the upper bound, you can use:
java -Xms10m -Xmx100m -classpath jxl.jar spreadsheet.xls
You can refer to http://www.andykhan.com/jexcelapi/tutorial.html#introduction for further details
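If a larger -Xmx still seems to make no difference, it is worth confirming the flag is actually reaching the JVM. A minimal, library-independent check (the class name is hypothetical):
public class HeapCheck {
    public static void main(String[] args) {
        // Prints the heap ceiling this JVM was actually started with
        long maxMb = Runtime.getRuntime().maxMemory() / (1024 * 1024);
        System.out.println("Max heap available to this JVM: " + maxMb + " MB");
    }
}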
