I want to increase the PermGen memory and heap memory for my Tomcat instance.
I have tried creating the setenv.bat file.
Now I am not sure whether the change has taken effect.
Could you please tell me how to find the heap and PermGen memory currently allocated to a Tomcat instance, and how to cross-check whether the memory has actually been increased?
I have tried the method described in the URL below, but it covers Java on Windows in general, not a specific Java instance:
https://www.mkyong.com/java/find-out-your-java-heap-memory-size/
Solution 1: Check the Windows process parameters
You can see it in Task Manager by enabling the Command Line column and checking whether java.exe was indeed started with the -XX:MaxPermSize parameter.
However, Task Manager does not always show the full command line; it cuts it off after a certain number of characters. You can use Process Explorer to see the full command line (just hover over the java.exe process).
Solution 2: Check in code:
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;

// Run this inside the Tomcat JVM (for example from a servlet or JSP) so that
// it reports Tomcat's own memory pools, including the heap pools and PermGen.
for (MemoryPoolMXBean mpBean : ManagementFactory.getMemoryPoolMXBeans()) {
    System.out.printf(
        "Type: %s, Name: %s: %s\n",
        mpBean.getType().toString(), mpBean.getName(), mpBean.getUsage()
    );
}
See also How is the java memory pool divided.
I'm working with some big .txt files (some around 3 GB), and whenever I open one of them the message "File size exceeds configured limit (2.56 MB), code insight features not available" appears at the top of the file. I tried to change the file size limit by going to Help -> Edit Custom Properties and adding the following line to the file that opens:
idea.max.content.load.filesize=500000
The problem is that even after closing and re-opening PyCharm the same message appears. Do I need to do something else? Is writing that line enough to change the file size limit, or does it need to be run like normal code? If so, how can I run it, since that option doesn't appear?
Instead of the original line, I used:
idea.max.intellisense.filesize = new size in kB
I also advise rebooting the PC after adding that line in the window that opens via Help -> Edit Custom Properties.
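For illustration only (the value below simply reuses the 500000 KB figure from the question; pick whatever limit is large enough for your files), the custom properties file would then contain a line like:

idea.max.intellisense.filesize=500000

The new limit takes effect only after restarting the IDE (or, as advised above, the PC).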
I'm writing a program in C that will have to check a configuration file every time it starts to set some variables.
At the program's first start I assume there won't be any configuration file yet, so I need to create it (with default settings).
I've been told that a program's configuration files belong in /etc, more specifically in a dedicated folder created for the program itself (i.e. /etc/myprog). Here comes the first question I should have asked: is that true? Why /etc?
In any case I tried to create that file using this:
open("/etc/myprog/myprog.conf", O_WRONLY | O_CREAT, 0644);
the open returns -1 and sets the errno global variable to 2 (i.e. the folder does not exist).
If I try to create the file directly inside /etc (therefore passing "/etc/myprog.conf" as the first argument of open), I instead get errno set to 13 (i.e. permission denied).
Is there a way to grant my program permissions to write in /etc?
EDIT: I see most users are suggesting to use sudo. If possible I would prefer to avoid this option, as the file has to be created just once (at the first start). Maybe I should make two different executables? (e.g. myprog_bootstrap and myprog, and run only the first one with sudo)
You need root privileges to create a file in /etc. Run your executable with sudo in front:
sudo executable_name
Another possibility might be to make your executable setuid. Your program would then need to call the setreuid(2) system call at the appropriate points.
However, be very careful. Programs like /bin/login (or /usr/bin/sudo itself) are coded this way, but any subtle error in such a program opens a can of worms of security holes. So please be paranoid when writing such code, and get it reviewed by someone else.
Perhaps a better approach might be to have your installation procedure make /etc/yourfile a symlink (created once at installation time) to some writable file elsewhere.
BTW, you might create a group for your program, make /etc/yourfile writable by that group at installation time, and make your program setgid.
Or even dedicate a user to your program, and have /etc/yourfile belong to that user.
Or, at installation time, have the /etc/myprog/ directory created, owned by the appropriate user (or group), and writable by that user (or group).
PS. Read also Advanced Linux Programming, capabilities(7), credentials(7) and execve(2)
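As a rough sketch of the two-executable idea from the edit above (a minimal example, assuming the paths from the question, /etc/myprog/myprog.conf, and purely hypothetical default contents), the one-time bootstrap program run with sudo might look like this:

/* myprog_bootstrap.c - hypothetical one-time setup, run once as root (e.g. via sudo).
   Creates /etc/myprog and writes a default configuration file into it. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/stat.h>
#include <sys/types.h>

int main(void) {
    /* Create the directory; EEXIST simply means setup already ran. */
    if (mkdir("/etc/myprog", 0755) == -1 && errno != EEXIST) {
        fprintf(stderr, "mkdir /etc/myprog: %s\n", strerror(errno));
        return 1;
    }

    /* Create the config file with default settings; O_EXCL avoids overwriting
       an existing configuration. 0644 keeps it world-readable. */
    int fd = open("/etc/myprog/myprog.conf", O_WRONLY | O_CREAT | O_EXCL, 0644);
    if (fd == -1) {
        if (errno == EEXIST) {
            fprintf(stderr, "configuration already exists, nothing to do\n");
            return 0;
        }
        fprintf(stderr, "open /etc/myprog/myprog.conf: %s\n", strerror(errno));
        return 1;
    }

    const char *defaults = "# default settings for myprog\noption = value\n";  /* placeholder */
    if (write(fd, defaults, strlen(defaults)) == -1) {
        fprintf(stderr, "write: %s\n", strerror(errno));
        close(fd);
        return 1;
    }
    close(fd);
    return 0;
}

You would compile this separately, run it once as root (sudo ./myprog_bootstrap), and have the unprivileged myprog only read the file afterwards; if myprog also needs to rewrite the file at runtime, combine this with the group or ownership arrangements described above.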
I'm writing an MPI application that takes a filename as an argument and tries to read from the file using regular C functions. I run this application on several nodes of a cluster using qsub, which in turn uses mpiexec.
The application runs just fine on a local node where the file is. For this I just call mpiexec directly:
mpiexec -n 4 ~/my_app ~/input_file.txt
But when I submit it with qsub to be run on other nodes of the cluster, the file-reading part fails. The application errors at the fopen call -- it can't open the file (likely because the file isn't present on those nodes).
The question is, how do I make the file available to all nodes? I have looked over the qsub manpage and couldn't find anything relevant.
I guess Vanilla Gorilla doesn't need an answer any more? However, let's consider the case of a pathological system with no parallel file system and a file system available only at one node. There is a way in ROMIO (a very common MPI-IO implementation) to achieve your goal:
how can i transfer file from one proccess to all other with mpi?
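If configuring ROMIO hints isn't practical, a simpler fallback is to have a single rank read the file with regular C I/O and broadcast the bytes to everyone else. The following is a minimal sketch (not the ROMIO approach itself); it assumes rank 0 is started on the node that actually has the file and that the file fits in memory:

/* bcast_file.c - rank 0 reads the whole file and broadcasts it to all ranks.
   Compile with mpicc and launch with mpiexec/qsub as usual. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (argc < 2) {
        if (rank == 0) fprintf(stderr, "usage: %s <file>\n", argv[0]);
        MPI_Finalize();
        return 1;
    }

    long size = 0;
    char *data = NULL;

    if (rank == 0) {
        /* Only rank 0 touches the filesystem; it must run where the file is. */
        FILE *fp = fopen(argv[1], "rb");
        if (!fp) { perror("fopen"); MPI_Abort(MPI_COMM_WORLD, 1); }
        fseek(fp, 0, SEEK_END);
        size = ftell(fp);
        rewind(fp);
        data = malloc(size);
        if (fread(data, 1, size, fp) != (size_t)size) {
            perror("fread");
            MPI_Abort(MPI_COMM_WORLD, 1);
        }
        fclose(fp);
    }

    /* Tell every rank how big the buffer is, then ship the contents.
       MPI count arguments are ints, so very large files would need chunking. */
    MPI_Bcast(&size, 1, MPI_LONG, 0, MPI_COMM_WORLD);
    if (rank != 0) data = malloc(size);
    MPI_Bcast(data, (int)size, MPI_CHAR, 0, MPI_COMM_WORLD);

    printf("rank %d has %ld bytes of the input file in memory\n", rank, size);

    free(data);
    MPI_Finalize();
    return 0;
}

For files larger than about 2 GB the broadcast would have to be done in chunks; a shared or parallel filesystem, or copying the file to the compute nodes in the job script before mpiexec runs, avoids the problem entirely.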
This is a very beginner-level question in C.
Don't know where to start looking/searching.
So, if I have a program continuously running in C, what is the best way to accept input through the command line into the program?
For example, mysql is already running, but you can still send it a command such as
mysql SELECT * FROM *
Do I need a different program to write to a file/stdin?
Clarification:
So, mysql seems to be able to take in commands while it is already running... is that possible in C?
Goal:
I have some hooks into OpenGL ES, and I want to run a continuous draw loop in the background, while having the ability to call commands such as
glhookprogram make "object1" model "triangle" program "default"
glhookprogram attr "object1" position "1.0, 1.0, 0.0" scale "2.0" rotation "45, 0, 0"
This way, I can have a Node server run hardware-accelerated animations in JavaScript on the Raspberry Pi.
Looks like this is what you need (and I'm sorry - I won't be going into too much detail, as there are plenty of sources on the Web about that):
A "server" - that would be your background process that stays running in memory and can accept and process commands (requests)
A "client" - a (short-running?) process that can accept commands from user (GUI, command-line. Network? Other process?) and send requests to your "server"
This is not a trivial task for a beginner. I would suggest googling for "server-client" and for "inter-process communications" first and go from there.
The range of options to "accept input" into your server includes (but is not limited to) the following:
(Windows) messages
Shared memory and a command queue (producer-consumer)
Shared file (just listing it here for completeness, I'd advise against this particular one for your case)
Named pipes (see the sketch after this list)
Sockets (thanks for reminding me of those in the comments, can't believe I missed that!)
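To make the named-pipe option concrete, here is a minimal sketch for a POSIX system such as the Raspberry Pi mentioned in the question (the FIFO path /tmp/glhook.cmd and the command handling are hypothetical placeholders): the continuously running draw loop polls the FIFO once per frame, and any other process simply writes command lines into it.

/* fifo_server.c - minimal sketch of a command loop reading from a named pipe.
   The FIFO path and the command handling are placeholders. */
#include <stdio.h>
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/stat.h>
#include <sys/types.h>

int main(void) {
    const char *fifo = "/tmp/glhook.cmd";          /* hypothetical path */
    if (mkfifo(fifo, 0666) == -1 && errno != EEXIST) {
        perror("mkfifo");
        return 1;
    }

    /* O_NONBLOCK so the render loop never stalls waiting for a writer. */
    int fd = open(fifo, O_RDONLY | O_NONBLOCK);
    if (fd == -1) { perror("open"); return 1; }

    char buf[256];
    for (;;) {
        /* ... render one frame here ... */

        ssize_t n = read(fd, buf, sizeof(buf) - 1);
        if (n > 0) {
            buf[n] = '\0';
            /* Parse and dispatch the command here, e.g. "glhookprogram attr ..." */
            printf("got command: %s", buf);
        }

        usleep(16000);   /* roughly one frame at 60 Hz */
    }
}

A client can then be as trivial as echo 'glhookprogram attr ...' > /tmp/glhook.cmd from a shell, or a Node script opening that path for writing; a Unix-domain or TCP socket works the same way but additionally allows remote clients and sending replies back.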
I recently ran out of disk space on a drive on a FreeBSD server. I truncated the file that was causing problems but I'm not seeing the change reflected when running df. When I run du -d0 on the partition it shows the correct value. Is there any way to force this information to be updated? What is causing the output here to be different?
In BSD a directory entry is simply one of many references to the underlying file data (called an inode). When a file is deleted with the rm(1) command, only the reference count is decreased. If the reference count is still positive (e.g. the file has other directory entries due to hard links), then the underlying file data is not removed.
Users new to BSD often don't realize that a program holding a file open is also holding a reference. This prevents the underlying file data from going away while the process is using it. When the process closes the file, if the reference count falls to zero, the file space is marked as available. This scheme is used to avoid the Microsoft Windows-style issue where you can't delete a file because some unspecified program still has it open.
An easy way to observe this is to do the following:
cp /bin/cat /tmp/cat-test
/tmp/cat-test &
rm /tmp/cat-test
Until the background process is terminated, the file space used by /tmp/cat-test will remain allocated and unavailable as reported by df(1), but the du(1) command will not be able to account for it, as the data no longer has a filename.
Note that if the system should crash without the process closing the file, the file data will still be present but unreferenced; an fsck(8) run will be needed to recover the filesystem space.
Processes holding files open are also the reason why the newsyslog(8) command sends signals to syslogd or other logging programs, to inform them that they should close and re-open their log files after it has rotated them.
Soft updates can also affect filesystem free space, as the actual inode space recovery can be deferred; the sync(8) command can be used to encourage this to happen sooner.
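The same effect can be reproduced from C (a minimal sketch; the path and the amount of data written are arbitrary): as long as a descriptor to the unlinked file stays open, the space still counts toward df(1) even though du(1) can no longer see it.

/* unlink_demo.c - create a file, unlink it, and keep writing through the
   still-open descriptor. While this waits, df(1) reports the space as used
   but du(1) cannot account for it, because the data has no directory entry. */
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>

static char block[1 << 20];                        /* 1 MB of filler */

int main(void) {
    int fd = open("/tmp/unlink-demo", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd == -1) { perror("open"); return 1; }

    /* Remove the only directory entry; the inode stays alive because fd
       still references it. */
    if (unlink("/tmp/unlink-demo") == -1) { perror("unlink"); return 1; }

    memset(block, 'x', sizeof(block));
    for (int i = 0; i < 100; i++)                  /* ~100 MB of "invisible" usage */
        if (write(fd, block, sizeof(block)) == -1) { perror("write"); break; }

    puts("compare df and du in another terminal, then press Enter");
    getchar();

    close(fd);   /* reference count drops to zero; the space is freed */
    return 0;
}

Run it, compare df and du from another terminal while it waits, then press Enter and watch the space being returned.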
This probably centres on how you truncated the file. du and df report different things as this post on unix.com explains. Just because space is not used does not necessarily mean that it's free...
Does df --sync work?