When a JMeter script with 5000 values in a CSV file is executed with a Synchronizing Timer, the response data in View Results Tree shows the following error:
java.net.SocketException: Too many open files
I could not find a satisfactory answer on Google yet.
Is there any way to resolve this?
Increase the number of open file handles, or file descriptors, allowed per process.
You can use the command ulimit -a to find out how many open file handles are allowed per process.
$ ulimit -a
core file size (blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
file size (blocks, -f) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 10
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 2048
virtual memory (kbytes, -v) unlimited
You can see that open files (-n) is 1024, which means only 1024 open file handles are allowed per process. If your Java program exceeds this limit, it will throw the java.net.SocketException: Too many open files error.
See these threads: I/O exception (java.net.SocketException) and java.net.SocketException: Too many open files.
I am using a C program with the open(), close(), read(), and write() calls to copy a large file to a raw disk. The disk size is 20 GB and the file size is 17 GB, but every time, after writing around 945 MB, write() fails with a No space left on device error.
I have run fdisk -l /dev/sdb and it reports 20.5 GB, while du /dev/sdb says 945126569.
Then I tried cat mylargefile > /dev/sdb; it too fails with the same No space left on device error. Then I ran cat /dev/sdb > /tmp/sdb.img and it completes normally, and ls -ld /tmp/sdb.img reports 945126569.
I can use the same disk to create an ext4 file system and format it without any issues, so a disk error is improbable. (I guess ...)
I am using Ubuntu 16.04 LTS amd64 with the latest GCC to build my program.
Can anyone suggest where I am going wrong or what needs to be done to avoid this?
du /dev/sdb should say 0 if /dev/sdb is a block device. Try also blockdev --report /dev/sdb.
What happened is that in the beginning you didn't have a device file named /dev/sdb at all; you created a regular file named /dev/sdb and copied 945 MiB into it. This filled the partition on which /dev/ is located, and that is why you get the error. fdisk just reads the partition table contained in the first 945 MiB and thinks it sees a hard disk of 20 GiB.
When you do cat mylargefile > /dev/sdb, the file /dev/sdb is first truncated to size 0, so there is 945 MiB of free space again that cat will proceed to fill.
To avoid this: make sure that you open a device by its correct name. In C, open the device without O_CREAT.
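A minimal sketch of that in C, assuming the target really is /dev/sdb as in the question: open the device without O_CREAT so a missing device node is an error rather than a newly created regular file, and check with fstat() that it is a block device before writing.
#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    /* No O_CREAT: if /dev/sdb does not exist, open() fails
       instead of silently creating a regular file under /dev/. */
    int fd = open("/dev/sdb", O_WRONLY);
    if (fd < 0) {
        perror("open /dev/sdb");
        return 1;
    }
    struct stat st;
    if (fstat(fd, &st) != 0 || !S_ISBLK(st.st_mode)) {
        fprintf(stderr, "/dev/sdb is not a block device, refusing to write\n");
        close(fd);
        return 1;
    }
    /* ... read()/write() copy loop goes here ... */
    close(fd);
    return 0;
}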
When I ran this
du -k *
I expected the output for each file to be ceil(filesize/1024), but the output was ceil(filesize/4096) * 4. Why is that?
Description of -k in $ man du: Display block counts in 1024-byte (1-Kbyte) blocks.
I'm using OS X if that makes any difference.
The file system allocates space in units of 4K (4096 bytes). If you create a 1 byte file, the file system will allocate 4K of storage to hold the file.
The du command reports the storage the file system has actually allocated for the file, not its logical size. So du -k reports 4 (i.e. 4K) of space for that 1-byte file.
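To see the same thing from C, a rough sketch using stat() (the file name tiny.txt is just an example): st_size is the logical size, while st_blocks counts 512-byte blocks actually allocated, so a 1-byte file typically shows 8 blocks (4 KB).
#include <stdio.h>
#include <sys/stat.h>

int main(void)
{
    struct stat st;
    /* example file, e.g. created with: echo -n x > tiny.txt */
    if (stat("tiny.txt", &st) != 0) {
        perror("stat");
        return 1;
    }
    printf("logical size : %lld bytes\n", (long long)st.st_size);
    printf("allocated    : %lld bytes (%lld 512-byte blocks)\n",
           (long long)st.st_blocks * 512, (long long)st.st_blocks);
    return 0;
}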
What is the difference between these two Linux errors in errno.h, 23 and 24?
I tried two different sites but can't understand the difference between them.
[EMFILE]
Too many open files.
[ENFILE]
Too many files open in system.
# define ENFILE 23 /* File table overflow */
# define EMFILE 24 /* Too many open files */
Also, I am getting errno 24 and the socket call fails on the 974th attempt (AF_INET UDP datagram socket).
When I run cat /proc/sys/fs/file-max I see a value of 334076,
while ulimit -n shows 1024.
Any idea what can be done to increase the limit?
On the first question: both error codes describe having too many open files. EMFILE means your process has too many files open; ENFILE means too many files are open in the entire system.
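If it helps to see the per-process case concretely, here is a small C sketch that provokes EMFILE by opening /dev/null in a loop until open() fails and then printing the errno; with ulimit -n 1024 it should stop after roughly 1021 descriptors.
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    int count = 0;
    for (;;) {
        int fd = open("/dev/null", O_RDONLY);
        if (fd < 0) {
            /* EMFILE: this process hit its own descriptor limit.
               ENFILE would mean the whole system's file table is full. */
            printf("open() failed after %d descriptors: errno=%d (%s)\n",
                   count, errno, strerror(errno));
            return 1;
        }
        count++;
        /* descriptors are intentionally leaked to exhaust the limit */
    }
}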
You can increase the system-wide maximum number of open files / file descriptors:
sysctl -w fs.file-max=100000
Or open
/etc/sysctl.conf
and append/change fs.file-max to the number you need:
fs.file-max = 100000
Then run
sysctl -p
to reload the new settings
If you don't want to set system-wide FD (file-descriptor) limits, you can set the user-level FD limits.
You need to edit
/etc/security/limits.conf file
And for user YOUR_USER, add these lines:
YOUR_USER soft nofile 4096
YOUR_USER hard nofile 10240
to set the soft and hard limits for user YOUR_USER.
Save and close the file.
To see the hard and soft limits for user YOUR_USER:
su - YOUR_USER
ulimit -Hn
ulimit -Sn
Hi,
I have an application that runs continuously and needs to access a sqlite3 database.
The program crashes after about 1022 open/close cycles of the same database.
Example:
int i = 1024;
sqlite3 *db;
while (i) {
    sqlite3_open("database.sqlite", &db);
    // execute prepared statement
    sqlite3_close(db);
    i--;
}
After 1022 iterations I can't open the database; I get the error:
Failed to open database unable to open database fileFailed to prepare database library routine called out of sequence2
I took a look at the SQLite limits documentation, but there is no mention of such a limit:
http://sqlite.org/limits.html
You're bumping into the max open files per process limit of the operating system itself.
Have a look at ulimit -S -a (mine is shown here as an example):
xenon-lornix:~> ulimit -S -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 29567
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 29567
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
Look through the list; see where it says open files? The default limit is 1024, meaning that a user (you, in this case!) may only have a maximum of 1024 files open per process at once.
A typical program has 3 files open by default (STDIN, STDOUT, and STDERR), leaving 1021 file descriptors available... so when you go to open the 1022nd database, it refuses and fails.
Be sure to read the man page covering ulimit; surprisingly, it's NOT man ulimit! The best documentation is in bash, so try man bash, then press slash ('/') and type ulimit to search for it (around line 3383 in my bash man page).
The more detailed programming side of the various ulimits can be found in man getrlimit.
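As a rough illustration of what that man page covers, this sketch reads the current RLIMIT_NOFILE values with getrlimit() and raises the soft limit up to the hard limit with setrlimit():
#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;
    if (getrlimit(RLIMIT_NOFILE, &rl) != 0) {
        perror("getrlimit");
        return 1;
    }
    printf("soft=%llu hard=%llu\n",
           (unsigned long long)rl.rlim_cur, (unsigned long long)rl.rlim_max);

    /* Raise the soft limit to the hard limit; only root may raise the hard limit. */
    rl.rlim_cur = rl.rlim_max;
    if (setrlimit(RLIMIT_NOFILE, &rl) != 0) {
        perror("setrlimit");
        return 1;
    }
    return 0;
}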
Please remember that ulimit works with both HARD and SOFT limits. A user can change their SOFT limit (via -S) from 0 up to whatever the HARD (-H) limit value is. But a user cannot RAISE their HARD limit, and if a user LOWERS their HARD limit, they can't raise it back up again. Only a super-user (root) may raise a HARD limit.
So to raise your SOFT open files limit, try this:
ulimit -S -n 8192
A quirk... if you give neither -S nor -H, ulimit sets both the hard and soft limits, so I have an alias for ulimit that defaults to the soft limit, like this:
alias ulimit='ulimit -S'
If you happen to add the -H option, it overrides the default soft (-S) option, so all is good.
To see your hard limits:
ulimit -H -a
You probably don't sqlite3_finalize your prepared statements. You can't close a database that still has outstanding prepared statements. Do you check the result code of sqlite3_close?
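For illustration, a minimal sketch of the open/prepare/finalize/close sequence (the SQL text is just a placeholder): every sqlite3_prepare_v2() needs a matching sqlite3_finalize() before sqlite3_close(), and sqlite3_close() returns SQLITE_BUSY if statements are still outstanding.
#include <stdio.h>
#include <sqlite3.h>

int main(void)
{
    sqlite3 *db = NULL;
    sqlite3_stmt *stmt = NULL;

    if (sqlite3_open("database.sqlite", &db) != SQLITE_OK) {
        fprintf(stderr, "open failed: %s\n", sqlite3_errmsg(db));
        sqlite3_close(db);
        return 1;
    }

    /* example statement; replace with your own SQL */
    if (sqlite3_prepare_v2(db, "SELECT 1;", -1, &stmt, NULL) == SQLITE_OK) {
        while (sqlite3_step(stmt) == SQLITE_ROW) {
            /* use the row */
        }
        sqlite3_finalize(stmt);  /* without this, sqlite3_close() returns SQLITE_BUSY */
    }

    int rc = sqlite3_close(db);
    if (rc != SQLITE_OK)
        fprintf(stderr, "close failed (%d): %s\n", rc, sqlite3_errmsg(db));
    return 0;
}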
I need to have more than 60 text files open at the same time in my C program. However, it seems that fopen is not able to handle more than 60 files simultaneously. I am programming in a Windows environment.
I use the following fopen statement:
fopen(fileName.c_str(),"wt");
where fileName is the path of my txt file, whose name changes inside a loop over 100 files. Does anybody know any trick to make this work? Or any alternative?
If you issue the bash shell command:
ulimit -n
you'll see that 60 is your limit for open file handles. You can change it with:
ulimit -n 256
Note: there are soft (-S) and hard (-H) limits, which you can see with -Sn and -Hn; you can raise your soft limit up to your hard limit.
There are actually two things that constrain how many files you can have open at any time:
The environment limit specified by ulimit -n.
The C runtime library. I know of several that limit you to 256 file handles (Sun, to name one).
Your current limit is probably 63 once you take into account that STDIN, STDOUT, and STDERR are already open. I don't know of a C runtime that goes that low, so it's probably your ulimit, but you need to be aware of the other limit too.
On Windows you can use _setmaxstdio(n), but in the default case you should still be able to open 512 files, so I'm still a little confused as to why you only get 60-odd unless you open each file about 8 times...
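For completeness, a minimal sketch of raising the Microsoft CRT's stdio limit with _setmaxstdio() (the value 2048 is just an example; this only applies when building with MSVC):
#include <stdio.h>

int main(void)
{
    /* Raise the CRT's simultaneous-stream limit from the default 512.
       _setmaxstdio() returns -1 if the new limit cannot be set. */
    if (_setmaxstdio(2048) == -1) {
        fprintf(stderr, "_setmaxstdio failed\n");
        return 1;
    }
    printf("stdio stream limit is now %d\n", _getmaxstdio());
    return 0;
}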