Difference between Linux errno 23 and Linux errno 24 - c

What is the difference between these two Linux errors in errno.h, 23 and 24?
I tried two different sites but can't understand the difference between the two.
[EMFILE]
Too many open files.
[ENFILE]
Too many files open in system.
# define ENFILE 23 /* File table overflow */
# define EMFILE 24 /* Too many open files */
Also, I am getting errno 24 and the socket call fails on the 974th attempt. (AF_INET UDP datagram socket)
When I do a cat /proc/sys/fs/file-max I see a value of 334076.
ulimit -n shows 1024.
Any idea what can be done to increase the limit?

For 1) Both error codes are about having too many open files. EMFILE means your process has too many open files, i.e. it has hit its own per-process file descriptor limit. ENFILE means too many files are open in the entire system.
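A quick way to tell which of the two you are hitting is to check errno right after the failing call. Here is a minimal sketch of that check; it deliberately exhausts descriptors by opening /dev/null (rather than creating sockets) purely for illustration:
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* Keep opening descriptors until the kernel refuses. */
    for (;;) {
        int fd = open("/dev/null", O_RDONLY);
        if (fd == -1) {
            if (errno == EMFILE)
                printf("EMFILE (24): this process hit its own fd limit (see ulimit -n)\n");
            else if (errno == ENFILE)
                printf("ENFILE (23): the whole system hit fs.file-max\n");
            else
                printf("open failed: %s\n", strerror(errno));
            return 0;
        }
    }
}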

You can increase the maximum number of open files / file descriptors
sysctl -w fs.file-max=100000
Or open
/etc/sysctl.conf
and append/change fs.file-max to the number you need:
fs.file-max = 100000
Then run
sysctl -p
to reload the new settings
If you don't want to set system-wide FD (file-descriptor) limits, you can set the user-level FD limits.
You need to edit
/etc/security/limits.conf file
And for user YOUR_USER, add these lines:
YOUR_USER soft nofile 4096
YOUR_USER hard nofile 10240
to set the soft and hard limits for user YOUR_USER.
Save and close the file.
To see the hard and soft limits for user YOUR_USER:
su - YOUR_USER
ulimit -Hn
ulimit -Sn
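If you would rather raise the limit from inside the program itself, a process can raise its own soft limit up to its hard limit with setrlimit(2). A minimal sketch, assuming the hard limit is already high enough for your needs:
#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;

    if (getrlimit(RLIMIT_NOFILE, &rl) == -1) {
        perror("getrlimit");
        return 1;
    }
    printf("soft=%llu hard=%llu\n",
           (unsigned long long)rl.rlim_cur, (unsigned long long)rl.rlim_max);

    /* A process may raise its soft limit up to the hard limit;
       only root can raise the hard limit itself. */
    rl.rlim_cur = rl.rlim_max;
    if (setrlimit(RLIMIT_NOFILE, &rl) == -1) {
        perror("setrlimit");
        return 1;
    }
    return 0;
}
Run something like this before the socket loop; if you need more than the hard limit allows, you still have to raise that via limits.conf as described above.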

Related

Can the value of a file descriptor go beyond the max open file descriptor soft limit?

In my program I'm storing some data related to each file descriptor in an array, using the file descriptor's value as the index. So if I create an array with a size equal to the soft limit on open file descriptors, I will have array indexes from 0 to (soft limit - 1). My question is: can the value of a file descriptor go beyond this index range? (I'm using Ubuntu 20.04 and the C language.)
Yes, it can. The soft limit is something you can change with the setrlimit(2) system call (that is what the shell's ulimit builtin uses)... so you can put it below the number of files you actually have open. That means open(2) will fail on the next open, but it doesn't affect the files you already have open now. Anyway... let's imagine this scenario:
you open 97 files (plus stdin, stdout and stderr, which makes 100 open descriptors, 0 to 99)
you close descriptors 0 to 49 (50 to 99 are still open; beware that at this point you cannot print anything, as you have closed stdin, stdout & stderr)
you reduce the soft limit to 75.
you can still open new files, and they'll get the lowest free descriptor numbers, starting at 0... meanwhile, descriptors 50 to 99 stay open and usable even though they sit above the new soft limit of 75. On Linux, open(2) only fails with EMFILE once the descriptor it would have to return is at or above the soft limit.
By the way, the descriptor you get back from an open system call is always the lowest available number... so, as long as you don't change the maximum-open-files limit while descriptors are already open, every descriptor you receive will stay below that limit and your array indexing scheme will work.
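Here is a minimal sketch of that behaviour; the /dev/null descriptors and the lowered limit of 8 are arbitrary choices for illustration:
#include <fcntl.h>
#include <stdio.h>
#include <sys/resource.h>
#include <unistd.h>

int main(void)
{
    int fds[10];

    /* Open 10 descriptors; they get the lowest free numbers (3..12). */
    for (int i = 0; i < 10; i++)
        fds[i] = open("/dev/null", O_RDONLY);

    /* Lower the soft limit below the highest descriptor already open. */
    struct rlimit rl;
    getrlimit(RLIMIT_NOFILE, &rl);
    rl.rlim_cur = 8;
    setrlimit(RLIMIT_NOFILE, &rl);

    /* Descriptors above the new limit are still open and usable... */
    printf("fd %d is still valid after lowering the soft limit\n", fds[9]);

    /* ...but a new open() now fails with EMFILE, because the descriptor
       it would return (13) is not below the soft limit (8). */
    if (open("/dev/null", O_RDONLY) == -1)
        perror("open after lowering the soft limit");

    return 0;
}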

C: write() fails while writing to a RAW disk after a specific size is reached

I am using a C program with the open(), close(), read(), and write() calls to copy a large file to a RAW disk. The disk size is 20 GB and the file size is 17 GB, but every time, after writing around 945 MB, write() fails with a No space left on device error.
I have run fdisk -l /dev/sdb and it reports 20.5 GB, while du /dev/sdb says 945126569.
Then I tried cat mylargefile > /dev/sdb; it too throws the same No space left on device error. Then I did cat /dev/sdb > /tmp/sdb.img, which completes normally, and ls -ld /tmp/sdb.img reports 945126569.
I can use the same disk to create an ext4 file system and format it without any issues, so a disk error is improbable. (I guess ...)
I am using Ubuntu 16.04 LTS amd64 with the latest GCC to build my program.
Can anyone suggest where I am going wrong or what needs to be done to avoid this?
du /dev/sdb should say 0 if /dev/sdb is a block device. Try also blockdev --report /dev/sdb.
What happened is that in the beginning you didn't have a device file named /dev/sdb at all; you created a regular file named /dev/sdb and copied 945 MiB into it. This filled the filesystem on which /dev/ lives, and that is why you get the error. fdisk just reads the partition table contained in the first 945 MiB and thinks it sees a hard disk of 20 GiB.
When you do cat mylargefile > /dev/sdb, the file /dev/sdb is first truncated to size 0, so there is 945 MiB of free space again, which cat proceeds to fill.
To avoid this: make sure that you open the device by its correct name, and in C open the device without O_CREAT, so a missing device node results in an error instead of a freshly created regular file.
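A minimal sketch of that defensive open (the device path is the one from the question; the actual copy loop is omitted):
#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    /* No O_CREAT: if /dev/sdb does not exist, fail instead of silently
       creating a regular file in /dev. */
    int fd = open("/dev/sdb", O_WRONLY);
    if (fd == -1) {
        perror("open /dev/sdb");
        return 1;
    }

    /* Make sure it really is a block device before writing to it. */
    struct stat st;
    if (fstat(fd, &st) == -1 || !S_ISBLK(st.st_mode)) {
        fprintf(stderr, "/dev/sdb is not a block device, refusing to write\n");
        close(fd);
        return 1;
    }

    /* ... copy loop with read()/write() goes here ... */
    close(fd);
    return 0;
}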

JMeter Ubuntu: java.net.SocketException: Too many open files

When a JMeter script with 5000 values in a CSV file is executed with a Synchronization Timer, the response data in View Results in Tree shows the following error:
java.net.SocketException: Too many open files
I could not find a satisfactory answer on Google yet.
Is there any way to resolve this?
Increase the number of open file handles or file descriptors per process.
You can use the command ulimit -a to find out how many open file handles per process are allowed.
$ ulimit -a
core file size (blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
file size (blocks, -f) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 10
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 2048
virtual memory (kbytes, -v) unlimited
You can see that open files (-n) is 1024, which means only 1024 open file handles per process are allowed. If your Java program exceeds this limit, it will throw the java.net.SocketException: Too many open files error.
See these threads I/O exception (java.net.SocketException) and java.net.SocketException: Too many open files.

sqlite3 C API can't open/close a database more than 1024 times

Hi,
I have an application that runs continuously and needs to access a sqlite3 database.
The program crashes after about 1022 opens/closes of the same database.
Example:
int i = 1024;
sqlite3 *db;
while (i) {
    sqlite3_open("database.sqlite", &db);
    // execute prepared statement
    sqlite3_close(db);
    i--;
}
After 1022 iterations I can't open the database; I get the error:
Failed to open database unable to open database fileFailed to prepare database library routine called out of sequence2
I took a look at the SQLite limits documentation but there is no mention of such a limit:
http://sqlite.org/limits.html
You're bumping into the max open files per process limit of the operating system itself.
Have a look at ulimit -S -a: (mine shown here for example)
xenon-lornix:~> ulimit -S -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 29567
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 29567
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
Look through the list, see where it says open files? The default limit is 1024, meaning that a user (you in this case!) may only have a maximum of 1024 files open per process at once.
A typical program has 3 files open by default, STDIN, STDOUT, & STDERR... leaving 1021 file descriptors available... so when you go to open the 1022nd database, it refuses, and fails.
Be sure to read the man page covering ulimit; surprisingly, it's NOT man ulimit! The best documentation is in bash, so try man bash, then press slash ('/') and type ulimit to search for it. (Around line 3383 in my bash man page.)
The more detailed programming side of the various ulimits can be found in man getrlimit.
Please remember that ulimit works with both HARD and SOFT limits. A user can change their SOFT limit (via -S) from 0 up to whatever the HARD (-H) limit value is. But a user cannot RAISE their HARD limit, and if a user LOWERS their HARD limit, they can't raise it back up again. Only a super-user (root) may raise a HARD limit.
So to raise your SOFT open files limit, try this:
ulimit -S -n 8192
A quirk... if you give ulimit neither -S nor -H, it sets both the soft and the hard limit. I have an alias for ulimit so it defaults to the soft limit, like this:
alias ulimit='ulimit -S'
If you happen to add the -H option, it overrides the default soft (-S) option, so all is good.
To see your hard limits:
ulimit -H -a
You probably don't call sqlite3_finalize on your prepared statements. You can't close a database connection that still has outstanding prepared statements. Do you check the result code of sqlite3_close?
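For illustration, a minimal sketch of a leak-free open/run/close cycle could look like this; the SELECT 1 statement is just a placeholder, and checking sqlite3_close() makes a leaked statement visible instead of silently costing a file descriptor on every iteration:
#include <stdio.h>
#include <sqlite3.h>

/* Open, run one statement, finalize it, then close, checking sqlite3_close()
   so an unfinalized statement (SQLITE_BUSY) is reported rather than leaking
   a file descriptor. */
static int run_once(void)
{
    sqlite3 *db = NULL;
    sqlite3_stmt *stmt = NULL;

    if (sqlite3_open("database.sqlite", &db) != SQLITE_OK) {
        fprintf(stderr, "open failed: %s\n", db ? sqlite3_errmsg(db) : "out of memory");
        sqlite3_close(db);
        return -1;
    }

    if (sqlite3_prepare_v2(db, "SELECT 1;", -1, &stmt, NULL) == SQLITE_OK) {
        while (sqlite3_step(stmt) == SQLITE_ROW)
            ; /* consume rows */
        sqlite3_finalize(stmt);   /* without this, sqlite3_close() fails */
    }

    if (sqlite3_close(db) != SQLITE_OK) {
        fprintf(stderr, "close failed: %s\n", sqlite3_errmsg(db));
        return -1;
    }
    return 0;
}

int main(void)
{
    for (int i = 0; i < 2000; i++)
        if (run_once() != 0)
            return 1;
    return 0;
}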

fopen does not deal with more than 60 files at the same time

I need to have more than 60 text files open at the same time in my C program. However, it seems that fopen is not able to handle more than 60 files simultaneously. I am programming in a Windows environment.
I use the following fopen statement:
fopen(fileName.c_str(),"wt");
where fileName is the path of my txt file, whose name changes inside a loop over 100 files. Does anybody know any trick to make this work? Or any alternative?
If you issue the bash shell command:
ulimit -n
you'll see that 60 is your limit for open file handles. You can change it with:
ulimit -n 256
Note: there are soft (-S) and hard (-H) limits, which you can see with -Sn and -Hn; you can raise your soft limit up to your hard limit.
There are actually two things that constrain how many files you can have open at any time:
The environment limit specified by ulimit -n.
The C runtime library. I know of several that limit you to 256 file handles (Sun, to name one).
Your current limit is probably 63 once you take into account that STDIN, STDOUT and STDERR are already open. I don't know of a system that goes that low, so it's probably your ulimit, but you need to be aware of the other limit.
On Windows you can use _setmaxstdio(n), but in the default case you should still be able to open 512 FILE streams, so I'm still a little confused as to why you only get 60-odd, unless you open each file about 8 times...
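If the CRT stream cap is what you are hitting, a minimal Windows-only sketch (MSVC CRT; the out_NNN.txt names are just an example) would raise the limit with _setmaxstdio before the loop:
#include <stdio.h>

int main(void)
{
    /* MSVC CRT only: raise the FILE* stream limit from the default of 512.
       _setmaxstdio returns -1 if the new limit is rejected. */
    if (_setmaxstdio(2048) == -1) {
        perror("_setmaxstdio");
        return 1;
    }

    FILE *files[100];
    for (int i = 0; i < 100; i++) {
        char name[64];
        snprintf(name, sizeof name, "out_%03d.txt", i);  /* example file names */
        files[i] = fopen(name, "wt");
        if (!files[i]) {
            perror("fopen");
            return 1;
        }
    }

    /* ... write to the streams, then close each one ... */
    for (int i = 0; i < 100; i++)
        fclose(files[i]);
    return 0;
}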
