Processes and a shared file descriptor - c

I have an application that creates multiple instances (processes) of itself, and these processes share a data structure. In that struct there is a file descriptor used for logging data to a file. The logging function checks whether the file descriptor is -1; if it is, it opens the file and stores the descriptor in the shared struct.
Other processes/threads perform the same check, but by then the fd is != -1, so the file does not get opened and they go straight to writing to it. The write fails most of the time and returns -1. When the write did not fail, I checked the path behind the fd using readlink; it pointed to some file other than the log file.
I am assuming this is because, even though the file descriptor value was always 11 (even in subsequent runs), that value refers to a different file in each process. So is it simply slot 11 in that process's descriptor table? In that case the log file is not even regarded as open in these processes, and even if they did open the file themselves, the fd could be a different number.
So my first question: is this correct? My second question: how do I re-implement this, given that multiple processes need to write to this log file? Would each process need to open the file itself, or is there another, more efficient way? Do I need to close the file so that other processes can open and write to it?
EDIT:
The software is an open source tool called filebench.
The file can be seen here.
The log method is filebench_log. Line 204 is the first check I mentioned, where the file is opened; the write happens at line 293. The fd value is the same across all processes: 11. It is actually shared through all processes and set up mostly here. The file is only opened once (verified via print statements).
The shared data struct that has the fd is called
filebench_shm
and the fd is
filebench_shm->shm_log_fd
EDIT 2:
The error message that I get is "Bad file descriptor". Errno is 9 (EBADF).
EDIT 3:
So it seems that each process has its own index table for fds. From Wikipedia:
On Linux, the set of file descriptors open in a process can be accessed under the path /proc/PID/fd/, where PID is the process identifier.
So the issue that I am having is that for two processes with process IDs 101 and 102, file descriptor 11 does not refer to the same file in both processes:
/proc/101/fd/11
/proc/102/fd/11
I have a shared data structure between these processes. Is there another way I can share an open file between them, other than an fd, since that doesn't work?
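For reference, a minimal sketch of the /proc/self/fd check described above (the descriptor value 11 is just the one observed here; in a standalone run it may not be open):

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fd = 11;                            /* descriptor to inspect */
    char link[64], path[4096];
    ssize_t n;

    snprintf(link, sizeof(link), "/proc/self/fd/%d", fd);
    n = readlink(link, path, sizeof(path) - 1);
    if (n < 0) {
        perror("readlink");                 /* e.g. fd not open in this process */
        return 1;
    }
    path[n] = '\0';
    printf("fd %d -> %s\n", fd, path);      /* which file this process sees */
    return 0;
}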

It seems that it would be simplest to open the file before spawning the new processes. This avoids all the coordination complexity regarding opening the file by centralizing it to one time and place.
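A minimal sketch of that approach (the file name, flags, and structure are illustrative, not filebench's actual setup): the parent opens the log once, and every forked child inherits a descriptor referring to the same open file description.

#include <fcntl.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    /* open the log once, before any fork */
    int log_fd = open("app.log", O_WRONLY | O_CREAT | O_APPEND, 0644);
    if (log_fd < 0) {
        perror("open");
        return 1;
    }

    for (int i = 0; i < 3; i++) {
        if (fork() == 0) {                  /* child: inherited log_fd is valid */
            char msg[64];
            int len = snprintf(msg, sizeof(msg), "child %d logging\n",
                               (int)getpid());
            if (write(log_fd, msg, len) < 0)    /* O_APPEND keeps writes at EOF */
                perror("write");
            _exit(0);
        }
    }
    while (wait(NULL) > 0)
        ;                                   /* reap the children */
    close(log_fd);
    return 0;
}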

I originally wrote this as a solution:
Create a shared memory segment.
Put the file descriptor variable in the segment.
Put a mutex semaphore in the segment
Each process accesses the file descriptor in the segment. If it is not open, lock the semaphore, check again whether it is open, and if not, open the file. Then release the mutex.
That way all processes share the same file descriptor.
But this assumes that the underlying file descriptor object (the open file description) is also in the shared memory, which it is not; the descriptor table is per-process.
Instead, use the open then fork method mentioned in the other answer, or have each process open the file and use flock to serialize access when needed.
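A rough sketch of the flock() variant, assuming each process keeps its own descriptor and only the writes need serializing (the file name and helper are illustrative):

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/file.h>
#include <unistd.h>

static int log_fd = -1;                     /* one descriptor per process */

static void log_line(const char *line)
{
    if (log_fd < 0)                         /* first use in this process */
        log_fd = open("app.log", O_WRONLY | O_CREAT | O_APPEND, 0644);
    if (log_fd < 0)
        return;

    if (flock(log_fd, LOCK_EX) == 0) {      /* serialize with other processes */
        if (write(log_fd, line, strlen(line)) < 0)
            perror("write");
        flock(log_fd, LOCK_UN);
    }
}

int main(void)
{
    log_line("hello from one process\n");
    return 0;
}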

Related

File descriptor and process relation

Are file descriptors scoped to the process or to the operating system? What I basically want to know is: if in a C program I open a file and it gets assigned a file descriptor value, let's say 103, then when I use file descriptor 103 in some other C program, would I be referring to the same file or to some other file?
Each process has its own file descriptor table. It is process specific: if you change an fd, the change is valid only in that process and does not affect other processes in the system. Once the process terminates, its fds are discarded.
What if I fork a new process from the process in which I opened that file?
The current file descriptor table, i.e. the table as it exists at the time of the fork system call, is inherited by the child process.
File descriptors are process specific and are created through open(). But the same file can also be opened again by other processes with open(); in that case each process gets its own file descriptor for the same file. File descriptors, along with other resources, pass through fork to the child process. That means the child process does not need to reopen a file which the parent has already opened.
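A small illustration of both points (the file name is illustrative): opening the same file twice yields two independent descriptors, and a forked child keeps using the parent's descriptors without reopening anything.

#include <fcntl.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int fd1 = open("data.txt", O_RDONLY | O_CREAT, 0644);
    int fd2 = open("data.txt", O_RDONLY);   /* second descriptor, own offset */
    printf("parent: fd1=%d fd2=%d\n", fd1, fd2);

    if (fork() == 0) {
        /* child: both numbers are valid here without reopening the file */
        printf("child:  fd1=%d fd2=%d still usable\n", fd1, fd2);
        _exit(0);
    }
    wait(NULL);
    close(fd1);
    close(fd2);
    return 0;
}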

File descriptors before fork()

I know that if I call the open function before the fork(), the IO pointer is shared between the processes.
If one of these processes closes the file by calling the close(fd) function, will the other processes still be able to write to/read from the file, or will the file be closed for everyone?
Yes, they will. Each process has its own copy of the file descriptor (among other things), so one process closing it won't affect the copy of the fd in the other processes.
From fork() manual:
The child inherits copies of the parent's set of open file
descriptors. Each file descriptor in the child refers to the same
open file description (see open(2)) as the corresponding file
descriptor in the parent. This means that the two descriptors
share open file status flags, current file offset, and signal-
driven I/O attributes (see the description of F_SETOWN and
F_SETSIG in fcntl(2)).
From close() manual:
If fd is the last file descriptor referring to the underlying open
file description (see open(2)), the resources associated with the
open file description are freed; if the descriptor was the last
reference to a file which has been removed using unlink(2), the file
is deleted.
So if you do close(fd), it closes only that process's reference; another process holding its own descriptor for the same open file description can continue to operate on it.
Whenever a child process is created, it gets a copy of the file descriptor table from the parent process. The underlying open file carries a reference count: the number of descriptors currently referring to it. So if a file is open in the parent process and a child process is created, the reference count increments, as it is now open in the child process as well, and when it is closed in either process, it decrements. The file is finally closed when the reference count reaches zero.
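A small demonstration of that behaviour (the file name is illustrative): the child closes its copy of the descriptor, and the parent can still write because the open file description keeps another reference.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int fd = open("shared.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    if (fork() == 0) {
        close(fd);                          /* drops only the child's reference */
        _exit(0);
    }
    wait(NULL);

    const char *msg = "parent can still write\n";
    if (write(fd, msg, strlen(msg)) < 0)    /* succeeds: fd is still open here */
        perror("write");
    close(fd);
    return 0;
}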

What is the difference between inode number and file descriptor?

I understand that a file descriptor is a kernel handle used to identify a file, while the inode number of a file points to a structure that holds other details about the file (correct me if I am wrong). But I am unable to grasp the difference between them.
An inode is an artifact of a particular file-system and how it manages indirection. A "traditional *ix" file-system uses this to link together files into directories, and even multiple parts of a file together. That is, an inode represents a physical manifestation of the file-system implementation.
On the other hand, a file descriptor is an opaque identifier to an open file by the Kernel. As long as the file remains open that identifier can be used to perform operations such as reading and writing. The usage of "file" here is not to be confused with a general "file on a disk" - rather a file in this context represents a stream and operations which can be performed upon it, regardless of the source.
A file descriptor is not related to an inode, except insofar as one may be used internally by a particular [file-system] driver.
The difference is not substantial; both relate to the abstract notion of a "file". An inode is a filesystem structure that represents a file, whereas a file descriptor is an integer returned by the open syscall. By definition:
Files are represented by inodes. The inode of a file is a structure kept by the filesystem which holds information about a file, like its type, owner, permissions, inode links count and so on.
On the other hand, a file descriptor:
File Descriptors:
The value returned by an open call is termed a file descriptor and is essentially an index into an array of open files kept by the kernel.
The kernel doesn't refer to open files by their names; instead, it keeps an array of open-file entries for every process, so a file descriptor is in effect an index into that array of open files. For example, let's assume a process performs the following operation:
read(0, buf, 10)
Here 0 is the file descriptor number, buf is the destination buffer, and 10 is the number of bytes to read. The process requests 10 bytes from the file/stream at index 0, which is stdin. The kernel automatically grants each process three open streams:
Descriptor No.
0 ---> stdin
1 ---> stdout
2 ---> stderr
These descriptors are given to you for free by the kernel.
Now, when the process opens a file via the open() syscall, say open("/home/user100/out.txt", O_WRONLY), the newly opened file gets index 3; open another file and you get index 4, and so forth. These are the descriptors of the open files in the process:
Descriptor No.
0 ---> stdin
1 ---> stdout
2 ---> stderr
3 ---> /home/user100/out.txt
4 ---> /home/user100/file.txt
See OPEN(2); it explains what goes on underneath the surface when you call open.
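A minimal sketch of that numbering (the paths are illustrative): with 0, 1 and 2 taken by the standard streams, two successive open() calls in a fresh process typically return 3 and 4.

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fd1 = open("out.txt", O_WRONLY | O_CREAT, 0644);
    int fd2 = open("file.txt", O_RDONLY | O_CREAT, 0644);
    printf("fd1=%d fd2=%d\n", fd1, fd2);    /* usually prints fd1=3 fd2=4 */
    if (fd1 >= 0) close(fd1);
    if (fd2 >= 0) close(fd2);
    return 0;
}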
The fundamental difference is that an inode represents the file, while a file descriptor (fd) represents a ticket to access the file, with limited permissions and a limited time window. You can think of an inode as a kind of unique ID for the file; each file object has a unique inode. A file descriptor, on the other hand, is an "opened" file held by a particular user. The user program is not aware of the file's inode; it uses the fd to access the file. Depending on the user's permissions and the mode the program chooses when opening the file (read-only, for example), an fd is allowed a certain set of operations on the file. Once the fd is closed, the user program can't access the file unless it opens another fd. At any given time there can be multiple fds accessing a file, in the same or different user programs.
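A short sketch of the distinction (the file name is illustrative): the descriptor is this process's handle to the open file, while the inode number, obtained here via fstat(), identifies the file itself regardless of who opens it.

#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    int fd = open("file.txt", O_RDONLY | O_CREAT, 0644);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    struct stat sb;
    if (fstat(fd, &sb) == 0)                /* inode number lives in the filesystem */
        printf("fd=%d inode=%llu\n", fd, (unsigned long long)sb.st_ino);
    close(fd);
    return 0;
}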

Using rename to safely overwrite a shared file in Linux

Here is the setup: I have a shared file (let's call it status.csv) that is read by many processes (let's call them consumers) in a read-only fashion. I have one producer that periodically updates status.csv by creating a temp file, writing data to it, and using the C function discussed here:
http://www.gnu.org/software/libc/manual/html_node/Renaming-Files.html
to rename the temp file to status.csv (effectively overwriting it) so that the consumers can process the new data. I want to try to guarantee (as much as possible in the Linux world) that the consumers won't get a malformed/corrupted/half-old/half-new status.csv file (I want them to get either all of the old data or all of the new). I can't settle this just by reading the description of rename: it guarantees that the rename action itself is atomic, but I want to know whether a consumer that already has status.csv open will continue to read the file as it was when it was opened, even if the producer renames/overwrites it in the middle of that read.
I attempted to prototype this, expecting the consumers to get some type of error or a half-old/half-new file, but the file always appears in the state it was in when the consumer opened it, even after being renamed/overwritten multiple times.
BTW, these processes are running on the same machine (RHEL 6).
Thanks!
In Linux and similar systems, if a process has a file open and the file is deleted, the file itself remains undeleted until all processes close it. All that happens immediately is that the directory entry is deleted so that it cannot be opened again.
The same thing happens if rename is used to replace an open file. The old file descriptor still keeps the old file open. However, new opens will see the new file.
Therefore, for your consumers to see the new file, they must close and reopen the file.
Note: your consumers can discover if the file has been replaced by using the stat(2) call. If either the st_dev or st_ino entries (or both) have changed, then the file has been replaced and must be closed and reopened. This is how tail -F works.
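A minimal sketch of that st_dev/st_ino check (the helper name and polling structure are illustrative; status.csv matches the question):

#include <stdio.h>
#include <sys/stat.h>

/* Returns 1 if the file at 'path' is no longer the one described by 'old'. */
static int file_was_replaced(const char *path, const struct stat *old)
{
    struct stat now;
    if (stat(path, &now) != 0)
        return 1;                           /* gone: treat as replaced */
    return now.st_dev != old->st_dev || now.st_ino != old->st_ino;
}

int main(void)
{
    struct stat opened_as;
    if (stat("status.csv", &opened_as) != 0) {
        perror("stat");
        return 1;
    }
    /* ... keep reading from the already-open descriptor ... */
    if (file_was_replaced("status.csv", &opened_as))
        printf("status.csv was replaced; close and reopen it\n");
    return 0;
}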

Is the file position per inode?

I am confused by the concept of the file position as used in lseek. Is this file position maintained at the inode level, or is it a simple variable that could have different values for different processes working on the same file?
Per the lseek docs, the file position is associated with the open file description pointed to by a file descriptor, i.e. the thing that is handed to you by open. Because of functions like dup and fork, multiple descriptors can point to a single description, but it's the description that holds the location cursor.
Think about it: if it were associated with the inode, then you would not be able to have multiple processes accessing a file in a sensible manner, since all accesses to that file by one process would affect other processes.
Thus, a single process can track as many different file positions as it has file descriptors for a given file.
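A small demonstration (the file name is illustrative): a descriptor inherited across fork() shares its offset with the parent, because both descriptors point at one open file description.

#include <fcntl.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int fd = open("data.txt", O_RDONLY | O_CREAT, 0644);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    if (fork() == 0) {
        lseek(fd, 100, SEEK_SET);           /* child moves the shared offset */
        _exit(0);
    }
    wait(NULL);

    /* the parent sees the move because both fds share one description */
    printf("offset after child's lseek: %ld\n", (long)lseek(fd, 0, SEEK_CUR));
    close(fd);
    return 0;
}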
It's not kept in the inode, but in the file handle (the open file description) inside the kernel.
The inode is part of how *nix-style file systems describe files on disk. FAT32, for example, has no inodes but is still supported by Linux.
To understand the relation between file descriptors and open files, we need to examine three data structures:
the per-process file descriptor table
the system-wide table of open file descriptions
the file system i-node table.
For each process, the kernel maintains a table of open file descriptors. Each entry in this table records information about a single file descriptor, including:
a set of flags controlling the operation of the file descriptor, and
a reference to the open file description.
The kernel maintains a system-wide table of all open file descriptions. An open file description stores all the information related to an open file, including:
the current file offset (as updated by read() and write(), or explicitly modified using lseek()),
the status flags specified when opening the file,
the file access mode (read-only, write-only, or read-write, as specified in open()),
settings relating to signal-driven I/O, and
a reference to the i-node object for this file.
Reference: The Linux Programming Interface by Michael Kerrisk, page 94.
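A short sketch tying the three tables together (the file name is illustrative): dup() adds a second entry to the per-process descriptor table that points at the same open file description, so the offset is shared, while per-descriptor flags such as FD_CLOEXEC are not.

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fd = open("data.txt", O_RDONLY | O_CREAT, 0644);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    int dup_fd = dup(fd);                   /* new table entry, same description */
    fcntl(dup_fd, F_SETFD, FD_CLOEXEC);     /* per-descriptor flag, not shared */

    lseek(fd, 42, SEEK_SET);                /* the offset lives in the description */
    printf("dup_fd offset: %ld\n", (long)lseek(dup_fd, 0, SEEK_CUR));   /* 42 */

    close(fd);
    close(dup_fd);
    return 0;
}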
