I have a file foo.hex that is accessed by two processes. One process has O_RDONLY access and the other has O_RDWR access.
When starting the system for the very first time, the reading process should not access the file before the writing process has initialized it.
Thus, I wrote something like this to initialize the file.
fd = open("foo.hex", O_RDWR|O_CREAT, 0666);
flock(fd, LOCK_EX);
init_structures(fd);
flock(fd, LOCK_UN);
This still leaves the reader process a window in which it can access the file before it is initialized.
I couldn't find a way to open() and flock() in an atomic fashion. Besides mutexes, what other possibilities are there to achieve my goal in an elegant way with as little overhead as possible (since it's only used once, the very first time the system is started)?
Make the writer create a file called "foo.hex.init" instead, and initialize that before renaming it to "foo.hex". This way, the reader can never see the uninitialized file contents.
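A minimal sketch of that approach (file names from the question; the header bytes are a made-up stand-in for whatever init_structures() writes):

```c
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Sketch of the init-then-rename approach. The header contents here are
 * illustrative only; in the real program init_structures() would write
 * the actual initial file contents. */
int create_initialized(const char *tmp_name, const char *final_name)
{
    int fd = open(tmp_name, O_RDWR | O_CREAT | O_EXCL, 0666);
    if (fd == -1)
        return -1;

    static const char header[16] = "FOOHEX-INIT-V1";
    if (write(fd, header, sizeof header) != sizeof header ||
        fsync(fd) != 0) {              /* flush data before publishing */
        close(fd);
        unlink(tmp_name);
        return -1;
    }
    close(fd);

    /* rename() is atomic on POSIX: a concurrent reader sees either no
     * "foo.hex" at all or the fully initialized one, never a partial file. */
    if (rename(tmp_name, final_name) != 0) {
        unlink(tmp_name);
        return -1;
    }
    return 0;
}
```

The reader then only needs to handle ENOENT (file not published yet), never a half-written file.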
Another approach could be to remove the existing file, recreate it without permissions for any process to access it, then change the file permissions after it's written:
unlink("foo.hex");
fd = open("foo.hex", O_RDWR|O_CREAT|O_EXCL, 0);
init_structures(fd);
fchmod(fd, 0666);
That likely won't work if you're running as root. (Which you shouldn't be doing anyway...)
This would prevent any process from using old data once the unlink() call is made. Depending on your requirements, that may or may not be worth the extra reader code necessary to deal with the file not existing or being accessible while the new file is being initialized.
Personally, I'd use the rename( "foo.hex.init", "foo.hex" ) solution unless init_structures() takes significant time, and there's a real, hard requirement to not use old data once new data is available. But sometimes important people aren't comfortable with using old data while any portion of new data is available, and they don't really understand, "If the reader process started two milliseconds earlier it would use the old data anyway".
An alternative approach is for the reader process to sleep a little and retry upon finding that the file doesn't yet exist, or is empty.
#include <errno.h>
#include <fcntl.h>
#include <sys/file.h>
#include <sys/stat.h>
#include <unistd.h>

#define MAX_RETRIES 10   /* tune to taste */

int open_for_read(const char *fname)
{
    int retries = 0;
    for (;;) {
        int fd = open(fname, O_RDONLY);
        if (fd == -1) {
            if (errno != ENOENT) return -1;
            goto retry;
        }
        if (flock(fd, LOCK_SH)) {
            close(fd);
            return -1;
        }
        struct stat st;
        if (fstat(fd, &st)) {
            close(fd);
            return -1;
        }
        if (st.st_size == 0) {
            close(fd);
            goto retry;
        }
        return fd;
    retry:
        if (++retries > MAX_RETRIES) return -1;
        sleep(1);
    }
    /* not reached */
}
You need similar code on the write side, so that if the writer loses the race it doesn't have to be restarted.
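For illustration, a matching write side might look like this sketch (the one-byte marker stands in for the question's init_structures(fd)):

```c
#include <fcntl.h>
#include <sys/file.h>
#include <sys/stat.h>
#include <unistd.h>

/* Matching write side: whichever writer gets the exclusive lock first
 * checks whether the file is still empty and initializes it only then,
 * so a writer that loses the race finds the file already set up and
 * does not need to be restarted. */
int open_for_write(const char *fname)
{
    int fd = open(fname, O_RDWR | O_CREAT, 0666);
    if (fd == -1)
        return -1;
    if (flock(fd, LOCK_EX)) {
        close(fd);
        return -1;
    }
    struct stat st;
    if (fstat(fd, &st)) {
        close(fd);
        return -1;
    }
    if (st.st_size == 0) {
        /* only the lock winner initializes; this one-byte marker is a
         * stand-in for the question's init_structures(fd) */
        if (write(fd, "\1", 1) != 1) {
            close(fd);
            return -1;
        }
    }
    flock(fd, LOCK_UN);
    return fd;
}
```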
There are many ways of doing inter-process communication.
Perhaps use a named semaphore that the writing process locks before opening and initializing the file? Then the reading process could attempt to lock the semaphore as well, and if it succeeds and the file doesn't exist, it unlocks the semaphore, waits a little while, and retries.
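A sketch of that semaphore handshake (the semaphore name is made up for illustration; link with -lrt/-pthread on older systems):

```c
#include <fcntl.h>
#include <semaphore.h>

/* Open (or create) a named semaphore with initial value 1, so it acts
 * as a mutex shared between the reader and writer processes. The name
 * "/foo_hex_init" is chosen here purely for illustration. */
static sem_t *init_sem(void)
{
    return sem_open("/foo_hex_init", O_CREAT, 0666, 1);
}
```

The writer brackets its open()+initialize sequence with sem_wait()/sem_post(); the reader does the same around its first open(), and if the file doesn't exist yet it posts the semaphore, sleeps briefly, and retries.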
The simplest way though, especially if the file will be recreated by the writing process every time, is already in the answer by John Zwinck.
So my end goal is to allow multiple threads to read the same file from start to finish. For example, if the file was 200 bytes:
Thread A 0-> 200 bytes
Thread B 0-> 200 bytes
Thread C 0-> 200 bytes
etc.
Basically have each thread read the entire file. The software is only reading that file, no writing.
so I open the file:
fd = open(filename, O_RDWR|O_SYNC, 0);
and then in each thread simply loop over the file. Because I only create one file descriptor, I also create a clone of the file descriptor in each thread using dup.
Here is a minimal example of a thread function:
void ThreadFunction() {
    int file_desc = dup(fd);
    uint32_t nReadBuffer[1000];
    int numBytes = -1;
    while (numBytes != 0) {
        numBytes = read(file_desc, nReadBuffer, sizeof(nReadBuffer));
        // processing on the bytes goes here
    }
}
However, I'm not sure this is correct: rather than each thread looping through the entire file, the threads seem to be daisy-chaining through it somehow.
Is this approach correct? I inherited this software for a project I am working on; the file descriptor also gets used in an mmap call, so I am not entirely sure whether O_RDWR or O_SYNC matter.
As other folks have mentioned, a duplicated file descriptor won't work here: dup() creates a new descriptor, but both descriptors still share one open file description, and therefore one file offset. However, there is a thread-safe alternative, which is to use pread. pread reads a file at an explicit offset and doesn't change the implicit offset in the file description.
This does mean that you have to manage the offset manually in each thread, but that shouldn't be too much of a problem with your proposed function.
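As an illustration, here is a sketch of the question's loop rewritten around pread(); the buffer size is kept from the question, and the return convention (total bytes on success, -1 on error) is my own choice:

```c
#include <fcntl.h>
#include <stdint.h>
#include <unistd.h>

/* Per-thread read loop using pread(): every thread tracks its own offset,
 * so a single shared descriptor can be read by many threads concurrently,
 * each from start to finish. */
off_t read_whole_file(int fd)
{
    uint32_t buffer[1000];   /* buffer size kept from the question */
    off_t offset = 0;
    ssize_t n;

    while ((n = pread(fd, buffer, sizeof buffer, offset)) > 0) {
        /* processing on the bytes goes here */
        offset += n;
    }
    return n == 0 ? offset : -1;   /* n == 0 means end of file */
}
```

Each thread simply calls this with the same fd; no dup() is needed at all.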
I'm trying to better understand the difference between select() and poll(). To that end, I implemented a simple program that opens a file as write-only, adds its file descriptor to the read set, and then executes select(), in the hope that the call will block until read permission is granted.
As this didn't work (and as far as I understand, this is intended behaviour), I tried to block access to the file using flock before the select() execution. Still, the program did not block.
My sample code is as follows:
#include <stdio.h>
#include <fcntl.h>
#include <poll.h>
#include <sys/file.h>
#include <errno.h>
#include <sys/select.h>

int main(int argc, char **argv)
{
    printf("[+] Select minimal example\n");
    int max_number_fds = FOPEN_MAX;
    int select_return;
    int cnt_pollfds;
    struct pollfd pfds_array[max_number_fds];
    struct pollfd *pfds = pfds_array;
    fd_set fds;

    int fd_file = open("./poll_text.txt", O_WRONLY);
    struct timeval tv;
    tv.tv_sec = 10;
    tv.tv_usec = 0;
    printf("\t[+] Textfile fd: %d\n", fd_file);

    // create and set fds set
    FD_ZERO(&fds);
    FD_SET(fd_file, &fds);

    printf("[+] Locking file descriptor!\n");
    if (flock(fd_file, LOCK_EX) == -1)
    {
        int error_nr = errno;
        printf("\t[+] Errno: %d\n", error_nr);
    }

    printf("[+] Executing select()\n");
    select_return = select(fd_file + 1, &fds, NULL, NULL, &tv);
    if (select_return == -1) {
        int error_nr = errno;
        printf("[+] Select Errno: %d\n", error_nr);
    }
    printf("[+] Select return: %d\n", select_return);
}
Can anybody see my error in this code? Also: I first tried to execute this code with two FDs added to the read list. When trying to lock them, I had to use flock(fd_file,LOCK_SH), as I cannot exclusively lock two FDs with LOCK_EX. Is there a difference in how to lock two FDs of the same file (compared to only one FD)?
I'm also not sure why select() will not block when a file that is added to the read set is opened write-only. The program can never (without a permission change) read data from the fd, so in my understanding select should block the execution, right?
As a clarification: the "problem" I want to solve is that I have to check whether I can replace existing select() calls with poll() (existing in the sense that I will not rewrite the select() call code, but will have access to the arguments of select). To check this, I wanted to implement a test that forces select to block its execution, so I can later check whether poll acts the same way (when given similar instructions, i.e. the same FDs to check).
So my workflow would be: write tests for different select behaviors (i.e. block and not block), write similar tests for poll (also block, not block), and check if/how poll can be forced to do exactly what select is doing.
Thank you for any hints!
When select tells you that a file descriptor is ready for reading, this doesn't necessarily mean that you can read data. It only means that a read call will not block. A read call will also not block when it returns an EOF or error condition.
In your case I expect that read will immediately return -1 and set errno to EBADF (fd is not a valid file descriptor or is not open for reading) or maybe EINVAL (fd is attached to an object which is unsuitable for reading...)
Edit: Additional information as requested in a comment:
A file can be in a blocking state if a physical operation is needed that will take some time, e.g. if the read buffer is empty and (new) data has to be read from the disk, if the file is connected to a terminal and the user has not yet entered any (more) data or if the file is a socket or a pipe and a read would have to wait for (new) data to arrive...
The same applies for write: If the send buffer is full, a write will block. If the remaining space in the send buffer is smaller than your amount of data, it may write only the part that currently fits into the buffer.
If you set a file to non-blocking mode, a read or write will not block but tell you that it would block.
If you want a blocking situation for testing purposes, you need control over the process or hardware that provides or consumes the data. I suggest reading from a terminal (stdin) while not entering any data, or from a pipe whose writing process does not write any data. You can also fill the write buffer of a pipe while the reading process does not read from it.
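Following that suggestion, here is a small self-contained way to force select() to genuinely block: watch the read end of a pipe that has an open writer but no data. (The one-second timeout is arbitrary.)

```c
#include <sys/select.h>
#include <sys/time.h>
#include <unistd.h>

/* Make select() block for real: watch the read end of a pipe that nobody
 * writes to. With a 1-second timeout, select() blocks for the full second
 * and then returns 0, meaning "timed out, nothing became readable". */
int demo_blocking_select(void)
{
    int p[2];
    if (pipe(p) != 0)
        return -1;

    fd_set rfds;
    FD_ZERO(&rfds);
    FD_SET(p[0], &rfds);

    struct timeval tv = { .tv_sec = 1, .tv_usec = 0 };
    int r = select(p[0] + 1, &rfds, NULL, NULL, &tv);  /* no data: times out */

    close(p[0]);
    close(p[1]);
    return r;   /* 0 == timed out, i.e. select() actually blocked */
}
```

The same fd can then be handed to poll() with POLLIN to verify it blocks the same way.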
I want to be able to write atomically to a file, and I am trying to use the write() function since it appears to guarantee atomic writes on most Linux/Unix systems.
Since I have variable string lengths and multiple printf's, I was told to use snprintf() and pass the resulting buffer to write() in order to do this properly. Upon reading the documentation of this function, I did a test implementation as below:
int file = open("file.txt", O_CREAT | O_WRONLY, 0666);
if (file < 0)
    perror("Error");

char buf[200] = "";
int numbytes = snprintf(buf, sizeof(buf), "Example string %s", stringvariable);
write(file, buf, numbytes);
From my tests it seems to have worked, but my question is whether this is the most correct way to implement it, since I am creating a rather large buffer (one I am 100% sure will fit all my printfs) to store the output before passing it to write.
No, write() is not atomic, not even when it writes all of the data supplied in a single call.
Use advisory record locking (fcntl(fd, F_SETLKW, &lock)) in all readers and writers to achieve atomic file updates.
fcntl()-based record locks work over NFS on both Linux and BSDs; flock()-based file locks may not, depending on system and kernel version. (If NFS locking is disabled like it is on some web hosting services, no locking will be reliable.) Just initialize the struct flock with .l_whence = SEEK_SET, .l_start = 0, .l_len = 0 to refer to the entire file.
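A helper for that whole-file record lock might look like this (the function name is mine; the struct flock fields are exactly as described above):

```c
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

/* Take (or release) an advisory record lock covering the whole file:
 * l_whence = SEEK_SET, l_start = 0, l_len = 0 means "from the start to
 * the end of the file, however far it grows". F_SETLKW blocks until the
 * lock is granted. Writers pass F_WRLCK, readers F_RDLCK, and F_UNLCK
 * releases the lock. */
int lock_whole_file(int fd, short type)
{
    struct flock lk;
    memset(&lk, 0, sizeof lk);
    lk.l_type   = type;
    lk.l_whence = SEEK_SET;
    lk.l_start  = 0;
    lk.l_len    = 0;
    return fcntl(fd, F_SETLKW, &lk);
}
```

Note that these locks are per-process: closing any descriptor to the file drops them, so keep the locked fd open for the duration of the update.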
Use asprintf() to print to a dynamically allocated buffer:
char *buffer = NULL;
int length;
length = asprintf(&buffer, ...);
if (length == -1) {
/* Out of memory */
}
/* ... Have buffer and length ... */
free(buffer);
After adding the locking, do wrap your write() in a loop:
{
    const char *p = (const char *)buffer;
    const char *const q = (const char *)buffer + length;
    ssize_t n;
    while (p < q) {
        n = write(fd, p, (size_t)(q - p));
        if (n > 0) {
            p += n;
        } else if (n != -1) {
            /* Write error / kernel bug! */
            break;
        } else if (errno != EINTR) {
            /* Error! Details in errno */
            break;
        }
    }
}
Although there are some local filesystems that guarantee write() does not return a short count unless you run out of storage space, not all do; especially not the networked ones. Using a loop like above lets your program work even on such filesystems. It's not too much code to add for reliable and robust operation, in my opinion.
In Linux, you can take a write lease on a file to exclude any other process opening that file for a while.
Essentially, you cannot block a file open, but you can delay it for up to /proc/sys/fs/lease-break-time seconds, typically 45 seconds. The lease is granted only when no other process has the file open, and if any other process tries to open the file, the lease owner gets a signal. (If the lease owner does not release the lease, for example by closing the file, the kernel will automagically break the lease after the lease-break-time is up.)
Unfortunately, these only work in Linux, and only on local files, so they are of limited use.
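For completeness, taking such a lease is a single fcntl() call. This is a Linux-only sketch: F_SETLEASE requires _GNU_SOURCE, and the caller must own the file (or have CAP_LEASE); it also fails with EAGAIN if any other process already has the file open.

```c
#define _GNU_SOURCE
#include <fcntl.h>

/* Take a write lease on an already-open descriptor (Linux only).
 * While the lease is held, any other open() of the file is delayed
 * (up to /proc/sys/fs/lease-break-time seconds) and the lease holder
 * receives a signal (SIGIO by default) so it can finish up and release. */
int take_write_lease(int fd)
{
    return fcntl(fd, F_SETLEASE, F_WRLCK);
}
```

Release the lease explicitly with fcntl(fd, F_SETLEASE, F_UNLCK), or implicitly by closing the descriptor.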
If readers do not keep the file open, but open, read, and close it every time they read it, you can write a full replacement file (it must be on the same filesystem; I recommend using a lock-subdirectory for this) and rename() it over the old file.
All readers will see either the old file or the new file, but those that keep their file open, will never see any changes.
I'm creating a fairly big project as homework, in which I need to create a server program that listens on 2 FIFOs to which clients write.
Everything works, but there is something that is making me angry: whenever I perform an operation, which consists of some writes/reads between client and server, and then close the FIFOs on the client, it looks like the server "thinks" that there is still someone keeping those FIFOs open.
Because of this, the server tries to read 64 bytes after each operation, obviously failing (reading 0 bytes). This happens only once per operation; it doesn't keep trying to read 64 bytes.
It doesn't cause any problems for clients, but it's really strange and I hate this type of bug.
I think it's a problem connected to open/close and to the fact that clients use a lock.
Note, flags used on the open operation are specified in this pseudocode text
Server behaviour:
Open Fifo(1) for READING (O_RDONLY)
Open Fifo(2) for WRITING (O_WRONLY)
Do some operations
Close Fifo(1)
Close Fifo(2)
Client behaviour:
Set a lock on Fifo(1) (waiting if there is already one)
Set a lock on Fifo(2) (same as before)
Open Fifo(1) for WRITING (O_WRONLY)
Open Fifo(2) for READING (O_RDONLY)
Do some operations
Close Fifo(1)
Close Fifo(2)
Get lock from Fifo(1)
Get lock from Fifo(2)
I can't post the code directly, except for the functions used for networking, because the project is quite big and I don't use syscalls directly. Here you are:
int Network_Open(const char *path, int oflag)
{
    return open(path, oflag);
}

ssize_t Network_IO(int fifo, NetworkOpCodes opcode, void *data, size_t dataSize)
{
    ssize_t retsize = 0;
    errno = 0;
    if (dataSize == 0) return 0;
    while ((retsize = (opcode == NetworkOpCode_Write ?
                       write(fifo, data, dataSize) :
                       read(fifo, data, dataSize))) < 0)
    {
        if (errno != EINTR) break;
    }
    return retsize;
}

Boolean Network_Send(int fifo, const void *data, size_t dataSize)
{
    return ((ssize_t)dataSize) == Network_IO(fifo, NetworkOpCode_Write, (void *)data, dataSize);
}

Boolean Network_Receive(int fifo, void *data, size_t dataSize)
{
    return ((ssize_t)dataSize) == Network_IO(fifo, NetworkOpCode_Read, data, dataSize);
}

Boolean Network_Close(int fifo)
{
    if (fifo >= 0)
        return close(fifo) == 0;
    return 0;   /* nothing to close */
}
Any help will be appreciated, thanks.
EDIT 1:
Client output: http://pastie.org/2523854
Server output (strace): http://pastie.org/2523858
Zero bytes returned from (blocking) read() indicates an end of file, i.e., that the other end has closed the FIFO. Read the manpage for read.
The zero bytes result from read() means that the other process has finished. Now your server must close the original file descriptor and reopen the FIFO to serve the next client. The blocking operations will resume once you start working with the new file descriptor.
That's the way it is supposed to work.
AFAIK, after you get the zero bytes, further attempts to read on the file descriptor will also return 0 bytes, in perpetuity (or until you close the file descriptor). Even if another process opens the FIFO, the original file descriptor will continue to indicate EOF (the other client process will be hung waiting for a server process to open the FIFO for reading).
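Putting the answers above together, the server side typically follows this pattern (the path and buffer size are illustrative): drain one client until read() returns 0, then close the stale descriptor and reopen the FIFO for the next client.

```c
#include <fcntl.h>
#include <unistd.h>

/* Read everything one client sends; returns the total byte count, or -1
 * on a read error. On a FIFO, read() == 0 means the writer closed its end. */
ssize_t drain_client(int fd)
{
    char buf[64];
    ssize_t total = 0, n;

    while ((n = read(fd, buf, sizeof buf)) > 0)
        total += n;
    return n == 0 ? total : -1;
}

/* Serve clients one after another: the EOF from one client is handled by
 * closing the stale descriptor and reopening the FIFO, which blocks until
 * the next client opens it for writing. */
void serve_forever(const char *fifo_path)
{
    for (;;) {
        int fd = open(fifo_path, O_RDONLY);
        if (fd == -1)
            return;
        drain_client(fd);
        close(fd);
    }
}
```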
I have created the following program, in which I wish to poll the file descriptor of the file that I am opening in the program.
#include <stdio.h>
#include <fcntl.h>
#include <poll.h>

#define FILE "help"

int main()
{
    int ret1;
    struct pollfd fds[1];

    ret1 = open(FILE, O_CREAT, 0666);
    fds[0].fd = ret1;
    fds[0].events = POLLIN;

    while (1)
    {
        poll(fds, 1, -1);
        if (fds[0].revents & POLLIN)
            printf("POLLING");
    }
    return 0;
}
It goes into an infinite loop. I expect the loop body to run only when some operation happens to the file. (It's an ASCII file.)
Please help.
poll() doesn't really work on regular files: since a read() on a regular file never blocks, poll() will always report that you can read from the file without blocking.
It would (almost) work on character devices*, named pipes**, or sockets, though, since those block on read() when there is no data available. (You also need to actually read that data then, or else poll will report again and again that data is available.)
To "poll" a growing/shrinking file, see man inotify or implement your own polling using fstat() in a loop.
* Block devices are a story apart; while technically a read from a hard disk can block for 10 ms or longer, this is not perceived as blocking I/O in Linux.
** see also how to flush a named pipe using bash
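To sketch the inotify route mentioned above (the path and event mask are illustrative): create an inotify descriptor, add a watch for the file, and then poll()/read() that descriptor instead of the file itself. Unlike a regular file, the inotify descriptor only becomes readable when events are pending.

```c
#include <fcntl.h>
#include <sys/inotify.h>
#include <unistd.h>

/* Watch a regular file for changes. Returns an inotify descriptor that is
 * readable only when something happened to the file, so it works with
 * poll()/select(); read() it to consume the queued events. */
int watch_file(const char *path)
{
    int ifd = inotify_init1(IN_NONBLOCK);
    if (ifd == -1)
        return -1;
    if (inotify_add_watch(ifd, path, IN_MODIFY | IN_CLOSE_WRITE) == -1) {
        close(ifd);
        return -1;
    }
    return ifd;
}
```

Each read() on the returned descriptor yields one or more struct inotify_event records describing what happened.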
No idea if this is the cause of your problems (probably not), but it is a particularly bad idea to redefine the standard macro FILE.
Didn't your compiler complain about this?