Why doesn't my multi-process writing program trigger concurrent conflicts?

I'm trying to trigger some concurrent conflicts by having several processes writing to the same file, but couldn't:
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <string.h>
#include <fcntl.h>
#include <sys/wait.h>

void concurrent_write()
{
    int create_fd = open("bar.txt", O_CREAT | O_TRUNC, 0644);
    close(create_fd);
    int repeat = 20;
    int num = 4;
    for (int process = 0; process < num; process++)
    {
        int rc = fork();
        if (rc == 0)
        {
            // child
            int write_fd = open("bar.txt", O_WRONLY | O_APPEND, 0644);
            for (int idx = 0; idx < repeat; idx++)
            {
                sleep(1);
                write(write_fd, "child writing\n", strlen("child writing\n"));
            }
            close(write_fd);
            exit(0);
        }
    }
    for (int process = 0; process < num; process++)
    {
        wait(NULL);
        // wait for all children to exit
    }
    printf("write to `bar.txt`\n%d lines written by %d process\n", repeat * num, num);
    printf("wc:");
    if (fork() == 0)
    {
        // child
        char *args[3];
        args[0] = strdup("wc");
        args[1] = strdup("bar.txt");
        args[2] = NULL;
        execvp(args[0], args);
    }
}

int main(int argc, char *argv[])
{
    concurrent_write();
    return 0;
}
This program forks #num children and has each of them write #repeat lines to a file. But every time (no matter how I change #repeat and #num) I get the same result: the length of bar.txt (the output file) matches the total number of lines written. Why are no concurrent conflicts triggered?

Writing to a file can be divided into a two-step process:
Locate where you want to write.
Write data into the file.
Opening the file with the O_APPEND flag ensures that this two-step process is atomic, so the file always ends up with exactly the number of lines you expect.

See the open(2) man page:
O_APPEND
The file is opened in append mode. Before each write(2),
the file offset is positioned at the end of the file, as
if with lseek(2). The modification of the file offset and
the write operation are performed as a single atomic step.
In essence, one of the major design features of O_APPEND is precisely to prevent the sort of "concurrent conflicts" you mention. The typical example would be a log file that several processes must write to. Using O_APPEND ensures their messages do not overwrite each other.
Moreover, all data written by a single write call is written atomically, so provided that your write("child writing\n") successfully writes all its bytes (which for a regular file it usually would), they will not be interleaved with the bytes of any other such message.
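If you actually want to see the lost-update problem, one way (a sketch of my own, not taken from the question) is to drop O_APPEND and do the positioning and the writing as two separate, non-atomic steps:

#include <fcntl.h>
#include <string.h>
#include <unistd.h>

// Hypothetical helper: seek to the end and then write, as two distinct steps.
// Run several children through this and two of them can read the same end
// offset and overwrite each other's lines, so bar.txt ends up with fewer than
// repeat * num lines. Error handling omitted for brevity.
static void racy_append(const char *msg, int repeat)
{
    int fd = open("bar.txt", O_WRONLY);    // note: no O_APPEND
    for (int idx = 0; idx < repeat; idx++)
    {
        lseek(fd, 0, SEEK_END);            // step 1: locate the current end of file
        write(fd, msg, strlen(msg));       // step 2: write there (another process may have moved it)
    }
    close(fd);
}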

First, write() calls with the O_APPEND flag should be atomic. Per POSIX write():
If the O_APPEND flag of the file status flags is set, the file offset shall be set to the end of the file prior to each write and no intervening file modification operation shall occur between changing the file offset and the write operation.
But that alone is not enough when there are multiple threads or processes making parallel write() calls on the same file: it does not guarantee that those write() calls are atomic with respect to each other.
POSIX does guarantee that parallel write() calls are also atomic:
All of the following functions shall be atomic with respect to each
other in the effects specified in POSIX.1-2017 when they operate on
regular files or symbolic links:
...
write()
...
See also Is file append atomic in UNIX?
Beware, though. Reading that question and its answers shows that Linux filesystems such as ext3 are not POSIX-compliant once a write exceeds a relatively small size, or possibly if it crosses page and/or filesystem sector boundaries. I suspect XFS and ZFS support write() atomicity much better, given their origins.
And none of this applies to Windows.
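If a write() does come back short (see the caveats above), the remaining bytes still have to be written explicitly. A small retry helper, sketched below (the name write_all is mine), avoids silently dropping them; note that once a retry happens, the message as a whole is no longer covered by the single-call atomicity discussed above.

#include <errno.h>
#include <unistd.h>

// Keep calling write() until all len bytes have been submitted or a real error occurs.
static ssize_t write_all(int fd, const void *buf, size_t len)
{
    const char *p = buf;
    size_t left = len;
    while (left > 0)
    {
        ssize_t n = write(fd, p, left);
        if (n < 0)
        {
            if (errno == EINTR)
                continue;           // interrupted before writing anything: try again
            return -1;              // genuine error: let the caller handle it
        }
        p += n;                     // skip past the bytes that were written
        left -= (size_t)n;
    }
    return (ssize_t)len;
}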

Related

read() doesn't block on empty FIFOs opened without O_NONBLOCK flag

pipe(7) says:
If a process attempts to read from an empty pipe, then read(2) will block until data is available. If a process attempts to write to a full pipe (see below), then write(2) blocks until sufficient data has been read from the pipe to allow the write to complete. Nonblocking I/O is possible by using the fcntl(2) F_SETFL operation to enable the O_NONBLOCK open file status flag.
Below I have two simple C programs compiled on Linux with gcc:
reader.c:
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>

#define STACKBUF_SIZE 128
#define FIFO_PATH "/home/bogdan/.imagedata"

signed int main(int argc, char **argv) {
    int fifo_fd = open(FIFO_PATH, O_RDONLY); // blocking... - notice no O_NONBLOCK flag
    if (fifo_fd != -1) {
        fprintf(stdout, "open() call succeeded\n");
    }
    while (1) {
        char buf[STACKBUF_SIZE] = {0};
        ssize_t bread = read(fifo_fd, buf, STACKBUF_SIZE);
        fprintf(stdout, "%zd - %s\n", bread, buf);
        sleep(1);
    }
    close(fifo_fd);
    return EXIT_SUCCESS;
}
writer.c:
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <fcntl.h>

#define STACKBUF_SIZE 128
#define FIFO_PATH "/home/bogdan/.imagedata"
#define DATA "data"

int main(void) {
    int fifo_fd = open(FIFO_PATH, O_WRONLY); // blocks until the FIFO is opened on the reader end; we always start the reader first, so...
    if (fifo_fd != -1) {
        ssize_t bwritten = write(fifo_fd, DATA, 5);
        fprintf(stdout, "writer wrote %zd bytes\n", bwritten);
    }
    close(fifo_fd);
    return EXIT_SUCCESS;
}
The files are compiled into two separate binaries with gcc writer.c -Og -g -o ./writer, same for the reader.
From the shell I first execute the reader binary, and as expected, the initial open() call blocks until I also execute the writer. I then execute the writer, whose open() call immediately succeeds and it writes 5 bytes to the FIFO (which are correctly displayed by the reader), after which it closes the fd, leaving the FIFO empty (?).
However, the following read() calls in the while loop of the reader don't block at all, and instead just return 0.
Unless I am missing something (I probably am), this clashes with the semantics outlined by the pipe(7) manpage, as the FIFO fd was opened without the O_NONBLOCK flag in both the reader and the writer.
The section of the manual that you quoted only applies to pipes with open writers. Two paragraphs down, it says this:
If all file descriptors referring to the write end of a pipe have been closed, then an attempt to read(2) from the pipe will see end-of-file (read(2) will return 0).
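If you want the reader to block again once every writer has gone away, one common pattern (a sketch of my own, not part of the answer) is to treat read() returning 0 as end-of-file and simply reopen the FIFO, which blocks until the next writer opens it:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

#define FIFO_PATH "/home/bogdan/.imagedata"
#define STACKBUF_SIZE 128

int main(void) {
    for (;;) {
        int fifo_fd = open(FIFO_PATH, O_RDONLY);   // blocks until some writer opens the FIFO
        if (fifo_fd == -1) {
            perror("open");
            return 1;
        }
        char buf[STACKBUF_SIZE];
        ssize_t bread;
        while ((bread = read(fifo_fd, buf, sizeof buf)) > 0) {
            fwrite(buf, 1, (size_t)bread, stdout); // consume data while at least one writer exists
        }
        close(fifo_fd);                            // bread == 0: all writers closed; loop and reopen
    }
}

Another Linux-specific option is for the reader to open the FIFO with O_RDWR so that it always counts as a writer itself and read() never reports end-of-file; whether that is appropriate depends on the application.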

Question about writing to a file in Producer-Consumer program

I have to write two separate programs: one is a Producer and the second is a Consumer (both running in separate terminals). I provide an argument to the Producer, which can be a text or a single character. The Producer then creates a .txt file, puts a single character into it and closes it. The Consumer opens that file, reads that character, prints it on the terminal, then closes the file and deletes it. The whole process repeats itself. If the provided argument includes *, for example * or text*, both programs finish, printing * before ending. I can only use the functions open(), close(), read(), write(), unlink().
I have written both codes, this is the Producer code:
(I am aware that I have unnecessarily defined SIZE and used it; please don't mind it.)
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/stat.h>
#include <fcntl.h>

#define SIZE 1

int main(int argc, char *argv[]){
    char buff;
    do{
        int fdi=-1;
        while(fdi<0){
            fdi=open("test.txt",O_WRONLY | O_CREAT | O_EXCL, 0666);
        }
        read(STDIN_FILENO,&buff,SIZE);
        write(fdi,&buff,SIZE);
        close(fdi);
    }while(buff!='*');
    return 0;
}
and this is the Consumer code:
#include <string.h>
#include <unistd.h>
#include <sys/stat.h>
#include <fcntl.h>

#define SIZE 1

int main(int argc, char *argv[]){
    char buff;
    do{
        int fdi=-1;
        while(fdi<0){
            fdi=open("test.txt",O_RDONLY | O_EXCL);
        }
        int rdin=read(fdi,&buff,SIZE);
        if(rdin>0){
            write(STDOUT_FILENO,&buff,SIZE);
            close(fdi);
            unlink("test.txt");
        }
        else{
            close(fdi);
        }
    }while(buff!='*');
    return 0;
}
My question is: how does the Producer program not insert more than one character into the file? I mean, if I run only the Producer program and, for example, provide the argument text, it will insert only the letter t into the file; the rest gets inserted into other files. Shouldn't it loop and add the whole text word to one file? There is no statement that guarantees the file will contain one character, yet it contains only one character, and I don't know why.
In the producer, when you open a file with both the O_CREAT and O_EXCL flags, the call fails if the file already exists. So on the first iteration of the outer loop (assuming the file doesn't exist) the file is created and the first character is written. On the next iteration the open call fails because the file exists, so the producer sits in the inner loop for as long as the file exists.
In the consumer, the open call is done in a loop until it succeeds, which happens once the producer has written and closed the file. The consumer then reads the character from the file and deletes it. When the consumer deletes the file, the open call in the producer succeeds again and the producer writes the second character to a fresh file.
This process then repeats until a * character is read by the producer and written to the file, after which the producer exits. Then when the consumer reads the * it also exits.
I will limit my answer to your only specific question:
how does the Producer program not insert more than a one character into the file?
You're doing the following in a loop:
do{
    int fdi = -1;
    while (fdi < 0){
        // Open the file only if it does not already exist, creating it.
        fdi = open("test.txt", O_WRONLY | O_CREAT | O_EXCL, 0666);
    }
    // Read 1 char from stdin.
    read(STDIN_FILENO, &buff, SIZE);
    // Write that char to the beginning of the file.
    write(fdi, &buff, SIZE);
    // Close the file.
    close(fdi);
} while(buff != '*');
The first time you open the file, it is also created. From the second time onwards, the combination of flags O_CREAT | O_EXCL will make open() fail with error EEXIST, since this combination of flags will open the file only if it doesn't already exist. After writing the first character, your program will run in an endless loop (while (fdi < 0)) trying to open the file a second time.
From the manual page for open():
O_EXCL: Ensure that this call creates the file: if this flag is specified in conjunction with O_CREAT, and pathname
already exists, then open() will fail.
So, first of all, you don't need the O_EXCL flag. Other than that, if you want to append data to the file instead of overwriting its content each time, you should add the O_APPEND flag when you open() the file. From the manual page:
O_APPEND: The file is opened in append mode. Before each write(2), the file offset is positioned at the end of the file,
as if with lseek(2). The modification of the file offset and the write operation are performed as a single
atomic step.
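For illustration, a producer loop following that advice might look like the sketch below (my own variant, not part of the answer): every character read from stdin is appended to the same test.txt instead of ending up in a new file each time.

#include <fcntl.h>
#include <unistd.h>

#define SIZE 1

int main(void){
    char buff;
    do{
        // No O_EXCL, so open() succeeds whether or not the file already exists;
        // O_APPEND makes every write land at the current end of the file.
        int fdi = open("test.txt", O_WRONLY | O_CREAT | O_APPEND, 0666);
        if (fdi < 0)
            continue;                         // transient failure: just try again
        if (read(STDIN_FILENO, &buff, SIZE) != SIZE)
            buff = '*';                       // treat end of input as a stop request
        write(fdi, &buff, SIZE);
        close(fdi);
    } while(buff != '*');
    return 0;
}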

Writing FILE* through pipe

I have two files open in two different processes. There's a pipe connecting the two. Is it possible to write directly from one file to another? Especially if the reading process doesn't know the size of the file it's trying to read?
I was hoping to do something like this
#include <stdio.h>
#include <unistd.h>

#define length 100

int main(){
    int frk = fork();
    int pip[2];
    pipe(pip);
    if (frk==0){ //child
        FILE* fp = fopen("file1", "r");
        write(pip[1], fp, length);
    }
    else {
        FILE* fp = fopen("file2", "w");
        read(pip[0], fp, length);
    }
}
Is it possible to write directly from one file to another?
C does not provide any mechanism for that, and it seems like it would require specialized hardware support. The standard I/O paradigm is that data get read from their source into memory or written from memory to their destination. That pesky "memory" in the middle means copying from one file to another cannot be direct.
Of course, you can write a function or program that performs such a copy, hiding the details from you. This is what the cp command does, after all, but the C standard library does not contain a function for that purpose.
Especially if the process reading doesn't know the size of the file it's trying to read?
That bit isn't very important. One simply reads and then writes (only) what one has read, repeating until there is nothing more to read. "Nothing more to read" means that a read attempt indicates by its return value that the end of the file has been reached.
If you want one process to read one file and the other to write that data to another file, using a pipe to convey data between the two, then you need both processes to implement that pattern. One reads from the source file and writes to the pipe, and the other reads from the pipe and writes to the destination file.
Special note: for the process reading from the pipe to detect EOF on that pipe, the other end has to be closed, in both processes. After the fork, each process can and should close the pipe end that it doesn't intend to use. The one using the write end then closes that end when it has nothing more to write to it.
In other unix systems, like BSD, there's a call to connect two file descriptors directly and do what you want, but I don't know whether there's a system call for that in Linux. Anyway, this cannot be done with FILE * pointers, as these are instances of the buffered file abstraction the <stdio.h> library uses to represent a file. You can get the file descriptor (as the system knows it) behind a FILE * by calling fileno(3).
The semantics you are trying to get are quite elaborate: you want the data to pass directly from one file descriptor to another without any process intervening (that is, inside the kernel), and for that the kernel would need a pool of threads doing the work of copying from the read side to the write side.
The old way of doing this is to create a thread that does the work of reading from one file descriptor (not a FILE * pointer) and writing to the other.
Another thing worth noting is that the pipe(2) system call gives you two connected descriptors: what is write(2)n into one (index 1) can be read(2) from the other (index 0). If you fork(2) a second process and then call pipe(2) in both, you end up with two pipes (with two descriptors each), one in each process, with no relationship between them. Each process would only be able to communicate with itself, not with the other (which knows nothing about the other process' pipe descriptors), so no communication between them would be possible.
Next is a complete example of what you are trying to do:
#include <errno.h>
#include <stdlib.h>
#include <string.h>
#include <sys/wait.h>
#include <fcntl.h>
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

#define length 100

#define FMT(fmt) "pid=%d:" __FILE__ ":%d:%s: " fmt, getpid(), __LINE__, __func__
#define ERR(fmt, ...) do {                      \
        fprintf(stderr,                         \
                FMT(fmt ": %s (errno = %d)\n"), \
                ##__VA_ARGS__,                  \
                strerror(errno), errno);        \
        exit(1);                                \
    } while(0)

void copy(int fdi, int fdo)
{
    unsigned char buffer[length];
    ssize_t res, nread;
    while((nread = res = read(fdi, buffer, sizeof buffer)) > 0) {
        res = write(fdo, buffer, nread);
        if (res < 0) ERR("write");
    } /* while */
    if (res < 0) ERR("read");
} /* copy */

int main()
{
    int pip[2];
    int res;

    res = pipe(pip);
    if (res < 0) ERR("pipe");

    char *filename;
    switch (res = fork()) {
    case -1: /* error */
        ERR("fork");
    case 0: /* child */
        filename = "file1";
        res = open(filename, O_RDONLY);
        if (res < 0) ERR("open \"%s\"", filename);
        close(pip[0]);
        copy(res, pip[1]);
        break;
    default: /* parent, we got the child's pid in res */
        filename = "file2";
        res = open(filename, O_CREAT | O_TRUNC | O_WRONLY, 0666);
        if (res < 0) ERR("open \"%s\"", filename);
        close(pip[1]);
        copy(pip[0], res);
        int status;
        res = wait(&status); /* wait for the child to finish */
        if (res < 0) ERR("wait");
        fprintf(stderr,
                FMT("The child %d finished with exit code %d\n"),
                res,
                status);
        break;
    } /* switch */
    exit(0);
} /* main */

Is it possible to change another process's open file descriptor's access flags using any system call, or even a loadable kernel module?

I am running a process (Process A) which opens a file with read only access mode. Then it pauses, causing the process to stay running and keep the file descriptor open. Later, if any of the below is possible, it will resume after some time and continue operation.
I want to know if any of these are possible:
Can we create another process (Process B) with superuser privileges, which can access Process A's open file descriptor and change its access mode to read and write?
Can we modify the file descriptor of Process A, from within the process (I mean within its code) from read only to read and write?
Can I create a loadable kernel module that access the process A using its process ID (PID) and check for open file descriptors and change their permissions to read and write?
I have searched the forums countless times, but didn't find anything specific to my problem. I also found out about the fcntl() system call, but it doesn't allow us to modify the access mode of an already-open file descriptor.
You can't. Permissions are locked in place at the time when a file descriptor is opened and it would be almost impossible to ensure the security of the system if they could be changed at run time. You could make a kernel module that changes this, but it would pretty much be a death sentence to the stability and security of the system.
What you can do and what is normally done is to open the file again with different permissions and replace the file descriptor with dup2.
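A minimal sketch of that reopen-and-dup2 approach (the function name upgrade_to_rdwr and the assumption that the original pathname is still valid are mine):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

// Replace old_fd, which was opened read-only, with a read-write descriptor for the same path.
int upgrade_to_rdwr(int old_fd, const char *path)
{
    int new_fd = open(path, O_RDWR);         // open the same file again, read-write this time
    if (new_fd < 0) {
        perror("open");
        return -1;
    }
    off_t pos = lseek(old_fd, 0, SEEK_CUR);  // carry over the current offset, if the file is seekable
    if (pos != (off_t)-1)
        lseek(new_fd, pos, SEEK_SET);
    if (dup2(new_fd, old_fd) < 0) {          // make old_fd refer to the read-write description
        perror("dup2");
        close(new_fd);
        return -1;
    }
    close(new_fd);                           // old_fd is now read-write; the temporary fd can go
    return 0;
}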
3 first, since it's easiest: Can it be done in the kernel? Certainly … in theory, you can do “almost anything” there.
Both #2 and #1 come down to the question as to whether the file descriptor is re-openable.
In the most common case, where the fd refers to a regular file in the local filesystem whose pathname still has an unaltered directory link to it, you can simply open the same pathname from another process. E.g.: if A opens /home/user/foo.log read-only, then either A or B can simply open the same pathname read-write later.
Since you're asking, I'll assume it's not that easy. Perhaps the pathname has been altered (e.g., the file may have been unlinked), or perhaps the fd is a reference to another type of stream, like a shell pipeline, FIFO, or network socket connection.
As you probably noticed, fcntl does not allow escalation of privileges:
F_SETFL (int)
Set the file status flags to the value specified by arg. File
access mode (O_RDONLY, O_WRONLY, O_RDWR) and file creation
flags (i.e., O_CREAT, O_EXCL, O_NOCTTY, O_TRUNC) in arg are
ignored.
This is, of course, a security precaution to protect against escalation of privileges in a process that may have opened the file under temporarily elevated or changed permissions.
However, it seems that you may have a chance to open a new, duplicate stream based upon a pathname if you know the file descriptor number and the process ID of Process A.
The /proc filesystem contains virtual file entries which represent open streams. Under /proc/<pid>/fd/<fd> you can find a pathname to the currently-open stream. Given sufficient permissions, you can open that stream.
reader.c
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int
main (int argc, char** argv) {
    FILE* f = fopen("/tmp/foo", "r");
    while(1) {
        sleep(10);
    }
}
writer.c
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <string.h>

int
main (int argc, char** argv) {
    /* sane PID passed in */
    if (2 != argc) exit(1);
    if (5 > strlen(argv[1])) exit(2);
    for(size_t ci = 0; argv[1][ci]; ++ci) {
        if (! ( ('0' <= argv[1][ci]) && (argv[1][ci] <= '9') ) ) exit(3);
    }
    /* note we know FD=3 so it's hard-coded */
    char path[100];
    int n = snprintf(path, 99, "/proc/%s/fd/3", argv[1]);
    if (n < 0) exit (4);
    FILE* f = fopen(path, "r+"); /* open the stream read-write without truncating it */
    fprintf(f, "written\n");
    exit(0);
}
shell test
⇒ cc reader.c -o reader
⇒ cc writer.c -o writer
⇒ echo XXXXXXXXXXXX > /tmp/foo
⇒ cat /tmp/foo
XXXXXXXXXXXX
⇒ ./reader &
[1] 20709
⇒ ./writer 20709
⇒ cat /tmp/foo
written
XXXX

How can I use Linux's splice() function to copy a file to another file?

Here's another question about splice(). I'm hoping to use it to copy files, and am trying to use two splice calls joined by a pipe, like the example on splice's Wikipedia page. I wrote a simple test case which only tries to read the first 32K bytes from one file and write them to another:
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <errno.h>
#include <string.h>

int main(int argc, char **argv) {
    int pipefd[2];
    int result;
    FILE *in_file;
    FILE *out_file;

    result = pipe(pipefd);

    in_file = fopen(argv[1], "rb");
    out_file = fopen(argv[2], "wb");

    result = splice(fileno(in_file), 0, pipefd[1], NULL, 32768, SPLICE_F_MORE | SPLICE_F_MOVE);
    printf("%d\n", result);

    result = splice(pipefd[0], NULL, fileno(out_file), 0, 32768, SPLICE_F_MORE | SPLICE_F_MOVE);
    printf("%d\n", result);
    if (result == -1)
        printf("%d - %s\n", errno, strerror(errno));

    close(pipefd[0]);
    close(pipefd[1]);
    fclose(in_file);
    fclose(out_file);
    return 0;
}
When I run this, the input file seems to be read properly, but the second splice call fails with EINVAL. Anybody know what I'm doing wrong here?
Thanks!
From the splice manpage:
EINVAL Target file system doesn't support splicing; target file is
opened in append mode; neither of the descriptors refers to a
pipe; or offset given for non-seekable device.
We know one of the descriptors is a pipe, and the file's not open in append mode. We also know no offset is given (0 is equivalent to NULL - did you mean to pass in a pointer to a zero offset?), so that's not the problem. Therefore, the filesystem you're using doesn't support splicing to files.
What kind of file system(s) are you copying to/from?
Your example runs on my system when both files are on ext3 but fails when I use an external drive (I forget offhand if it is DOS or NTFS). My guess is that one or both of your files are on a file system that splice does not support.
The splice(2) system call is for copying between a file and a pipe, not between two files, so it cannot be used on its own to copy from one file to another, as the other answers have pointed out.
As of Linux 4.5 however a new copy_file_range(2) system call is available that can copy between files. In the case of NFS it can even cause server side copying.
The linked man page contains a full example program.
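For readers without the man page at hand, a minimal file-to-file copy along those lines might look like the sketch below (it assumes Linux 4.5 or later and a glibc new enough, 2.27+, to provide the copy_file_range wrapper; error handling kept short):

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv) {
    if (argc != 3) {
        fprintf(stderr, "usage: %s <src> <dst>\n", argv[0]);
        return 1;
    }
    int in = open(argv[1], O_RDONLY);
    int out = open(argv[2], O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (in < 0 || out < 0) {
        perror("open");
        return 1;
    }
    ssize_t n;
    do {
        // NULL offsets: use and advance each descriptor's own file offset.
        n = copy_file_range(in, NULL, out, NULL, 1 << 20, 0);
    } while (n > 0);                 // 0 means the end of the source file was reached
    if (n < 0) {
        perror("copy_file_range");
        return 1;
    }
    close(in);
    close(out);
    return 0;
}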
