Is there a problem with using pread on the same file descriptor from 2 or more different threads at the same time?
pread itself is thread-safe: it is not on POSIX's list of functions that may be unsafe to call from multiple threads, so it is safe to call.
The real question is: what happens if you read from the same file concurrently (not necessarily from two threads, but also from two processes)?
Regarding this, the specification says:
The behavior of multiple concurrent reads on the same pipe, FIFO, or terminal device is unspecified.
Note that it doesn't mention ordinary files. This bit relates only to read anyway, because pread cannot be used on unseekable files.
I/O is intended to be atomic to ordinary files and pipes and FIFOs.
But this is from the non-normative section, so your OS might do it differently. E.g., if you read from two threads and there is a concurrent write, you might get different pieces of the write in your two read buffers. But this kind of problem is not specific to multithreading.
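To make the safe case concrete, here is a minimal sketch of two threads calling pread on one descriptor at disjoint offsets. Since pread takes an explicit offset, the shared file offset is never touched; data.bin is just a placeholder file name:

```c
/* Minimal sketch: two threads pread() from the same fd at disjoint
 * offsets. pread() takes an explicit offset, so the shared file
 * offset is never used and the reads need no locking.
 * Compile with -pthread. */
#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

struct job { int fd; off_t off; char buf[4096]; ssize_t n; };

static void *reader(void *arg)
{
    struct job *j = arg;
    j->n = pread(j->fd, j->buf, sizeof j->buf, j->off);
    return NULL;
}

int main(void)
{
    int fd = open("data.bin", O_RDONLY);   /* placeholder file name */
    if (fd < 0) { perror("open"); return 1; }

    struct job a = { .fd = fd, .off = 0 };
    struct job b = { .fd = fd, .off = 4096 };
    pthread_t ta, tb;

    pthread_create(&ta, NULL, reader, &a);
    pthread_create(&tb, NULL, reader, &b);
    pthread_join(ta, NULL);
    pthread_join(tb, NULL);

    printf("read %zd and %zd bytes\n", a.n, b.n);
    close(fd);
    return 0;
}
```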
It's also nice to know that, in some cases,
read() shall block the calling thread
Not the process, just the thread. And
A thread that has blocked shall not prevent any unblocked thread [...] from eventually making forward progress
Since we are using the same fd, we have to take a lock; otherwise, data from the two pread calls on the file descriptor may get mixed up.
Hence, yes, there is a problem in doing this.
http://linux.die.net/man/2/pread
I'm not 100% sure but I think that the file descriptor structure itself isn't thread safe, so two concurrent changes to it would corrupt it. You need some kind of locking.
Related
What would happen if you call read (or write, or both) in two different threads, on the same file descriptor (let's say we are interested in a local file, or a socket file descriptor), without explicitly using a synchronization mechanism?
read and write are syscalls, so on a single-core CPU it's probably unlikely that two reads would be executed "at the same time". But with multiple cores...
What will the Linux kernel do?
And let's be a bit more general: is the behavior always the same for other kernels (like the BSDs)?
Edit: According to the close documentation, we should be sure that the file descriptor isn't being used by a syscall in another thread. So it seems that explicit synchronization would be required before closing a file descriptor (and thus also around read/write, if threads that may call them are still running).
Any system level (syscall) file descriptor access is thread safe in all mainstream UNIX-like OSes.
Depending on their age, though, they are not necessarily signal-safe.
If you call read, write, accept or similar on a file descriptor from two different tasks then the kernel's internal locking mechanism will resolve contention.
Each byte may be read only once, though, and concurrent writes may land in an undefined order.
The stdio library functions fread, fwrite and co. also have internal locking on their control structures by default, though it is possible to disable that by using flags.
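As an illustration of that last point, a glibc-only sketch: __fsetlocking() from <stdio_ext.h> hands a stream's locking over to the caller, which allows the _unlocked fast-path functions when one thread has exclusive use of the stream. This is a glibc extension, not portable stdio:

```c
/* glibc-specific sketch: disable a stream's internal locking when one
 * thread has exclusive use of it, then use the _unlocked fast paths.
 * __fsetlocking() and FSETLOCKING_BYCALLER are glibc extensions. */
#include <stdio.h>
#include <stdio_ext.h>

int main(void)
{
    FILE *f = fopen("out.txt", "w");   /* placeholder output file */
    if (!f)
        return 1;

    /* Promise glibc that the caller handles all locking for f. */
    __fsetlocking(f, FSETLOCKING_BYCALLER);

    for (int i = 0; i < 1000000; i++)
        putc_unlocked('x', f);         /* no per-call lock overhead */

    fclose(f);
    return 0;
}
```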
The comment about close is because it doesn't make a lot of sense to close a file descriptor in any situation in which some other thread might be trying to use it. So while it is 'safe' as far as the kernel is concerned, it can lead to odd, hard to diagnose corner cases.
If a thread closes a file descriptor while a second thread is trying to read from it, the second thread may get an unexpected EBADF error. Worse, if a third thread is simultaneously opening a new file, that might reallocate the same fd, and the second thread might accidentally read from the new file rather than the one it was expecting...
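One way to rule out that corner case, sketched below under the assumption that all access to the shared descriptor goes through these hypothetical wrappers: guard both the descriptor's use and its close with a single reader-writer lock, so close() can never run while another thread is inside a syscall on the fd:

```c
/* Sketch: guard an fd's use and its close with one rwlock so a
 * close() cannot slip in between another thread's pread() calls.
 * Users take the lock shared; the closer takes it exclusive. */
#include <errno.h>
#include <pthread.h>
#include <unistd.h>

static pthread_rwlock_t fd_lock = PTHREAD_RWLOCK_INITIALIZER;
static int shared_fd = -1;           /* set elsewhere by open() */

ssize_t safe_pread(void *buf, size_t len, off_t off)
{
    ssize_t n = -1;
    pthread_rwlock_rdlock(&fd_lock); /* many users may hold this */
    if (shared_fd >= 0)
        n = pread(shared_fd, buf, len, off);
    else
        errno = EBADF;
    pthread_rwlock_unlock(&fd_lock);
    return n;
}

void safe_close(void)
{
    pthread_rwlock_wrlock(&fd_lock); /* waits until no user holds the fd */
    if (shared_fd >= 0) { close(shared_fd); shared_fd = -1; }
    pthread_rwlock_unlock(&fd_lock);
}
```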
Have a care for those who follow in your footsteps
It's perfectly normal to protect the file descriptor with a mutex semaphore. It removes any dependence on kernel behaviour, so your message boundaries are now certain. You then don't have to cite the last paragraph at the bottom of a 15,489-line manpage which explains why the mutex isn't necessary (I exaggerate, but you get my meaning).
It also makes it clear to anyone reading your code that the file descriptor is being used by more than one thread.
Fringe Benefit
There is a fringe benefit to using a mutex that way. Suppose you've got different messages coming from the different threads and some of those messages are more important than others. All you need to do is set the thread priorities to reflect their messages' importance. That way the OS will ensure that your messages will be sent in order of importance for minimal effort on your part.
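A minimal sketch of that pattern (send_message is a hypothetical helper, not a standard API):

```c
/* Sketch: serialize whole messages onto one fd with a mutex so the
 * kernel never sees two half-finished messages interleaved.
 * A production version would also loop on short writes. */
#include <pthread.h>
#include <string.h>
#include <unistd.h>

static pthread_mutex_t fd_mutex = PTHREAD_MUTEX_INITIALIZER;

ssize_t send_message(int fd, const char *msg)
{
    size_t len = strlen(msg);
    ssize_t n;

    pthread_mutex_lock(&fd_mutex);   /* one whole message at a time */
    n = write(fd, msg, len);
    pthread_mutex_unlock(&fd_mutex);
    return n;
}
```

Because blocked threads acquire the mutex according to their scheduling priority on typical implementations, the priority trick described above falls out of this design for free.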
The result would depend on how the threads are scheduled to run at that particular instant in time.
One way to avoid undefined behavior with multithreading is to treat access to the file descriptor the way you would treat shared memory operations, e.g. updating a linked list, changing a variable, etc.
If you use mutex/semaphores/lock or some other synchronization mechanism, it should work as intended.
My question is quite simple. Is reading from and writing to a serial port under Linux thread-safe? Can I read and write at the same time from different threads? Is it even possible to do two writes simultaneously? I'm not planning on doing so, but this might be interesting for others. I just have one thread that reads and another one that writes.
There is little to be found on this topic.
In more detail: I am using write() and read() on a file descriptor that I obtained from open(), and I am doing so simultaneously.
There are two aspects to this:
What the C implementation does.
What the kernel does.
Concerning the kernel, I'm pretty sure that it will either support this or raise an appropriate error; otherwise it would be too easy to exploit. The C implementation of read() is just a syscall wrapper (see What happens after read is called for a Linux socket), so this doesn't change anything. However, I still don't see any guarantees documented there, so this is not reliable.
If you really want two threads, I'd suggest that you stay with the stdio functions (fopen/fread/fwrite/fclose), because there you can leverage the fact that glibc synchronizes these calls internally with a mutex.
However, if you are doing a blocking read in one thread, the other thread could be blocked waiting to write something; this could be a deadlock. A solution is to use select() to detect when there is data ready to be read, or buffer space available to be written. This is done in a single thread, but while the initial code is a bit larger, in the end this approach is easier and cleaner, even more so if multiple streams are involved.
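A rough sketch of that single-threaded select() approach, assuming fd is an already-opened serial port and serve is a hypothetical helper:

```c
/* Single-threaded sketch: multiplex reading and writing on one fd
 * with select(), so a blocked read never starves a pending write. */
#include <sys/select.h>
#include <unistd.h>

void serve(int fd, const char *out, size_t outlen)
{
    size_t sent = 0;
    char buf[256];

    for (;;) {
        fd_set rfds, wfds;
        FD_ZERO(&rfds);
        FD_ZERO(&wfds);
        FD_SET(fd, &rfds);
        if (sent < outlen)
            FD_SET(fd, &wfds);   /* poll for writability only while
                                    there is data left to send */

        if (select(fd + 1, &rfds, &wfds, NULL, NULL) < 0)
            break;

        if (FD_ISSET(fd, &rfds)) {
            ssize_t n = read(fd, buf, sizeof buf);
            if (n <= 0)
                break;           /* EOF or error */
            /* ... consume n bytes from buf ... */
        }
        if (FD_ISSET(fd, &wfds)) {
            ssize_t n = write(fd, out + sent, outlen - sent);
            if (n > 0)
                sent += (size_t)n;
        }
    }
}
```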
I'm writing a web server.
Each connection is served by a separate thread, so I don't know in advance the number of threads.
There is also a group of text files (I don't know how many of those there are, either), and each thread can read/write any of the files.
A file can be written by just one thread at a time, but different threads can write to different files at the same time.
If a file is being read by one or more threads (reads can be concurrent), no thread can write to THAT file.
Now, I noticed this solution (Thread safe multi-file writing), but I'd also like to use functions such as fgets(), for example.
So, can I flock() a file, and then use a fgets() or another stdio read/write library function?
First of all, use fcntl, not flock. The latter is a non-standard, deprecated BSD function and does not work with NFS and possibly other filesystems. fcntl locking on the other hand is POSIX standard and is intended to work everywhere.
Now if you want to use file-level reader-writer locking mixed with stdio, it will work, but you have to take some care to ensure that buffering does not break your assumptions about locks. The method I'm about to explain is not the only one, but I believe it's the clearest/simplest:
When you want to operate on one of your files with stdio, obtaining the correct type of lock (read or write, aka shared or exclusive) should be the first thing you do after fopen. Use fileno to get the file descriptor number and apply the lock to it. After that, perform your entire read or write operation. Do not make any attempt to unlock the file; instead, call fclose to close the file and let the lock be released implicitly when the descriptor is closed. Otherwise you may release the lock while buffered data is still unwritten, or later read data that was buffered before the lock was released and is no longer valid afterwards.
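A sketch of that sequence, assuming a whole-file exclusive lock and an append operation (append_line is a hypothetical helper; use F_RDLCK for the shared-read case):

```c
/* Sketch of the sequence described above: lock right after fopen,
 * do all the buffered I/O, then let fclose flush the buffer and
 * release the lock implicitly when the descriptor is closed. */
#include <fcntl.h>
#include <stdio.h>

/* Append one line to `path` under an exclusive (write) lock. */
int append_line(const char *path, const char *line)
{
    FILE *f = fopen(path, "a");
    if (!f)
        return -1;

    struct flock fl = {
        .l_type   = F_WRLCK,   /* F_RDLCK for a shared read lock */
        .l_whence = SEEK_SET,
        .l_start  = 0,
        .l_len    = 0,         /* 0 = lock the whole file */
    };
    if (fcntl(fileno(f), F_SETLKW, &fl) < 0) {  /* wait for the lock */
        fclose(f);
        return -1;
    }

    fputs(line, f);
    /* No explicit unlock: fclose() flushes the buffer, and closing
     * the descriptor drops the fcntl lock. */
    return fclose(f);
}
```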
Hello everyone, I want to ask a question about the flockfile function. I was reading its description and learned that it is used with threads. But I am forking, which means there will be separate processes, not threads. Can I use flockfile with different processes? Does it make any difference?
The flockfile function doesn't lock a file but the FILE data structure that a process uses to access a file. So this is about the representation in address space that a process has of the file, not necessarily about the file itself.
Even within one process, if you have different FILEs open on the same file, you can write simultaneously to that file, even if you have locked each of the FILEs by means of flockfile.
For locking the file itself, have a look at flock and lockf, but beware that the rules governing their effects when the file is accessed through different threads of the same process are complicated.
These functions can only be used within one process.
From the POSIX docs:
In summary, threads sharing stdio streams with other threads can use flockfile() and funlockfile() to cause sequences of I/O performed by a single thread to be kept bundled.
All the rest of that page talks about mutual exclusion between threads. Different processes have different input/output buffers for their file streams, so this kind of locking wouldn't really make sense or be effective across processes.
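For the threaded case flockfile was designed for, a minimal sketch of bundling a sequence of stdio calls (log_pair is a hypothetical helper):

```c
/* Sketch: within one process, bundle several stdio calls into one
 * uninterrupted sequence with flockfile()/funlockfile(). Other
 * threads using the same FILE* block until the lock is released;
 * other *processes* are unaffected. */
#include <stdio.h>

void log_pair(FILE *f, const char *key, const char *value)
{
    flockfile(f);
    fputs(key, f);
    fputs("=", f);
    fputs(value, f);
    fputc('\n', f);
    funlockfile(f);
}
```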
Does the OS handle it correctly?
Or will I have to call flock()?
Although the OS won't crash and the filesystem won't be corrupted, calls to write() are NOT guaranteed to be atomic unless the file descriptor in question is a pipe and the amount of data to be written is PIPE_BUF bytes or less. The relevant part of the standard:
An attempt to write to a pipe or FIFO has several major characteristics:
Atomic/non-atomic: A write is atomic if the whole amount written in one operation is not interleaved with data from any other process. This is useful when there are multiple writers sending data to a single reader. Applications need to know how large a write request can be expected to be performed atomically. This maximum is called {PIPE_BUF}. This volume of IEEE Std 1003.1-2001 does not say whether write requests for more than {PIPE_BUF} bytes are atomic, but requires that writes of {PIPE_BUF} or fewer bytes shall be atomic.
[...]
As such, in principle, you must lock with simultaneous writers, or your written data may get mixed up and out of order (even within the same write), or you may have multiple writes overwriting each other. However, there is an exception: if you open with O_APPEND, your writes will be effectively atomic:
If the O_APPEND flag of the file status flags is set, the file offset shall be set to the end of the file prior to each write and no intervening file modification operation shall occur between changing the file offset and the write operation.
Although this is not necessarily atomic with respect to non-O_APPEND writes, or simultaneous reads, if all writers use O_APPEND, and you synchronize somehow before doing a read, you should be okay.
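A minimal sketch of the O_APPEND pattern, one complete record per write() call (append_record and the mode bits are illustrative choices):

```c
/* Sketch: every writer opens the log with O_APPEND and emits one
 * complete record per write() call, so records from concurrent
 * writers never overwrite each other (ordering stays undefined). */
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

int append_record(const char *path, const char *record)
{
    int fd = open(path, O_WRONLY | O_CREAT | O_APPEND, 0644);
    if (fd < 0)
        return -1;

    /* One record, one write(): the kernel seeks to EOF and writes
     * without any intervening modification from other appenders. */
    ssize_t n = write(fd, record, strlen(record));

    close(fd);
    return n < 0 ? -1 : 0;
}
```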
write (and writev, too) guarantee atomicity.
Which means if two threads or processes write simultaneously, you do not have a guarantee which one writes first. But you do have the guarantee that anything that is in one syscall will not be intermingled with data from the other one.
In that respect it will always work correctly, but not necessarily in the way you expect (if you assume that process A writes before process B).
Of course the kernel will handle it correctly, for the kernel’s idea of correctness — which is by definition correct.
If you have a set of coöperating flockers, then you can use the kernel to queue everyone up. But remember that flock has nothing to do with I/O: it will not stop someone else from writing the file. It will at most only interfere with other flockers.
Yes of course it will work correctly. It won't crash the OS or the process.
Whether it makes any sense depends on the way the application(s) are written and on what the file's purpose is.
If the file is opened by all processes as append-only, each process (notionally) does an atomic seek-to-end before each write; these are guaranteed not to overwrite each others' data (but of course, the order is nondeterministic).
In any case, if you use a library which potentially splits a single logical write into several write syscalls, expect trouble.
write(), writev(), read(), readv() can generate partial writes/reads where the amount of data transferred is smaller than what was requested.
Quoting the Linux man page for writev():
Note that it is not an error for a successful call to transfer fewer bytes than requested
Quoting the POSIX man page:
If write() is interrupted by a signal after it successfully writes some data, it shall return the number of bytes written.
AFAIU, O_APPEND does not help in this regard because it does not prevent partial writes: it only ensures that whatever data is written is appended at the end of the file.
See this bug report from the Linux kernel:
A process is writing a messages to the file. [...] the writes [...] can be split in two. [...] So if the signal arrives [...] the write is interrupted. [...] this is perfectly valid behavior as far as spec (POSIX, SUS,...) is concerned
Writes to FIFOs and pipes of PIPE_BUF bytes or fewer, however, are guaranteed to be atomic.
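The usual remedy for partial writes is a retry loop like the sketch below (write_all is a conventional helper name, not a library function). Note that once you retry, the combined transfer is of course no longer atomic with respect to other writers:

```c
/* Sketch: the classic retry loop for coping with partial writes and
 * EINTR. It keeps calling write() until everything requested has
 * been transferred or a real error occurs. */
#include <errno.h>
#include <unistd.h>

ssize_t write_all(int fd, const void *buf, size_t len)
{
    const char *p = buf;
    size_t left = len;

    while (left > 0) {
        ssize_t n = write(fd, p, left);
        if (n < 0) {
            if (errno == EINTR)
                continue;       /* interrupted before any byte: retry */
            return -1;          /* real error */
        }
        p += n;                 /* partial write: advance and retry */
        left -= (size_t)n;
    }
    return (ssize_t)len;
}
```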