what is the sizeof pipe [duplicate] - c

Possible Duplicate:
Pipe buffer size is 4k or 64k?
In Linux, which header file specifies the size available for writes on a pipe?
I capture the latency of my main application per configurable cycle and write that data to a pipe. A separate reporting process reads from that pipe. Typically, the main application exchanges about 10,000 messages per second. So, given a cycle of one second, the main application collects 10k latency data points for each message exchange and then writes them to the pipe on the second's boundary. I have the following questions for this scenario:
Is there a way to specify the size of a pipe at creation, so I can ensure there is adequate write space in the pipe?
Are writes to a pipe expensive? How is a pipe implemented? Do writes to a pipe go against some mmap'd file or an in-memory buffer?

Is there a way to specify the size of a pipe at creation? Maybe. Since Linux 2.6.35 you can use fcntl(2) with the F_SETPIPE_SZ operation to set the pipe buffer size, up to /proc/sys/fs/pipe-max-size (see the sketch after these answers). On earlier versions, no, but you could use sockets instead. They would be slower for most purposes, but you can specify the amount of buffering up to wmem_max (see socket(7)), and you have certain other controls over kernel memory allocation.
Are writes to a pipe expensive? No. But write(2) is a kernel call, so pipe I/O should be buffered if possible.
How are pipes implemented? With kernel code that transfers data in and out of the system buffer cache.
Do writes to a pipe go against some mmap'd file or an in-memory buffer? It's an in-memory buffer.
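
A minimal sketch of resizing a pipe on Linux 2.6.35 or later; the 1 MiB target is just an illustrative value, and unprivileged processes are capped at /proc/sys/fs/pipe-max-size:

#define _GNU_SOURCE          /* exposes F_SETPIPE_SZ / F_GETPIPE_SZ on glibc */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fds[2];
    if (pipe(fds) == -1) { perror("pipe"); return 1; }

    /* Ask for a 1 MiB buffer; the kernel rounds up to a power of two and
       caps unprivileged requests at /proc/sys/fs/pipe-max-size. */
    int requested = 1024 * 1024;
    if (fcntl(fds[1], F_SETPIPE_SZ, requested) == -1) {
        perror("F_SETPIPE_SZ");
        return 1;
    }

    printf("pipe buffer is now %d bytes\n", fcntl(fds[1], F_GETPIPE_SZ));
    return 0;
}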

Related

What's the point of fclose()? [duplicate]

This question already has answers here:
What happens if I don't call fclose() in a C program?
From what I read, fclose() is basically like free() for memory that has been allocated, but I've also read that the operating system will close the file for you and flush any streams that were open when the program terminates. I've even tested a few programs without fclose() and they all seem to work fine.
A long-running process (e.g. a database or web browser) may need to open many files during its lifetime; keeping unused files open wastes resources and potentially locks other processes out of using those files.
Additionally, fclose() flushes the user-space buffer that is frequently used when writing to files to improve performance; if the process exits without flushing that buffer (with fflush() or fclose()), the data still sitting in the buffer is lost.
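
A minimal sketch of that failure mode, with a hypothetical path; _exit(2) is used deliberately because it terminates without the stdio flush that a normal exit() or return from main() would perform:

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    FILE *f = fopen("/tmp/fclose-demo.txt", "w");   /* hypothetical path */
    if (!f) return 1;

    fprintf(f, "important result\n");  /* sits in the stdio buffer, not on disk yet */

    /* fclose(f);  <-- without this (or fflush(f)), the next line loses the data */
    _exit(0);       /* terminates immediately; stdio buffers are never flushed */
}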
Most modern OSes will also reclaim the memory you malloc()'ed, but calling free() when appropriate is still good practice. The point is that once you no longer need a resource you should relinquish it, so the system can repurpose the backing resources (typically memory) for other applications. There are also limits on the number of file descriptors you can keep open at the same time.
Apart from that, there are further considerations in the case of open() and friends: by default, open file descriptors are shared across thread boundaries and inherited across fork()'ed process boundaries. This means that if you fail to close() file descriptors, a child process may be able to access files opened by the parent process. This is typically undesirable, and it is a trivial security hole when a privileged parent process spawns a child process with lesser privileges.
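
If that inheritance into exec()'d programs is unwanted, marking the descriptor close-on-exec is the usual remedy; a minimal sketch with a hypothetical file name:

#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    /* O_CLOEXEC: the descriptor is closed automatically whenever this process
       (or a fork()'ed child) calls execve(), so a less-privileged helper
       program never sees it. */
    int fd = open("/etc/secret.conf", O_RDONLY | O_CLOEXEC);   /* hypothetical file */
    if (fd == -1) return 1;

    /* For descriptors opened elsewhere, the flag can be added after the fact: */
    fcntl(fd, F_SETFD, FD_CLOEXEC);

    close(fd);
    return 0;
}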
Additionally, the semantics of unlink() and friends are that the file contents are only deleted once the last open file descriptor referring to the file is close()'d. So again: if you keep files open for longer than strictly necessary, you cause suboptimal behaviour in the overall system.
Finally, in the case of sockets, a close() also corresponds to disconnecting from the remote peer.

Mutexes with pipes in C

I am sorry if this sounds like I am repeating this question, but I have a couple of additions that I am hoping someone can explain for me.
I am trying to implement a 'packet queueing system' with pipes. I have one thread that needs to pass a packet of data to a second thread (let's call the threads A and B, respectively). Originally I did this with a queue structure that I implemented using linked lists: I would lock a mutex, write to the queue, and then unlock the mutex. On the read side I would do the same thing: lock, read, unlock. Now I have decided to change my implementation to use pipes (so that I can take advantage of blocking when data is not available). Now for my question:
Do I need to use mutexes to lock the file descriptors of the pipe for read and write operations?
Here is my thinking.
I have a standard message that gets written to the pipe on writes, and it is expected to be read on the read side.
struct pipe_message {
    int stuff;
    short more_stuff;
    char *data;
    int length;
};

// This is where I read from the pipe
num_bytes_read = read(read_descriptor, &buffer, sizeof(struct pipe_message));
if (num_bytes_read != sizeof(struct pipe_message)) // If the message isn't full
{
    fprintf(stderr, "Error: Read did not receive a full message\n");
    return NULL;
}
If I do not use mutexes, could I potentially read only half of my message from the pipe?
This could be bad because I would not have a pointer to the data and I could be left with memory leaks.
But if I use mutexes, I would lock the mutex on the read side, attempt a read that would block, and then, because the mutex is locked, the write side would not be able to access the pipe.
Do I need to use mutexes to lock the file descriptors of the pipe for read and write operations?
It depends on the circumstances. Normally, no.
Normality
If you have a single thread writing into the pipe's write file descriptor, no. Nor does the reader need to use semaphores or mutexes to control reading from the pipe. That's all taken care of by the OS underneath on your behalf. Just go ahead and call write() and read(); nothing else is required.
Less Usual
If you have multiple threads writing into the pipe's write file descriptor, then the answer is maybe.
Under Linux, calling write() on the pipe's write file descriptor is an atomic operation provided that the size of the data being written does not exceed PIPE_BUF (this is documented in pipe(7); on Linux, PIPE_BUF is 4096 bytes). This means that you don't need a mutex or semaphore to control access to the pipe's write file descriptor.
If the size of the data you're writing exceeds PIPE_BUF, then the call to write() on the pipe is not guaranteed to be atomic. So if you have multiple threads writing to the pipe and the writes are that large, you do need a mutex to control access to the write end of the pipe.
Using a mutex with a blocking pipe is actually dangerous. If the write side takes the mutex, writes to the pipe and blocks because the pipe is full, then the read side can't get the mutex to read the data from the pipe, and you have a deadlock.
To be safe, on the write side you'd probably need to do something like take the mutex, check whether the pipe has space for what you want to write, and if not, release the mutex, yield, and then try again.
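
A sketch of the "normal" case described above: several writer threads share one pipe, and each write() is a fixed-size record well under PIPE_BUF, so the records never interleave and no mutex is needed. The record layout and the thread/message counts are invented for illustration:

#include <limits.h>      /* PIPE_BUF */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

struct record { int writer_id; int seq; };

_Static_assert(sizeof(struct record) <= PIPE_BUF, "record must fit in one atomic write");

static int write_fd;

static void *writer(void *arg)
{
    struct record r = { .writer_id = (int)(long)arg, .seq = 0 };
    for (r.seq = 0; r.seq < 1000; r.seq++)
        write(write_fd, &r, sizeof r);   /* <= PIPE_BUF, so each record is written atomically */
    return NULL;
}

int main(void)
{
    int fds[2];
    if (pipe(fds) == -1) return 1;
    write_fd = fds[1];

    pthread_t t[4];
    for (long i = 0; i < 4; i++)
        pthread_create(&t[i], NULL, writer, (void *)i);

    /* Every read of sizeof(struct record) bytes yields one intact record,
       never a mix of two writers' data. */
    struct record r;
    for (int n = 0; n < 4 * 1000; n++)
        if (read(fds[0], &r, sizeof r) != (ssize_t)sizeof r)
            break;
    printf("last record seen: writer %d, seq %d\n", r.writer_id, r.seq);

    for (int i = 0; i < 4; i++)
        pthread_join(t[i], NULL);
    return 0;
}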

How to make C program block until FIFO pipe is empty?

I'm doing IPC using named (FIFO) pipes, and I would like to coordinate things so that the writing program can only write into the pipe once the program reading the pipe has read out the previously written data. So I would like to block the write until the pipe is empty. Is this possible?
One option I thought of is that the write function blocks when the pipe is full. But I would like to do this with much smaller amounts of data than the pipe size in Linux. E.g. I would like the program to be able to write only 20 bytes and then wait until the other end has read the data. I don't think you can shrink named pipes to be that small. (The minimum size seems to be the page size, 4096 bytes?)
Thanks!
A possible solution is to have the reading process send a signal to the writing process when it has read some data. You can do this using kill() to send SIGTERM to the writer, since SIGTERM can be caught and handled.
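
A sketch of the writer side of that handshake, with hypothetical names (/tmp/demo_fifo, writer_pid) and SIGUSR1 used in place of SIGTERM, since SIGUSR1 is conventionally reserved for application-defined use; the signal is kept blocked except inside sigsuspend() so an ack arriving between the write and the wait is not lost:

#include <fcntl.h>
#include <signal.h>
#include <sys/stat.h>
#include <unistd.h>

static volatile sig_atomic_t got_ack = 0;

static void on_ack(int sig) { (void)sig; got_ack = 1; }

int main(void)
{
    /* Block SIGUSR1 up front so it can only be delivered inside sigsuspend(). */
    sigset_t block, wait_mask;
    sigemptyset(&block);
    sigaddset(&block, SIGUSR1);
    sigprocmask(SIG_BLOCK, &block, NULL);
    sigemptyset(&wait_mask);               /* SIGUSR1 unblocked only while waiting */

    struct sigaction sa = {0};
    sa.sa_handler = on_ack;
    sigaction(SIGUSR1, &sa, NULL);

    mkfifo("/tmp/demo_fifo", 0666);        /* hypothetical FIFO path */
    int fd = open("/tmp/demo_fifo", O_WRONLY);  /* blocks until a reader opens it */

    const char msg[20] = "twenty-byte payload";
    for (int i = 0; i < 5; i++) {
        write(fd, msg, sizeof msg);        /* hand one small record to the reader */
        got_ack = 0;
        while (!got_ack)
            sigsuspend(&wait_mask);        /* sleep until the reader kill()s us */
    }
    close(fd);
    return 0;
}

/* The reader, after its read(fd, buf, 20) succeeds, acknowledges with
   kill(writer_pid, SIGUSR1), where writer_pid is the writer's process ID
   (e.g. passed on the command line). */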

Inter process communication on the same machine,signal or socket,how to decide?

It seems to me that both signals and sockets can be used for this job.
How do you actually decide which one to use?
Using signals for IPC is sort of inconvenient and primitive. You should really be choosing between Unix sockets (not TCP ones!) and pipes.
Pipes are generally easier to program with, since they guarantee that a single write no larger than PIPE_BUF is atomic. They do have their limitations, however. For example, when the writer is faster than the reader, the writer starts to block once the pipe buffer gets full. On Linux that buffer is 64 KiB by default; it used to be fixed, but since Linux 2.6.35 it can be resized with fcntl(F_SETPIPE_SZ), as noted in the first answer above. Pipes are also unidirectional, which means that you'll have to keep a pair of pipes in each process, one for reading and one for writing.
Unix sockets have a configurable send buffer size and a more advanced programming interface.
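
A small sketch of that knob using socketpair(2); the 256 KiB figure is arbitrary, and the kernel may double or clamp the value you request (see socket(7)):

#include <stdio.h>
#include <sys/socket.h>

int main(void)
{
    int sv[2];
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) == -1) return 1;

    int sndbuf = 256 * 1024;                              /* ask for a 256 KiB send buffer */
    setsockopt(sv[0], SOL_SOCKET, SO_SNDBUF, &sndbuf, sizeof sndbuf);

    socklen_t len = sizeof sndbuf;
    getsockopt(sv[0], SOL_SOCKET, SO_SNDBUF, &sndbuf, &len);
    printf("effective send buffer: %d bytes\n", sndbuf);  /* typically doubled by the kernel */
    return 0;
}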

C program: after appending to a file, read still returns 0

I have a new file, opened as read/write. One thread receives data from the network and appends binary data to that file, and another thread reads from the same file to process that binary data. But read() always returns 0, so I can't read the data. However, if I append data using cat on the command line, the program can read the data and process it. I don't know why it doesn't notice the new data coming from the network. I'm using open(), read(), and write() in this program.
Use a pipe instead of a file on disk. Depending on your system (which you didn't tell us), there are only minor modifications to your code (which you didn't give us) to do that.
File operations are buffered. Try flushing the stream?
Assuming that your read() and write() functions are the POSIX ones, they share the file position even when used from different threads on the same file descriptor. So your read after the write was trying to read at the position beyond what write() had just written. Don't use file I/O to communicate between threads. In most contexts I'd not even use pipes or sockets for that (one context where I would is when the reading thread is using poll/select with other file descriptors), but simple shared memory and a mutex.
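
A minimal demonstration of the shared file position (with a hypothetical file name): the plain read() hits end-of-file because write() already advanced the offset, whereas pread() supplies its own offset and sees the data:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/tmp/offset-demo.bin", O_RDWR | O_CREAT | O_TRUNC, 0644);
    if (fd == -1) return 1;

    write(fd, "hello", 5);                  /* the shared offset is now 5 */

    char buf[8];
    ssize_t n = read(fd, buf, sizeof buf);  /* reads at offset 5 -> returns 0 (EOF) */
    printf("read() returned %zd\n", n);

    n = pread(fd, buf, sizeof buf, 0);      /* explicit offset; shared position untouched */
    printf("pread() returned %zd\n", n);    /* -> 5 */

    close(fd);
    return 0;
}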
