I'm writing a fairly large project as homework, where I need to create a server program that listens on two FIFOs to which clients write.
Everything works, but one thing is driving me crazy: each operation consists of a few writes/reads between client and server, and when the client closes the FIFOs, the server seems to "think" that someone still has them open.
Because of this, the server tries to read 64 bytes after each operation and obviously fails (reading 0 bytes). This happens only once per operation; it doesn't keep trying to read 64 bytes.
It doesn't cause any problem for the clients, but it's really strange and I hate this kind of bug.
I think it's a problem connected to open/close and to the fact that the clients use a lock.
Note: the flags used in the open calls are given in the pseudocode below.
Server behaviour:
Open Fifo(1) for READING (O_RDONLY)
Open Fifo(2) for WRITING (O_WRONLY)
Do some operations
Close Fifo(1)
Close Fifo(2)
Client behaviour:
Set a lock on Fifo(1) (waiting if there is already one)
Set a lock on Fifo(2) (same as before)
Open Fifo(1) for WRITING (O_WRONLY)
Open Fifo(2) for READING (O_RDONLY)
Do some operations
Close Fifo(1)
Close Fifo(2)
Get lock from Fifo(1)
Get lock from Fifo(2)
I can't post the code directly (the project is quite big and I don't call syscalls directly), except for the functions used for networking. Here they are:
int Network_Open(const char* path, int oflag)
{
    return open(path, oflag);
}

ssize_t Network_IO(int fifo, NetworkOpCodes opcode, void* data, size_t dataSize)
{
    ssize_t retsize = 0;
    errno = 0;
    if (dataSize == 0) return 0;
    while ((retsize = (opcode == NetworkOpCode_Write ?
                       write(fifo, data, dataSize) :
                       read(fifo, data, dataSize))) < 0)
    {
        if (errno != EINTR) break;
    }
    return retsize;
}

Boolean Network_Send(int fifo, const void* data, size_t dataSize)
{
    return ((ssize_t)dataSize) == Network_IO(fifo, NetworkOpCode_Write, (void*)data, dataSize);
}

Boolean Network_Receive(int fifo, void* data, size_t dataSize)
{
    return ((ssize_t)dataSize) == Network_IO(fifo, NetworkOpCode_Read, data, dataSize);
}

Boolean Network_Close(int fifo)
{
    if (fifo >= 0)
        return close(fifo) == 0;
    return 0; /* invalid descriptor */
}
Any help will be appreciated, thanks.
EDIT 1:
Client output: http://pastie.org/2523854
Server output (strace): http://pastie.org/2523858
Zero bytes returned from a (blocking) read() indicates end of file, i.e., that the other end has closed the FIFO. Read the man page for read().
A zero-byte result from read() means that the other process has finished. Your server must now close the original file descriptor and reopen the FIFO to serve the next client. Blocking operations will resume once you start working with the new file descriptor.
That's the way it is supposed to work.
AFAIK, after you get the zero bytes, further attempts to read on the file descriptor will also return 0 bytes, in perpetuity (or until you close the file descriptor). Even if another process opens the FIFO, the original file descriptor will continue to indicate EOF (the other client process will be hung waiting for a server process to open the FIFO for reading).
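To make that concrete, here is a minimal sketch (not the asker's code: the path, buffer size, and use of a single FIFO are made up for illustration) of a blocking server loop that treats read() returning 0 as "this client is done" and reopens the FIFO before waiting for the next one:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

#define REQUEST_FIFO "/tmp/request_fifo"   /* hypothetical path */

int main(void)
{
    char buf[64];

    for (;;) {
        /* Blocks until some client opens the FIFO for writing. */
        int fd = open(REQUEST_FIFO, O_RDONLY);
        if (fd < 0) {
            perror("open");
            return 1;
        }

        ssize_t n;
        while ((n = read(fd, buf, sizeof buf)) > 0) {
            /* ... handle n bytes of the client's request ... */
        }
        if (n < 0)
            perror("read");

        /* n == 0: every writer has closed the FIFO, so this client is done.
           Close our descriptor and loop round to open() again, which blocks
           until the next client connects. */
        close(fd);
    }
}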
Related
Similar to the problem asked a while ago on kernel 3.x, but I'm seeing it on 4.9.37.
The named fifo is created with mkfifo -m 0666. On the read side it is opened with
int fd = open(FIFO_NAME, O_RDONLY | O_NONBLOCK);
The resulting fd is passed into a call to select(). Everything works ok, till I run echo >> <fifo-name>.
Now the fd appears in the read_fds after the select() returns. A read() on the fd will return one byte of data. So far so good.
The next time select() is called and returns, the fd still appears in the read_fds, but read() will always return zero, i.e. no data. Effectively the read side consumes 100% of a processor. This is exactly the same problem as observed in the referenced question.
Has anybody seen the same issue? And how can it be resolved or worked-around properly?
I've figured out that if I close the read end of the fifo and re-open it, it works properly. That is probably acceptable because we are not sending a lot of data, though it is not a nice or general work-around.
This is expected behaviour, because the end-of-input case causes a read() to not block; it returns 0 immediately.
If you look at man 2 select, it says clearly that a descriptor in readfds is set if a read() on that descriptor would not block (at the time of the select() call).
If you used poll(), it too would immediately return with POLLHUP in revents.
As OP notes, the correct workaround is to reopen the FIFO.
Because the Linux kernel maintains exactly one internal pipe object to represent each open FIFO (see man 7 fifo and man 7 pipe), the robust approach in Linux is to open another descriptor to the FIFO whenever an end of input is encountered (read() returning 0), and close the original. During the time when both descriptors are open, they refer to the same kernel pipe object, so there is no race window or risk of data loss.
In pseudo-C:
fifoflags = O_RDONLY | O_NONBLOCK;

fifofd = open(fifoname, fifoflags);
if (fifofd == -1) {
    /* Error checking */
}

/* ... */

/* select() readfds contains fifofd, or
   poll() returns POLLIN for fifofd: */
n = read(fifofd, buffer, sizeof buffer);
if (!n) {
    /* A writer has closed the FIFO. Open a new descriptor
       before closing the old one. */
    int tempfd;

    tempfd = open(fifoname, fifoflags);
    if (tempfd == -1) {
        const int cause = errno;
        close(fifofd);
        /* Error handling */
    }
    close(fifofd);
    fifofd = tempfd;
} else {
    /* Handling for the other read() result cases */
}
The file descriptor allocation policy in Linux is such that tempfd will be the lowest-numbered free descriptor.
On my system (Core i5-7200U laptop), reopening a FIFO in this way takes less than 1.5 µs. That is, it can be done about 680,000 times a second. I do not think this reopening is a bottleneck for any sensible scenario, even on low-powered embedded Linux machines.
Hello, I have a server program and a client program. The server program works fine: I can telnet to it and read and write in any order (like a chat room) without any issue. However, I am now working on my client program, and when I use select() and check whether the socket descriptor is set for read or write, it always goes to write and then blocks, so messages do not get through until the client sends some data.
How can I fix this on the client end so I can read and write in any order?
while (quit != 1)
{
    FD_ZERO(&read_fds);
    FD_ZERO(&write_fds);
    FD_SET(client_fd, &read_fds);
    FD_SET(client_fd, &write_fds);

    if (select(client_fd+1, &read_fds, &write_fds, NULL, NULL) == -1)
    {
        perror("Error on Select");
        exit(2);
    }

    if (FD_ISSET(client_fd, &read_fds))
    {
        char newBuffer[100] = {'\0'};
        int bytesRead = read(client_fd, &newBuffer, sizeof(newBuffer));
        printf("%s", newBuffer);
    }

    if (FD_ISSET(client_fd, &write_fds))
    {
        quit = transmit(handle, buffer, client_fd);
    }
}
Here is the code for the transmit function:
int transmit(char* handle, char* buffer, int client_fd)
{
    int n;
    printf("%s", handle);
    fgets(buffer, 500, stdin);
    if (!strchr(buffer, '\n'))
    {
        while (fgetc(stdin) != '\n');
    }
    if (strcmp(buffer, "\\quit\n") == 0)
    {
        close(client_fd);
        return 1;
    }
    n = write(client_fd, buffer, strlen(buffer));
    if (n < 0)
    {
        error("ERROR writing to socket");
    }
    memset(buffer, 0, 501);
}
I think you are misinterpreting the use of the writefds parameter of select(): only set the bit when you want to write data to the socket. In other words, if there is no data to send, do not set the bit.
Setting the bit will check if there is room for writing, and if yes, the bit will remain on. Assuming you are not pumping megabytes of data, there will always be room, so right now you will always call transmit() which waits for input from the command line with fgets(), thus blocking the rest of the program. You have to monitor both the client socket and stdin to keep the program running.
So, check for READ action on stdin (use STDIN_FILENO to get the file descriptor for that), READ on client_fd always and just write() your data to the client_fd if the amount of data is small (if you need to write larger data chunks consider non-blocking sockets).
BTW, you forget to return a proper value at the end of transmit().
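As a rough sketch of that structure (assumptions: client_fd is a connected TCP socket, and the prompt/handle printing from the original is omitted), the loop could watch both stdin and the socket for readability and write() only when a line has actually been typed:

#include <stdio.h>
#include <string.h>
#include <sys/select.h>
#include <unistd.h>

void chat_loop(int client_fd)
{
    char line[512];
    fd_set read_fds;

    for (;;) {
        FD_ZERO(&read_fds);
        FD_SET(client_fd, &read_fds);
        FD_SET(STDIN_FILENO, &read_fds);

        int maxfd = client_fd > STDIN_FILENO ? client_fd : STDIN_FILENO;
        if (select(maxfd + 1, &read_fds, NULL, NULL, NULL) == -1) {
            perror("select");
            return;
        }

        if (FD_ISSET(client_fd, &read_fds)) {
            char newBuffer[100] = {'\0'};
            ssize_t bytesRead = read(client_fd, newBuffer, sizeof newBuffer - 1);
            if (bytesRead <= 0)            /* 0 means the peer closed the connection */
                return;
            printf("%s", newBuffer);
        }

        if (FD_ISSET(STDIN_FILENO, &read_fds)) {
            if (fgets(line, sizeof line, stdin) == NULL)
                return;                    /* EOF on stdin */
            /* Small writes only: assume the socket send buffer can take the line. */
            if (write(client_fd, line, strlen(line)) < 0) {
                perror("write");
                return;
            }
        }
    }
}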
Sockets are almost always writable, except when the socket send buffer is full, which indicates that you are sending faster than the receiver is receiving.
So your transmit() function will be entered every time around the loop, so it will read some data from stdin, which blocks until you type something, so nothing happens.
You should only select on writability when a prior send() has returned EWOULDBLOCK/EAGAIN. Otherwise you should just send, when you have something to send.
I would throw this code away and use two or three threads in blocking mode.
select() is used to check whether a socket has become ready to read or write. If it blocks for read, that indicates there is no data to read; if it blocks for write, that indicates the TCP send buffer is likely full and the remote end has to read some data before more can be written. Since select() blocks until one of the socket descriptors is ready, you may also want to pass a timeout to select() to avoid waiting for a long time.
In your specific case, if the remote/receiving end keeps reading data from the socket, then select() on the sending side will not block for write. Otherwise the TCP buffer becomes full on the sender side and select() will block. The other answers also point out the importance of handling EAGAIN or EWOULDBLOCK.
Sample flow:
while (bytesleft > 0)
    nbytes = write data
    if (nbytes > 0)
        bytesleft -= nbytes
    else
        if write returned EAGAIN or EWOULDBLOCK
            call poll or select to wait for the socket to become ready
        endif
    endif
    if poll or select timed out
        handle the timeout error (e.g. the remote end did not read the
        data within the expected time interval)
    endif
end while
The code should also handle error conditions and the other return values of read/write (for example, write/read returning 0). Also note that read/recv returning 0 indicates the remote end closed the socket.
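In C, that flow might look roughly like the following sketch (sockfd, the buffer arguments, and the 5-second timeout are assumptions, not part of the question):

#include <errno.h>
#include <poll.h>
#include <unistd.h>

/* Write len bytes from buf to a non-blocking socket, falling back to
   poll() when the send buffer is full. Returns 0 on success, -1 on
   error or timeout. */
int write_all(int sockfd, const char *buf, size_t len)
{
    size_t bytesleft = len;

    while (bytesleft > 0) {
        ssize_t nbytes = write(sockfd, buf + (len - bytesleft), bytesleft);
        if (nbytes > 0) {
            bytesleft -= (size_t)nbytes;
        } else if (nbytes < 0 && (errno == EAGAIN || errno == EWOULDBLOCK)) {
            struct pollfd pfd = { .fd = sockfd, .events = POLLOUT };
            int rc = poll(&pfd, 1, 5000);   /* arbitrary 5 s timeout */
            if (rc == 0)
                return -1;                  /* timeout: remote end is not reading */
            if (rc < 0 && errno != EINTR)
                return -1;                  /* poll error */
        } else if (nbytes < 0 && errno == EINTR) {
            continue;                       /* interrupted by a signal, retry */
        } else {
            return -1;                      /* write error */
        }
    }
    return 0;
}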
I don't know why I'm having a hard time finding this, but I'm looking at some Linux code where we're using select(), waiting on a file descriptor to report that it's ready. From the man page of select:
select() and pselect() allow a program to monitor multiple file descriptors,
waiting until one or more of the file descriptors become "ready" for some
class of I/O operation
So, that's great... I call select on some descriptor, give it some timeout value and start waiting for the indication to go. How does the file descriptor (or the owner of the descriptor) report that it's "ready" such that the select() statement returns?
It reports that it's ready by returning.
select waits for events that are typically outside your program's control. In essence, by calling select, your program says "I have nothing to do until ..., please suspend my process".
The condition you specify is a set of events, any of which will wake you up.
For example, if you are downloading something, your loop would have to wait on new data to arrive, a timeout to occur if the transfer is stuck, or the user to interrupt, which is precisely what select does.
When you have multiple downloads, data arriving on any of the connections triggers activity in your program (you need to write the data to disk), so you'd give a list of all download connections to select in the list of file descriptors to watch for "read".
When you upload data somewhere at the same time, you again use select to see whether the connection currently accepts data. If the other side is on dial-up, it will acknowledge data only slowly, so your local send buffer is always full, and any attempt to write more data would block until buffer space is available, or fail. By passing the file descriptor we are sending on to select as a "write" descriptor, we get notified as soon as buffer space is available for sending.
The general idea is that your program becomes event-driven, i.e. it reacts to external events from a common message loop rather than performing sequential operations. You tell the kernel "this is the set of events for which I want to do something", and the kernel gives you a set of events that have occurred. It is fairly common for two events to occur simultaneously; for example, a TCP acknowledgement may be included in a data packet, which can make the same fd both readable (data is available) and writeable (acknowledged data has been removed from the send buffer), so you should be prepared to handle all of the events before calling select again.
One of the finer points is that select basically gives you a promise that one invocation of read or write will not block, without making any guarantee about the call itself. For example, if one byte of buffer space is available, you can attempt to write 10 bytes, and the kernel will come back and say "I have written 1 byte", so you should be prepared to handle this case as well. A typical approach is to have a buffer "data to be written to this fd", and as long as it is non-empty, the fd is added to the write set, and the "writeable" event is handled by attempting to write all the data currently in the buffer. If the buffer is empty afterwards, fine, if not, just wait on "writeable" again.
The "exceptional" set is seldom used -- it is used for protocols that have out-of-band data where it is possible for the data transfer to block, while other data needs to go through. If your program cannot currently accept data from a "readable" file descriptor (for example, you are downloading, and the disk is full), you do not want to include the descriptor in the "readable" set, because you cannot handle the event and select would immediately return if invoked again. If the receiver includes the fd in the "exceptional" set, and the sender asks its IP stack to send a packet with "urgent" data, the receiver is then woken up, and can decide to discard the unhandled data and resynchronize with the sender. The telnet protocol uses this, for example, for Ctrl-C handling. Unless you are designing a protocol that requires such a feature, you can easily leave this out with no harm.
Obligatory code example:
#include <sys/types.h>
#include <sys/select.h>
#include <unistd.h>
#include <stdbool.h>

static inline int max(int lhs, int rhs) {
    if(lhs > rhs)
        return lhs;
    else
        return rhs;
}

void copy(int from, int to) {
    char buffer[10];
    int readp = 0;      /* next position to read into */
    int writep = 0;     /* next position to write from */
    bool eof = false;

    for(;;) {
        fd_set readfds, writefds;
        FD_ZERO(&readfds);
        FD_ZERO(&writefds);

        /* Work out how much contiguous buffer space is free (ravail)
           and how much buffered data is ready to be written (wavail). */
        int ravail, wavail;
        if(readp < writep) {
            ravail = writep - readp - 1;
            wavail = sizeof buffer - writep;
        }
        else {
            ravail = sizeof buffer - readp;
            wavail = readp - writep;
        }

        if(!eof && ravail)
            FD_SET(from, &readfds);
        if(wavail)
            FD_SET(to, &writefds);
        else if(eof)
            break;          /* input exhausted and buffer drained */

        int rc = select(max(from,to)+1, &readfds, &writefds, NULL, NULL);
        if(rc == -1)
            break;

        if(FD_ISSET(from, &readfds))
        {
            ssize_t nread = read(from, &buffer[readp], ravail);
            if(nread < 1)
                eof = true;     /* end of file or read error */
            else
                readp = readp + nread;
        }

        if(FD_ISSET(to, &writefds))
        {
            ssize_t nwritten = write(to, &buffer[writep], wavail);
            if(nwritten < 1)
                break;          /* write error */
            writep = writep + nwritten;
        }

        /* Wrap the circular buffer indices. */
        if(readp == sizeof buffer && writep != 0)
            readp = 0;
        if(writep == sizeof buffer)
            writep = 0;
    }
}
We attempt to read if we have buffer space available and there was no end-of-file or error on the read side, and we attempt to write if we have data in the buffer; if end-of-file is reached and the buffer is empty, then we are done.
This code will behave clearly suboptimally (it's example code), but you should be able to see that it is acceptable for the kernel to do less than we asked for, on both reads and writes, in which case we just go back and say "whenever you're ready", and that we never read or write without asking whether it will block.
From the same man page:
On exit, the sets are modified in place to indicate which file descriptors actually changed status.
So use FD_ISSET() on the sets passed to select to determine which FDs have become ready.
The server I'm working on (a Unix C multi-threaded non-blocking socket server) needs to receive a file from a client and broadcast it to all the other clients connected to the server.
Everything is working, except that I'm having a hard time determining when a file is done transferring: since I'm using non-blocking sockets, sometimes during the transfer recv returns -1 (which I assumed was the end of the file), and then on the next pass more bytes come in.
I tried to hack around it by putting "END" at the end of the stream. However, sometimes when multiple files are sent in a row, the "END" is part of the same recv buffer as the beginning of the next file. Or even worse, sometimes I end up with a buffer that finishes with "EN" and the "D" comes in on the next pass.
What would be the best approach to avoid the situations mentioned above? I don't really want to loop over the whole accumulated buffer each time I receive some bytes to check whether "END" is in it and then cut appropriately... I'm sure there's a better solution to this, right?
Thanks in advance!
If recv() returns -1 it is an error and you need to inspect errno. Most probably it was EAGAIN or EWOULDBLOCK, which just means there is no data currently in the socket receive buffer. So you need to re-select().
When recv() returns zero the peer has disconnected the socket and the transfer is complete.
Signaling the end of a file with some byte sequence is not reliable; the file could contain that sequence. Instead, first send the file length (4 bytes, or 8 if you allow huge file transfers) in network byte order:
if ((n = read(..., filelen)) > 0) {
    filelen -= n;
}
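A sketch of what the receiving side might look like with a 4-byte length prefix (the helper names, fixed-size buffer, and retry-in-place loop are made up for illustration; in a real select()-based server you would return to the event loop on EAGAIN instead of spinning):

#include <arpa/inet.h>   /* ntohl */
#include <errno.h>
#include <stdint.h>
#include <sys/socket.h>

/* Keep calling recv() until exactly count bytes have arrived. */
static int recv_exact(int fd, void *out, size_t count)
{
    char *p = out;
    while (count > 0) {
        ssize_t n = recv(fd, p, count, 0);
        if (n > 0) {
            p += n;
            count -= (size_t)n;
        } else if (n == 0) {
            return -1;   /* peer closed the socket mid-message */
        } else if (errno != EAGAIN && errno != EWOULDBLOCK && errno != EINTR) {
            return -1;   /* real error */
        }
        /* EAGAIN/EWOULDBLOCK/EINTR: simply try again */
    }
    return 0;
}

/* Read the length header, then exactly that many bytes of file data. */
int receive_file(int fd, char *filebuf, size_t bufsize)
{
    uint32_t netlen;
    if (recv_exact(fd, &netlen, sizeof netlen) < 0)
        return -1;
    uint32_t filelen = ntohl(netlen);
    if (filelen > bufsize)
        return -1;       /* too large for this example's buffer */
    return recv_exact(fd, filebuf, filelen);
}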
The simplest case EJP is referring to, where you take the closing of the socket by the other end as end-of-file, could look like the following:
{
    ssize_t sizeRead = 0;

    while ((sizeRead = recv(...)) != 0) {
        if (0 > sizeRead) { /* recv() failed */
            if ((EAGAIN == errno) || (EWOULDBLOCK == errno)) { /* retry the recv() on those two kinds of error */
                usleep(1); /* optional */
                continue;
            }
            else
                break;
        }
        ... /* process the data read ... */
    }

    if (0 > sizeRead) {
        /* There had been an error during recv() */
    }
}
I'm trying to write a simple daemon in Linux, which will create a FIFO, then collect anything written to the FIFO and write that data to a file at a later time.
My expectations are that once my daemon has created the FIFO, I can do "echo text > /myfifo" repeatedly. When I'm done, I can do "echo quit > /myfifo" and my program will exit and write all data to disk.
I'm currently using poll() to know when there's more data on the FIFO. This works fine until after the first time I echo data to the FIFO. The data is echoed fine, but after that my poll() continuously returns with POLLHUP set.
Do I need to reset (or close & reopen) the FIFO after each process writes to it?
Pseudo-code of my code looks like this:
ret = mkfifo(my_fifo, mode);
fd = open(my_fifo, O_RDONLY | O_NONBLOCK);

polling.fd = fd;
polling.events = POLLIN | POLLPRI;

do {
    ret = poll(&polling, 1, -1);
    amt = read(fd, buf, bufsize);
    // do stuff
} while (!done);
You have to keep reopening the FIFO, I think. I have a program that monitors a FIFO, and the monitor loop is:
/* Implement monitor mode */
void sql_monitor(char *fifo)
{
    if (chk_fifo(fifo) != 0)
        cmd_error(E_NOTFIFO, fifo);

    /* Monitor -- device is assumed to be a FIFO */
    while (1)
    {
        ctxt_newcontext();
        if (ctxt_setinput(fifo) != 0)
            sql_exit(1);
        sql_file();
        ctxt_endcontext();
    }
}
The ctxt_newcontext() function stashes the current I/O state; the ctxt_setinput() function sets the input file to the named file - a FIFO in this case. The sql_file() function reads from the file (FIFO) until the end is reached - the file is closed. The ctxt_endcontext() undoes what ctxt_newcontext() does. The process repeats... This code has been around since about 1990.
So, YES, you will need to keep closing and reopening the FIFO after reading the end of the file (after each process such as echo finishes writing to the FIFO).
(You should also notice that there really isn't a need to poll the FIFO unless there's something else for the process to be doing when there is no data. The read() call will wait until there is data before returning.)
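If you do want to keep the poll()-based structure from the question, a minimal sketch of that close-and-reopen pattern might look like this (the path, buffer size, and quit handling are placeholders, and error handling is trimmed):

#include <fcntl.h>
#include <poll.h>
#include <unistd.h>

#define MY_FIFO "/tmp/myfifo"   /* hypothetical path */

void fifo_loop(void)
{
    char buf[4096];
    int fd = open(MY_FIFO, O_RDONLY | O_NONBLOCK);

    while (fd >= 0) {
        struct pollfd polling = { .fd = fd, .events = POLLIN };
        if (poll(&polling, 1, -1) < 0)
            break;

        ssize_t amt = read(fd, buf, sizeof buf);
        if (amt > 0) {
            /* ... append the data to the in-memory log, check for "quit" ... */
        } else if (amt == 0) {
            /* All writers have gone away. Open a new descriptor before
               closing the old one, then carry on waiting for the next
               writer; otherwise poll() keeps reporting POLLHUP forever. */
            int newfd = open(MY_FIFO, O_RDONLY | O_NONBLOCK);
            close(fd);
            fd = newfd;
        }
        /* amt < 0 with EAGAIN just means there is nothing to read yet. */
    }

    if (fd >= 0)
        close(fd);
}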