C: using select() to read from two named pipes (FIFO)

I am currently trying to write a program in C which will read from two named pipes and print any data to stdout as it becomes available.
For example: if I open two terminals and run ./execute pipe1 pipe2 in one of them (with pipe1 and pipe2 being valid named pipes), and then type echo "Data here." > pipe1 in the other, then the name of the pipe (here pipe1), the size, and the data should print to stdout. Here it would look like: pipe1 [25]: Data here.
I know I need to open the pipes with the O_RDONLY and O_NONBLOCK flags. I have looked at many examples (quite a few on this forum) of people using select() and I still don't understand what the different parameters being passed to select() are doing. If anyone can provide guidance here it would be hugely helpful. Below is the code I have so far.
int pipeRouter(char fifo1[], char fifo2[]){
    /* The flags must be OR'd together; passing O_NONBLOCK as a third
       argument would be interpreted as a file-creation mode instead. */
    int fileDescriptor1 = open(fifo1, O_RDONLY | O_NONBLOCK);
    int fileDescriptor2 = open(fifo2, O_RDONLY | O_NONBLOCK);
    if(fileDescriptor1 < 0){
        printf("%s does not exist\n", fifo1);
        return -1;
    }
    if(fileDescriptor2 < 0){
        printf("%s does not exist\n", fifo2);
        return -1;
    }
    return 0;
}

select() lets you wait for an I/O event instead of wasting CPU cycles on read.
So, in your example, the main loop can look like:
for (;;)
{
    int res;
    char buf[256];

    res = read(fileDescriptor1, buf, sizeof(buf));
    if (res > 0)
    {
        printf("Read %d bytes from channel1\n", res);
    }

    res = read(fileDescriptor2, buf, sizeof(buf));
    if (res > 0)
    {
        printf("Read %d bytes from channel2\n", res);
    }
}
If you add this code and run it, you will notice that:
The program actually does what you want: it reads from both pipes.
CPU utilization is 100% for one core, i.e. the program wastes CPU even when there is no data to read.
To solve this issue, the select and poll APIs were introduced. For select we need to know the descriptors (we do) and the maximum of them.
So let's modify the code a bit:
for (;;)
{
    fd_set fds;
    int maxfd;

    FD_ZERO(&fds);                  // Clear FD set for select
    FD_SET(fileDescriptor1, &fds);
    FD_SET(fileDescriptor2, &fds);
    maxfd = fileDescriptor1 > fileDescriptor2 ? fileDescriptor1 : fileDescriptor2;

    // The minimum information for select: we are asking only about
    // read operations, ignoring write and error ones, and not
    // placing any time restrictions on the wait.
    select(maxfd + 1, &fds, NULL, NULL, NULL);

    // do the reads as in the previous example here
}
When running the improved code, the CPU is no longer wasted as much, but you will notice that a read is still performed on a pipe that has no data whenever the other one does.
To check which pipe actually has data, use FD_ISSET after the select call:
if (FD_ISSET(fileDescriptor1, &fds))
{
    // We can read from fileDescriptor1
}
if (FD_ISSET(fileDescriptor2, &fds))
{
    // We can read from fileDescriptor2
}
So, joining all of the above, the code would look like:
for (;;)
{
    fd_set fds;
    int maxfd;
    int res;
    char buf[256];

    FD_ZERO(&fds);                  // Clear FD set for select
    FD_SET(fileDescriptor1, &fds);
    FD_SET(fileDescriptor2, &fds);
    maxfd = fileDescriptor1 > fileDescriptor2 ? fileDescriptor1 : fileDescriptor2;

    select(maxfd + 1, &fds, NULL, NULL, NULL);

    if (FD_ISSET(fileDescriptor1, &fds))
    {
        // We can read from fileDescriptor1
        res = read(fileDescriptor1, buf, sizeof(buf));
        if (res > 0)
        {
            printf("Read %d bytes from channel1\n", res);
        }
    }
    if (FD_ISSET(fileDescriptor2, &fds))
    {
        // We can read from fileDescriptor2
        res = read(fileDescriptor2, buf, sizeof(buf));
        if (res > 0)
        {
            printf("Read %d bytes from channel2\n", res);
        }
    }
}
So, add error handling, and you would be set.
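For completeness, here is a minimal sketch of how the pieces might fit together to produce the pipe1 [25]: Data here. output the question asks for. It is an illustration rather than a drop-in solution: the names loosely follow the question's code, and error handling is kept deliberately brief.

#include <fcntl.h>
#include <stdio.h>
#include <sys/select.h>
#include <unistd.h>

int pipeRouter(const char *fifo1, const char *fifo2)
{
    int fd1 = open(fifo1, O_RDONLY | O_NONBLOCK);
    int fd2 = open(fifo2, O_RDONLY | O_NONBLOCK);
    if (fd1 < 0 || fd2 < 0)
    {
        perror("open");
        return -1;
    }

    int maxfd = fd1 > fd2 ? fd1 : fd2;
    for (;;)
    {
        fd_set fds;
        FD_ZERO(&fds);
        FD_SET(fd1, &fds);
        FD_SET(fd2, &fds);

        if (select(maxfd + 1, &fds, NULL, NULL, NULL) < 0)
        {
            perror("select");
            return -1;
        }

        char buf[256];
        if (FD_ISSET(fd1, &fds))
        {
            int n = read(fd1, buf, sizeof(buf) - 1);
            if (n > 0)
            {
                buf[n] = '\0';
                printf("%s [%d]: %s", fifo1, n, buf);
            }
        }
        if (FD_ISSET(fd2, &fds))
        {
            int n = read(fd2, buf, sizeof(buf) - 1);
            if (n > 0)
            {
                buf[n] = '\0';
                printf("%s [%d]: %s", fifo2, n, buf);
            }
        }
    }
}

int main(int argc, char *argv[])
{
    if (argc != 3)
    {
        fprintf(stderr, "usage: %s pipe1 pipe2\n", argv[0]);
        return 1;
    }
    return pipeRouter(argv[1], argv[2]) < 0 ? 1 : 0;
}

One caveat: once every writer has closed a FIFO (and also before the first writer opens it), read() returns 0 and select() keeps reporting the descriptor as readable, so a real program would want to handle that case, e.g. by reopening the FIFO, rather than spinning.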

Related

What is the need for "maxfd" when creating a tunnel?

In this link, https://backreference.org/2010/03/26/tuntap-interface-tutorial/, there's a code sample that uses the tun/tap interface to create a TCP tunnel, shown below.
/* net_fd is the network file descriptor (to the peer), tap_fd is the
   descriptor connected to the tun/tap interface */
/* use select() to handle two descriptors at once */
maxfd = (tap_fd > net_fd) ? tap_fd : net_fd;

while(1) {
    int ret;
    fd_set rd_set;

    FD_ZERO(&rd_set);
    FD_SET(tap_fd, &rd_set);
    FD_SET(net_fd, &rd_set);

    ret = select(maxfd + 1, &rd_set, NULL, NULL, NULL);

    if (ret < 0 && errno == EINTR) {
        continue;
    }
    if (ret < 0) {
        perror("select()");
        exit(1);
    }

    if (FD_ISSET(tap_fd, &rd_set)) {
        /* data from tun/tap: just read it and write it to the network */
        nread = cread(tap_fd, buffer, BUFSIZE);
        /* write length + packet */
        plength = htons(nread);
        nwrite = cwrite(net_fd, (char *)&plength, sizeof(plength));
        nwrite = cwrite(net_fd, buffer, nread);
    }

    if (FD_ISSET(net_fd, &rd_set)) {
        /* data from the network: read it, and write it to the tun/tap interface.
         * We need to read the length first, and then the packet */
        /* Read length */
        nread = read_n(net_fd, (char *)&plength, sizeof(plength));
        /* read packet */
        nread = read_n(net_fd, buffer, ntohs(plength));
        /* now buffer[] contains a full packet or frame, write it into the tun/tap interface */
        nwrite = cwrite(tap_fd, buffer, nread);
    }
}
What's the purpose of "maxfd" in that code excerpt? The exact lines are:
maxfd = (tap_fd > net_fd)?tap_fd:net_fd;
ret = select(maxfd + 1, &rd_set, NULL, NULL, NULL);
It's an artifact of the way the dangerous and obsolete select function works. It requires an argument that is a bound on the size (in bits) of the fd_set objects passed to it, and it cannot work with fd numbers larger than an arbitrary limit imposed by FD_SETSIZE. If you fail to meet these requirements, undefined behavior results.
Wherever you see select, you should replace it with poll, which does not suffer from these limitations, has an easier-to-use interface, and has more features.
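As a rough sketch (not the tutorial's actual code), the poll version of that loop might look like the following, with tap_fd and net_fd carried over from the excerpt above:

#include <errno.h>
#include <poll.h>
#include <stdio.h>
#include <stdlib.h>

struct pollfd pfds[2];
pfds[0].fd = tap_fd; pfds[0].events = POLLIN;
pfds[1].fd = net_fd; pfds[1].events = POLLIN;

while (1) {
    /* timeout of -1: block until one of the descriptors is readable.
       There is no maxfd argument and no FD_SETSIZE limit, unlike select(). */
    int ret = poll(pfds, 2, -1);
    if (ret < 0) {
        if (errno == EINTR)
            continue;
        perror("poll()");
        exit(1);
    }
    if (pfds[0].revents & POLLIN) {
        /* data from tun/tap: read it and forward it to the network */
    }
    if (pfds[1].revents & POLLIN) {
        /* data from the network: read it and forward it to tun/tap */
    }
}

Note also that, unlike the fd_set, the pollfd array does not need to be rebuilt on every iteration.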

select() on STDIN and Incoming Socket

I'm trying to write a very basic chat client in C which communicates with another machine using sockets, and I'm having some issues with understanding select. Here's a quick snippet of the relevant code.
while(1)
{
    FD_ZERO(&readfds);
    FD_ZERO(&writefds);
    FD_SET(STDIN, &writefds);
    FD_SET(connectFD, &readfds);
    select(connectFD+1, &readfds, &writefds, NULL, NULL);
    if(FD_ISSET(connectFD, &readfds) != 0)
    {
        char buf[1024];
        memset(buf, 0, sizeof(buf));
        int lastBit;
        lastBit = recv(connectFD, buf, sizeof(buf), 0);
        if (lastBit > 0 && lastBit < 1024)
        {
            buf[lastBit] = '\0';
        }
        else
        {
            close(connectFD);
            break;
        }
        printf("%s\n", buf);
    }
    else if (FD_ISSET(STDIN, &writefds))
    {
        char msg[1024];
        memset(msg, 0, sizeof(msg));
        read(STDIN, msg, sizeof(msg));
    }
}
What I'm looking to do is have incoming messages processed as soon as they arrive, and only have outgoing messages sent after I hit ENTER, but what I have now only processes incoming data after I hit ENTER instead of immediately. I assume it's because read is a blocking call and select is returning when there's ANY data in the buffer, not just when there's a newline (which is when read returns), but I don't know how to process this otherwise. Any advice or tips for leading me down the right path?
Thank you guys so much!
FD_SET(STDIN, &writefds);
You want to read from STDIN, so you should add STDIN to &readfds, not &writefds. Writing to STDIN is almost always possible, so select() effectively told your code that writing was possible, but the code then attempts to read from STDIN and hangs there until reading actually becomes possible.
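In other words, the setup might look like this (a sketch based on the question's code; since STDIN is 0 here, connectFD + 1 still covers both descriptors):

FD_ZERO(&readfds);
FD_SET(STDIN, &readfds);      // watch stdin for readable data
FD_SET(connectFD, &readfds);  // watch the socket for readable data
select(connectFD + 1, &readfds, NULL, NULL, NULL);
if (FD_ISSET(connectFD, &readfds))
{
    // recv() from the socket as before
}
if (FD_ISSET(STDIN, &readfds))
{
    // input is available; read() will not block
}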

C - filling TCP socket send buffer

I'm trying to write an experimental client / server program to prove whether the write fails or blocks when the send buffer is full.
Basically, I have an infinite loop in the sender program where I use select() to check whether I can write to the buffer (which, I think, means the socket send buffer isn't full); if I can, I write() a character. The loop breaks when FD_ISSET(sockfd, &writefds) is false (I can't write to the buffer because it's full).
The receiver program sleeps for one minute before starting to read(). I expect the sender to fill the buffer within this sleep time, but in fact, the programs never end.
sender:
int main(int argc, char *argv[]) {
    char buffer[100];
    int sockfd, total = 0, bytes = 0;
    fd_set writefds;

    sockfd = dial(argv[1], argv[2]);
    bzero(buffer, sizeof buffer);

    while(1)
    {
        int ret = 0;
        FD_ZERO(&writefds);
        FD_SET(sockfd, &writefds);
        if((ret = select(sockfd + 1, NULL, &writefds, NULL, 0)) < 0)
        {
            perror("select");
            exit(errno);
        }
        if(FD_ISSET(sockfd, &writefds))
        {
            write(sockfd, "a", 1);
            total++;
            continue;
        }
        else
        {
            puts("I can't write in the socket buffer");
            break;
        }
    }
    printf("nb chars written: %d\n", total);
    return 0;
}
receiver:
int foo(int sockfd) {
    char buffer[100];
    int t, total = 0;

    bzero(buffer, sizeof buffer);
    printf("I have a new client\n");
    sleep(60);

    while((t = read(sockfd, buffer, sizeof buffer)) > 0)
    {
        total += t;
        printf("%d ", total);
    }
    printf("nb chars read: %d\n", total);
    if(t < 0)
    {
        perror("read");
    }
    printf("I don't have that client anymore\n");
    return 0;
}
Your select timeout is NULL (the 0 you pass as the last argument), so select() will block when the send buffer is full. This means that whenever it returns, the socket is writable, and you'll never reach your "I can't write in the socket buffer" branch.
See the man page: http://linux.die.net/man/2/select
If you want a zero timeout, i.e. don't block on select(), you need to pass a pointer to a timeval structure with both fields set to zero.
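For example (a sketch using the sockfd and writefds from the question's code):

struct timeval tv;
tv.tv_sec = 0;   /* zero timeout: do not wait at all, */
tv.tv_usec = 0;  /* just report readiness and return immediately */
FD_ZERO(&writefds);
FD_SET(sockfd, &writefds);
/* ret == 0 now means "not currently writable", i.e. the send buffer is full */
int ret = select(sockfd + 1, NULL, &writefds, NULL, &tv);

Note that on Linux select() may modify the timeval, so it must be re-initialized before every call.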
You're on the right track, but the socket send buffer could be 48k or more. That's a lot of iterations. Try writing 8k at a time, not just one byte. And increase the time before the receiver reads.
NB No real need to test this. It blocks in blocking mode, and fails with EAGAIN/EWOULDBLOCK in non-blocking mode. See the man page.
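If you do want to observe the EAGAIN/EWOULDBLOCK behaviour directly, a sketch like the following (using fcntl(), which the question's code doesn't) would put the socket into non-blocking mode first:

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* switch the socket to non-blocking mode */
int flags = fcntl(sockfd, F_GETFL, 0);
fcntl(sockfd, F_SETFL, flags | O_NONBLOCK);

ssize_t n = write(sockfd, "a", 1);
if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK))
{
    puts("send buffer is full");  /* instead of blocking */
}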

select() doesn't return after timeout

I created an executable named "readmsg"; its source code is below. The select() works if I just run readmsg in a shell (I can see the timeout output).
But if I create a FIFO via the command mknod /tmp/message p and run readmsg < /tmp/message in a shell, then select() never returns unless I write something to /tmp/message. My question is: why can't I get the timeout output?
The source code of "readmsg":
#define STDIN 0

fd_set fds;
struct timeval tv;
int ret;

while (1) {
    FD_ZERO(&fds);
    FD_SET(STDIN, &fds);
    tv.tv_sec = 1;
    tv.tv_usec = 0;
    ret = select(STDIN + 1, &fds, NULL, NULL, &tv);
    if (ret > 0) {
        printf("works\n");
        if (FD_ISSET(STDIN, &fds)) {
            // read ...
        }
    } else if (ret == 0) {
        printf("timeout!!\n");
    } else {
        printf("interrupt\n");
    }
}
Thanks @Mat. After adding a printf() right at the start of main(), there is no output either. The process id of readmsg doesn't even show up in ps.
So this proves that the readmsg < /tmp/message process is blocked before the FIFO is ready to be written: the shell's open() of /tmp/message for the redirection blocks until some writer also opens the FIFO, so readmsg never even gets started.
There isn't any error. In fact, readmsg works fine once messages are being written to the redirected FIFO.
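If you want readmsg to start (and the timeout to fire) even before any writer has opened the FIFO, one option is to open the FIFO inside the program with O_NONBLOCK instead of relying on shell redirection. A sketch under that assumption:

#include <fcntl.h>
#include <stdio.h>

/* Opening a FIFO O_RDONLY normally blocks until a writer appears;
   O_NONBLOCK makes open() return immediately instead. */
int fd = open("/tmp/message", O_RDONLY | O_NONBLOCK);
if (fd < 0) {
    perror("open");
    return 1;
}
/* use fd instead of STDIN in the select() loop above; note that while
   no writer has the FIFO open, read() returns 0 (EOF) */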

using select() with pipe

I am reading/writing to a pipe created by pipe(pipe_fds). So basically, with the following code, I am reading from that pipe:
fp = fdopen(pipe_fds[0], "r");
And whenever I get something, I print it out with:
while (fgets(buf, 200, fp)) {
printf("%s", buf);
}
What I want is: when for a certain amount of time nothing appears on the pipe to read from, I want to know about it and do:
printf("dummy");
Can this be achieved with select()? Any pointers on how to do that would be great.
Let's say you want to wait 5 seconds and then, if nothing was written to the pipe, print out "dummy":
fd_set set;
struct timeval timeout;

/* Initialize the file descriptor set. */
FD_ZERO(&set);
FD_SET(pipe_fds[0], &set);

/* Initialize the timeout data structure. */
timeout.tv_sec = 5;
timeout.tv_usec = 0;

/* In the interest of brevity, I'm using the constant FD_SETSIZE, but a more
   efficient implementation would use the highest fd + 1 instead. In your case,
   since you only have a single fd, you can replace FD_SETSIZE with
   pipe_fds[0] + 1, thereby limiting the number of fds the system has to
   iterate over. */
int ret = select(FD_SETSIZE, &set, NULL, NULL, &timeout);

// a return value of 0 means that the time expired
// without any activity on the file descriptor
if (ret == 0)
{
    printf("dummy");
}
else if (ret < 0)
{
    // error occurred
}
else
{
    // there was activity on the file descriptor
}
IIRC, select() takes a timeout; when it returns, you check the return value (0 means the timeout expired) and FD_ISSET to tell whether it was I/O or the timeout that woke it up.
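One caveat when combining this answer with the fdopen()/fgets() code from the question: select() watches the file descriptor, not the FILE* buffer, so data that stdio has already buffered will not make select() fire. A sketch that avoids the problem by using read() directly on pipe_fds[0] (reusing the set and timeout from the answer above, which must be re-initialized before every call because select() modifies them):

char buf[200];
int ret = select(pipe_fds[0] + 1, &set, NULL, NULL, &timeout);
if (ret == 0)
{
    printf("dummy");
}
else if (ret > 0)
{
    /* read directly from the descriptor; no stdio buffering involved */
    ssize_t n = read(pipe_fds[0], buf, sizeof(buf) - 1);
    if (n > 0)
    {
        buf[n] = '\0';
        printf("%s", buf);
    }
}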
