The following little C program (let's call it pointless):
/* pointless.c */
#include <stdio.h>
#include <unistd.h>
int main(void)
{
    write(STDOUT_FILENO, "", 0);                /* pointless write() of 0 bytes */
    sleep(1);
    write(STDOUT_FILENO, "still there!\n", 13);
    return 0;
}
will print "still there!" after a small delay, as expected. However,
rlwrap ./pointless prints nothing under AIX and exits immediatly.
Apparently, rlwrap reads 0 bytes after the first write() and
(incorrectly) decides that pointless has called it quits.
When running pointless without rlwrap, and with rlwrap on all
other systems I could lay my hand on (Linux, OSX, FreeBSD), the "still
there!" gets printed, as expected.
The relevant rlwrap (pseudo-)code is this:
/* master is the file descriptor of the master end of a pty; the slave end is pointless's stdout */
/* master was opened with O_NDELAY */
while(pselect(nfds, &readfds, .....)) {
    if (FD_ISSET(master, &readfds)) {            /* master is "ready" for reading       */
        nread = read(master, buf, BUFFSIZE - 1); /* so try to read a buffer's worth     */
        if (nread == 0)                          /* 0 bytes read...                     */
            cleanup_and_exit();                  /* ... usually means EOF, doesn't it?  */
Apparently, on all systems except AIX, writing 0 bytes on the
slave end of a pty is a no-op, while on AIX it wakes up the
select() on the master end. Writing 0 bytes seems pointless, but one
of my test programs writes random-length chunks of text, which may
actually happen to have length 0.
On Linux, man 2 read states "on success, the number of bytes read is
returned (zero indicates end of file)" (italics are mine). This
question has come up before, but without mention of this scenario.
This raises the question: how can I portably determine whether the
slave end has been closed? (In this case I could probably just wait for
a SIGCHLD and then close up shop, but that might open another can of
worms I'd rather avoid.)
Edit: POSIX states:
Writing a zero-length buffer (nbyte is 0) to a STREAMS device sends 0 bytes with 0 returned. However, writing a zero-length buffer to a STREAMS-based pipe or FIFO sends no message and 0 is returned. The process may issue I_SWROPT ioctl() to enable zero-length messages to be sent across the pipe or FIFO.
On AIX, a pty is indeed a STREAMS device, and moreover not a pipe or FIFO. ioctl(STDOUT_FILENO, I_SWROPT, 0) seems to make the pty conform to the rest of the Unix world. The sad thing is that this has to be called from the slave side, and so is outside rlwrap's sphere of influence (even though we could call the ioctl() between fork() and exec(), that would not guarantee that the executed command won't change it back).
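For illustration, a minimal sketch of what that fork()/exec() variant would look like, assuming a STREAMS pty as on AIX (<stropts.h> does not exist on most modern Linux systems, and as said, the spawned command could simply set the option back; the function name is mine):
#include <stropts.h>    /* I_SWROPT; STREAMS systems such as AIX */
#include <unistd.h>

static pid_t spawn_on_slave(char *const argv[])
{
    pid_t pid = fork();
    if (pid == 0) {     /* child: stdout is the slave end of the pty */
        ioctl(STDOUT_FILENO, I_SWROPT, 0);  /* clear SNDZERO: 0-byte writes now send nothing */
        execvp(argv[0], argv);
        _exit(127);     /* exec failed */
    }
    return pid;         /* parent keeps reading from the master end */
}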
Per POSIX:
When attempting to read from an empty pipe or FIFO:
If no process has the pipe open for writing, read() shall return 0 to indicate end-of-file.
So the "read of zero bytes means EOF" is POSIX-compliant.
On the write() side (bolding mine):
Before any action described below is taken, and if nbyte is zero and the file is a regular file, the write() function may detect and return errors as described below. In the absence of errors, or if error detection is not performed, the write() function shall return zero and have no other results. If nbyte is zero and the file is not a regular file, the results are unspecified.
Unfortunately, that means you can't portably depend on a write() of zero bytes to have no effect because AIX is compliant with the POSIX standard for write() here.
You probably have to rely on SIGCHLD.
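A hedged sketch of one way to do that, assuming an event loop like rlwrap's (all names here are illustrative, not rlwrap's actual code): keep SIGCHLD blocked except while sleeping in pselect(), so the "child exited" flag can be checked without races, and stop trusting a 0-byte read on its own.
#include <errno.h>
#include <signal.h>
#include <sys/select.h>

static volatile sig_atomic_t child_died = 0;

static void on_sigchld(int signo) { (void)signo; child_died = 1; }

static void event_loop(int master)
{
    sigset_t blocked, orig;
    struct sigaction sa;

    sa.sa_handler = on_sigchld;
    sa.sa_flags = 0;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGCHLD, &sa, NULL);

    /* Keep SIGCHLD blocked except while waiting in pselect(). */
    sigemptyset(&blocked);
    sigaddset(&blocked, SIGCHLD);
    sigprocmask(SIG_BLOCK, &blocked, &orig);

    for (;;) {
        fd_set readfds;
        FD_ZERO(&readfds);
        FD_SET(master, &readfds);
        if (pselect(master + 1, &readfds, NULL, NULL, NULL, &orig) == -1
            && errno != EINTR)
            break;                /* real error */
        if (child_died)
            return;               /* drain any last output, then clean up */
        /* ... handle readfds; a 0-byte read alone is no longer trusted ... */
    }
}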
From the Linux man page:
If count is zero and fd refers to a regular file, then write()
may return a failure status if one of the errors below is
detected. If no errors are detected, or error detection is not
performed, 0 will be returned without causing any other effect.
If count is zero and fd refers to a file other than a regular
file, the results are not specified.
So, since the behavior is unspecified, AIX can do whatever it likes in your case.
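Given that, if you control the writing side (as with the test program that emits random-length chunks), the portable dodge is simply to never issue a zero-length write. A trivial wrapper (the name is illustrative):
#include <unistd.h>

/* Behaves like write(), but turns a zero-length request into a no-op,
   sidestepping the unspecified behavior on non-regular files. */
static ssize_t write_nonempty(int fd, const void *buf, size_t nbyte)
{
    if (nbyte == 0)
        return 0;
    return write(fd, buf, nbyte);
}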
Related
I'm sending data from one process to another through a pipe, and I want to clear the pipe after reading.
Is there a function in C that can do this?
Yes: the read() function itself, declared in <unistd.h> (it is a POSIX system call, not part of stdio). You have to invoke it as many times as needed to be sure the pipe is empty.
As the documentation says, the read function attempts to read up to count bytes from an I/O channel (a pipe in your case) whose file descriptor you pass as the first argument, and places the data into a buffer with enough room to accommodate it.
Recall that read may return fewer bytes than requested; this is perfectly normal when there is less data to read than you expected.
Also remember that reading from a pipe blocks if there is nothing to read and the writer has not yet closed its descriptor; you will not get EOF until the counterpart closes its end, so an attempt to read from the pipe may hang. If you want to avoid that possibility, I suggest the solution below, based on the poll function, to check whether there is data to read on a file descriptor:
#include <poll.h>
#include <unistd.h>

struct pollfd pfd;

int main(void)
{
    int pipe_fd = STDIN_FILENO; /* stands in for the read end of your pipe */
    char buf[512];

    /* your operations */
    pfd.fd = pipe_fd;
    pfd.events = POLLIN;
    while (poll(&pfd, 1, 0) == 1)
    {
        /* there's available data: read it, or this loop will spin forever */
        ssize_t n = read(pipe_fd, buf, sizeof buf);
        if (n <= 0)
            break;  /* 0: writer closed (EOF); -1: error */
    }
    return 0;
}
Today I encountered some weird-looking code whose purpose was not at all apparent to me at first glance.
send(file_desc,"Input \'y\' to continue.\t",0x18,0);
read(file_desc,buffer,100);
iVar1 = strcmp("y",(char *)buffer);
if (iVar1 == 0) {
// some more code
}
It seems that a text string is written to the file descriptor, that immediately afterwards the code reads from that same descriptor into a buffer, and that it then checks whether the text in the buffer is "y".
My understanding (please correct me if I am wrong) is that it writes a text string to the file descriptor, the descriptor acts as temporary storage for anything written to it, and the data is then read back from that same descriptor into the buffer. It looks like a primitive way of using a file descriptor to copy the text string into the buffer. Why not just use strcpy() instead?
What would be the use case for writing to a file descriptor and then immediately reading from it? It seems like a convoluted way to copy data. Or maybe I don't understand well enough what this sequence of a send() and a read() does?
And assuming this code does use the file descriptor to copy the text string "Input \'y\' to continue.\t" into the buffer, why compare it with the string "y"? That should be false every single time.
I am assuming that any data written to a file descriptor stays there until it is read. So here it seems that send() is used to write the string in, and read() to read it back out.
In man send it says:
The only difference between send() and write(2) is the presence of flags. With a zero
flags argument, send() is equivalent to write(2).
So why would they use send() instead of write()? This code is just mind-boggling.
Edit: here's the full function where this code is originally from:
void send_read(int file_desc)
{
    int are_equal;
    undefined2 buffer [8];
    char local_28 [32];

    /* 0x6e == 110 == 'n' */
    buffer[0] = 0x6e;
    send(file_desc,"Input \'y\' to continue.\t",0x18,0);
    read(file_desc,buffer,100);
    are_equal = strcmp("y",(char *)buffer);
    if (are_equal == 0) {
        FUN_00400a86(file_desc,local_28);
    }
    else {
        close(file_desc);
    }
    return;
}
The send() and recv() functions are for use with sockets (send: send a message on a socket — recv: receive a message from a connected socket). See also the POSIX description of Sockets in general.
Socket file descriptors are bi-directional — you can read and write on them. You can't read what you wrote, unlike with pipe file descriptors. With pipes, the process writing to the write end of a pipe can read what it wrote from the read end of the pipe — if another process didn't read it first. When a process writes on a socket, that information goes to the peer process and cannot be read by the writer.
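A tiny self-contained demo of the pipe half of that claim: a single process can read back what it wrote, because it holds both ends of the one-way queue. With a connected socket, the bytes would instead go to the peer.
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fds[2];
    char buf[16];
    ssize_t n;

    if (pipe(fds) == -1)
        return 1;
    write(fds[1], "hello", 5);          /* write end */
    n = read(fds[0], buf, sizeof buf);  /* read end: gets "hello" back */
    printf("read back %zd bytes: %.*s\n", n, (int)n, buf);
    return 0;
}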
send(2) is a system call that can only be used with sockets. A socket is a descriptor that lets you send data to, or receive data from, a remote endpoint (a remote socket), which can be on a different computer or on the same one. It works like a phone line: what you send is received by your partner, and what he/she sends is received by you. The read(2) system call can be used with sockets, while send(2) cannot be used with files, so this sample code mixes calls related to files with calls related to sockets (which is not uncommon, as read(2) and write(2) can both be used with sockets).
The code you posted is erroneous, as it blindly compares the received buffer with strcmp, assuming it received a nul-terminated string. That may be the case, but it may well not be.
Even if the sender (on the other side of the connection) agreed to send a complete, nul-terminated message, the receiver must first check the amount of data received (the return value of the read(2) call), which can be:
-1, indicating some error on reception. The connection may have been reset by the other side, or the other side may have rebooted while you were sending the data.
0, indicating no more data or end of data (the other side closed the connection). This can happen if the other side has a timeout and you take too long to respond: it closes the connection without sending anything, and you receive nothing.
n, some data, less than the buffer size, but including the full packet sent by the peer (and the agreed nul byte sent with it). This is the only case in which you can safely strcmp the data.
n, some data, less than the buffer size, and less than the data transmitted. This can happen when the data is fragmented into several packets, so you have to read again until you have all the data sent by your peer (see the accumulating-read sketch after the code below). Packet fragmentation is perfectly natural in TCP, for example.
n, some data, less than the buffer size, and more than the data transmitted. The sender did another transmit after the one you received, and both packets landed in the kernel buffer. You have to handle this case: you have one full packet and must save the rest of the received data for later processing, or you will lose data you have already received.
n, some data, filling the whole buffer, with no room left for the full transmission. You filled the buffer and no \0 char came: the packet is larger than the buffer, you ran out of buffer space, and you have to decide what to do (allocate another buffer to receive the rest, discard the data, or whatever else you decide). This will not happen here, because you expect a packet of 1 or 2 characters and have a buffer of 100, but who knows...
At the very least, as a minimum safety net, you can do this:
send(file_desc,"Input \'y\' to continue.\t",0x18,0);
int n = read(file_desc,buffer,sizeof buffer - 1); /* one cell reserved for '\0' */
switch (n) {
case -1: /* error */
do_error();
break;
case 0: /* disconnect */
do_disconnect();
break;
default: /* some data */
buffer[n] = '\0'; /* append the null */
break;
}
if (n > 0) {
iVar1 = strcmp("y",(char *)buffer);
if (iVar1 == 0) {
// some more code
}
}
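And for the fragmentation case above (a reply split across several read() calls), a sketch of an accumulating loop, assuming the peer terminates its message with a nul byte (the function name is illustrative):
#include <string.h>
#include <unistd.h>

/* Reads until a '\0' arrives or the buffer fills; returns the byte count,
   0 on disconnect, or -1 on error. */
static ssize_t read_until_nul(int fd, char *buf, size_t size)
{
    size_t total = 0;

    while (total < size - 1) {
        ssize_t n = read(fd, buf + total, size - 1 - total);
        if (n <= 0)
            return n;                   /* disconnect (0) or error (-1) */
        total += (size_t)n;
        if (memchr(buf, '\0', total))   /* full message has arrived */
            break;
    }
    buf[total] = '\0';                  /* safety net for strcmp() */
    return (ssize_t)total;
}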
Note:
As you didn't post a complete and verifiable example, I couldn't post a complete and verifiable response.
My apologies for that.
Similar to the problem asked a while ago on kernel 3.x, but I'm seeing it on 4.9.37.
The named fifo is created with mkfifo -m 0666. On the read side it is opened with
int fd = open(FIFO_NAME, O_RDONLY | O_NONBLOCK);
The resulting fd is passed into a call to select(). Everything works OK until I run echo >> <fifo-name>.
Now the fd appears in the read_fds after the select() returns. A read() on the fd will return one byte of data. So far so good.
The next time select() is called and returns, the fd still appears in read_fds, but read() now always returns zero, meaning no data. Effectively the read side consumes 100% of the processor. This is exactly the same problem as observed in the referenced question.
Has anybody seen the same issue? And how can it be resolved or worked-around properly?
I've figured out that if I close the read end of the fifo and re-open it, everything works properly. That is probably acceptable because we are not sending a lot of data, but it is not a nice or general workaround.
This is expected behaviour, because the end-of-input case causes a read() to not block; it returns 0 immediately.
If you look at man 2 select, it says clearly that a descriptor in readfds is set if a read() on that descriptor would not block (at the time of the select() call).
If you used poll(), it too would immediately return with POLLHUP in revents.
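A sketch of that poll() variant (checking POLLIN before POLLHUP, so any data still buffered in the FIFO is drained before reopening; the function name is illustrative):
#include <poll.h>

/* Returns 1 when data is readable, 0 when all writers have closed
   (time to reopen the FIFO), -1 on error. */
static int wait_on_fifo(int fifofd)
{
    struct pollfd pfd = { .fd = fifofd, .events = POLLIN };

    if (poll(&pfd, 1, -1) == -1)
        return -1;
    if (pfd.revents & POLLIN)
        return 1;                 /* drain buffered data first */
    if (pfd.revents & POLLHUP)
        return 0;                 /* writer closed: reopen (see below) */
    return -1;
}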
As OP notes, the correct workaround is to reopen the FIFO.
Because the Linux kernel maintains exactly one internal pipe object to represent each open FIFO (see man 7 fifo and man 7 pipe), the robust approach in Linux is to open another descriptor to the FIFO whenever an end of input is encountered (read() returning 0), and close the original. During the time when both descriptors are open, they refer to the same kernel pipe object, so there is no race window or risk of data loss.
In pseudo-C:
fifoflags = O_RDONLY | O_NONBLOCK;
fifofd = open(fifoname, fifoflags);
if (fifofd == -1) {
    /* Error checking */
}

/* ... */

/* select() readfds contains fifofd, or
   poll() returns POLLIN for fifofd: */
n = read(fifofd, buffer, sizeof buffer);
if (!n) {
    /* A writer has closed the FIFO: reopen before closing the old fd. */
    int tempfd = open(fifoname, fifoflags);
    if (tempfd == -1) {
        const int cause = errno;
        close(fifofd);
        /* Error handling, using 'cause' */
    }
    close(fifofd);
    fifofd = tempfd;
} else {
    /* Handling for the other read() result cases */
}
The file descriptor allocation policy in Linux is such that tempfd will be the lowest-numbered free descriptor.
On my system (Core i5-7200U laptop), reopening a FIFO in this way takes less than 1.5 µs. That is, it can be done about 680,000 times a second. I do not think this reopening is a bottleneck for any sensible scenario, even on low-powered embedded Linux machines.
I am confused by one line of code:
temp_uart_count = read(VCOM, temp_uart_data, 4096);
I found out more about the read function at http://linux.die.net/man/3/read, but if everything is okay it returns 0, so how can we get the number of bytes received from that?
temp_uart_count is used to count how many bytes we received from the virtual COM port and stored in temp_uart_data, which is 4096 bytes wide.
Am I really getting the number of bytes I received with this line of code?
... but if everything is okay it returns 0, so how can we get the number of bytes received from that?
A return code of zero simply means that read() was unable to provide any data.
Am I really getting the number of bytes I received with this line of code?
Yes, a non-negative return code (i.e. >= 0) from read() is an accurate count of the bytes returned in the buffer. Zero is a valid count.
If you're expecting more data, then simply repeat the read() syscall. (However, you may have set up the termios arguments poorly, e.g. VMIN=0 and VTIME=0; see the sketch below.)
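For reference, a hedged sketch of a termios setup in which a blocking read() waits up to a timeout and may legitimately return 0 bytes (VMIN=0, VTIME>0); cfmakeraw() is a widespread BSD/Linux extension, and the values are placeholders to adapt to your protocol:
#include <termios.h>
#include <unistd.h>

/* Non-canonical mode: read() returns as soon as any byte arrives,
   or after 1 second (VTIME is in tenths of a second) with 0 bytes. */
static int set_read_timeout(int fd)
{
    struct termios tio;

    if (tcgetattr(fd, &tio) == -1)
        return -1;
    cfmakeraw(&tio);        /* raw, non-canonical mode */
    tio.c_cc[VMIN]  = 0;    /* do not insist on a minimum byte count */
    tio.c_cc[VTIME] = 10;   /* give up after 10 deciseconds */
    return tcsetattr(fd, TCSANOW, &tio);
}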
And - zero indicates end of file
If you get 0, it means that the end of file (or an equivalent condition) has been reached and there is nothing else to read.
The above statements (one from a comment, the other from an answer) are incorrect.
Reading from a tty device (e.g. a serial port) is not like reading from a file on a block device, but is temporal. Data for reading is only available as it is received over the comm link.
A non-blocking read() will return with -1 and errno set to EAGAIN when there is no data available.
A blocking non-canonical read() will return zero when there is no data available. Correlate your termios configuration with this to confirm that a return of zero is valid (and does not indicate "end of file").
In either case, the read() can be repeated to get more data when/if it arrives.
Also, when using non-canonical (aka raw) mode (or non-blocking reads), do not expect or rely on read() to perform message or packet management for you. You will need to add a layer to your program that reads bytes, concatenates them into a complete message datagram/packet, and validates it before that message can be processed (a sketch follows).
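As an illustration of such a layer (the framing and names are assumptions; here, messages are newline-delimited): accumulate raw bytes across read() calls and hand out one complete message at a time, with validation left to the caller.
#include <string.h>
#include <unistd.h>

static char   acc[4096];     /* accumulates raw bytes across read() calls */
static size_t acc_len = 0;

/* Copies one complete '\n'-terminated message into msg (nul-terminated);
   returns its length, or 0 if no complete message is available yet. */
static size_t next_message(int fd, char *msg, size_t msg_size)
{
    char *nl;
    size_t full, len;

    while ((nl = memchr(acc, '\n', acc_len)) == NULL) {
        if (acc_len == sizeof acc) {
            acc_len = 0;     /* full buffer, no delimiter: discard garbage */
            return 0;
        }
        ssize_t n = read(fd, acc + acc_len, sizeof acc - acc_len);
        if (n <= 0)
            return 0;        /* no data yet (or error): try again later */
        acc_len += (size_t)n;
    }
    full = (size_t)(nl - acc) + 1;             /* message incl. '\n' */
    len  = full < msg_size ? full : msg_size - 1;
    memcpy(msg, acc, len);
    msg[len] = '\0';
    memmove(acc, acc + full, acc_len - full);  /* keep any remainder */
    acc_len -= full;
    return len;
}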
ssize_t read(int fd, void *buf, size_t count); returns the number of bytes it read, storing the data into the buffer you passed as a parameter. On error it returns -1 (with errno set accordingly); if it is interrupted by a signal after some data has already been read, it returns the number of bytes read so far.
From the Linux man page:
On files that support seeking, the read operation commences at the current file offset, and the file offset is incremented by the number of bytes read. If the current file offset is at or past the end of file, no bytes are read, and read() returns zero.
Yes, temp_uart_count will contain the actual number of bytes read, and obviously that number will be smaller than or equal to the size of temp_uart_data. If you get 0, it means that end of file (or an equivalent condition) has been reached and there is nothing else to read.
If it returns -1 this indicate that an error has occurred and you'll need to check the errno variable to understand what happened.
I've been writing a little program for fun that transfers files over TCP in C on Linux. The program reads a file from a socket and writes it to file (or vice versa). I originally used read/write and the program worked correctly, but then I learned about splice and wanted to give it a try.
The code I wrote with splice works perfectly when reading from stdin (redirected file) and writing to the TCP socket, but fails immediately with splice setting errno to EINVAL when reading from socket and writing to stdout. The man page states that EINVAL is set when neither descriptor is a pipe (not the case), an offset is passed for a stream that can't seek (no offsets passed), or the filesystem doesn't support splicing, which leads me to my question: does this mean that TCP can splice from a pipe, but not to?
I'm including the code below (minus error handling code) in the hopes that I've just done something wrong. It's based heavily on the Wikipedia example for splice.
static void splice_all(int from, int to, long long bytes)
{
    long long bytes_remaining;
    long result;

    bytes_remaining = bytes;
    while (bytes_remaining > 0) {
        result = splice(
            from, NULL,
            to, NULL,
            bytes_remaining,
            SPLICE_F_MOVE | SPLICE_F_MORE
        );
        if (result == -1)
            die("splice_all: splice");
        if (result == 0)    /* EOF before the expected byte count */
            break;
        bytes_remaining -= result;
    }
}
static void transfer(int from, int to, long long bytes)
{
    int result;
    int pipes[2];

    result = pipe(pipes);
    if (result == -1)
        die("transfer: pipe");
    splice_all(from, pipes[1], bytes);
    splice_all(pipes[0], to, bytes);
    close(from);
    close(pipes[1]);
    close(pipes[0]);
    close(to);
}
On a side note, I think that the above will block on the first splice_all when the file is large enough due to the pipe filling up(?), so I also have a version of the code that forks to read and write from the pipe at the same time, but it has the same error as this version and is harder to read.
EDIT: My kernel version is 2.6.22.18-co-0.7.3 (running coLinux on XP.)
What kernel version is this? Linux has had support for splicing from a TCP socket since 2.6.25 (commit 9c55e01c0), so if you're using an earlier version, you're out of luck.
You need to splice_all from pipes[0] to to every time you do a single splice from from to pipes[1] (where the splice_all moves the number of bytes just read by that single splice). Reason: a pipe represents a finite kernel memory buffer, so if bytes is more than that, you'll block forever in your splice_all(from, pipes[1], bytes). A sketch of this interleaving follows.
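This sketch reuses the question's names and its die() helper, and assumes a kernel new enough (2.6.25+) to splice from a TCP socket: move one chunk into the pipe, then drain exactly that chunk out of it before reading more, so the finite pipe buffer can never fill up.
#define _GNU_SOURCE
#include <fcntl.h>
#include <unistd.h>

static void transfer(int from, int to, long long bytes)
{
    int pipes[2];
    long long remaining = bytes;

    if (pipe(pipes) == -1)
        die("transfer: pipe");
    while (remaining > 0) {
        /* One chunk in: at most a pipe-buffer's worth will be accepted. */
        long in = splice(from, NULL, pipes[1], NULL,
                         remaining, SPLICE_F_MOVE | SPLICE_F_MORE);
        if (in == -1)
            die("transfer: splice in");
        if (in == 0)
            break;                    /* EOF before the expected count */
        /* Same chunk out: drain the pipe before reading any more. */
        long done = 0;
        while (done < in) {
            long out = splice(pipes[0], NULL, to, NULL,
                              in - done, SPLICE_F_MOVE | SPLICE_F_MORE);
            if (out == -1)
                die("transfer: splice out");
            done += out;
        }
        remaining -= in;
    }
    close(pipes[0]);
    close(pipes[1]);
}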