Why does calling write() with stdin result in output? [duplicate]

I was working on an assignment where a program took a file descriptor as an argument (generally from the parent in an exec call), read from a file, and wrote to that file descriptor. In my testing, I realized that the program would work from the command line and not give an error if I used 0, 1 or 2 as the file descriptor. That made sense to me, except that I could write to stdin and have it show up on the screen.
Is there an explanation for this? I always thought there was some protection on stdin/stdout and you certainly can't fprintf to stdin or fgets from stdout.
#include <stdlib.h>
#include <stdio.h>
#include <unistd.h>

int main()
{
    char message[20];
    read(STDOUT_FILENO, message, 20);    /* deliberately reading from "stdout" */
    write(STDIN_FILENO, message, 20);    /* deliberately writing to "stdin" */
    return 0;
}

Attempting to write to a file descriptor opened read-only (or to read from one opened write-only) would cause write and read to fail and return -1. In this specific case, stdin and stdout are actually the same file. In essence, before your program executes (if you don't do any redirection) the shell does something like:
if (!fork()) {
    /* <close all fd's> */
    int fd = open("/dev/tty1", O_RDWR);   /* lowest free descriptor: becomes 0 */
    dup(fd);                              /* duplicated to 1 */
    dup(fd);                              /* duplicated to 2 */
    execvp("name", argv);
}
So stdin, stdout, and stderr are all duplicates of the same file descriptor, opened for reading and writing.
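You can check this from inside a program with fcntl(2); here is a minimal sketch:
#include <fcntl.h>
#include <stdio.h>

int main(void)
{
    for (int fd = 0; fd <= 2; fd++) {
        int flags = fcntl(fd, F_GETFL);     /* file status flags, or -1 if fd is not open */
        if (flags == -1) {
            printf("fd %d: not open\n", fd);
            continue;
        }
        int mode = flags & O_ACCMODE;       /* keep only the access-mode bits */
        printf("fd %d: %s\n", fd,
               mode == O_RDWR   ? "read/write" :
               mode == O_RDONLY ? "read-only"  : "write-only");
    }
    return 0;
}
On a plain terminal all three typically report read/write; run it as ./a.out </dev/null >/tmp/out and fd 0 becomes read-only while fd 1 becomes write-only.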

read(STDIN_FILENO, message, 20);
write(STDOUT_FILENO, message, 20);
Should work. Note: stdout may be a different place from stdin (even on the command line). You can feed output from another process into your process as stdin, or arrange for stdin/stdout to be files.
fprintf/fgets are buffered, which reduces the number of system calls.

Best guess: stdin points to where the input is coming from (your terminal), and stdout points to where output should be going (your terminal). Since they both point to the same place, they are interchangeable (in this case)?

If you run a program on UNIX
myapp < input > output
You can open /proc/{pid}/fd/1 and read from it, open /proc/{pid}/fd/0 and write to it, and, for example, copy the output back into the input. (There is possibly a simpler way to do this, but I know it works.)
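A rough C sketch of the same idea (the PID 12345 is made up; substitute the PID of the running myapp):
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int out = open("/proc/12345/fd/1", O_RDONLY);   /* the other process's stdout */
    int in  = open("/proc/12345/fd/0", O_WRONLY);   /* the other process's stdin  */
    if (out == -1 || in == -1) {
        perror("open");
        return 1;
    }

    char buf[4096];
    ssize_t n;
    while ((n = read(out, buf, sizeof buf)) > 0)    /* copy its output back into its input */
        write(in, buf, n);
    return 0;
}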
You can do any manner of things which are plain confusing if you put your mind to it. ;)

It's very possible that file descriptors 0, 1, and 2 are all open for both reading and writing (and in fact that they all refer to the same underlying "open file description"), in which case what you're doing will work. But as far as I know, there's no guarantee, so it also might not work. I do believe POSIX somewhere specifies that if stderr is connected to the terminal when a program is invoked by the shell, it's supposed to be readable and writable, but I can't find the reference right now.
Generally, I would recommend against ever reading from stdout or stderr unless you're looking for a terminal to read a password from, and stdin has been redirected (not a tty). And I would recommend never writing to stdin - it's dangerous and you could end up clobbering a file the user did not expect to be written to!
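A rough sketch of that narrow exception (the helper name is made up, and a real password prompt would also turn echo off with tcsetattr):
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Read a line from the terminal even when stdin has been redirected,
   by falling back to stderr if that is still the read/write terminal. */
int read_from_terminal(char *buf, size_t size)
{
    int fd = STDIN_FILENO;
    if (!isatty(fd) && isatty(STDERR_FILENO))
        fd = STDERR_FILENO;
    ssize_t n = read(fd, buf, size - 1);
    if (n <= 0)
        return -1;
    buf[n] = '\0';
    buf[strcspn(buf, "\n")] = '\0';      /* strip the trailing newline, if any */
    return 0;
}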

Related

Are stdin and stdout actually the same file?

I am completely confused. Is it possible that stdin, stdout, and stderr point to the same file descriptor internally?
Because in C it makes no difference, when I want to read a string from the console, whether I use stdin or stdout as the input.
read(1, buf, 200) works the same as read(0, buf, 200). How is this possible?
(0 == STDIN_FILENO == fileno(stdin),
1 == STDOUT_FILENO == fileno(stdout))
When the input comes from the console, and the output goes to the console, then all three indeed happen to refer to the same file. (But the console device has quite different implementations for reading and writing.)
Anyway, you should use stdin/stdout/stderr only for their intended purpose; otherwise, redirections like the following would not work:
<inputfile myprogram >outputfile
(Here, stdin and stdout refer to two different files, and stderr refers to the console.)
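One way to see which case you are in is to compare the device and inode of the three descriptors with fstat; a small sketch:
#include <stdio.h>
#include <sys/stat.h>

int main(void)
{
    struct stat st[3];
    for (int fd = 0; fd <= 2; fd++)
        if (fstat(fd, &st[fd]) == -1)
            return 1;

    /* Two descriptors name the same file if device and inode match. */
    fprintf(stderr, "stdin == stdout: %s\n",
            st[0].st_dev == st[1].st_dev && st[0].st_ino == st[1].st_ino ? "yes" : "no");
    fprintf(stderr, "stdin == stderr: %s\n",
            st[0].st_dev == st[2].st_dev && st[0].st_ino == st[2].st_ino ? "yes" : "no");
    return 0;
}
Run it plain and then as <inputfile myprogram >outputfile to see the answers change.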
One thing that some people seem to be overlooking: read is the low-level system call. Its first argument is a Unix file descriptor, not a FILE* like stdin, stdout and stderr. You should be getting a compiler warning about this:
warning: passing argument 1 of ‘read’ makes integer from pointer without a cast [-Wint-conversion]
int r = read(stdout, buf, 200);
^~~~~~
On my system, it doesn't work with either stdin or stdout. read always returns -1, and errno is set to EBADF, which is "Bad file descriptor". It seems unlikely to me that those exact lines work on your system: the pointer would have to point to memory address 0, 1 or 2, which won't happen on a typical machine.
To use read, you need to pass it STDIN_FILENO, STDOUT_FILENO or STDERR_FILENO.
To use a FILE* like stdin, stdout or stderr, you need to use fread instead.
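To make the distinction concrete, a short sketch using both interfaces side by side:
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    char buf[200];

    /* Low-level system call: takes an integer file descriptor. */
    ssize_t n = read(STDIN_FILENO, buf, sizeof buf);

    /* stdio: takes a FILE * stream such as stdin. */
    size_t m = fread(buf, 1, sizeof buf, stdin);

    fprintf(stderr, "read() got %zd bytes, fread() got %zu bytes\n", n, m);
    return 0;
}
Running echo hello | ./a.out shows read() consuming the data and fread() finding nothing left.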
is it possible that stdin, stdout, and stderr point to the same filedescriptor internally?
A file descriptor is an index into the file descriptor table of your process (see also credentials(7)...). By definition STDIN_FILENO is 0, STDOUT_FILENO is 1, and STDERR_FILENO is 2. Read about proc(5) to query information about some process (for example, try ls -l /proc/$$/fd in your interactive shell).
The program (usually, but not always, some shell) which has execve(2)-d your executable might have called dup2(2) to share (i.e. duplicate) some file descriptors.
See also fork(2), intro(2) and read some Linux programming book, such as the old ALP.
Notice that read(2) from STDOUT_FILENO could fail (e.g. with errno(3) being EBADF) in the (common) case where stdout is not readable (e.g. after redirection by the shell). If stdout is the console, it may be readable. Read also the Tty Demystified.
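A tiny sketch of that failure mode:
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    char buf[64];
    ssize_t n = read(STDOUT_FILENO, buf, sizeof buf);   /* try to read from fd 1 */
    if (n == -1)
        fprintf(stderr, "read from fd 1 failed: %s\n", strerror(errno));
    else
        fprintf(stderr, "read from fd 1 returned %zd bytes\n", n);
    return 0;
}
On the console it will usually wait for you to type a line; with ./a.out > /tmp/out the read fails immediately with EBADF.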
There is nothing prohibiting any number of file handles from referring to the same thing in the kernel.
And the default for a terminal-program is to have STDIN, STDOUT and STDERR refer to the same terminal.
So, it might look like it doesn't matter which you use, but it will all go wrong if the caller does any handle-redirection, which is quite common.
The most common is piping output from one program into the input of the next, while keeping stderr out of the pipe so it still reaches the terminal.
An example for the shell:
source | filter | sink
Programs such as login and xterm typically open the tty device once when creating a new terminal session, and duplicate the file descriptor two or three times, arranging for file descriptors 0, 1 and 2 to be linked to the open file description of the opened tty device. They typically close all other file descriptors before exec-ing the shell. So if no further redirection is done by the shell or its child processes, the file descriptors, 0, 1 and 2, remain linked to the same file. Because the underlying tty device was opened in read-write mode, all three file descriptors have both read and write access.

I cannot write to stdin with /proc/{pid}/fd/0

I have this program:
#include <stdio.h>

int main() {
    char buf[10];
    puts("gimme input:");
    fread(buf, 1, 10, stdin);
    printf("got %s", buf);
}
When I run this and open another terminal I try to write to stdin:
echo "ASDFASDFASDF" > /proc/{pid}/0
ASDFSADFSADF gets printed on the terminal that is running my C program, but fread still doesn't return until I type in the actual terminal. It also does not print any of the text that I wrote to /proc/{pid}/0
Is there something else I have to do to programatically input text to stdin?
If stdin is a terminal, then writing something to stdin will write to the terminal. Reading from the terminal will read whatever is typed into the terminal, not what's written to the terminal. This is just how terminals work.
If you want a program to read from something other than a terminal, you have to arrange for that to happen. Or, if you want to use a virtual terminal that you can put information into and have it read back out, you have to set that up.
Probably the simplest solution is to create a named pipe (FIFO) with mkfifo and have the program read from the pipe rather than a terminal.
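For example, a minimal sketch of that approach (the FIFO path is made up):
#include <stdio.h>
#include <sys/stat.h>
#include <sys/types.h>

int main(void)
{
    if (mkfifo("/tmp/myprog.in", 0600) == -1)    /* create the named pipe */
        perror("mkfifo");                        /* it may already exist  */

    puts("gimme input:");

    FILE *in = fopen("/tmp/myprog.in", "r");     /* blocks until a writer opens the FIFO */
    if (in == NULL)
        return 1;

    char buf[10];
    if (fgets(buf, sizeof buf, in) != NULL)
        printf("got %s", buf);
    return 0;
}
Then echo hello > /tmp/myprog.in from another terminal unblocks the read.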
When you execute the echo command with its output redirected to file descriptor 0, you're just sending text to whatever that descriptor refers to. If you check the file descriptor using ls -l, it is probably pointing to a TTY or PTY/PTS device. If you check the FD type using lsof, it will be tty. That means you need to interact with this FD as a TTY.
Basically you need to simulate the input to get the expected behavior.
You can do this with the TIOCSTI ioctl, which injects characters into the terminal's input queue. I added Python code to the following similar question: Writing to File descriptor 0 (STDIN) only affects terminal. Program doesn't read
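The linked answer uses Python; in C the same idea looks roughly like this (the PID is made up, the ioctl normally requires the terminal to be your controlling tty or CAP_SYS_ADMIN, and recent kernels can disable TIOCSTI entirely):
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/proc/12345/fd/0", O_WRONLY);   /* the target's terminal */
    if (fd == -1) {
        perror("open");
        return 1;
    }

    const char *text = "injected line\n";
    for (const char *p = text; *p != '\0'; p++)
        if (ioctl(fd, TIOCSTI, p) == -1) {         /* push one character into the input queue */
            perror("ioctl(TIOCSTI)");
            return 1;
        }
    return 0;
}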

Why is it possible to write() to STDIN?

I have the following code:
#include <unistd.h>

int main()
{
    char str[] = "Hello\n";
    write(0, str, 6);   // write() to STDIN
    return 0;
}
When I compiled and executed this program, Hello was printed in the terminal.
Why did it work? Did write() replace my 0 (STDIN) argument with 1 (STDOUT)?
Well, old Unix systems were originally used with serial terminals, and a special program, getty, was in charge of managing the serial devices: opening and configuring them, displaying a message on an incoming connection (break signal), and passing the opened file descriptors to login and then to the shell.
It opened the tty device for input/output in order to configure it, and that descriptor was then duplicated into file descriptors 0, 1 and 2. The (still good old) stty command operates by default on standard input. To ensure compatibility, on modern Linux systems, when you are connected to a terminal, file descriptor 0 is still opened for input/output.
It can be used as a quick and dirty hack to display prompts only when standard input is connected to a terminal, because if standard input is redirected from a read-only file or pipe, all writes will fail (without any harm to the process) and nothing will be printed. But it is a dirty hack anyway: just imagine what happens if a caller passes a file opened for input/output as standard input... That's why good practice recommends using stderr for prompts or messages, to avoid having them lost in a redirected stream while keeping output and input in separate streams, which is neither harder nor longer.
TL/DR: if you are connected to a terminal, standard input is opened for input/output even if the name and standard usage could suggest it is read only.
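For illustration, the hack looks like this (as noted above, writing the prompt to stderr is the better practice):
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* The prompt appears when stdin is the terminal, and the write silently
       fails when stdin is redirected from a read-only file or pipe. */
    write(STDIN_FILENO, "Enter your name: ", 17);

    char name[64];
    if (fgets(name, sizeof name, stdin) != NULL)
        printf("hello, %s", name);
    return 0;
}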
Because by default your terminal will echo stdin back out to the console. Try redirecting it to a file; it didn't actually write to stdout.
Are you confusing write with fwrite? The first parameter of write is a "file descriptor", but it's not stdin. Try doing an fwrite to stdin -- it doesn't happen.

Is STDIN_FILENO a default input for /bin/more?

I have written a very simple pipe in a small C program where the parent process writes to the pipe and the child process reads it and displays it through more. I have used dup2 to attach the read descriptor to STDIN.
else    /* where pid = fork() is 0, i.e. the child */
{
    close(fd[1]);                       /* close the write end of the pipe */
    if (fd[0] != STDIN_FILENO)
    {
        if (dup2(fd[0], STDIN_FILENO) != STDIN_FILENO)
        {
            perror("dup2 redirection");
            exit(1);
        }
        close(fd[0]);                   /* the duplicate on fd 0 is enough */
    }
    execl("/bin/more", "more", (char *)0);
}
The last part has been taken from some existing code. My question is: how does /bin/more know that it has to work on STDIN? If I run a plain more in an AIX session, it throws an error. But when more is run with execl from the C code, it runs without any argument and takes STDIN as its input. Can someone please explain?
I have written simple pipes before as well, where I had to read explicitly using the read descriptor. But here it seems that /bin/more reads from it without being instructed to.
Yes, more reads from standard input if it is not given a file to read. If the input is a pipe (or file) and the standard output is a terminal, it will read its keyboard commands from standard output (which sounds preposterous, but actually works). But if the input is a pipe and the standard output is not a terminal, it is designed to behave like cat, because normally it would read from the terminal at the end of each page of output, but that's where the data is coming from anyway.
To get it to work 'normally', you'd have to use a "pty" (pseudo-tty) for the input, which is not a trivial exercise.
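Roughly, "use a pty" means something like the following sketch (error handling and terminal-mode setup are omitted, and the sample text is made up):
#define _XOPEN_SOURCE 600
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    /* Create a pseudo-terminal pair: we write into the master side, and the
       child gets the slave side as stdin, so it believes it is a terminal. */
    int master = posix_openpt(O_RDWR);
    if (master == -1 || grantpt(master) == -1 || unlockpt(master) == -1) {
        perror("pty setup");
        return 1;
    }

    if (fork() == 0) {                          /* child: run more on the pty */
        int slave = open(ptsname(master), O_RDWR);
        dup2(slave, STDIN_FILENO);
        close(slave);
        close(master);
        execl("/bin/more", "more", (char *)0);
        _exit(1);
    }

    const char *text = "line 1\nline 2\n";
    write(master, text, strlen(text));          /* feed data through the pty */
    close(master);                              /* lets more see end of input */
    wait(NULL);
    return 0;
}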

Resources