I have the following code:
#include <unistd.h>

int main()
{
    char str[] = "Hello\n";
    write(0, str, 6); // write() to STDIN
    return 0;
}
When I compiled and executed this program, Hello was printed in the terminal.
Why did it work? Did write() replace my 0 (STDIN) argument with 1 (STDOUT)?
Well, old Unix systems were originally used with serial terminals, and a special program, getty, was in charge of managing the serial devices: opening and configuring them, displaying a message on an incoming connection (break signal), and passing the opened file descriptors to login and then to the shell.
It opened the tty device for input/output in order to configure it, and that descriptor was then duplicated into file descriptors 0, 1 and 2. The (still good old) stty command operates by default on standard input. For compatibility, on modern Linux systems, when you are connected to a terminal, file descriptor 0 is still opened for input/output.
It can be used as a quick and dirty hack to only display prompts when standard input is connected to a terminal: if standard input is redirected to a read-only file or pipe, all writes will fail (without any harm to the process) and nothing will be printed. But it is a dirty hack nonetheless: just imagine what happens if a caller passes a file opened for input/output as standard input... That's why good practice recommends using stderr for prompts and messages, so they are not lost in a redirected stream while output and input stay in separate streams; doing so is neither harder nor longer.
TL;DR: if you are connected to a terminal, standard input is opened for input/output, even though the name and standard usage might suggest it is read-only.
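For example, here is a minimal sketch (assuming a POSIX system) that probes whether fd 0 is writable; run it directly and then with stdin redirected from a file to see the difference:
#include <stdio.h>
#include <unistd.h>

int main()
{
    const char msg[] = "written to fd 0\n";
    /* Succeeds when fd 0 is the terminal (opened read/write); typically
       fails with EBADF when stdin is redirected from a read-only file. */
    if (write(0, msg, sizeof msg - 1) == -1)
        perror("write to fd 0");
    return 0;
}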
Because by default your terminal will echo stdin back out to the console. Try redirecting it to a file; it didn't actually write to stdout.
Are you confusing write with fwrite? The first parameter of write is a file descriptor, but it's not stdin. Try doing an fwrite to stdin -- it doesn't happen.
Related
I was tasked with creating a test program in C that reads the contents of the standard input and then prints them.
But I have a little doubt: what is exactly standard input?
Is it what I type in the keyboard? Is it a file I have to read?
Both of them?
And the same goes for standard output: is it the console? a file?
The C standard (e.g. C99 or C11) defines what should be expected from the standard <stdio.h> header (after having suitably #include-d it). See the stdio(3) man page.
Then you have the stdin and stdout and stderr file handles (pointers to some FILE which is an abstract data type).
The fact that stdin is related to some device (e.g. a keyboard) is implementation specific.
You could (but it would be unethical and/or inefficient) implement the C standard with e.g. a room full of human workers instead of using a computer (with slaves it would be unethical; with paid workers it would just be inefficient). Usually, computers give you an implementation of the C standard through the help of some operating system.
You may want to know, inside your C program, if stdin is a "keyboard" or redirected from some "file". Unfortunately, AFAIK, there is no C99-standard way to know that.
As you mention, stdin, stdout and stderr should be available in your program at startup (i.e. after entering main). Hence, unless you fclose the stdin stream, you can read it (with getchar, scanf, getline, fgets, fscanf and friends) without any prior setup (so you don't need to fopen it yourself).
On Linux or most POSIX systems, you might use isatty(STDIN_FILENO) as an approximation - see isatty(3) for more - to test whether stdin "is" the "keyboard" (by testing if it is some tty).
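A minimal sketch of that test (POSIX only):
#include <stdio.h>
#include <unistd.h>

int main()
{
    if (isatty(STDIN_FILENO))
        printf("stdin is a terminal (probably the keyboard)\n");
    else
        printf("stdin is redirected (a file, a pipe, ...)\n");
    return 0;
}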
Yes, standard input (stdin) is the input expected from the keyboard, so it could be what a user types into a basic program, or input supplied from the command line. Standard output (stdout) is the output of the code, usually the terminal window. You could send your output almost anywhere, e.g. to a file, a textbox, or a browser, but the standard destination is stdout, which is the terminal.
Hope that helps.
Normally, the standard input is the keyboard and the standard output the screen. However, you can redirect this in the command line using the "<" and ">" symbols. A command line like
dir /s > "Tree.txt"
will change the standard output for the dir command to be the specified file. So all output goes to that file. The called application or command itself doesn't normally even notice the difference.
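For instance, a minimal sketch of the kind of program the question asks for - read standard input and print it back - works unchanged whether stdin is the keyboard, a pipe or a redirected file:
#include <stdio.h>

int main()
{
    int c;
    while ((c = getchar()) != EOF)  /* read stdin byte by byte */
        putchar(c);                 /* echo each byte to stdout */
    return 0;
}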
stdin is file descriptor 0, you can get a file to stdin by:
cat file |yourprog
#or
yourprog <file
likewise for stdout (file descriptor 1)
yourprog | someotherprog #pipe your stdout to the stdin of another program
yourprog > somefile #save stdout to a file
yourprog >> somefile #append stdout to a file
and stderr (fd 2)
yourprog 2> errlogfile
if you have a program that takes a file but doesn't handle stdin, you can use the above forms by doing this (assuming -f is the input-file argument)
myprog -f /dev/stdin
//and a horrible example of how not to read from stdin and write to stdout
char buf[4096];
while(write(1,buf,read(0,buf,4096)));
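A less horrible sketch of the same loop, still using raw read()/write() on fds 0 and 1, but with EOF and error handling and without ignoring short writes:
#include <unistd.h>

int main()
{
    char buf[4096];
    ssize_t n;
    while ((n = read(0, buf, sizeof buf)) > 0) {
        ssize_t off = 0;
        while (off < n) {                        /* write() may be partial */
            ssize_t w = write(1, buf + off, n - off);
            if (w < 0)
                return 1;                        /* write error */
            off += w;
        }
    }
    return n < 0 ? 1 : 0;                        /* read error vs. clean EOF */
}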
standard output (or stdout) refers to the standardized stream of data produced by command-line programs (i.e., all-text mode programs) in Linux and other Unix-like operating systems.
I was working on an assignment where a program took a file descriptor as an argument (generally from the parent in an exec call) and read from a file and wrote to a file descriptor, and in my testing, I realized that the program would work from the command-line and not give an error if I used 0, 1 or 2 as the file descriptor. That made sense to me except that I could write to stdin and have it show on the screen.
Is there an explanation for this? I always thought there was some protection on stdin/stdout and you certainly can't fprintf to stdin or fgets from stdout.
#include <stdlib.h>
#include <stdio.h>
#include <unistd.h>

int main()
{
    char message[20];
    read(STDOUT_FILENO, message, 20);
    write(STDIN_FILENO, message, 20);
    return 0;
}
Attempting to write to a file opened read-only (or vice versa) would cause write and read to return -1 and fail. In this specific case, stdin and stdout are actually the same file. In essence, before your program executes (if you don't do any redirection) the shell goes:
if (!fork()) {
    /* close all inherited file descriptors */
    int fd = open("/dev/tty1", O_RDWR);  /* becomes fd 0 */
    dup(fd);                             /* fd 1 */
    dup(fd);                             /* fd 2 */
    execvp("name", argv);
}
So, stdin, out, and err are all duplicates of the same file descriptor, opened for reading and writing.
read(STDIN_FILENO, message, 20);
write(STDOUT_FILENO, message, 20);
Should work. Note - stdout may be a different place from stdin (even on the command line). You can feed output from another process as stdin into your process, or arrange for stdin/stdout to be files.
fprintf/fgets have a buffer - thus reducing the number of system calls.
Best guess - stdin points to where the input is coming from (your terminal), and stdout points to where output should be going (your terminal). Since they both point to the same place, they are interchangeable (in this case)?
If you run a program on UNIX
myapp < input > output
You can open /proc/{pid}/fd/1 and read from it, open /proc/{pid}/fd/0 and write to it and for example, copy output to input. (There is possibly a simpler way to do this, but I know it works)
You can do any manner of things which are plain confusing if you put your mind to it. ;)
It's very possible that file descriptors 0, 1, and 2 are all open for both reading and writing (and in fact that they all refer to the same underlying "open file description"), in which case what you're doing will work. But as far as I know, there's no guarantee, so it also might not work. I do believe POSIX somewhere specifies that if stderr is connected to the terminal when a program is invoked by the shell, it's supposed to be readable and writable, but I can't find the reference right off.
Generally, I would recommend against ever reading from stdout or stderr unless you're looking for a terminal to read a password from, and stdin has been redirected (not a tty). And I would recommend never writing to stdin - it's dangerous and you could end up clobbering a file the user did not expect to be written to!
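If you want to check rather than guess, one approach (a sketch, assuming a POSIX system) is to ask fcntl() how each descriptor was opened; on a terminal all three are often read/write, after a redirect they usually are not:
#include <fcntl.h>
#include <stdio.h>

int main()
{
    for (int fd = 0; fd <= 2; fd++) {
        int flags = fcntl(fd, F_GETFL);  /* access mode and status flags */
        if (flags == -1) {
            perror("fcntl");
            continue;
        }
        int mode = flags & O_ACCMODE;
        printf("fd %d: %s\n", fd,
               mode == O_RDWR   ? "read/write" :
               mode == O_RDONLY ? "read-only"  : "write-only");
    }
    return 0;
}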
This problem may be a little hard to state. For example, a program receives a string from stdin, but it needs interactive input from the user, like this:
echo "Some text to handle later after command is specified" | a.out
And at the beginning of the program:
char cmd[64];
printf("Please input command first: ");
scanf("%63s", cmd);
/* Some Code Here */
/* process "Some text to handle later after command is specified" */
Is there a way to "suspend" the piped input stream and wait for scanf's input first?
The standard does not specify any way to get interactive user input besides reading from stdin. Since your stdin is occupied with a pipe, you need to tread an implementation-specific path.
For Unix-like systems that would be a special file named /dev/tty. fopen it and use normal stdio functions.
On Windows you probably need to call Console API.
There's no guarantee a program is attached to any interactive device, so be prepared for this to fail.
Note that it's considered bad style to write programs this way. If any user input is expected, a well-written program should just use stdin. All other input streams should then be passed as filenames via command-line arguments.
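That said, here is a sketch of the /dev/tty approach mentioned above (Unix-like systems only; the 64-byte command buffer is an arbitrary choice): read the interactive command from the terminal while stdin stays connected to the pipe:
#include <stdio.h>

int main()
{
    char cmd[64];
    FILE *tty = fopen("/dev/tty", "r+");   /* the controlling terminal */
    if (tty == NULL) {
        fprintf(stderr, "no controlling terminal available\n");
        return 1;
    }
    fprintf(tty, "Please input command first: ");
    fflush(tty);                           /* required before switching to reading */
    if (fscanf(tty, "%63s", cmd) != 1)
        return 1;
    fclose(tty);

    /* now process the piped data arriving on stdin */
    int c;
    while ((c = getchar()) != EOF)
        putchar(c);
    return 0;
}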
When using pipes, the shell sets up the program's stdin to come from the output of the previous command. So reading should not be a problem.
The problem here is that you should not print any output if the input is from a pipe (or redirection). This can be done by checking the result of the isatty function:
if (isatty(fileno(stdin)))
{
    /* Only print prompt if input is an interactive terminal */
    printf(...);
}
scanf(...);
Or am I misreading you, in that you want to read both from the user, and from the pipe? Then you probably have to open a direct connection to the terminal.
For this you could use ttyname to get the name of the TTY device of stdout and open that device for input to read the user input. That won't work if the stdout is leading to a pipe (or is being redirected) as well.
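A sketch of that ttyname() idea (POSIX only): find the terminal device behind stdout and open it for reading; as noted, it fails if stdout itself goes to a pipe or a redirect:
#include <stdio.h>
#include <unistd.h>

int main()
{
    char answer[64];
    char *dev = ttyname(STDOUT_FILENO);    /* NULL if stdout is not a tty */
    if (dev == NULL) {
        fprintf(stderr, "stdout is not a terminal\n");
        return 1;
    }
    FILE *tty = fopen(dev, "r");
    if (tty == NULL)
        return 1;
    fprintf(stderr, "Enter something: ");
    if (fscanf(tty, "%63s", answer) == 1)
        printf("you typed: %s\n", answer);
    fclose(tty);
    return 0;
}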
I am working on a Linux daemon and having some issues with stdin/stdout. Normally, because of the nature of a daemon, you do not have any stdin or stdout. However, I do have a function in my daemon that is called when the daemon runs for the first time, to specify different parameters that are required for the daemon to run successfully. When this function is called, the terminal becomes so sluggish that I have to launch a separate shell and kill the daemon with top to get a responsive prompt back. I suspect that this has something to do with the forking process closing stdin/stdout, but I am not quite sure how I could work around this. If you could shed some light on the situation, that would be most appreciated. Thanks.
Edit:
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/stat.h>

int main(int argc, char *argv[]) {
    /* setup signal handling */
    /* check command line arguments */
    pid_t pid, sid;
    pid = fork();
    if (pid < 0) {
        exit(EXIT_FAILURE);
    }
    if (pid > 0) {
        exit(EXIT_SUCCESS);
    }
    sid = setsid();
    if (sid < 0) {
        exit(EXIT_FAILURE);
    }
    umask(027);
    /* set up syslogging */
    /* do some logic to determine whether we are running the daemon for the
       first time and, if so, call the one-time function which uses fgets()
       to receive some input */
    while (1) {
        /* do required work */
    }
    /* do some clean up procedures and exit */
    return 0;
}
You guys mention using a config file. This is exactly what I do to store the parameters received via input. However, I still initially need to get these from the user via stdin. The logic for determining whether we are running for the first time is based on the existence of the config file.
Normally, the standard input of a daemon should be connected to /dev/null, so that if anything is read from standard input, you get an EOF immediately. Normally, standard output should be connected to a file - either a log file or /dev/null. The latter means all writes will succeed, but no information will be stored. Similarly, standard error should be connected to /dev/null or to a log file.
All programs, including daemons, are entitled to assume that stdin, stdout and stderr are appropriately opened file streams.
It is usually appropriate for a daemon to control where its input comes from and where its outputs go. There is seldom occasion for input to come from anything other than /dev/null. If the code was written to survive without standard output or standard error (for example, it opens a standard log channel, or perhaps uses syslog(3)), then it may be appropriate to close stdout and stderr. Otherwise, it is probably appropriate to redirect them to /dev/null, while still logging messages to a log file. Alternatively, you can redirect both stdout and stderr to a log file - but beware of continuously growing log files.
Your sluggish-to-impossible response time might be because your program is not paying attention to EOF in a read loop somewhere. It might be prompting for user input on /dev/null and reading a response from /dev/null; not getting a 'y' or 'n' back, it tries again, which chews up your system horribly. Of course, the code is flawed in not handling EOF and in not counting the number of invalid responses and stopping after a reasonable number of attempts (16, 32, 64). The program should shut up shop sanely and safely if it expects meaningful input and continues not to get it.
You guys mention using a config file. This is exactly what I do to store the parameters received via input. However, I still initially need to get these from the user via stdin. The logic for determining whether we are running for the first time is based on the existence of the config file.
Instead of reading stdin, have the user write the config file themselves; check for its existence before forking, and exit with an error if it doesn't exist. Include a sample config file with the daemon, and document its format in your daemon's manpage. You do have a manpage, yes? Your config file is textual, yes?
Also, your daemonization logic is missing a key step. After forking, but before calling setsid, you need to close fds 0, 1, and 2 and reopen them to /dev/null (do not attempt to do this with fclose and fopen). That should fix your sluggish terminal problem.
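A sketch of that redirection step, using open() and dup2() rather than fclose()/fopen(); the helper name is made up for illustration:
#include <fcntl.h>
#include <unistd.h>

/* Hypothetical helper: point fds 0, 1 and 2 at /dev/null. */
int redirect_stdio_to_devnull(void)
{
    int fd = open("/dev/null", O_RDWR);
    if (fd < 0)
        return -1;
    dup2(fd, STDIN_FILENO);
    dup2(fd, STDOUT_FILENO);
    dup2(fd, STDERR_FILENO);
    if (fd > STDERR_FILENO)
        close(fd);      /* keep only 0, 1 and 2 referring to /dev/null */
    return 0;
}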
Your design is wrong. Daemon processes should not take input via stdin or deliver output to stdout/stderr. You'll close those descriptors as part of the daemonizing phase. Daemons should take configuration parameters from the command line, a config file, or both. If runtime-input is required you'll have to read a file, open a socket, etc., but the point of a daemon is that it should be able to run and do its thing without a user being present at the console.
If you want to run your program detached, use the shell: (setsid <command> &). Do not fork() inside your program; that causes a sysadmin nightmare.
Don't use syslog() or redirect stdout or stderr.
Better yet, use a daemon manager such as daemontools, runit, OpenRC or systemd to daemonize your program for you.
Use a config file. Do not use STDIN or STDOUT with a daemon. Daemons are meant to run in the background with no user interaction.
If you insist on using stdin/keyboard input to fire up the daemon (e.g. to get some magic passphrase you wouldn't want to store in a file), then handle all I/O before the fork().
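A sketch of that order of operations; the config-file path and the passphrase prompt are made-up examples, and the daemonization itself is left as a comment:
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main()
{
    char passphrase[128] = "";

    /* gather interactive input while still attached to the terminal */
    if (access("/etc/mydaemon.conf", F_OK) != 0) {   /* hypothetical first-run check */
        printf("Enter passphrase: ");
        fflush(stdout);
        if (fgets(passphrase, sizeof passphrase, stdin) == NULL)
            return 1;                                /* EOF or error: refuse to continue */
        passphrase[strcspn(passphrase, "\n")] = '\0';
        /* ... write the config file using the passphrase ... */
    }

    /* only now fork(), setsid() and redirect fds 0-2 to /dev/null */
    return 0;
}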