I have a piece of software that can read commands from stdin for debugging purposes in a separate thread. When my software runs as a foreground process, read behaves as expected: it blocks and waits for input from the user, i.e. the thread sleeps.
When the software is run as a background process, read constantly returns 0 (possibly because it detects EOF?).
The problem is that this particular read sits in a while(true) loop. It runs as fast as it can and steals precious CPU time on my embedded device.
I tried redirecting /dev/null to the process, but the behavior was the same. I am running my custom Linux on an ARM Cortex-A5 board.
The problematic piece of code follows and is run inside its own thread:
char bufferUserInput[256];
const int sizeOfBuffer = SIZE_OF_ARRAY(bufferUserInput);

while (1)
{
    int n = read(0, bufferUserInput, sizeOfBuffer); // filedes = 0 equals reading from stdin
    printf("n is: %d\n", n);
    printf("Errno: %s", strerror(errno));
    if (n == 1)
    {
        continue;
    }
    if ((1 < n)
        && (n < sizeOfBuffer)
        && ('\n' == bufferUserInput[n - 1]))
    {
        printf("\r\n");
        bufferUserInput[n - 1] = '\0';
        ProcessUserInput(&bufferUserInput[0]);
    }
    else
    {
        n = 0;
    }
}
I am looking for a way to prevent read from constantly returning when the program runs in the background, and instead have it wait for user input (which of course will never come).
If you start your program in the "background" (as ./program &) from a shell script, its stdin will be redirected from /dev/null (with some exceptions).
Trying to read from /dev/null will always return 0 (EOF).
Example (on Linux):
sh -c 'ls -l /proc/self/fd/0 & wait'
... -> /dev/null
sh -c 'dd & wait'
... -> 0 bytes copied, etc
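The same behavior is easy to confirm from C; here is a minimal stand-alone sketch (not taken from your program) that just opens /dev/null and reads from it:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    char buf[16];
    int fd = open("/dev/null", O_RDONLY);
    ssize_t n = read(fd, buf, sizeof(buf));

    printf("read returned %zd\n", n);   /* prints 0: immediate EOF, no blocking */
    close(fd);
    return 0;
}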
The usual fix, keeping the original stdin on a spare descriptor in the script and handing it back to your program, should also work for you:
#! /bin/sh
...
exec 3<&0
./your_program <&3 &
...
When stdin is not a terminal, read is returning with 0 because you are at the end of the file. read only blocks after reading all available input when there could be more input in the future, which is considered to be possible for terminals, pipes, sockets, etc. but not for regular files nor for /dev/null. (Yes, another process could make a regular file bigger, but that possibility isn't considered in the specification for read.)
Ignoring the various problems with your read loop that other people have pointed out (which you should fix anyway, as that will make reading debug commands from the user more reliable), the simplest change to your code that fixes the problem you're having right now is this: check on startup whether stdin is a terminal, and don't launch the debug thread if it isn't. You can do that with the isatty function, declared in unistd.h.
#include <stdio.h>
#include <unistd.h>

// ...

int main(void)
{
    if (isatty(fileno(stdin)))
        start_debug_thread();

    // ...
}
(Depending on your usage context, it might also make sense to run the debug thread when stdin is a pipe or a socket, but I would personally not bother, I would rely on ssh to provide a remote (pseudo-)terminal when necessary.)
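If you do want to allow pipes and sockets as well, here is a sketch of how that check might look (the helper name is made up; it classifies whatever stdin happens to be):

#include <sys/stat.h>
#include <unistd.h>

/* Hypothetical helper: decide whether the debug thread should be started. */
static int debug_input_available(void)
{
    struct stat st;

    if (isatty(STDIN_FILENO))
        return 1;                                      /* terminal or pseudo-terminal */
    if (fstat(STDIN_FILENO, &st) == 0)
        return S_ISFIFO(st.st_mode) || S_ISSOCK(st.st_mode);
    return 0;                                          /* /dev/null, regular file, or error */
}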
read() doesn't return 0 when reading from the terminal in a backgrounded process.
It either continues to block while causing a SIGTTIN to be sent to the process (which may interrupt the blocking and return retval=-1, errno=EINTR), or it returns retval=-1, errno=EIO if SIGTTIN is ignored.
The snippet below demonstrates this:
#include <unistd.h>
#include <stdio.h>
#include <signal.h>

int main()
{
    char c[256];
    ssize_t nr;

    signal(SIGTTIN, SIG_IGN);
    nr = read(0, c, sizeof(c));
    printf("%zd\n", nr);
    if (0 > nr)
        perror(0);
    fflush(stdout);
}
The code snippet you've shown can't possibly reveal 0 returns anyway, since you never test the return value for zero.
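Either way, a debug-read loop should handle all three possible outcomes of read(). Here is a rough sketch that does that, reusing the buffer and the ProcessUserInput() function assumed from the question (the thread entry point name is made up):

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

void ProcessUserInput(char *cmd);               /* assumed from the question */

void *debug_thread(void *unused)                /* hypothetical thread entry point */
{
    char bufferUserInput[256];
    (void)unused;

    for (;;)
    {
        ssize_t n = read(STDIN_FILENO, bufferUserInput, sizeof bufferUserInput - 1);

        if (n > 0)
        {
            bufferUserInput[n] = '\0';          /* terminate before any string use */
            if (n > 1 && bufferUserInput[n - 1] == '\n')
            {
                bufferUserInput[n - 1] = '\0';
                ProcessUserInput(bufferUserInput);   /* complete line: hand it off */
            }
            /* a fragment without a newline is ignored, as in the original loop */
        }
        else if (n == 0)
        {
            break;                              /* EOF: stop the thread instead of spinning */
        }
        else if (errno != EINTR)
        {
            printf("read failed: %s\n", strerror(errno));
            break;                              /* real error: give up */
        }
    }
    return NULL;
}

With the EOF and error cases handled, the thread simply stops instead of burning CPU even if stdin does turn out to be /dev/null.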
Related
I had this simple shell-like program that works in both interactive and non-interactive mode. I have simplified the code as much as I can to present my question, but it is still a bit long, so sorry for that!
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <unistd.h>

/**
 * main - entry point for gbk
 * Return: returns 0 on success
 */
int main(void)
{
    char *cmd = malloc(1 * sizeof(char)), *cmdargs[2];
    size_t cmdlen = 0;
    int childid, len;
    struct stat cmdinfo;

    while (1)
    {
        printf("#cisfun$ ");
        len = getline(&cmd, &cmdlen, stdin);
        if (len == -1)
        {
            free(cmd);
            exit(-1);
        }
        /* replace the ending newline with \0 */
        cmd[len - 1] = '\0';
        cmdargs[0] = cmd;
        cmdargs[1] = NULL;

        childid = fork();
        if (childid == 0)
        {
            if (stat(*cmdargs, &cmdinfo) == 0 && cmdinfo.st_mode & S_IXUSR)
                execve(cmdargs[0], cmdargs, NULL);
            else
                printf("%s: command not found\n", *cmdargs);
            exit(0);
        }
        else
            wait(NULL);
    }
    free(cmd);
    exit(EXIT_SUCCESS);
}
To summarize what this program does: it first prints the prompt #cisfun$, waits for input in interactive mode (or takes the piped value in non-interactive mode), then creates a child process. The child checks whether the string passed is a valid executable binary; if it is, it executes it, otherwise it prints a "command not found" message and the loop prompts again.
I have got this program to work fine for most scenarios in interactive mode, but when I run it in non-interactive mode all sorts of crazy (unexpected) things start to happen.
For example, when I run echo "/bin/ls"|./a.out (a.out is the name of the compiled program),
you would first expect the #cisfun$ prompt to be printed, since that is the first thing done in the while loop, then the output of the /bin/ls command, and finally another #cisfun$ prompt. But that isn't what actually happens:
it is very weird that the ls output appears even before the first prompt. At first I thought there was some threading going on and printf was slower than the child process executing the ls command, but I am not sure that is true, as I am a noob. Things also get a bit crazier if I print the prompt with a '\n' at the end rather than a bare string: if I change printf("#cisfun$ "); to printf("#cisfun$\n");, it works as it should.
So it got me thinking: what is the relation between '\n', fork and the speed of printf? In short, what is the explanation for this?
The second question I have is: why doesn't my program execute the first command and then go into interactive mode? I don't understand why it terminates after printing the second #cisfun$ prompt. By checking the exit status (255) I realized the effect is the same as pressing Ctrl+D in interactive mode, which I believe makes getline return -1 and the program exit. But I don't understand why EOF is seen at the second prompt.
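For what it's worth, the ordering described above is consistent with stdout being fully buffered when it goes to a pipe: the prompt sits in the stdio buffer and only reaches the pipe when the buffer is flushed (typically at exit), so the child's ls output gets there first. A minimal stand-alone sketch of that idea (an illustration, not the full program): flush the prompt before forking.

#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    printf("#cisfun$ ");      /* without a newline this stays in the stdio buffer when stdout is a pipe */
    fflush(stdout);           /* force it out before fork() copies the buffer into the child */

    if (fork() == 0)
    {
        execlp("ls", "ls", (char *)NULL);
        _exit(127);           /* exec failed; _exit() does not flush inherited stdio buffers */
    }
    wait(NULL);
    return 0;
}

Run as ./a.out | cat, the prompt now appears before the listing; without the fflush() it only shows up when the program exits.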
I have a sample program that takes in an input from the terminal and executes it in a cloned child in a subshell.
#define _GNU_SOURCE
#include <stdlib.h>
#include <sys/wait.h>
#include <sched.h>
#include <unistd.h>
#include <string.h>
#include <signal.h>

int clone_function(void *arg) {
    execl("/bin/sh", "sh", "-c", (char *)arg, (char *)NULL);
}

int main() {
    while (1) {
        char data[512] = {'\0'};
        int n = read(0, data, sizeof(data));
        // fgets(data, 512, stdin);
        // int n = strlen(data);
        if ((strcmp(data, "exit\n") != 0) && n > 1) {
            char *line;
            char *lines = strdup(data);
            while ((line = strsep(&lines, "\n")) != NULL && strcmp(line, "") != 0) {
                void *clone_process_stack = malloc(8192);
                void *stack_top = clone_process_stack + 8192;
                int clone_flags = CLONE_VFORK | CLONE_FS;
                clone(clone_function, stack_top, clone_flags | SIGCHLD, (void *)line);

                int status;
                wait(&status);
                free(clone_process_stack);
            }
        } else {
            exit(0);
        }
    }
    return 0;
}
The above code works on an older Linux system (with minimal RAM) but not on a newer one. By "not works" I mean that if I type a simple command like "ls" I don't see the output on the console, whereas on the older system I do.
Also, if I run the same code under gdb, then I see the output printed onto the console on the newer system as well.
In addition, if I use fgets() instead of read() it works as expected on both systems without an issue.
I have been trying to understand the behavior and I couldn't figure it out. I tried running it under strace. The difference I see is that in the case where it works, the ls output shows up around the wait() return, and in the case where it doesn't work there is nothing.
The only thing I can think of is that read(), since it's not a library function, behaves differently across systems. But I can't see how that would affect the output.
Can someone point out to me why I might be observing this behavior?
EDIT
The code is compiled as:
gcc test.c -o test
strace when it's not working as expected is shown below
strace when it's working as expected (only difference is I added a printf("%d\n", n); following the call for read())
Thank you
Shabir
There are multiple problems in your code:
- a successful read system call can return any non-zero number of bytes between 1 and the buffer size, depending on the type of handle and the available input. It does not stop at newlines the way fgets() does, so you might get line fragments, multiple lines, or multiple lines plus a fragment.
- furthermore, if read fills the whole buffer, as it might when reading from a regular file, there is no trailing null terminator, so passing the buffer to string functions has undefined behavior.
- the test if ((strcmp(data, "exit\n") != 0) && n > 1) { is performed in the wrong order: first test whether read was successful, and only then test the buffer contents.
- you do not set a null terminator after the last byte read by read, relying instead on buffer initialization, which is wasteful and insufficient if read fills the whole buffer. Instead you should make data one byte longer than the read size argument and set data[n] = '\0'; when n > 0.
Here are ways to fix the code:
- using fgets(), you can remove the line splitting code: just strip leading and trailing white space, ignore empty and comment lines, then clone and execute the commands.
- using read(), you could read one byte at a time, collect the bytes into the buffer until you have a complete line, null terminate the buffer, and use the same rudimentary parser as above. This approach mimics fgets() while bypassing the buffering performed by the standard streams: it is quite inefficient, but it avoids reading from handle 0 past the end of the line, leaving pending input available for the child process to read (see the sketch after this list).
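A rough sketch of that second approach, assuming the rest of the program (clone_function() and the clone() call) stays as in the question:

#include <errno.h>
#include <unistd.h>

/* Read one line from fd into buf (at most size - 1 bytes), one byte at a
 * time so that nothing beyond the newline is consumed from the descriptor.
 * Returns the number of bytes stored, 0 on end of file before any byte,
 * or -1 on error. The result is always null terminated. */
static ssize_t read_line(int fd, char *buf, size_t size)
{
    size_t used = 0;

    while (used + 1 < size)
    {
        char c;
        ssize_t n = read(fd, &c, 1);

        if (n == 0)                 /* end of file */
            break;
        if (n < 0)
        {
            if (errno == EINTR)
                continue;           /* interrupted by a signal: retry */
            return -1;
        }
        buf[used++] = c;
        if (c == '\n')
            break;                  /* stop at end of line */
    }
    buf[used] = '\0';
    return (ssize_t)used;
}

In main(), something like ssize_t n = read_line(0, data, sizeof(data)); would replace the raw read() call; the exit check and the strsep() loop can then stay as they are.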
It looks like 8192 is simply too small a stack size on a modern system: execl needs more than that, so you are hitting a stack overflow. Increase the value to 32768 or so and everything should start working again.
I've found that an open file stream gets messed up if we fork() before closing it. It is well known that concurrency issues, i.e. race conditions, can happen when the parent and child process both want to modify the file stream. But even when the child process never touches the file stream, it still behaves unexpectedly. I was wondering if someone could explain this, perhaps in terms of how the kernel deals with the file stream while the child process is forked and then exits.
Below is a quick snippet of a strange behavior:
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>

int main() {
    // Open file
    FILE* fp = fopen("test.txt", "r");

    int count = 0;
    char* buffer = NULL;
    size_t capacity = 0;
    ssize_t line = 0;
    while ( (line = getline(&buffer, &capacity, fp)) != -1 ) {
        if (line > 0 && buffer[line - 1] == '\n') // remove the trailing '\n'
            buffer[line - 1] = 0;

        pid_t pid = fork();
        if (pid == 0) {
            // fclose(fp); // Magic line here: when you add this, everything is fine
            if (*buffer == '2')
                execlp("xyz", "xyz", NULL);
            else
                execlp("pwd", "pwd", NULL);
            exit(1);
        } else {
            waitpid(pid, NULL, 0);
        }
        count++;
    }
    printf("Loops: %d\n", count);
    return 0;
}
Just copy the code into a new file (e.g., test.c) and create a file test.txt with this simple content:
1
2
3
4
and run
$ gcc test.c && ./a.out
There are 4 lines in the file. The loop is expected to read each line and execute exactly 4 times (1 2 3 4), and I chose to have it exec an invalid command, "xyz", in the 2nd iteration. You will then find that the loop actually executes 6 times (1 2 3 4 3 4)! In fact, when all four commands executed are valid, nothing goes wrong; but if an invalid command is executed, every line after it is processed twice. (Please note that this strange behavior only occurs on my Linux machine; macOS is doing okay, and I'm not sure about Windows. So the problem is platform-dependent?)
It looks like whenever I fork(), the file stream in the parent is no longer guaranteed to be the old fp (non-deterministic behavior), even though my child process doesn't touch it.
A temporary solution I found is to fclose(fp) in the child process. This silences the strange behavior above, but in more complex situations other oddities can still be observed. I would appreciate it if somebody could give me some insight into this problem. Thanks.
As said in the comments already, you need to close open file descriptors before calling exec.
In this blog post (section 4) there is a neat code sample you can use to ensure all fds are closed, even in complex applications where you don't always know which files are open at the moment:
for (i = getdtablesize(); i > 2; --i)
    close(i); /* close all descriptors */
(slightly modified to keep stdin, stdout, stderr open)
It's kind of hacky, but it works. If you want to avoid that, you can instead set the O_CLOEXEC flag on each file descriptor that you open. Since fopen does not call open() directly, you can accomplish this by adding the 'e' flag to the mode string (when using glibc >= 2.7):
FILE* fp = fopen("test.txt", "er");
When exec*() is called, all file descriptors with this flag are closed automatically.
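If you are dealing with a raw file descriptor rather than a FILE *, the same effect can be had by setting the FD_CLOEXEC flag with fcntl() (or by passing O_CLOEXEC to open() in the first place). A small sketch:

#include <fcntl.h>

/* Mark an already-open descriptor as close-on-exec. Returns 0 on success, -1 on error. */
static int set_cloexec(int fd)
{
    int flags = fcntl(fd, F_GETFD);

    if (flags == -1)
        return -1;
    return fcntl(fd, F_SETFD, flags | FD_CLOEXEC);
}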
I need a command in cmd that works like pause, but that still lets my code continue.
e.g.
system("pause");
some lines of code;
The problem with system("pause") is that "some lines of code" will not run until the user presses something.
I want to be able to tell cmd to continue with some command.
I want something that runs the code but updates cmd only when I give it permission.
If I understand correctly, the code should produce output which you don't want to be shown before you press a key. If you don't mind having the output paged, you could use something like
FILE *stream = popen("PAUSE<CON&&MORE", "w");
and let the code output to stream (with fprintf(stream, ...) etc.).
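For completeness, a sketch of how that might be wired up end to end (the PAUSE<CON&&MORE command string comes from the line above; depending on the toolchain, popen and pclose may be spelled _popen and _pclose):

#include <stdio.h>

int main(void)
{
    /* PAUSE reads a key from the console (CON) first; only then does MORE
     * start paging whatever we write into the pipe. */
    FILE *stream = popen("PAUSE<CON&&MORE", "w");
    if (stream == NULL)
        return 1;

    fprintf(stream, "some lines of output\n");   /* write here instead of to stdout */
    fprintf(stream, "more output\n");

    pclose(stream);                              /* waits for the pipeline to finish */
    return 0;
}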
Don't ever use system() if you can avoid it. It's crude, error-prone, and non-portable.
C11 introduces threading support, including thrd_sleep(). That should be your preferred solution (if supported by your compiler setup).
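A minimal sketch of that C11 approach (assuming your toolchain ships <threads.h>):

#include <stdio.h>
#include <threads.h>

int main(void)
{
    struct timespec pause_for = { .tv_sec = 5, .tv_nsec = 0 };

    thrd_sleep(&pause_for, NULL);   /* sleep at least 5 seconds; NULL: remaining time not needed */
    puts("continuing...");
    return 0;
}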
If your compiler vendor does not support C11, bugger him about it. That standard is almost four years old now.
WinAPI defines the Sleep() function:
VOID WINAPI Sleep(
_In_ DWORD dwMilliseconds
);
This function causes a thread to relinquish the remainder of its time
slice and become unrunnable for an interval based on the value of
dwMilliseconds.
#include <windows.h>

int main()
{
    Sleep( 5000 ); // pause execution for at least 5 seconds
    some_lines_of_code;
    return 0;
}
I think what you're looking for is a method to check if stdin contains data ready to read; you want to use some non-blocking or asynchronous I/O so that you can read input when it becomes available, and perform other tasks until then.
You won't find a whole heap about non-blocking/asynchronous I/O in standard C, but in POSIX C you can set STDIN_FILENO to non-blocking using fcntl. As an example, here's a program that prompts you to press enter (like pause does) and busy-loops, allowing your code to perform other (preferably non-blocking) work inside the loop while it waits for the keystroke (ahem, byte, since stdin is technically a file):
#include <stdio.h>
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>

int main(void) {
    char c;
    puts("Press any key to continue...");
    fcntl(STDIN_FILENO, F_SETFL, fcntl(STDIN_FILENO, F_GETFL, 0) | O_NONBLOCK);
    errno = 0;
    while (read(STDIN_FILENO, &c, 1) != 1 && errno == EAGAIN) {
        /* code in here will execute repeatedly until a key is struck or a byte is sent */
        errno = 0;
    }
    if (errno) {
        /* code down here will execute when an input error occurs */
    }
    else {
        /* code down here will execute when that precious byte is finally sent */
    }
}
That's non-blocking I/O. Other alternatives include asynchronous I/O or extra threads. You should probably use non-blocking or event-driven I/O (e.g. epoll or kqueue) for this particular task; spinning up an extra thread just to find out when a character arrives on stdin is likely a little bit too hefty.
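If you'd rather keep stdin in blocking mode, here is a sketch of the same idea using poll() (again POSIX, not standard C): wait up to a timeout for input and do the other work whenever the timeout expires.

#include <poll.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    struct pollfd pfd = { .fd = STDIN_FILENO, .events = POLLIN };
    char c;

    puts("Press enter to continue...");
    for (;;)
    {
        int ready = poll(&pfd, 1, 100);          /* wait up to 100 ms for input on stdin */

        if (ready > 0 && (pfd.revents & POLLIN))
        {
            if (read(STDIN_FILENO, &c, 1) == 1)  /* consume the byte that arrived */
                break;
        }
        /* do the other periodic work here while waiting */
    }
    puts("continuing...");
    return 0;
}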
I am writing a C program on Unix which should redirect its output to a file and write some text to it every second in an infinite loop:
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    int outDes = open("./output.txt", O_APPEND | O_WRONLY);
    dup2(outDes, 1);

    while (1) {
        printf("output text\n");
        sleep(1);
    }
}
But it writes nothing to the output file. I tried changing the 'while' loop to a 'for' loop with 10 iterations, and I found that it writes all 10 lines to the file at once after the loop ends. That is not good for me, since I need an infinite loop.
When I'm not redirecting output, everything is fine and a new line appears on the terminal every second.
I also tried to put one
printf("text\n");
before redirecting output to the file. Then the program wrote the lines to the file in real time, which is good, but it wrote the first (non-redirected) line there too. I don't want this first line in my output file; I don't understand how it could end up in the file when output was not yet redirected (maybe a redirect remained there from the last run?), nor how it could cause the lines to suddenly be written in real time.
Can anyone explain to me how this works?
You are not checking the return values of open() and dup2(). If either open() or dup2() fails, nothing will be written to output.txt.
if (outDes == -1) {
    perror("open");
    return 1;
}

if (dup2(outDes, 1) == -1) {
    perror("dup2");
    return 1;
}
stdio streams are buffered, and writes happen in memory before being done on the real file descriptor.
Try adding a fflush(stdout) after printf().
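A sketch of the question's loop with that one change (and with the return values checked as well, as the previous answer suggests):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    int outDes = open("./output.txt", O_APPEND | O_WRONLY);
    if (outDes == -1 || dup2(outDes, 1) == -1)
        return 1;

    while (1) {
        printf("output text\n");
        fflush(stdout);          /* push the buffered line out to the file right away */
        sleep(1);
    }
}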
You're running afoul of a poorly documented DWIMmy feature in many Unix C libraries. The first time you write to stdout or stderr, the library probes the underlying file descriptor (with isatty(3)). If it's a (pseudo-)terminal, the library puts the FILE in "line buffered" mode, meaning that it will buffer output until a newline is written and then flush it all to the OS. But if the file descriptor is not a terminal, it puts the FILE in "fully buffered" mode, where it buffers something like BUFSIZ bytes of output before flushing them and pays no attention to line breaks.
This is normally the behavior you want, but if you don't want it (as in this case), you can change it with setvbuf(3). This function (although not the behavior I described above) is ISO standard C. Here's how to use it in your case.
#include <stdio.h>
#include <unistd.h>

int
main(void)
{
    if (!freopen("output.txt", "a", stdout)) {
        perror("freopen");
        return 1;
    }
    if (setvbuf(stdout, 0, _IOLBF, 0)) {
        perror("setvbuf");
        return 1;
    }
    for (;;) {
        puts("output text");
        sleep(1);
    }
    /* not reached */
}