This question already has answers here:
Output redirection using fork() and execl()
(2 answers)
Closed 8 years ago.
I was trying to redirect the output from an Arduino (USB) to a file on the computer using the following code:
#include <stdio.h>
#include <stdlib.h>
#include <signal.h>
#include <unistd.h>   /* for fork() */
int main()
{
pid_t pid;
pid = fork();
if (pid == 0) {
execl("/bin/cat","cat /dev/cu.usbmodem1421 - 9600 >> data.txt",NULL);
}
printf("Cuando desee terminar la recolección de datos presione cualquier tecla: ");
getchar();
kill(pid, SIGKILL);
return 0;
}
Using ps to verify that everything is fine, I can see the process running behind my main program. After stopping the program, the data file has nothing in it. I tried to use system(), which is a little bit nasty because I need to kill the program manually from the OSX terminal. I think maybe the syntax is wrong and all I need is to add another parameter, but nothing seems to work.
As written, your code executes /bin/cat with the name (argv[0]) of:
cat /dev/cu.usbmodem1421 - 9600 >> data.txt
and no other arguments, so it reads from its standard input and writes to its standard output and sits around until it detects EOF on its standard input (or you kill it).
The simplest option is to use system() to run the command.
Failing that, you will need to split up the string into separate arguments, and handle the I/O redirection in the child. Note that the code would read from 3 files:
/dev/cu.usbmodem1421
-
9600
The second would be interpreted as standard input again. If the 9600 is meant to be a modem speed or something, cat is the wrong command.
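If you do go the fork()/execl() route, a minimal sketch of the child side might look like the following (it keeps the device path and the data.txt name from the question, uses open/dup2 in place of the shell's >> redirection, and reduces error handling to perror; it is an illustration, not a drop-in fix):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* child process: append everything cat reads from the serial device to data.txt */
static void run_child(void)
{
    int fd = open("data.txt", O_WRONLY | O_CREAT | O_APPEND, 0644);
    if (fd == -1) {
        perror("open data.txt");
        _exit(1);
    }
    dup2(fd, STDOUT_FILENO);   /* data.txt becomes the child's standard output */
    close(fd);
    execl("/bin/cat", "cat", "/dev/cu.usbmodem1421", (char *)NULL);
    perror("execl");           /* reached only if execl() fails */
    _exit(1);
}

The parent can keep the getchar()/kill() logic from the question unchanged.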
Seems like you're doing a fork/execl to do something you could do using simple file I/O.
That being said... the execl syntax is: you pass each parameter separately, followed by a null pointer, and the first argument after the path becomes the program's argv[0]. So, taking your command literally, something like this:
execl("/bin/cat", "cat", "/dev/cu.usbmodem1421", "-", "9600", ">>", "data.txt", (char *) NULL);
Note, though, that execl does not involve a shell, so ">>" and "data.txt" would be handed to cat as literal file names; the redirection itself has to be set up with open/dup2 in the child (or by using system(), which does run a shell).
I have this simple shell-like program that works in both interactive and non-interactive mode. I have simplified the code as much as I can to present my question, but it is still a bit long, so sorry for that!
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <unistd.h>
/**
 * main - entry point for gbk
 * Return: 0 on success
 */
int main(void)
{
char *cmd = malloc(1 * sizeof(char)), *cmdargs[2];
size_t cmdlen = 0;
int childid, len;
struct stat cmdinfo;
while (1)
{
printf("#cisfun$ ");
len = getline(&cmd, &cmdlen, stdin);
if (len == -1)
{
free(cmd);
exit(-1);
}
/*replace the ending new line with \0*/
cmd[len - 1] = '\0';
cmdargs[0] = cmd;
cmdargs[1] = NULL;
childid = fork();
if (childid == 0)
{
if (stat(*cmdargs, &cmdinfo) == 0 && cmdinfo.st_mode & S_IXUSR)
execve(cmdargs[0], cmdargs, NULL);
else
printf("%s: command not found\n", *cmdargs);
exit(0);
}
else
wait(NULL);
}
free(cmd);
exit(EXIT_SUCCESS);
}
To summarize what this program does: it first prints the prompt #cisfun$, waits for input in interactive mode (or takes the piped value in non-interactive mode), and creates a child process. The child process checks whether the string passed is a valid executable binary; if it is, it executes it, otherwise it prints a "command not found" message, and then the loop prompts again.
I have got this program to work fine for most of the scenarios in interactive mode, but when I run it in non-interactive mode all sorts of crazy (unexpected) things start to happen.
For example, when I run echo "/bin/ls" | ./a.out (a.out is the name of the compiled program),
you would first expect the #cisfun$ prompt to be printed, since that is the first thing done in the while loop, then the output of the /bin/ls command, and finally another #cisfun$ prompt. But that isn't what actually happens. Here is what happens:
It is very weird: the ls command runs even before the first prompt is printed. At first I thought there was some threading going on and printf was slower than the child process executing the ls command, but I am not sure if that is true, as I am a noob. Things also get a bit crazier if I print a message with '\n' at the end rather than a bare string (if I change printf("#cisfun$ "); to printf("#cisfun$\n");), then the following happens:
It works as it should, so it got me wondering what the relation is between '\n', fork, and the speed of printf. In short, what is the explanation for this?
The second question I have is: why doesn't my program execute the first command and then go into interactive mode? I don't understand why it terminates after printing the second #cisfun$ message. By checking the status code (255) after exit I have realized that the effect is the same as pressing Ctrl+D in interactive mode, which I believe makes getline return -1 and the program exit. But I don't understand why EOF is seen at the second prompt.
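A tiny standalone experiment (separate from the program above) makes the role of '\n' visible. When stdout is a terminal it is line-buffered, so a prompt printed without a newline just sits in the stdio buffer; anything a child process writes in the meantime reaches the terminal first, and the buffered text only shows up on an explicit flush, on a newline, or when the process exits:

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    printf("#cisfun$ ");   /* no '\n': on a terminal this stays in the buffer for now */
    sleep(2);              /* for these 2 seconds the prompt is not visible yet */
    fflush(stdout);        /* now it appears; a '\n' in the format string would do the same */
    sleep(2);
    return 0;
}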
I'm writing a shell in C. The shell has internal and external commands. The external commands can be extended with an ampersand (&) so they run in the background.
When I type e.g program&, the program executes with no problem in the background, making the shell available while the program is executing.
But, the output of the program can get mixed with the normal output of the shell. And that's what I'm trying to fix, with no success.
Note: in the example above I treat program like any other command. The program basically sleeps and then prints "Hello, World!". Also, program is in /bin/, which is my default directory for external commands.
sish> pwd
[current directory]
sish> program&
sish> [now if I don't type anything...] Hello, World!
[now the current command is being written here].
For example, in bash this behavior is different: it seems that if the user is "still" typing a command, the output of the background program is held back (or waits), so the user can finish and run the command without the output getting in the way.
I read a bit about signals and tried to set up a signal handler that printed something on SIGCHLD, but it had the same behavior.
CODE:
while(1) {
int internal=0;
int background=0;
int redirect=0; // 1 - output ; 2 - input
// those variables are not important for the question
printf("sish> ");
fgets(command,BUFFER_SIZE,stdin);
... (some lines that are not important for the question)
child = fork(); // pid_t child -> global variable
if(!child) {
if(redirect==1) {
int fd = open(words[2],O_WRONLY|O_CREAT|O_TRUNC,0660);
dup2(fd,1);
execlp(words[0],words[0],NULL);
}
else if(redirect==2) {
int fd = open(words[2],O_RDWR);
dup2(fd,0);
execlp(words[0],words[0],NULL);
}
else {
if(execvp(bin_path,words)==-1) {
printf("Error! Does the program exist?\n");
}
}
}
NOTES:
- I KNOW I'M NOT CHECKING FOR ERRORS IN THE FORK, I WILL ADD THAT WHEN I SOLVE THIS BUG.
- I ALSO PRINTED THE STDERR WITH perror, I GOT NOTHING.
I expect this (e.g):
sish> pwd
[current directory]
sish> program&
sish> [I wait...] pwd
[current directory]
Hello, World!
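For the part about keeping the shell's own messages tidy, one common pattern is to let the SIGCHLD handler do nothing except record that something finished, and to report finished jobs just before printing the next prompt, the way job-control shells print their "Done" notices. The sketch below is only an illustration: the names on_sigchld, child_done and report_finished_jobs are made up here, and the background program's own output will still go straight to the terminal unless you redirect it (which is also what happens under bash).

#include <signal.h>
#include <stdio.h>
#include <sys/wait.h>

static volatile sig_atomic_t child_done = 0;

static void on_sigchld(int sig)
{
    (void)sig;
    child_done = 1;                      /* async-signal-safe: only set a flag */
}

/* call once before the main loop: signal(SIGCHLD, on_sigchld); */

/* call at the top of the loop, right before printf("sish> "); */
static void report_finished_jobs(void)
{
    int status;
    pid_t pid;

    if (!child_done)
        return;
    child_done = 0;
    while ((pid = waitpid(-1, &status, WNOHANG)) > 0)
        printf("[done] pid %d\n", (int)pid);  /* printed while nothing is being typed */
}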
I have a piece of software that is able to read commands from stdin, for debug purposes, in a separate thread. When my software runs as a foreground process, read behaves as expected: it blocks and waits for input from the user, i.e. the thread sleeps.
When the software is run as a background process, read constantly returns 0 (possibly EOF detected?).
The problem here is that this specific read is in a while(true) loop. It runs as fast as it can and steals precious CPU time on my embedded device.
I tried redirecting /dev/null to the process but the behavior was the same. I am running my custom Linux on an ARM Cortex-A5 board.
The problematic piece of code follows and is run inside its own thread:
char bufferUserInput[256];
const int sizeOfBuffer = SIZE_OF_ARRAY(bufferUserInput);
while (1)
{
int n = read(0, bufferUserInput, sizeOfBuffer); //filedes = 0 equals to reading from stdin
printf("n is: %d\n", n);
printf("Errno: %s",strerror(errno));
if (n == 1)
{
continue;
}
if ((1 < n)
&& (n < sizeOfBuffer)
&& ('\n' == bufferUserInput[n - 1]))
{
printf("\r\n");
bufferUserInput[n - 1] = '\0';
ProcessUserInput(&bufferUserInput[0]);
} else
{
n = 0;
}
}
I am looking for a way to prevent read from constantly returning when running in the background and wait for user input (which of course will never come).
If you start your program in the "background" (as ./program &) from a shell script, its stdin will be redirected from /dev/null (with some exceptions).
Trying to read from /dev/null will always return 0 (EOF).
Example (on linux):
sh -c 'ls -l /proc/self/fd/0 & wait'
... -> /dev/null
sh -c 'dd & wait'
... -> 0 bytes copied, etc
The fix from the link above should also work for you:
#! /bin/sh
...
exec 3<&0             # duplicate the script's original stdin onto file descriptor 3
./your_program <&3 &  # the background process reads fd 3 (the real stdin) instead of /dev/null
...
When stdin is not a terminal, read is returning with 0 because you are at the end of the file. read only blocks after reading all available input when there could be more input in the future, which is considered to be possible for terminals, pipes, sockets, etc. but not for regular files nor for /dev/null. (Yes, another process could make a regular file bigger, but that possibility isn't considered in the specification for read.)
Ignoring the various problems with your read loop that other people have pointed out (which you should fix anyway, as this will make reading debug commands from the user more reliable) the simplest change to your code that will fix the problem you're having right now is: check on startup whether stdin is a terminal, and don't launch the debug thread if it isn't. You do that with the isatty function, declared in unistd.h.
#include <stdio.h>
#include <unistd.h>
// ...
int main(void)
{
if (isatty(fileno(stdin)))
start_debug_thread();
// ...
}
(Depending on your usage context, it might also make sense to run the debug thread when stdin is a pipe or a socket, but I would personally not bother, I would rely on ssh to provide a remote (pseudo-)terminal when necessary.)
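If you did decide to also cover pipes and sockets, a small sketch of that check could look like this (start_debug_thread is the same placeholder used in the example above):

#include <sys/stat.h>
#include <unistd.h>

/* returns nonzero if stdin is something the debug thread can sensibly block on */
static int stdin_supports_debug_thread(void)
{
    struct stat st;

    if (isatty(STDIN_FILENO))
        return 1;
    if (fstat(STDIN_FILENO, &st) == 0)
        return S_ISFIFO(st.st_mode) || S_ISSOCK(st.st_mode);
    return 0;
}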
read() doesn't return 0 when reading from the terminal in a backgrounded process.
It either continues to block, while causing a SIGTTIN to be sent to the process (which may interrupt the blocking and cause a return value of -1 with errno set to EINTR), or it fails with a return value of -1 and errno set to EIO if SIGTTIN is ignored.
The snippet below demonstrates this:
#include <unistd.h>
#include <stdio.h>
#include <signal.h>
int main()
{
char c[256];
ssize_t nr;
signal(SIGTTIN,SIG_IGN);
nr = read(0,&c,sizeof(c));
printf("%zd\n", nr);
if(0>nr) perror(0);
fflush(stdout);
}
The code snippet you've shown can't reveal 0 returns anyway, since you never test for zero in the return value.
This question already has an answer here:
C/Unix Strange behaviour while using system calls and printf
(1 answer)
Closed 3 years ago.
I just started using the system() function in C, and I thought of starting the same executable from within itself using the system function, so I wrote the following program:
#include <stdlib.h>
#include <stdio.h>
int main()
{
printf("some string");
system("./a.out");
}
(I used gcc to compile it.)
When I ran the program, it did not print anything; it just kept going until I used Ctrl-C to stop the execution, and then it started printing the output (it did not print anything until I stopped it).
I believe the statements should execute sequentially, so why did it not print anything until I stopped it?
By default, when stdout is connected to a terminal, it is line-buffered.
printf("some string");
doesn't have a '\n' in it and you aren't calling fflush(stdout); after it either, so all this printf("some string"); does is copy "some string" into your stdout's output buffer.
The buffer is flushed at the end of main, i.e. when each process finally exits.
printf("some string\n"); would flush the buffer immediately, provided stdout is connected to a terminal and you didn't change stdout's buffering.
printf("some string"); fflush(stdout); will flush the buffer immediately regardless of context and without the need for the '\n'.
The following simplified piece of code is executed by a thread in the background. The thread runs until it is told to exit (by user input).
In the code below I have removed some error checking for better readability. Even with error checking the code works well and both the master and the slave are created and/or opened.
...
int master, slave;
char *slavename;
char *cc;
master = posix_openpt(O_RDWR);
grantpt(master);
unlockpt(master);
slavename = ptsname(master);
slave = open(slavename, O_RDWR);
printf("master: %d\n",master);
printf("slavename: %s\n",slavename);
On my machine the output is the following:
master: 3
slavename: /dev/pts/4
So I thought that opening an xterm with the command xterm -S4/3 (4 = pt-slave, 3 = pt-master) while my program is running should open a new xterm window for the created pseudoterminal. But xterm just starts running without giving an error or any further information, and it does not open a window at all. Any suggestions on that?
EDIT:
Now with Wumpus Q. Wumbley's help xterm starts normally, but I can't redirect any output to it. I tried:
dup2(slave, 1);
dup2(slave, 2);
printf("Some test message\n");
and opening the slave with fopen and then using fprintf. Neither worked.
The xterm process needs to get access to the file descriptor somehow. The intended usage of this feature is probably to launch xterm as a child process of the one that created the pty. There are other ways, though. You could use SCM_RIGHTS file descriptor passing (pretty complicated) or, if you have a Linux-style /proc filesystem try this:
xterm -S4/3 3<>/proc/$PID_OF_YOUR_OTHER_PROGRAM/fd/3
You've probably seen shell redirection operators before: < for stdin, > for stdout, 2> for stderr (file descriptor 2). Maybe you've also seen other file descriptors being opened for input or output with things like 3<inputfile 4>outputfile. Well, the 3<> operator here is another one: it opens file descriptor 3 in read/write mode. And /proc/PID/fd/NUM is a convenient way to access files opened by another process.
I don't know about the rest of the question. I haven't tried to use this mode of xterm before.
OK, the trick with /proc was a bad idea. It's equivalent to a fresh open of /dev/ptmx, creating a new unrelated pty.
You're going to have to make the xterm a child of your pty-creating program.
Here's the test program I used to explore the feature. It's sloppy but it revealed some interesting things. One interesting thing is that xterm writes its window ID to the pty master after successful initialization. This is something you'll need to deal with. It appears as a line of input on the tty before the actual user input begins.
Another interesting thing is that xterm (the version in Debian at least) crashes if you use -S/dev/pts/2/3 in spite of that being specifically mentioned in the man page as an allowed format.
#define _XOPEN_SOURCE 600  /* expose posix_openpt(), grantpt(), unlockpt(), ptsname() on glibc */
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <unistd.h>
#include <fcntl.h>
int main(void)
{
int master;
char *slavename, window[64], buf[64];
FILE *slave;
master = posix_openpt(O_RDWR);
grantpt(master);
unlockpt(master);
slavename = ptsname(master);
printf("master: %d\n", master);
printf("slavename: %s\n", slavename);
snprintf(buf, sizeof buf, "-S%s/%d", strrchr(slavename,'/')+1, master);
if(!fork()) {
execlp("xterm", "xterm", buf, (char *)0);
_exit(1);
}
slave = fopen(slavename, "r+");
fgets(window, sizeof window, slave);
printf("window: %s\n", window);
fputs("say something: ", slave);
fgets(buf, sizeof buf, slave);
fprintf(slave, "you said %s\nexiting in 3 seconds...\n", buf);
sleep(3);
return 0;
}