I'm working on a project to create a terminal shell in C (with Bash as a reference) and I eventually had to deal with pipes.
The way I implemented them works with basic commands like ls | rev | wc -l.
However, my program enters a never-ending loop when I try to pipe commands that never end on their own, like base64 /dev/urandom | head -c 1000; Bash handles that fine.
The way I built my pipeline makes my program wait for base64 to end before calling head.
I don't understand when and how I am supposed to wait and execute commands anymore.
How can I reproduce Bash's behavior with such piped commands in C? Did I make a simple mistake, or should I totally rethink my system?
Here is, in pseudo-code, how I do my command execution. It lacks details and safety measures like closing the pipes, but the whole idea is present.
while (command)
{
    pipe(fd);
    if (!fork())
    {
        dup2();
        execve(command);
    }
    wait();
    command = command->next;
}
The short answer is:
Get rid of the wait() for starters.
But there's more. Not that I'm an expert, but from observing Bash's behavior, I have learned that all components of a pipeline are executed simultaneously, in parallel.
The kernel delivers SIGPIPE to a process that writes into a pipe whose read end has been closed. base64 /dev/urandom terminates because it receives that signal once head has read its 1000 bytes, exited, and closed its stdin (the read end of the pipe).
As you have seen, base64 /dev/urandom never ends by itself.
Going forward, kick off all of the processes, creating pipes that connect each stage's stdout to the next stage's stdin, and only then wait for them. There's a lot of discussion of how to use pipes out there, so hopefully this is a nudge in the right direction.
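A minimal sketch of that structure (hard-coding the base64 | head example and using execvp rather than your execve, so treat it as illustrative only): fork every stage, wire the pipes, close the parent's copies of the pipe ends, and only wait after the loop.

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    char *cmd1[] = {"base64", "/dev/urandom", NULL};
    char *cmd2[] = {"head", "-c", "1000", NULL};
    char **stages[] = {cmd1, cmd2};
    int nstages = 2;
    pid_t pids[2];
    int prev_read = -1;                 /* read end of the previous pipe */

    for (int i = 0; i < nstages; i++) {
        int fd[2] = {-1, -1};
        if (i < nstages - 1 && pipe(fd) == -1) {
            perror("pipe");
            exit(1);
        }
        pids[i] = fork();
        if (pids[i] == 0) {
            if (prev_read != -1)        /* stdin comes from the previous stage */
                dup2(prev_read, STDIN_FILENO);
            if (i < nstages - 1)        /* stdout goes to the next stage */
                dup2(fd[1], STDOUT_FILENO);
            if (prev_read != -1) close(prev_read);
            if (fd[0] != -1) close(fd[0]);
            if (fd[1] != -1) close(fd[1]);
            execvp(stages[i][0], stages[i]);
            perror("execvp");
            _exit(127);
        }
        /* Parent: close the ends it no longer needs, but do NOT wait here;
         * keeping the write end open in the parent would also block SIGPIPE
         * from ever being delivered. */
        if (prev_read != -1) close(prev_read);
        if (fd[1] != -1) close(fd[1]);
        prev_read = fd[0];
    }

    /* Only once every stage is running, collect all of them. */
    for (int i = 0; i < nstages; i++)
        waitpid(pids[i], NULL, 0);
    return 0;
}

When head exits after 1000 bytes, the last reader of the pipe disappears, base64's next write raises SIGPIPE, and both waitpid calls return.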
Related
I am trying to write a program which forks and execs a child process and runs it in the background.
One approach I can see is to redirect the output to /dev/null and then come back to my main program. Any other ideas?
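Something like this is what I have in mind (a rough, untested sketch; the command name is just a placeholder):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();
    if (pid < 0)
        return 1;
    if (pid == 0) {
        /* Child: throw away stdout and stderr, then exec the real program. */
        int devnull = open("/dev/null", O_WRONLY);
        if (devnull != -1) {
            dup2(devnull, STDOUT_FILENO);
            dup2(devnull, STDERR_FILENO);
            close(devnull);
        }
        execlp("some_command", "some_command", (char *)NULL); /* placeholder */
        _exit(127);
    }
    /* Parent: does not wait; the child keeps running in the background. */
    printf("started child %d\n", (int)pid);
    return 0;
}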
After a process is started, the shell has no more control over the process's file descriptors, so you cannot silence it with a shell command; the process has its stdin, stdout and stderr bound to the terminal and you cannot do anything about that without regaining control over the terminal.
There is a tool called retty; it is used to attach to processes running on other terminals.
Besides that, you can also use the built-in disown command to disown the process, which prevents the shell from sending it SIGHUP when the shell exits.
I'm working on a fairly simple application in C. The end goal is to pipe the output from one process to the input of another in a *nix environment (yes, I am aware of the pipe() call and dup/dup2, but I'm trying to find a way around using those). I was wondering if there is any way to connect the streams rather than using file descriptors (the systems aren't guaranteed to be POSIX compliant).
So basically I want to do something like this (pseudo-code)
pid = fork()
if pid == 0
    // assign this process's stdin to the parent's stdout
    stdin = parent.stdout;
    exec() // launch new process that receives the parent's stdout as stdin
    // child stuff....
else
    // parent stuff....
I know that it probably won't be as simple as just doing an assignment as above, but is there any way to do this using only streams? I tried looking around, but couldn't find anything.
Thanks!
Sorry if I'm missing the point here, but the whole philosophy of *nix is one program, one job. If you need a program to dump the contents of a file to the screen, you have the cat command. If the file is too big and you need page breaks, you pipe the output of cat to the more command:
cat myfile.txt | more
If you need to pipe between two terminal applications then you're meant to use the command line to do so:
myprog1 | myprog2
Obviously that's the philosophical approach, so if that doesn't help, can you clarify what you're trying to pipe and why you're trying to do it in-process?
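One more aside, since the question was about streams: where popen() is available (it is POSIX, not ISO C, so it may not exist on every target system), it hands you a FILE* connected to another process without you calling pipe()/dup2() yourself. A small sketch:

#include <stdio.h>

int main(void)
{
    FILE *child = popen("ls", "r");   /* read the child's stdout as a stream */
    if (child == NULL)
        return 1;

    char line[256];
    while (fgets(line, sizeof line, child) != NULL)
        fputs(line, stdout);          /* forward everything the child prints */

    return pclose(child) == -1;
}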
I've got my program in C, 6 source files, and the aim is to copy those files to any other Linux computer and (probably after compiling; I'm a newbie, so not sure what is needed here) run this program in the background. Something like:
user@laptop:~$ program
Program is running in the background. In order to stop Program, type XXX.
Any tips on this?
Thanks in advance!
Put a daemon(0,0); call in your C program.
Stopping it is a bit trickier. Assuming there is only one copy of the program running: put the program's PID in a file, and write another utility (the XXX above) which reads the PID from the file and kills it.
Important: daemon() forks, so get the program's PID after calling daemon().
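A rough sketch of that, assuming a BSD/glibc-style daemon() and a PID-file path of my own choosing:

#define _DEFAULT_SOURCE   /* for daemon() with glibc */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* daemon() forks and detaches; getpid() below is therefore
     * already the PID of the background copy. */
    if (daemon(0, 0) == -1)
        return 1;

    FILE *f = fopen("/tmp/program.pid", "w");   /* example path only */
    if (f != NULL) {
        fprintf(f, "%d\n", (int)getpid());
        fclose(f);
    }

    for (;;)        /* the program's real work would go here */
        sleep(60);
}

The XXX stop utility would then read /tmp/program.pid and kill() that PID.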
But maybe you are too new to all this and just want to execute your program with program & and later kill it.
I completely misunderstood the question. You need shell scripting for this.
For copying the files you can use scp, and you can execute commands on the other host with ssh. It should be something like (not tested):
pid=`ssh user@host 'make >/dev/null 2>&1; nohup ./program >/dev/null 2>&1 & echo $!'`
Later you can stop it with
ssh user@host "kill $pid"
First, you should fork().
The parent should just exit; in the child process, you should handle (or ignore) the SIGHUP signal.
That way, you have a daemon.
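A bare-bones version of that (no setsid() or other cleanup, just the fork/exit/SIGHUP part described above):

#include <signal.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();
    if (pid < 0)
        exit(1);
    if (pid > 0)
        exit(0);                 /* parent exits immediately */

    /* Child: ignore SIGHUP so it survives the controlling shell exiting. */
    signal(SIGHUP, SIG_IGN);

    for (;;)                     /* daemon work would go here */
        sleep(60);
}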
I am trying to write a program that runs in the background and can "type" into its parent process, e.g. issue shell commands as if I had typed them myself at the keyboard. I have tried doing this with ungetc() to push a character back onto stdin:
#include <stdio.h>
int main (int argc, char** argv) {
    ungetc('x', stdin);
    return 0;
}
I would expect that doing:
$ gcc -o unget unget.c
$ ./unget&
would have left me at the $ prompt with an x there, as if I'd just typed it, but instead I get nothing. Have I "lost" STDIN by going into the background? Thanks!
What you're trying to do simply cannot work. ungetc operates on the stdio FILE buffer, not the underlying open file description, and thus there is no way for it to be shared with another process.
You might try running the interactive session in screen and using screen's exec command to redirect file descriptors through a process that will inject data. Or you could implement something like this yourself using pseudo-tty devices.
Further, from your comments, I think what you're trying to do is an extremely bad idea. If you get unlucky and the input arrives in the middle of you typing something interactively, it could have disastrous consequences. For instance, imagine the automated command is
command_foo my_important_file
Now suppose you're in the middle of typing
rm -rf useless_crap
Bam! my_important_file just got deleted.
This second answer is not so much an answer to your question as written, but to the problem you're trying to solve. It's much more robust than sending keystrokes to your shell.
In the shell, use the trap command to setup a signal handler. For example:
trap "echo hello" USR2
Replace USR2 with whatever signal you want to use. Then run a child process that periodically sends the signal to its parent.
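The background process itself can be tiny; for example (a sketch, assuming the shell that set the trap is still this process's parent, and using SIGUSR2 to match the USR2 trap above):

#include <signal.h>
#include <unistd.h>

int main(void)
{
    /* Every 10 seconds, signal the parent shell so its trap handler runs. */
    for (;;) {
        sleep(10);
        kill(getppid(), SIGUSR2);
    }
}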
No -- ungetc only "pushes" the character back into the program's own buffer, so the next character that same program reads will be what was passed to ungetc. Transmitting something back to the parent requires something entirely different (e.g., creating pipes).
Just off the top of my head, maybe you could tweak the file descriptors of your terminal so that the stdout of your child is the stdin of the parent. In bash it would go something like this:
exec 6>&1
exec 7<&0
exec 1>&7
exec 0<&6
Then when you start your programs the descriptors should be inverted, so everything you write to stdout should come out at stdin of your parent (the bash process in this case).
I am trying to control an ftp client from a C program (OS X). I did fork and execve and the process starts OK. The problem is with the pipes: I can send commands to the ftp client process and get feedback from it just fine (if I send "help\n" I get back the help output), but what I never get in the pipe is the "ftp> " prompt. Any ideas?
Ivan
Your ftp client is probably behaving differently depending on whether stdin/stdout is a terminal or something else (lots of programs do; for a start, the C library buffers differently...). If you want to control that, search for information about pseudo-terminals; that's a little too technical to be explained here. (And look first at programs like expect; it's possible you won't have to write your own.)
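If you do go the pseudo-terminal route, a minimal sketch with a BSD-style forkpty() might look like this (header and linking details vary: <util.h> on OS X, <pty.h> plus -lutil on Linux):

#include <stdio.h>
#include <unistd.h>
#include <util.h>    /* forkpty() on OS X; use <pty.h> on Linux */

int main(void)
{
    int master;
    pid_t pid = forkpty(&master, NULL, NULL, NULL);
    if (pid < 0)
        return 1;
    if (pid == 0) {
        /* Child: ftp now sees a terminal on stdin/stdout,
         * so it behaves as it would interactively, prompt included. */
        execlp("ftp", "ftp", (char *)NULL);
        _exit(127);
    }

    /* Parent: everything ftp prints, including "ftp> ", arrives on master. */
    char buf[256];
    ssize_t n;
    while ((n = read(master, buf, sizeof buf)) > 0)
        write(STDOUT_FILENO, buf, (size_t)n);
    return 0;
}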
A program can examine stdin to find out whether it's a terminal or a pipe. In your case, the FTP program probably does that (for example to know whether it can use escape sequences to render progress bars or offer command line editing).
If you really need the prompt, you have to look into PTYs (pseudo terminals) which emulate a console.
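The terminal-or-pipe check mentioned above is just an isatty() call; a tiny sketch:

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    if (isatty(STDIN_FILENO))
        printf("stdin is a terminal\n");
    else
        printf("stdin is a pipe or a file\n");
    return 0;
}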
Wild guess: isn't the "ftp>" prompt written to STDERR?