Let's take a simple example:
Process.run("ping", {"google.com"}) do |proc|
  proc.output.each_line { |line| puts line }
end
which runs a process, continuously reading its output and printing it to stdout. Currently, when I press a key, it just appears interleaved with the output of the running process, but I'd like some kind of key handling so that I can manage the running process from the keyboard: stop it, or restart it with modified arguments. How can I do that?
Or, to narrow the question down: how do I make this output-input pair non-blocking with respect to each other? Currently it takes one step and then waits for its counterpart to happen.
Process.run("ping", {"google.com"}) do |proc|
  until proc.output.closed?
    puts proc.output.gets
    puts "Got key: #{STDIN.raw &.read_char}"
  end
end
Using a terminal for interactive input is not as simple as it might seem. You can try STDIN.raw &.read_char, but this is limited and likely won't get you very far.
The tool usually used for this is readline. There are Crystal bindings in the stdlib (see Readline); they're currently undocumented, but should work. You could also try https://github.com/Papierkorb/fancyline, which is a pure Crystal implementation of readline.
This example shows how to handle STDIN and a single TCP session at the same time. It works by spawning the handlers in fibers. With a few more lines you can share a bash or REPL session with multiple clients.
Example 1: long-running process
# Long-running process, here an interactive shell as an example
spawn do
  begin
    cmd = "bash -i"
    Process.run(cmd, shell: true) do |p|
      pp p
      my.bashpid = p.pid
      my.bashinputfd = p.input.dup # get a copy of the current fd
      p.input.puts("exec 2>&1")    # redirect stderr to stdout
      p.input.puts("echo connected to bash") # send to STDIN of bash
      while line = p.output.read_line
        puts line # send to STDOUT
        # send output of the shell process to all online clients
        Mc.get_clients.each do |all|
          all.puts(line) # send to client
        end
      end
    end
  rescue exception
    puts "Error: #{exception.message}"
    puts "Shell process has ended, program exits"
    exit
  end
end
Example 2: multiplexing STDIN and a TCP socket over a channel
require "socket"

# shared vars
channel = Channel(String).new
csocket = Socket.tcp(Socket::Family::INET)
socket_conn = false

puts "Welcome:"

spawn do
  server = TCPServer.new("0.0.0.0", 9090)
  loop do # handle TCP client reconnects - run forever
    socket = server.accept
    csocket = socket.dup
    socket_conn = true
    p! socket_conn
    print "\r"
    while line = socket.gets
      channel.send(line)
    end
    socket_conn = false
    p! socket_conn
    print "\r"
  end
end

spawn do # handle STDIN input char by char - run until Ctrl-C is pressed
  while (char = STDIN.raw &.read_char) != '\u{3}' # Ctrl-C
    channel.send(char.to_s)
  end
  channel.send('\u{3}'.to_s)
end

loop do # run until Ctrl-C
  r = channel.receive
  if r == "\u0003" # handle Ctrl-C from the channel
    break
  end
  p! socket_conn
  print "\r"
  p! r
  print "\r"
  if socket_conn
    csocket.puts "got: #{r}"
  end
  puts "got: #{r}"
  print "\r"
end
Edit: half of the problem has been fixed and the question has been edited to reflect that fix. I still only see this perform the first command in a series of pipes.
I am stuck on the very last thing in my shell that I am making from scratch. I want to be able to parse commands like "finger | tail -n +2 | cut -c1-8 | uniq | sort", and I think that I am close. What I have right now, when given that line as an array of strings, only executes the "finger" part.
Here is the part of the code that isn't working as it should; I've narrowed it down to just this for loop and the part after it:
int mypipe[2];
// start out with stdin
int oldinfd = 0;

// run the beginning commands; connect the in/out ends of the pipes to each other
for (int i = 0; i < max; i++) {
    pipe(mypipe);
    processline(commands[i], oldinfd, mypipe[1]); // doesn't wait on children
    close(mypipe[1]);
    oldinfd = mypipe[0];
}

// run the final command, put output into stdout
processline(commands[max], oldinfd, 1); // waits on children
...
This loop runs the program and should end up looking like this:
stdin -> [0]pipe[write] -> [read]pipe[write] -> ... -> [read]pipe[1] -> stdout
My processline function takes in a line ('finger') and execs it. For instance, processline("finger", 0, 1) runs the finger command without flaws. For the purposes of this question assume that processline works perfectly; what I'm having trouble with is how I'm using the pipes and their write and read ends.
Also, with print statements I have confirmed that the correct words are being sent to the correct parts (the for loop receives "finger", "tail -n +2", "cut -c1-8", and "uniq", while the commands[max] that the last line receives is "sort").
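For reference, here is a minimal sketch of the standard parent-side chaining pattern, written around a hypothetical processline(cmd, infd, outfd) that forks, dup2()s the given fds onto 0 and 1 in the child, and execs. Two details are easy to miss: the parent should close each read end once it has been handed on, and the forked children must close every inherited pipe fd after dup2(), or downstream commands may never see EOF:
#include <unistd.h>

/* assumed: forks, dup2()s infd->0 and outfd->1 in the child, execs cmd */
void processline(char *cmd, int infd, int outfd);

void run_pipeline(char *commands[], int max)
{
    int mypipe[2];
    int oldinfd = 0; /* the first command reads from stdin */

    for (int i = 0; i < max; i++) {
        pipe(mypipe);
        processline(commands[i], oldinfd, mypipe[1]); /* doesn't wait */
        close(mypipe[1]);       /* parent is done with the write end */
        if (oldinfd != 0)
            close(oldinfd);     /* ...and with the previous read end */
        oldinfd = mypipe[0];
    }

    /* the final command writes to stdout and waits on the children */
    processline(commands[max], oldinfd, 1);
    if (oldinfd != 0)
        close(oldinfd);
}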
I'm rewriting a shell in C and I ran into a problem.
When typing a command, for example echo "this, we get a new prompt ("dquote>" in zsh) and we can exit it with Ctrl+C to get back to our previous command prompt.
I'm stuck there; I just can't get out of my read function (listening on "dquote>"). I tried writing an EOF to stdout when Ctrl+C is pressed, but it doesn't read it.
I switched to non-canonical mode.
I catch the signal with signal(SIGINT, sig_hand);
then I execute this code when the signal is caught:
static void sig_hand(int sig)
{
    if (g_shell.is_listen_bracket) // are we at a continuation prompt?
        putchar(4);                // EOT
    else
    {
        putstr("\n");
        print_prompt();
    }
}
and my read function:
int j;
char command[ARG_MAX];
char buff[3];

j = -1;
while (1)
{
    bzero(buff, 3);
    read(0, buff, 3);
    if (buff[0] == 4 && !buff[1] && !buff[2])
        return (ctrl_d(shell));
    else if (isprint(buff[0]) && !buff[1] && !buff[2]) // printable ASCII (32-126)
    {
        command[++j] = buff[0];
        putchar(buff[0]);
    }
}
command[++j] = '\0';
return (strdup(command));
So my code is waiting in read(0, buff, 3), and I want to break out of it when Ctrl+C is pressed.
Thanks for helping!
Don't think of EOF as a character that you can 'print to stdout'; it is a state that a file handle can be in, and it implies that there is no more data coming. To put your stdout into the EOF state, you would have to call close(), which is most likely not what you want.
Watch out: 0x04 is actually EOT, End of Transmission.
In any case, why do you want to send EOT to the stdout of your application? Even if this behaved the way you appear to think it does, the terminal emulator (or whatever is connected to your stdout) would quit, not your shell, and it certainly wouldn't revert your shell from the 'waiting for more input' state to the 'waiting for input' state.
You need to handle the ^C in the signal handler and adjust your shell's state appropriately, getting it to abandon the current input mode and redraw the basic prompt.
Edit: What you need to remember is that your 'shell' is writing text (output) and control characters to the terminal emulator; the terminal emulator is writing text (input) and control characters to your 'shell'.
If you want to revert the prompt from dquote> to mysh$, then you must update the terminal by writing a new prompt to it.
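One hedged sketch of that approach (all names are illustrative, not your shell's real ones): install the handler with sigaction() and without SA_RESTART, so the blocking read() is interrupted and returns -1 with errno set to EINTR, then let the main loop reset the state and redraw the prompt:
#include <errno.h>
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

static volatile sig_atomic_t got_sigint = 0;

static void sig_hand(int sig)
{
    (void)sig;
    got_sigint = 1; /* just record the event; no I/O in the handler */
}

int main(void)
{
    struct sigaction sa = {0};
    sa.sa_handler = sig_hand;
    sigemptyset(&sa.sa_mask); /* no SA_RESTART: read() will return EINTR */
    sigaction(SIGINT, &sa, NULL);

    char buf[3];
    printf("dquote> ");
    fflush(stdout);
    for (;;) {
        ssize_t n = read(0, buf, sizeof(buf));
        if (n < 0 && errno == EINTR && got_sigint) {
            got_sigint = 0;
            /* abandon the continuation input, fall back to main prompt */
            printf("\nmysh$ ");
            fflush(stdout);
            continue;
        }
        /* ...otherwise accumulate input as before... */
        if (n <= 0)
            break;
    }
    return 0;
}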
To keep track of what you are currently doing, it might make most sense to use the State Machine approach. You might have a handful of states, including:
INPUT_WAITING
INPUT_WAITING_CONT
COMMAND_RUN
When in the INPUT_WAITING state, you would print the mysh$ prompt, and handle input.
When a newline is received, you would then decide 'do we have all of the information?' before advancing to the INPUT_WAITING_CONT state if not, or the COMMAND_RUN state if you do.
The INPUT_WAITING_CONT state would print the dquote> prompt and take similar actions: 'do we have enough information?' Yes: COMMAND_RUN; no: INPUT_WAITING_CONT.
It's then up to you to revert to the INPUT_WAITING state and redraw the mysh$ prompt when the user presses ^C, or when the command execution has completed.
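A rough sketch of that state machine; the helpers (read_line, input_complete, run_command) are placeholders, not a real API:
#include <stdio.h>
#include <stddef.h>

int  read_line(char *buf, size_t n);  /* assumed: returns 0 on ^C */
int  input_complete(const char *buf); /* assumed: quotes balanced? */
void run_command(const char *buf);    /* assumed: parse and execute */

enum shell_state { INPUT_WAITING, INPUT_WAITING_CONT, COMMAND_RUN };

void shell_loop(void)
{
    enum shell_state state = INPUT_WAITING;
    char line[4096];

    for (;;) {
        switch (state) {
        case INPUT_WAITING:
            printf("mysh$ ");
            fflush(stdout);
            if (read_line(line, sizeof(line)))
                state = input_complete(line) ? COMMAND_RUN
                                             : INPUT_WAITING_CONT;
            break; /* on ^C we simply stay in INPUT_WAITING */
        case INPUT_WAITING_CONT:
            printf("dquote> ");
            fflush(stdout);
            if (read_line(line, sizeof(line)))
                state = input_complete(line) ? COMMAND_RUN
                                             : INPUT_WAITING_CONT;
            else
                state = INPUT_WAITING; /* ^C abandons the partial input */
            break;
        case COMMAND_RUN:
            run_command(line);
            state = INPUT_WAITING;
            break;
        }
    }
}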
So I am communicating with a device by using echo to send and cat to receive. Here's a snippet of my code:
fp = popen("echo \"xyz\" > /dev/ttyACM0 | cat - /dev/ttyACM0", "r");
while (fgets(ret_val, sizeof(ret_val)-1, fp) != NULL)
{
    if (strcmp(ret_val, "response") == 0)
    {
        pclose(fp);
        return ret_val;
    }
}
OK. The problem is that cat seems to stay open, because when I run this code in a loop it works the first time, then hangs at the spot where I call popen. Am I correct in assuming cat is the culprit?
Is there a way to terminate cat as soon as I run the command, so I just get the response from my device? Thanks!
In the command:
echo "xyz" > /dev/ttyACM0 | cat - /dev/ttyACM0
TTY devices normally do not open until carrier is present or CLOCAL is set, so the cat could be blocked waiting on the open. Assuming the device opens, the cat will then hang waiting to read characters until either (1) it receives an EOF character such as Ctrl-D, (2) carrier is lost, or (3) you kill it.
Another problem here is that the pipe between echo and cat closes immediately: the output of the echo is redirected to the TTY device rather than into the pipe, so the pipe stays empty and cat's - (its stdin) sees EOF as soon as echo exits.
Generally, TTY devices are ornery beasts and require special handling to get the logic right. You are probably better off reading up on TTY devices, especially:
man termios
If you are doing something REALLY SIMPLE, you might get by with:
fp = popen("echo 'xyz' >/dev/ttyACM0 & (read x; echo \"$x\")", "r");
Keep in mind that both the echo and the read might hang waiting for carrier, that you will get at most one line of output from the popen, and that the read could hang waiting for an EOL character.
This whole approach is fraught with problems. TTY devices require delicate care. You are using a hammer in the dark.
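For comparison, a hedged sketch of the termios route: open the device directly (with O_NONBLOCK so the open itself doesn't wait on carrier), put it in raw mode with CLOCAL set, and talk to it with plain read()/write(). The device name and baud rate here are assumptions:
#include <fcntl.h>
#include <termios.h>
#include <unistd.h>

int open_serial(const char *dev)
{
    /* O_NOCTTY: don't become the controlling tty;
     * O_NONBLOCK: don't block on carrier during open */
    int fd = open(dev, O_RDWR | O_NOCTTY | O_NONBLOCK);
    if (fd < 0)
        return -1;

    struct termios tio;
    tcgetattr(fd, &tio);
    cfmakeraw(&tio);               /* raw mode: no echo, no line editing */
    tio.c_cflag |= CLOCAL | CREAD; /* ignore modem control lines */
    cfsetispeed(&tio, B9600);      /* assumed baud rate */
    cfsetospeed(&tio, B9600);
    tio.c_cc[VMIN]  = 1;           /* read() blocks until 1 byte arrives */
    tio.c_cc[VTIME] = 0;
    tcsetattr(fd, TCSANOW, &tio);

    fcntl(fd, F_SETFL, 0);         /* back to blocking I/O, CLOCAL now set */
    return fd;
}

/* usage sketch:
 *   int fd = open_serial("/dev/ttyACM0");
 *   write(fd, "xyz\n", 4);
 *   char buf[100];
 *   ssize_t n = read(fd, buf, sizeof(buf));
 */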
There's no easy way to kill the process launched by popen, as there's no API to get its pid; there's only pclose, which waits until it ends of its own accord (and you should ALWAYS use pclose instead of fclose to close a FILE * opened by popen).
Instead, you're probably better off not using popen at all; just use fopen and write what you want with fputs:
fp = fopen("/dev/ttyACM0", "r+");
fputs("xyz\n", fp); // include the newline explicitly
fflush(fp);         // always flush after writing, before reading
while (fgets(ret_val, sizeof(ret_val)-1, fp) != NULL) {
    if (strcmp(ret_val, "response") == 0) {
        fclose(fp);
        return ret_val;
    }
}
I have a script that runs two commands. The first command writes data to a temp file; the second pipes to awk while the first runs in the background. awk, in the second command, needs to read the data from the temp file, but it parses its own input faster than the data gets written to the temp file.
Here's an example:
#!/bin/bash
command1 > /tmp/data.txt &
# command1 takes several minutes to run, so start command 2 while it runs in the background
command2 | awk '
  /SEARCH/ {
    # matched input, so pull the next line from the temp file
    getline temp_line < "/tmp/data.txt"
  }
'
This works unless awk parses the data from command2 so fast that command1 can't keep up with it, i.e. awk gets an EOF from /tmp/data.txt before command1 has finished writing to it.
I've also tried wrapping some checks around getline, like:
while ((getline temp_line < "/tmp/data.txt") < 0) {
    system("sleep 1") # let command1 write more to the temp file
}
# keep processing now that we have read the next line
But I think once it hits an EOF in the temp file, it stops trying to read from it. Or something like that.
The overall script works as long as command1 writes to the temp file faster than awk tries to read from it. If I put a sleep 10 between the two commands, the temp file builds up enough of a buffer and the script produces the output I need. But I may be parsing files much larger than what I've tested on, or the commands might run at different speeds on different systems, etc., so I'd like a safety mechanism that waits until data has been written to the file.
Any ideas how I can do this?
I think you'd need to close the file between iterations and then read it from the start again, back to where you had read to before; something like this (untested):
sleepTime = 0
while ((getline temp_line < "/tmp/data.txt") <= 0) {
    close("/tmp/data.txt")
    system("sleep " ++sleepTime) # let command1 write more to the temp file
    numLines = 0
    while (++numLines < prevLines) {
        if ( (getline temp_line < "/tmp/data.txt") <= 0 ) {
            print "Aaargghhh, my file is gone!" | "cat>&2"
            exit
        }
    }
}
++prevLines
Note that I built in a variable "sleepTime" to make your command sleep longer each time through the loop, so if your tmp file is taking a long time to fill up, your second command waits longer for it on each iteration. Use that or not, as you like.
Using getline in nested loops with system() calls all seems a tad clumsy and error-prone, though; I can't help thinking there's probably a better approach, but I don't know what it is off the top of my head.
I have the following bit of C code which reads from a pipe and then should block, but it never blocks:
int pipe_fd;
int res;
int open_mode = O_RDONLY;
char buf[100];
int bytes_read = 0;

memset(buf, '\0', sizeof(buf));

// create the FIFO first if it doesn't exist yet, then open it for reading
if (access(FIFO_NAME, F_OK) == -1)
{
    res = mkfifo(FIFO_NAME, 0777);
    if (res != 0)
    {
        fprintf(stderr, "Could not create fifo %s\n", FIFO_NAME);
        exit(EXIT_FAILURE);
    }
}
pipe_fd = open(FIFO_NAME, open_mode);

for (;;)
{
    do
    {
        res = read(pipe_fd, buf, sizeof(buf));
        bytes_read += res;
    } while (res > 0);
    // process data then go back and block
    ............
}
It is sent a simple buffer by a bash script, invoked like ./test 1:
#!/bin/bash

pipe=/tmp/pipe

if [[ ! -p $pipe ]]; then
    echo "Reader not running"
    exit 1
fi

if [[ "$1" ]]; then
    echo "some string" >$pipe
else
    echo "q" >$pipe
fi
I run the C program in gdb, and initially it does block on the read. But as soon as I call the bash script, the C code no longer blocks: it successfully reads the data from the buffer, and then each subsequent read returns 0 bytes, so I'm not sure why it's no longer blocking. The 'some string' data is correctly received on the other side.
I just need it to sit there waiting for data, process it, and then go back and wait for more.
I run the C program in gdb, and initially it does block on the read. But as soon as I call the bash script, the C code no longer blocks: it successfully reads the data from the buffer, and then each subsequent read returns 0 bytes, so I'm not sure why it's no longer blocking. The 'some string' data is correctly received on the other side.
0 means EOF. A FIFO can be read or written only while there are processes connected to it for both reading and writing. When there are no more writers (your shell script has terminated), readers are notified of that by read() returning EOF.
FIFOs behave that way to be compatible with shell pipe logic e.g.:
$ mkfifo ./tmp1
$ cat < input > ./tmp1 &
$ cat < ./tmp1 > /dev/null
If read() did not return EOF, the second cat would block forever.
I just need it to sit there waiting for data, process it, and then go back and wait for more.
In your C program, you have to re-open the FIFO after read() returns EOF the first time.
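A minimal sketch of that re-open loop (FIFO_NAME reused from the question): open() blocks until a writer appears, read() blocks until data arrives, and read() returning 0 means the writer went away, so we close the descriptor and open again:
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/stat.h>

#define FIFO_NAME "/tmp/pipe"

int main(void)
{
    char buf[100];
    ssize_t res;

    mkfifo(FIFO_NAME, 0777); /* fails harmlessly if it already exists */

    for (;;)
    {
        int fd = open(FIFO_NAME, O_RDONLY); /* blocks until a writer opens */
        if (fd < 0)
        {
            perror("open");
            exit(EXIT_FAILURE);
        }
        while ((res = read(fd, buf, sizeof(buf))) > 0)
        {
            fwrite(buf, 1, (size_t)res, stdout); /* process the data */
        }
        close(fd); /* EOF: the writer closed; re-open and block again */
    }
}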
P.S. I found quite a nice FIFO summary for you; check the table on the second page.
Your bash script closes the pipe, so the C program gets an EOF condition.
I think the shell script on the write side closes the pipe every time it echoes something. So the write script needs to open the pipe once, repeatedly use the opened descriptor to write, and only close the descriptor at the end.
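To illustrate the same point from the writer's side, here is a hedged C sketch that opens the FIFO once and reuses the descriptor; the reader then sees EOF only once, when the descriptor is finally closed. (In the bash script, the equivalent idea would be opening the pipe once on a dedicated fd instead of redirecting each echo separately.)
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    /* open once; blocks until a reader has the FIFO open */
    int fd = open("/tmp/pipe", O_WRONLY);
    if (fd < 0)
        return 1;

    for (int i = 0; i < 10; i++) {
        write(fd, "some string\n", strlen("some string\n"));
        sleep(1); /* the reader blocks in read() between writes, no EOF */
    }

    close(fd); /* only now does the reader see EOF */
    return 0;
}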