Broken pipe with built-in commands like echo - C

I have an issue in my program.
For a school homework I have to reproduce some features of bash.
The homework is almost done, but I have an issue with built-in commands and pipes: I get a broken pipe error whenever a built-in command is the last command of a pipeline, like this:
ls | echo (this line will produce a broken pipe error), whereas this one:
echo | ls will work just fine.
After some research I found out what the problem was: built-in commands (meaning the ones I don't call through execve) are not forked into a child process, while the other commands are. So my guess is that the built-in command closes the read end of the pipe too early, while ls is still writing into it.
I handled this error by forking a process for the built-in commands too.
However, I was wondering whether there is a solution that does not fork the built-in commands, as that would use unnecessary resources.
void execute_routine(t_data *data, t_cmd *cmd)
{
    pid_t pid_ret;

    if (data->s_pipes && !ft_strcmp(cmd->prev_stop, "|")
        && cmd->prev_cmd->p_close)
        close_fd(data, "bash", &data->s_pipes->s_pipes[1]);
    if (!built_in(data, cmd, 0))
    {
        pid_ret = fork();
        if (pid_ret < 0)
            print_err_and_exit(data, NULL, "bash", 1);
        if (pid_ret == 0)
            forking(cmd);
        cmd->pid = pid_ret;
    }
    handle_pipes(data, cmd);
}
Above is the function that executes each command obtained with the help of readline.
You can see that if the command is a built-in, we don't fork: it is handled right in that function. For the case of echo, here is the function:
int echo(t_data *data, t_cmd *cmd)
{
    int fd;

    data->status = 1;
    if (open_check_files_built_in(cmd, cmd->tab))
        return (1);
    fd = where_to_write(data, cmd);
    if (ft_tab_len(cmd->args) == 1)
    {
        if (write_to_fd(data, "\n", fd) < 0)
            return (print_err_built_in("bash", 1));
        data->status = 0;
        return (1);
    }
    if (write_args_(data, cmd, fd))
        return (1);
    if (cmd->last_in && cmd->last_in->type == IN)
        close_fd_built_in(&cmd->last_in->fd);
    if (cmd->last_out)
        close_fd_built_in(&cmd->last_out->fd);
    data->status = 0;
    return (1);
}
For echo I only look for output redirections and write into the returned fd.
When I am done with the command, I close the pipe in the function below:
void handle_pipes(t_data *data, t_cmd *cmd)
{
    if (data->s_pipes && data->prev_pipes == -1
        && !ft_strcmp(cmd->prev_stop, "|"))
        close_fd(data, "bash", &data->s_pipes->read_end->s_pipes[0]);
    close_fd(data, "bash error", &data->prev_pipes);
    if (data->inited)
    {
        data->prev_pipes = data->pipes[0];
        close_fd(data, "bash pipes close", &data->pipes[1]);
    }
    data->inited = 0;
}
After that I wait for all of my processes with the code below:
int i;

i = -1;
while (cmds[++i])
{
    if (cmds[i]->pid && waitpid(
            cmds[i]->pid, &data->status, 0) < 0 && errno != ECHILD)
        print_err_and_exit(data, NULL, "Error with waitpid", 1);
}
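For reference, waitpid stores a raw encoded status; a minimal sketch of unpacking it with the <sys/wait.h> macros (assuming data->status holds that raw value; 128 + SIGPIPE = 141 is the bash-style code):
if (WIFEXITED(data->status))
    data->status = WEXITSTATUS(data->status);    /* normal exit code */
else if (WIFSIGNALED(data->status))
    data->status = 128 + WTERMSIG(data->status); /* e.g. 141 when killed by SIGPIPE */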
As I said earlier, the only difference between built-in commands and the others is that built-ins are not forked and are handled directly in the parent process, while the others are forked and handled in a child process.
Although I get the broken pipe signal, the pipeline still outputs the expected result:
for example, ls | echo bonjour still outputs bonjour to STDOUT, but SIGPIPE is reported through waitpid for the process handling ls, while echo bonjour | ls works fine and returns a status code of 0 and not 13.
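To illustrate the diagnosis, here is a minimal standalone sketch (separate from my shell) of the same mechanism: the parent closes the read end immediately, as my un-forked built-in effectively does, and the writer child is killed by SIGPIPE.
#include <signal.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int fds[2];
    int status;
    pid_t pid;

    if (pipe(fds) < 0)
        return 1;
    pid = fork();
    if (pid == 0)
    {
        close(fds[0]);
        while (write(fds[1], "data\n", 5) > 0) /* writer, like ls */
            ;
        _exit(1); /* only reached if SIGPIPE is ignored */
    }
    close(fds[0]); /* the "builtin" side drops the read end right away */
    close(fds[1]);
    waitpid(pid, &status, 0);
    if (WIFSIGNALED(status) && WTERMSIG(status) == SIGPIPE)
        printf("writer killed by SIGPIPE (13)\n");
    return 0;
}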

The issue is that you misunderstand the nature of "echo" versus the nature of "ls", and likewise the nature of "pipes".
"ls" is a potentially multi-line command: it generates a stream.
A "pipe" is a mechanism for inter-process communication between one process creating a stream and another process using that stream as input.
Here is where you failed to understand the nature of "echo": echo is NOT a multi-line (a.k.a. stream-oriented) tool! It is a single-line tool geared to its parameter strings. A string could be multi-line, but it is passed to echo as a single parameter, and echo does not care what the contents are.
Even the man page makes no reference to echo accepting input from stdin! In other words, it never even reads from the pipe.
Hope that clarifies what you perceive as a mystery regarding the "ls | echo" pipeline.

Related

how to redirect output from cd in a shell written in C

I'm writing a basic shell in C. I have all the redirect operators implemented; however, when I try to redirect cd I run into this problem:
cd works perfectly without any output redirection, but
when I try to do something like this:
cd inexistant_directory > output_file
the output file is not created. In bash, running that command does redirect stdout. As I previously stated, redirection operators work fine with external commands.
When I encounter the cd command, I call
char *path = get_path(parameters); // implemented by me, works in the rest of the cases
int ret = chdir(path);
I don't call this in the child process but in the parent (the shell process itself).
What am I doing wrong?
Thank you,
PS: The OS on which I run this is Ubuntu 12.10; however, the code is POSIX compliant.
Later edit: I can't post the whole code as it is around 600 lines,
but here's my logic:
if (internal_command) {
    // do quit, exit or cd
} else if (variable_assignment) {
    // do stuff
} else {
    // external command
    pid = fork();
    if (pid == -1) {
        // crash
    } else if (pid == 0) {
        do_redirects();
        call_external_cmd();
    } else {
        waitpid(pid, &status, 0);
    }
}
So, I think that to solve the issue I need to redirect stdout in the parent (the shell process)
and restore it after the command is executed.
Not redirecting stdout in the parent process (the shell) was indeed the cause of
cd's unwanted behaviour; my solution was the following:
if (we_have_out_redirection == 1) {
    if (out != NULL) {
        char *outParent = out;
        fflush(stdout);
        outBackup = dup(STDOUT_FILENO); // save stdout for future restoration
        int fd = open(outParent, O_WRONLY | O_CREAT | O_TRUNC, 0644); // open output file
        int rc;
        rc = dup2(fd, STDOUT_FILENO); // redirect stdout
        retval = chdir(path); // execute cd (path is cd's directory argument, not the redirect target)
        // restore stdout
        close(fd);
        fflush(stdout);
        rc = dup2(outBackup, STDOUT_FILENO);
        close(outBackup);
    }
}
Thanks to Jake223 for pointing out I forgot to redirect in the parent!
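As a general cleanup, the save/redirect/restore dance can be factored into a small pair of helpers. This is only a sketch (the helper names are mine, not from the code above); the builtin then runs between the two calls, in the parent:
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Redirect stdout to `file`; returns a saved copy of the old stdout, or -1. */
int redirect_stdout(const char *file)
{
    int fd = open(file, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    int backup;

    if (fd < 0)
        return (-1);
    backup = dup(STDOUT_FILENO);
    if (backup < 0)
    {
        close(fd);
        return (-1);
    }
    fflush(stdout);
    dup2(fd, STDOUT_FILENO);
    close(fd);
    return (backup);
}

/* Restore stdout from the fd returned by redirect_stdout(). */
void restore_stdout(int backup)
{
    fflush(stdout);
    dup2(backup, STDOUT_FILENO);
    close(backup);
}
Usage would be: saved = redirect_stdout(out); run the builtin; restore_stdout(saved);.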

Implementing pipelining in C. What would be the best way to do that?

I can't think of any way to implement pipelining in C that would actually work. That's why I've decided to write in here. I have to say that I understand how pipe/fork/mkfifo work. I've seen plenty of examples implementing 2-3 pipes. That's easy. My problem starts when I have to implement a shell where the number of pipes is unknown.
What I've got now:
eg.
ls -al | tr a-z A-Z | tr A-Z a-z | tr a-z A-Z
I transform such a line into something like this:
array[0] = {"ls", "-al", NULL};
array[1] = {"tr", "a-z", "A-Z", NULL};
array[2] = {"tr", "A-Z", "a-z", NULL};
array[3] = {"tr", "a-z", "A-Z", NULL};
So I can use
execvp(array[0], array)
later on.
Until now, I believe everything is OK. The problem starts when I try to redirect these commands' input/output to each other.
Here's how I'm doing that:
mkfifo("queue", 0777);
for (i = 0; i<= pipelines_count; i++) // eg. if there's 3 pipelines, there's 4 functions to execvp
{
int b = fork();
if (b == 0) // child
{
int c = fork();
if (c == 0)
// baby (younger than child)
// I use c process, to unblock desc_read and desc_writ for b process only
// nothing executes in here
{
if (i == 0) // 1st pipeline
{
int desc_read = open("queue", O_RDONLY);
// dup2 here, so after closing there's still something that can read from
// from desc_read
dup2(desc_read, 0);
close(desc_read);
}
if (i == pipelines_count) // last pipeline
{
int desc_write = open("queue", O_WRONLY);
dup2(desc_write, 0);
close(desc_write);
}
if (i > 0 && i < pipelines_count) // pipeline somewhere inside
{
int desc_read = open("queue", O_RDONLY);
int desc_write = open("queue", O_WRONLY);
dup2(desc_write, 1);
dup2(desc_read, 0);
close(desc_write);
close(desc_read);
}
exit(0); // closing every connection between process c and pipeline
}
else
// b process here
// in b process, i execvp commands
{
if (i == 0) // 1st pipeline (changing stdout only)
{
int desc_write = open("queue", O_WRONLY);
dup2(desc_write, 1); // changing stdout -> pdesc[1]
close(desc_write);
}
if (i == pipelines_count) // last pipeline (changing stdin only)
{
int desc_read = open("queue", O_RDONLY);
dup2(desc_read, 0); // changing stdin -> pdesc[0]
close(desc_read);
}
if (i > 0 && i < pipelines_count) // pipeline somewhere inside
{
int desc_write = open("queue", O_WRONLY);
dup2(desc_write, 1); // changing stdout -> pdesc[1]
int desc_read = open("queue", O_RDONLY);
dup2(desc_read, 0); // changing stdin -> pdesc[0]
close(desc_write);
close(desc_read);
}
wait(NULL); // it wait's until, process c is death
execvp(array[0],array);
}
}
else // parent (waits for 1 sub command to be finished)
{
wait(NULL);
}
}
Thanks.
Patryk, why are you using a fifo, and moreover the same fifo for each stage of the pipeline?
It seems to me that you need a pipe between each stage. So the flow would be something like:
Shell               ls                 tr                 tr
-----               ----               ----               ----
pipe(fds);
fork();
                    close(fds[0]);
close(fds[1]);      dup2(fds[1],1);
pipe(fds);          exec(...);
fork();
                                       dup2(fds[0],0);
close(fds[0]);                         dup2(fds[1],1);
close(fds[1]);                         exec(...);
pipe(fds);
fork();
                                                          dup2(fds[0],0);
close(fds[0]);                                            etc
close(fds[1]);                                            exec(...);
The sequence that runs in each forked shell (close, dup2, pipe etc) would seem like a function (taking the name and parameters of the desired process). Note that up until the exec call in each, a forked copy of the shell is running.
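In code, that per-stage sequence might look like the following sketch. It is not a drop-in implementation, just the shape of the loop: array[i] stands for one of your argv arrays, count is the number of commands, and error checking is omitted.
#include <sys/wait.h>
#include <unistd.h>

void run_pipeline(char **array[], int count)
{
    int in_fd = 0; /* stdin of the first command */
    int fds[2];
    pid_t pid;
    int i;

    for (i = 0; i < count; i++)
    {
        if (i < count - 1)
            pipe(fds);                /* pipe to the next stage */
        pid = fork();
        if (pid == 0)
        {
            if (in_fd != 0)
            {
                dup2(in_fd, 0);       /* read end of the previous pipe */
                close(in_fd);
            }
            if (i < count - 1)
            {
                dup2(fds[1], 1);      /* write end of this stage's pipe */
                close(fds[0]);
                close(fds[1]);
            }
            execvp(array[i][0], array[i]);
            _exit(127);               /* only reached if exec failed */
        }
        if (in_fd != 0)
            close(in_fd);
        if (i < count - 1)
        {
            close(fds[1]);
            in_fd = fds[0];           /* the next stage reads from here */
        }
    }
    while (wait(NULL) > 0)
        ;                             /* reap all stages */
}
Each stage only ever holds the read end of the previous pipe and the write end of its own; the parent closes whatever it no longer needs at every step, which is what keeps EOF propagating down the chain.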
Edit:
Patryk:
Also, is my thinking correct? Shall it work like that? (pseudocode):
start_fork(ls) -> end_fork(ls) -> start_fork(tr) -> end_fork(tr) ->
start_fork(tr) -> end_fork(tr)
I'm not sure what you mean by start_fork and end_fork. Are you implying that ls runs to completion before tr starts? This isn't really what is meant by the diagram above. Your shell will not wait for ls to complete before starting tr. It starts all of the processes in the pipe in sequence, setting up stdin and stdout for each one so that the processes are linked together: stdout of ls to stdin of tr; stdout of tr to stdin of the next tr. That is what the dup2 calls are doing.
The order in which the processes run is determined by the operating system (the scheduler), but clearly if tr runs and reads from an empty stdin it has to wait (to block) until the preceding process writes something to the pipe. It is quite possible that ls might run to completion before tr even reads from its stdin, but it is equally possible that it won't. For example, if the first command in the chain ran continually and produced output along the way, the second in the pipeline would get scheduled from time to time to process whatever the first sends along the pipe.
Hope that clarifies things a little :-)
It might be worth using libpipeline. It takes care of all the effort on your part and you can even include functions in your pipeline.
The problem is you're trying to do everything at once. Break it into smaller steps instead.
1) Parse your input to get ls -al | out of it.
1a) From this you know you need to create a pipe, move it to stdout, and start ls -al. Then move the pipe to stdin. There's more coming of course, but you don't worry about it in code yet.
2) Parse the next segment to get tr a-z A-Z |. Go back to step 1a as long as your next-to-spawn command's output is being piped somewhere.
This question is a bit old, but here's an answer that was never provided. Use libpipeline. libpipeline is a pipeline manipulation library. The use case is one of the man page maintainers who had to frequently use a command like the following (and work around associated OS bugs):
zsoelim < input-file | tbl | nroff -mandoc -Tutf8
Here's the libpipeline way:
pipeline *p;
int status;
p = pipeline_new ();
pipeline_want_infile (p, "input-file");
pipeline_command_args (p, "zsoelim", NULL);
pipeline_command_args (p, "tbl", NULL);
pipeline_command_args (p, "nroff", "-mandoc", "-Tutf8", NULL);
status = pipeline_run (p);
The libpipeline homepage has more examples. The library is also included in many distros, including Arch, Debian, Fedora, Linux from Scratch and Ubuntu.

C: How to redirect named pipe to stdin/out of child process

Basically I want to do in C (and without buffering) the same as this bash-script:
#!/bin/sh
cat ./fifo_in | myprogram > ./fifo_out
In other words I want to exec "myprogram" and redirect its stdin and stdout to two pipes which have been created previously.
Another program is feeding data into fifo_in and reading out of fifo_out.
Of course it would be easy to just read from ./fifo_in, buffer it in the parent and write to myprogram's stdin (and reverse for stdout and ./fifo_out) but I think there is probably a way to let "myprogram" read/write directly from/to the fifos without buffering in the parent process.
Edit:
Eugen's answer seems to be the correct one, but I cannot get it to work.
I use this function on the C-side, which seems correct to me:
pid_t execpipes(const char *wd, const char *command, const char *pipename)
{
    char pipename_in[FALK_NAMESIZE];
    char pipename_out[FALK_NAMESIZE];

    strcpy(pipename_in, FALKPATH);
    strcat(pipename_in, "/");
    strcat(pipename_in, FALK_FIFO_PATH);
    strcat(pipename_in, "/");
    strncat(pipename_in, pipename, FALK_NAMESIZE-2);
    strcpy(pipename_out, pipename_in);
    strcat(pipename_out, "R");

    pid_t pid;
    pid = fork();
    if (pid < 0)
    { // Error occurred
        perror("fork");
        exit(1);
    }
    if (pid == 0)
    {
        chdir(wd);
        d("execpipes: pipename_in=\"%s\"\n", pipename_in);
        d("           pipename_out=\"%s\"\n", pipename_out);
        freopen(pipename_in, "r", stdin);
        freopen(pipename_out, "w", stdout);
        d("execpipes: command=\"%s\"\n", command);
        execl("/bin/sh", "sh", "-c", command, (char *)NULL); // using execv is probably faster
        // Should never get here
        perror("execl");
        exit(1);
    }
    return pid;
}
I read and write the pipes from a PHP-script (only relevant part posted):
$pipe_in = fopen($fp.$pipename, "w");
$DEBUG .= "Write to pipe_in\n";
$ret = fwrite($pipe_in, $in);
$pipe_out = fopen($fp.$pipename.'R', "r");
$DEBUG .= "Read from pipe_out\n";
$atext = fread($pipe_out, 200000); // Program hangs here
The program is started correctly, receives the input via $pipe_in correctly, processes the data correctly and (because it ran fine for many months) I assume it puts out the data correctly to stdout, but when I try to read from $pipe_out, it hangs. I know that the pipes themselves are set up correctly because if I don't open $pipe_out, the program does not get any input - which makes sense because there is no reader for $pipe_out and therefore the pipeline is not complete. So I can open $pipe_out, but I cannot read anything from it, which is quite strange.
Edit2:
Program works now, thanks guys - For some reason the first pipe has to be closed before you can read from the second pipe:
$pipe_in = fopen($fp.$pipename, "w");
$pipe_out = fopen($fp.$pipename.'R', "r");
$DEBUG .= "Write to pipe_in\n";
$ret = fwrite($pipe_in, $in);
fclose($pipe_in);
$DEBUG .= "Read from pipe_out\n";
$atext = fread($pipe_out, 200000);
fclose($pipe_out);
unlink($fp.$pipename);
unlink($fp.$pipename.'R');
I'd write a small wrapper for myprogram that does
freopen("./fifo_in", "r", stdin);
freopen("./fifo_out", "w", stdout);
(of course not with constant paths!), then execve myprogram.
Korn shell supports coprocesses, which I think effectively does what you ask: read from a pipe and write to a pipe (which can be stdout and stdin of a C process)
http://www.dartmouth.edu/~rc/classes/ksh/coprocesses.html
How about
myprogram < ./fifo_in > ./fifo_out
?
As for getting rid of the buffering: Since your program directly reads/writes the pipes, the buffering shouldn't hurt you.
An important point is that the process which writes fifo_in should flush properly so you don't have to wait. The same goes for your output: As soon as a "work unit" is complete, flush your stdout which will make the data available to whoever reads the output pipe.
But you can't do anything in myprogram to make the writer of fifo_in flush its buffers.
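As an illustration of the flush-per-work-unit idea on the myprogram side, here is a minimal sketch where one line is treated as one work unit:
#include <stdio.h>

int main(void)
{
    char line[256];

    while (fgets(line, sizeof line, stdin)) /* one line = one "work unit" */
    {
        fputs(line, stdout);
        fflush(stdout); /* make the unit visible to the pipe reader right away */
    }
    return 0;
}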
[EDIT] To do this from C (without the help of a shell), use code like this:
- Put the names of the two pipes into local variables on the stack
- Call fork(). If that returns 0, then open the two fifos with freopen(), like Eugen suggested
- Call execve to launch the real exec.
That's (in a nutshell) what the shell is doing when it runs commands. Make sure the parent process (the one where fork() returns a PID != 0) handles the signal SIGCHLD; a sketch of that follows below.
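A sketch of that SIGCHLD handling (the handler reaps every exited child without blocking the parent):
#include <signal.h>
#include <stddef.h>
#include <sys/wait.h>

static void reap_children(int sig)
{
    (void)sig;
    while (waitpid(-1, NULL, WNOHANG) > 0)
        ; /* collect every child that has exited so far */
}

void install_sigchld_handler(void)
{
    struct sigaction sa;

    sa.sa_handler = reap_children;
    sigemptyset(&sa.sa_mask);
    sa.sa_flags = SA_RESTART; /* don't let the handler interrupt slow syscalls */
    sigaction(SIGCHLD, &sa, NULL);
}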
Perhaps you are looking for a named pipe? For example:
mkfifo fifo_in
As a test stub for my_program.c, to read fifo_in via the buffered stdin:
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
int main(void) {
    char buf[80];

    if (!freopen("./fifo_in", "r", stdin)) {
        perror("freopen");
        exit(EXIT_FAILURE);
    }
    while (!ferror(stdin)) {
        while (fgets(buf, sizeof buf, stdin))
            fputs(buf, stdout);
        sleep(1);
    }
    return 0;
}
Then as a test for the writer, using the bash shell:
for x in {1..10}; do
    echo $x
    echo $x >> fifo_in
    sleep 1
done
Notes:
I'd prefer to use unbuffered I/O.
The writer, at least on my machine, blocks until there is a reader.
The reader, in this sample, cannot tell when the writer is finished.

How to intercept SSH stdin and stdout? (not the password)

I realize this question is asked frequently, mainly by people who want to intercept the password-asking phase of SSH. This is not what I want. I'm after the post-login text.
I want to write a wrapper for ssh, that acts as an intermediary between SSH and the terminal. I want this configuration:
(typing on keyboard / stdin) ----> (wrapper) ----> (ssh client)
and the same for output coming from ssh:
(ssh client) -----> (wrapper) -----> stdout
I seem to be able to attain the effect I want for stdout by doing a standard trick I found online (simplified code):
pipe(fd);
if (!fork()) {
    close(fd[READ_SIDE]);
    close(STDOUT_FILENO);     // close stdout (fd #1)
    dup(fd[WRITE_SIDE]);      // duplicate the write side of my pipe (to the lowest free fd, 1)
    close(STDERR_FILENO);
    dup(fd[WRITE_SIDE]);
    execv(argv[1], argv + 1); // run ssh
} else {
    close(fd[WRITE_SIDE]);
    output = fdopen(fd[READ_SIDE], "r");
    while ((c = fgetc(output)) != EOF) {
        printf("%c", c);
        fflush(stdout);
    }
}
Like I said, I think this works. However, I can't seem to do the opposite. I can't close(STDIN_FILENO) and dup the readside of a pipe. It seems that SSH detects this and prevents it. I've read I can use the "-t -t" option to force SSH to ignore the non-stdin nature of its input; but when I try this it still doesn't work.
Any hints?
Thanks very much!
Use popen (instead of execv) to execute the ssh cmd and be able to read and write to the session.
A pipe will not work if you want to allow any interactive use of ssh with the interceptor in place. In this case, you need to create a pseudo-tty. Look up the posix_openpt, ptsname, and grantpt functions. There's also a nonstandard but much-more-intuitive function called openpty, and a wrapper for it called forkpty, which makes what you're trying to do extremely easy.
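To make that shape concrete, here is a rough sketch using forkpty (assumptions: Linux with <pty.h>, linked with -lutil; on the BSDs the header differs, and a real wrapper would also put the local terminal into raw mode so keystrokes are not line-buffered):
#include <pty.h>        /* forkpty(); on Linux link with -lutil */
#include <sys/select.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    int master;
    pid_t pid;
    char buf[512];
    ssize_t n;

    if (argc < 2)
        return 1;
    pid = forkpty(&master, NULL, NULL, NULL);
    if (pid < 0)
        return 1;
    if (pid == 0)
    {
        /* child: stdin/stdout/stderr are now the pty slave, so ssh
         * believes it is talking to a real terminal */
        execvp(argv[1], argv + 1);
        _exit(127);
    }
    for (;;) /* parent: the "wrapper", relaying both directions */
    {
        fd_set set;

        FD_ZERO(&set);
        FD_SET(STDIN_FILENO, &set);
        FD_SET(master, &set);
        if (select(master + 1, &set, NULL, NULL, NULL) < 0)
            break;
        if (FD_ISSET(STDIN_FILENO, &set))
        {
            if ((n = read(STDIN_FILENO, buf, sizeof buf)) <= 0)
                break;
            write(master, buf, n);          /* keyboard -> ssh */
        }
        if (FD_ISSET(master, &set))
        {
            if ((n = read(master, buf, sizeof buf)) <= 0)
                break;
            write(STDOUT_FILENO, buf, n);   /* ssh -> screen; intercept here */
        }
    }
    return 0;
}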
Python's Paramiko does all of this with SSH but it is in Python source code. However, for a C programmer, reading Python is a lot like reading pseudocode so go to the source and learn exactly what works.
Here's a working example that writes to ssh:
#include <sys/wait.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    int pid;
    int fds[2];

    if (pipe(fds))
        return -1;
    pid = fork();
    if (!pid)
    {
        close(fds[1]);
        close(STDERR_FILENO);
        dup2(fds[0], STDIN_FILENO);
        execvp(argv[1], argv + 1);
    }
    else
    {
        char buf[256];
        int rc;

        close(fds[0]);
        while ((rc = read(STDIN_FILENO, buf, 256)) > 0)
        {
            write(fds[1], buf, rc);
        }
    }
    wait(NULL);
    return 0;
}
This line is probably wrong:
execv(argv[1], argv + 1); // run ssh
The array must be terminated by a NULL pointer. If you are using argv[], the parameter from main(), I don't think there is any guarantee that this is the case. Edit: just checked the C99 standard, and argv is NULL terminated.
execv() does not search the path for the file to execute, so if you pass ssh as the parameter, it is equivalent to ./ssh, which is probably not what you want. You could use execvp(), but that is a security risk if a malicious program called ssh appears in $PATH before /bin/ssh. Better to use execv() and force the correct path.
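For example (a sketch; user@host is a placeholder, and check where ssh actually lives on your system):
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* the argv list passed to execv must end with a NULL pointer */
    char *args[] = { "ssh", "user@host", NULL };

    execv("/usr/bin/ssh", args); /* explicit path: no $PATH lookup */
    perror("execv");             /* only reached if the exec failed */
    return 1;
}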

Linux C: "Interactive session" with separate read and write named pipes?

I am trying to work with "Introduction to Interprocess Communication Using Named Pipes - Full-Duplex Communication Using Named Pipes" (link); in particular fd_server.c (included below for reference).
Here is my info and compile line:
:~$ cat /etc/issue
Ubuntu 10.04 LTS \n \l
:~$ gcc --version
gcc (Ubuntu 4.4.3-4ubuntu5) 4.4.3
:~$ gcc fd_server.c -o fd_server
fd_server.c creates two named pipes, one for reading and one for writing. What one can do is: in one terminal, run the server and read (through cat) from its write pipe:
:~$ ./fd_server & 2>/dev/null
[1] 11354
:~$ cat /tmp/np2
and in another, write (using echo) to server's read pipe:
:~$ echo "heeellloooo" > /tmp/np1
going back to first terminal, one can see:
:~$ cat /tmp/np2
HEEELLLOOOO
0[1]+ Exit 13 ./fd_server 2> /dev/null
What I would like to do is make a sort of "interactive" (or "shell"-like) session; that is, the server runs as usual, but instead of running cat and echo, I'd like to use something akin to screen. What I mean is that screen can be called like screen /dev/ttyS0 38400, and it then makes a sort of interactive session where what is typed in the terminal is passed to /dev/ttyS0, and its response is written to the terminal. Now, of course, I cannot use screen, because in my case the program has two separate nodes, and as far as I can tell, screen can refer to only one.
How would one go about achieving this sort of "interactive" session in this context (with two separate read/write pipes)?
Code below:
#include <stdio.h>
#include <errno.h>
#include <ctype.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
//#include <fullduplex.h> /* For name of the named-pipe */
#define NP1 "/tmp/np1"
#define NP2 "/tmp/np2"
#define MAX_BUF_SIZE 255
#include <stdlib.h> //exit
#include <string.h> //strlen
int main(int argc, char *argv[])
{
    int rdfd, wrfd, ret_val, count, numread;
    char buf[MAX_BUF_SIZE];

    /* Create the first named pipe */
    ret_val = mkfifo(NP1, 0666);
    if ((ret_val == -1) && (errno != EEXIST)) {
        perror("Error creating the named pipe");
        exit(1);
    }
    ret_val = mkfifo(NP2, 0666);
    if ((ret_val == -1) && (errno != EEXIST)) {
        perror("Error creating the named pipe");
        exit(1);
    }
    /* Open the first named pipe for reading */
    rdfd = open(NP1, O_RDONLY);
    /* Open the second named pipe for writing */
    wrfd = open(NP2, O_WRONLY);
    /* Read from the first pipe */
    numread = read(rdfd, buf, MAX_BUF_SIZE);
    buf[numread] = '\0';
    fprintf(stderr, "Full Duplex Server : Read From the pipe : %s\n", buf);
    /* Convert the string to upper case */
    count = 0;
    while (count < numread) {
        buf[count] = toupper(buf[count]);
        count++;
    }
    /*
     * Write the converted string back to the second
     * pipe
     */
    write(wrfd, buf, strlen(buf));
}
Edit:
Right, just to clarify: it seems I found a document discussing something very similar (link); a modification of the script there ("For example, the following script configures the device and starts a background process for copying all received data from the serial device to standard output...") for the above program is below:
# stty raw #
( ./fd_server 2>/dev/null; )&
bgPidS=$!
( cat < /tmp/np2 ; )&
bgPid=$!

# Read commands from user, send them to device
echo $(kill -0 $bgPidS 2>/dev/null ; echo $?)
while [ "$(kill -0 $bgPidS 2>/dev/null ; echo $?)" -eq "0" ] && read cmd; do
    # redirect debug msgs to stderr, as here we're redirected to /tmp/np1
    echo "$? - $bgPidS - $bgPid" >&2
    echo "$cmd"
    echo -e "\nproc: $(kill -0 $bgPidS 2>/dev/null ; echo $?)" >&2
done >/tmp/np1
echo OUT

# Terminate background read processes - if they still exist
if [ "$(kill -0 $bgPid 2>/dev/null ; echo $?)" -eq "0" ] ; then
    kill $bgPid
fi
if [ "$(kill -0 $bgPidS 2>/dev/null ; echo $?)" -eq "0" ] ; then
    kill $bgPidS
fi
# stty cooked
So, saving the script as say starter.sh and calling it, results with the following session:
$ ./starter.sh
0
i'm typing here and pressing [enter] at end
0 - 13496 - 13497
I'M TYPING HERE AND PRESSING [ENTER] AT END
[garbled output]
proc: 0
OUT
which is what I'd call an "interactive session" (ignoring the debug statements): the server waits for me to enter a command and gives its output after it receives one (and since in this case it exits after the first command, so does the starter script). Except that I'd like the input not to be buffered but sent character by character, meaning the session above should exit after the first key press and print out a single letter only; that is what I expected stty raw to help with, but it doesn't: it just kills the reaction to both Enter and Ctrl-C :)
I was just wondering if there is already an existing command (akin to screen in respect to serial devices, I guess) that would accept two such named pipes as arguments and establish a "terminal"- or "shell"-like session through them, or whether I'd have to use scripts like the above and/or program my own 'client' that behaves as a terminal.
If you just want to be able to receive multiple lines, rather than exiting after one, this is simple. You just need to place a loop around your read/write code, like so (quick and dirty):
while (1) {
    numread = read(rdfd, buf, MAX_BUF_SIZE);
    fprintf(stderr, "Full Duplex Server : Read From the pipe : %s\n", buf);
    /* Convert the string to upper case */
    count = 0;
    while (count < numread) {
        buf[count] = toupper(buf[count]);
        count++;
    }
    /*
     * Write the converted string back to the second
     * pipe
     */
    write(wrfd, buf, strlen(buf));
}
Of course, now you have an application which will never exit, and will start doing nothing as soon as it gets an EOF, etc. So, you can reorganize it to check for errors:
numread = read(rdfd, buf, MAX_BUF_SIZE);
while (numread > 0) {
    /* ... etc ... */
    numread = read(rdfd, buf, MAX_BUF_SIZE);
}
if (numread == 0) {
    /* ... handle eof ... */
}
if (numread < 0) {
    /* ... handle io error ... */
}
From the man page, read returns 0 for EOF and -1 for an error (you have read the man page, right? http://linux.die.net/man/2/read ). So what this does is keeps on grabbing bytes from the read pipe until it reaches EOF or some error, in which case you (probably) print a message and exit. That said, you might just do a reopen when you get an EOF so you can get more input.
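A sketch of that reopen-on-EOF variant, reusing NP1, rdfd and buf from the server above:
numread = read(rdfd, buf, MAX_BUF_SIZE);
while (numread >= 0) {
    if (numread == 0) {               /* EOF: the writer closed its end */
        close(rdfd);
        rdfd = open(NP1, O_RDONLY);   /* blocks until the next writer appears */
        if (rdfd < 0)
            break;
    } else {
        /* ... process numread bytes as before ... */
    }
    numread = read(rdfd, buf, MAX_BUF_SIZE);
}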
Once you've modified your program to read continuously, entering multiple lines interactively is simple. Just execute:
cat - > /tmp/np1
The '-' explicitly tells cat to read from stdin (this is the default, so you don't actually need the dash). So cat will pass everything you enter on to your pipe program. You can insert an EOF using Ctrl+D, which will cause cat to stop reading stdin. What happens to your pipe program depends on how you handle the EOF in your read loop.
Now, if you want another program that does all the io, without cat, (so you end up with a stdio echo program), the pseudocode is going to look sort of like this:
const int stdin_fd = 0; // known unix constant!
int readpipe_fd = /* open the read pipe, as before */;
int writepipe_fd = /* open the write pipe, as before */;

read stdin into buffer
while (stdin is reading correctly) {
    write data from stdin to read pipe
    check write is successful
    read write pipe into buffer
    check read is successful
    write buffer to stdout (fprintf is fine)
    read stdin into buffer
}
You can use the read system call to read stdin if you feel like it, but you can also just use stdio. Reading, writing, and opening your pipes should all be identical to your server program, except read/write is all reversed.
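Fleshed out, that pseudocode might look like the sketch below (assuming the NP1/NP2 fifos of the server above; note the open order matters, because opening a fifo blocks until the other end is opened):
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define NP1 "/tmp/np1"
#define NP2 "/tmp/np2"

int main(void)
{
    char buf[255];
    ssize_t n;
    int wr = open(NP1, O_WRONLY);   /* the server reads from np1 */
    int rd = open(NP2, O_RDONLY);   /* the server writes to np2 */

    if (wr < 0 || rd < 0)
    {
        perror("open");
        return 1;
    }
    while (fgets(buf, sizeof buf, stdin))             /* read stdin into buffer */
    {
        if (write(wr, buf, strlen(buf)) < 0)          /* send it to the server */
            break;
        if ((n = read(rd, buf, sizeof buf - 1)) <= 0) /* read the reply */
            break;
        buf[n] = '\0';
        fputs(buf, stdout);                           /* echo the reply to the user */
    }
    close(wr);
    close(rd);
    return 0;
}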
