Pass data through an anonymous pipe to another program - c

This is the code I'm trying to use to pass data to the other program:
static int callWithFile(char* buff) {
    int myPipes[2];
    if (pipe(myPipes) < 0) {
        perror("Can't pipe through \n");
        exit(13);
    }
    int pid = fork();
    switch (pid) {
    case 0:
    {
        if (verbose_flag) printf("pid is %d; pipe fds are.... %d & %d\n", getpid(), myPipes[PIPE_READ], myPipes[PIPE_WRITE]);
        //close (myPipes[PIPE_READ]);
        write (myPipes[PIPE_WRITE], buff, strlen(buff) + 1);
        close (myPipes[PIPE_WRITE]);
        char* pipeArg;
        if (verbose_flag) {
            asprintf (&pipeArg, "/proc/%d/fd/%d", getpid(), myPipes[PIPE_READ]);
            printf("\n%s\n", pipeArg);
        }
        asprintf (&pipeArg, "/dev/fd/%d", myPipes[PIPE_READ]);
        char* progArgv[] = {
            "prog",
            "--new_settings",
            pipeArg,
            //"/dev/fd/0",
            NULL
        };
        // This works just fine
        // FILE* fp = fopen(pipeArg, "r");
        // if (fp == NULL) {
        //     perror("Can't open fd pipe file \n");
        //     exit(14);
        // }
        // fread(buff, sizeof(char), strlen(buff) + 1, fp);
        // printf("buff: %s", buff);
        execvp(prog_path, progArgv);
        perror("execvp screwed up");
        exit(15);
    }
    case -1:
        perror("fork screwed up ");
        exit(16);
    }
    close (myPipes[PIPE_READ]);
    close (myPipes[PIPE_WRITE]);
    wait(NULL);
    puts("done");
}
As far as I can tell, the code is correct and provides a file descriptor for the other program to read from.
However, for some reason, the other program reports that it can't open and read the file.
This is the program that reads the data: https://github.com/tuxedocomputers/tuxedo-control-center/blob/master/src/common/classes/ConfigHandler.ts#L87
It complains: Error on read option --new_settings with path: /dev/fd/4
I already confirmed that it is correct JSON, so that shouldn't be the problem.
As for debugging the other program, I can't make it run on my machine for some reason; it fails with: Cannot launch program because corresponding JavaScript cannot be found.
My objective is to have the equivalent of this in bash:
program <(echo $buff)
Where $buff is the contents of the buff function argument.

Everything in your code is correct except this:
write (myPipes[PIPE_WRITE], buff, strlen(buff) + 1);
See that + 1? That's the culprit. You are sending a trailing null byte (AKA character 0 or '\0') to the program, and its JSON parser doesn't expect it.
Try this instead (without the + 1):
write (myPipes[PIPE_WRITE], buff, strlen(buff));
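To see the difference concretely, here is a small self-contained sketch (the JSON literal and the byte counts are only an example, not taken from the question) showing that the + 1 pushes one extra byte, the terminating '\0', through the pipe:
/* Sketch: the JSON literal is illustrative; it shows what the "+ 1" changes. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    const char *json = "{\"key\": \"value\"}";   /* 16 bytes of JSON text */
    char readback[64];
    int fds[2];

    if (pipe(fds) < 0) {
        perror("pipe");
        return 1;
    }

    /* Without the + 1, only the JSON text goes down the pipe. */
    write(fds[1], json, strlen(json));
    close(fds[1]);

    ssize_t n = read(fds[0], readback, sizeof(readback));
    close(fds[0]);

    /* Prints "received 16 bytes". With strlen(json) + 1 it would print 17,
     * and the extra byte would be '\0', which a JSON parser may reject. */
    printf("received %zd bytes\n", n);
    return 0;
}
With the extra byte removed, the reader sees exactly the JSON text and nothing else.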

Related

C - How to pipe to a program that read only from file

I want to pipe a string to a program that reads input only from a file, not from stdin. From bash, I can do something like
echo "hi" | program /dev/stdin
and I wanted to replicate this behaviour in C. What I did is this:
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <string.h>

int main() {
    pid_t pid;
    int rv;
    int to_ext_program_pipe[2];
    int to_my_program_pipe[2];

    if(pipe(to_ext_program_pipe)) {
        fprintf(stderr,"Pipe error!\n");
        exit(1);
    }
    if(pipe(to_my_program_pipe)) {
        fprintf(stderr,"Pipe error!\n");
        exit(1);
    }
    if( (pid=fork()) == -1) {
        fprintf(stderr,"Fork error. Exiting.\n");
        exit(1);
    }

    if(pid) {
        close(to_my_program_pipe[1]);
        close(to_ext_program_pipe[0]);

        char string_to_write[] = "this is the string to write";
        write(to_ext_program_pipe[1], string_to_write, strlen(string_to_write) + 1);
        close(to_ext_program_pipe[1]);

        wait(&rv);
        if(rv != 0) {
            fprintf(stderr, "%s %d\n", "phantomjs exit status ", rv);
            exit(1);
        }

        char *string_to_read;
        char ch[1];
        size_t len = 0;
        string_to_read = malloc(sizeof(char));
        if(!string_to_read) {
            fprintf(stderr, "%s\n", "Error while allocating memory");
            exit(1);
        }
        while(read(to_my_program_pipe[0], ch, 1) == 1) {
            string_to_read[len]=ch[0];
            len++;
            string_to_read = realloc(string_to_read, len*sizeof(char));
            if(!string_to_read) {
                fprintf(stderr, "%s\n", "Error while allocating memory");
            }
            string_to_read[len] = '\0';
        }
        close(to_my_program_pipe[0]);

        printf("Output: %s\n", string_to_read);
        free(string_to_read);
    } else {
        close(to_ext_program_pipe[1]);
        close(to_my_program_pipe[0]);

        dup2(to_ext_program_pipe[0],0);
        dup2(to_my_program_pipe[1],1);

        if(execlp("ext_program", "ext_program", "/dev/stdin" , NULL) == -1) {
            fprintf(stderr,"execlp Error!");
            exit(1);
        }
        close(to_ext_program_pipe[0]);
        close(to_my_program_pipe[1]);
    }
    return 0;
}
It is not working.
EDIT
I don't get the ext_program output that should be saved in string_to_read. The program just hangs. I can see that ext_program is executed, but I don't get anything back.
I would like to know if there is an error, or if what I want cannot be done. Also I know that the alternative is to use named pipes.
EDIT 2: more details
As I still cannot get my program working, I am posting the complete code:
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/wait.h>

int main() {
    pid_t pid;
    int rv;
    int to_phantomjs_pipe[2];
    int to_my_program_pipe[2];

    if(pipe(to_phantomjs_pipe)) {
        fprintf(stderr,"Pipe error!\n");
        exit(1);
    }
    if(pipe(to_my_program_pipe)) {
        fprintf(stderr,"Pipe error!\n");
        exit(1);
    }
    if( (pid=fork()) == -1) {
        fprintf(stderr,"Fork error. Exiting.\n");
        exit(1);
    }

    if(pid) {
        close(to_my_program_pipe[1]);
        close(to_phantomjs_pipe[0]);

        char jsToExectue[] = "var page=require(\'webpage\').create();page.onInitialized=function(){page.evaluate(function(){delete window._phantom;delete window.callPhantom;});};page.onResourceRequested=function(requestData,request){if((/http:\\/\\/.+\?\\\\.css/gi).test(requestData[\'url\'])||requestData.headers[\'Content-Type\']==\'text/css\'){request.abort();}};page.settings.loadImage=false;page.settings.userAgent=\'Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2228.0 Safari/537.36\';page.open(\'https://stackoverflow.com\',function(status){if(status!==\'success\'){phantom.exit(1);}else{console.log(page.content);phantom.exit();}});";
        write(to_phantomjs_pipe[1], jsToExectue, strlen(jsToExectue) + 1);
        close(to_phantomjs_pipe[1]);

        int read_chars;
        int BUFF=1024;
        char *str;
        char ch[BUFF];
        size_t len = 0;
        str = malloc(sizeof(char));
        if(!str) {
            fprintf(stderr, "%s\n", "Error while allocating memory");
            exit(1);
        }
        str[0] = '\0';
        while( (read_chars = read(to_my_program_pipe[0], ch, BUFF)) > 0)
        {
            len += read_chars;
            str = realloc(str, (len + 1)*sizeof(char));
            if(!str) {
                fprintf(stderr, "%s\n", "Error while allocating memory");
            }
            strcat(str, ch);
            str[len] = '\0';
            memset(ch, '\0', BUFF*sizeof(ch[0]));
        }
        close(to_my_program_pipe[0]);

        printf("%s\n", str);
        free(str);

        wait(&rv);
        if(rv != 0) {
            fprintf(stderr, "%s %d\n", "phantomjs exit status ", rv);
            exit(1);
        }
    } else {
        dup2(to_phantomjs_pipe[0],0);
        dup2(to_my_program_pipe[1],1);

        close(to_phantomjs_pipe[1]);
        close(to_my_program_pipe[0]);
        close(to_phantomjs_pipe[0]);
        close(to_my_program_pipe[1]);

        execlp("phantomjs", "phantomjs", "--ssl-protocol=TLSv1", "/dev/stdin" , (char *)NULL);
    }
    return 0;
}
What I am trying to do is pass phantomjs a script to execute through a pipe and then read the resulting HTML as a string. I modified the code as suggested, but phantomjs still does not read from stdin.
I tested the script string by creating a dumb program that writes it to a file, then ran phantomjs on that file normally, and that works.
I also tried executing execlp("phantomjs", "phantomjs", "--ssl-protocol=TLSv1", "path_to_script_file" , (char *)NULL); and that works too; the output HTML is shown.
It does not work when using the pipe.
An Explanation At Last
Some experimentation with PhantomJS shows that the problem is writing a null byte at the end of the JavaScript program sent to PhantomJS.
This highlights two bugs:
The program in the question sends an unnecessary null byte.
PhantomJS 2.1.1 (on a Mac running macOS High Sierra 10.13.3) hangs when an otherwise valid program is followed by a null byte.
The code in the question contains:
write(to_phantomjs_pipe[1], jsToExectue, strlen(jsToExectue) + 1);
The + 1 means that the null byte terminating the string is also written to phantomjs. And writing that null byte causes phantomjs to hang. That is tantamount to a bug — it certainly isn't clear why PhantomJS hangs without detecting EOF (there is no more data to come), and without giving an error, etc.
Change that line to:
write(to_phantomjs_pipe[1], jsToExectue, strlen(jsToExectue));
and the code works as expected — at least with PhantomJS 2.1.1 on a Mac running macOS High Sierra 10.13.3.
Initial analysis
You aren't closing enough file descriptors in the child.
Rule of thumb: If you dup2() one end of a pipe to standard input or standard output, close both of the original file descriptors returned by pipe() as soon as possible. In particular, you should close them before using any of the exec*() family of functions. The rule also applies if you duplicate the descriptors with either dup() or fcntl() with F_DUPFD.
The child code shown is:
} else {
    close(to_ext_program_pipe[1]);
    close(to_my_program_pipe[0]);

    dup2(to_ext_program_pipe[0],0);
    dup2(to_my_program_pipe[1],1);

    if(execlp("ext_program", "ext_program", "/dev/stdin" , NULL) == -1) {
        fprintf(stderr,"execlp Error!");
        exit(1);
    }
    close(to_ext_program_pipe[0]);
    close(to_my_program_pipe[1]);
}
The last two close() statements are never executed; they need to appear before the execlp().
What you need is:
} else {
    dup2(to_ext_program_pipe[0], 0);
    dup2(to_my_program_pipe[1], 1);
    close(to_ext_program_pipe[0]);
    close(to_ext_program_pipe[1]);
    close(to_my_program_pipe[0]);
    close(to_my_program_pipe[1]);
    execlp("ext_program", "ext_program", "/dev/stdin", NULL);
    fprintf(stderr, "execlp Error!\n");
    exit(1);
}
You can resequence it splitting the close() calls, but it is probably better to regroup them as shown.
Note that there is no need to test whether execlp() failed. If it returns, it failed. If it succeeds, it does not return.
There could be another problem. The parent process waits for the child to exit before reading anything from the child. However, if the child tries to write more data than will fit in the pipe, the process will hang, waiting for some process (which will have to be the parent) to read the pipe. Since they're both waiting for the other to do something before they will do what the other is waiting for, it is (or, at least, could be) a deadlock.
You should also revise the parent process to do the reading before the waiting.
if (pid) {
    close(to_my_program_pipe[1]);
    close(to_ext_program_pipe[0]);

    char string_to_write[] = "this is the string to write";
    write(to_ext_program_pipe[1], string_to_write, strlen(string_to_write) + 1);
    close(to_ext_program_pipe[1]);

    char *string_to_read;
    char ch[1];
    size_t len = 0;
    string_to_read = malloc(sizeof(char));
    if (!string_to_read) {
        fprintf(stderr, "%s\n", "Error while allocating memory");
        exit(1);
    }
    while (read(to_my_program_pipe[0], ch, 1) == 1) {
        string_to_read[len] = ch[0];
        len++;
        string_to_read = realloc(string_to_read, len*sizeof(char));
        if (!string_to_read) {
            fprintf(stderr, "%s\n", "Error while allocating memory\n");
            exit(1);
        }
        string_to_read[len] = '\0';
    }
    close(to_my_program_pipe[0]);

    printf("Output: %s\n", string_to_read);
    free(string_to_read);

    wait(&rv);
    if (rv != 0) {
        fprintf(stderr, "%s %d\n", "phantomjs exit status ", rv);
        exit(1);
    }
} …
I'd also rewrite the code to read in big chunks (1024 bytes or more). Just don't copy more data than the read returns, that's all. Repeatedly using realloc() to allocate one more byte to the buffer is ultimately excruciatingly slow. It won't matter much if there's only a few bytes of data; it will matter if there are kilobytes or more data to process.
Later: Since the PhantomJS program generates over 90 KiB of data in response to the message it was sent, this was a factor in the problems — or would have been were it not for the hang-on-null-byte bug in PhantomJS.
Still having problems 2018-02-03
I extracted the code, as amended, into a program (pipe89.c, compiled to pipe89). I got inconsistent crashes when the space allocated changed. I eventually realized that you're reallocating one byte too little space — it took a lot longer than it should have done (but it would help if Valgrind was available for macOS High Sierra — it isn't yet).
Here's the fixed code, with the debugging information commented out:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid;
    int rv;
    int to_ext_program_pipe[2];
    int to_my_program_pipe[2];

    if (pipe(to_ext_program_pipe))
    {
        fprintf(stderr, "Pipe error!\n");
        exit(1);
    }
    if (pipe(to_my_program_pipe))
    {
        fprintf(stderr, "Pipe error!\n");
        exit(1);
    }
    if ((pid = fork()) == -1)
    {
        fprintf(stderr, "Fork error. Exiting.\n");
        exit(1);
    }

    if (pid)
    {
        close(to_my_program_pipe[1]);
        close(to_ext_program_pipe[0]);

        char string_to_write[] = "this is the string to write";
        write(to_ext_program_pipe[1], string_to_write, sizeof(string_to_write) - 1);
        close(to_ext_program_pipe[1]);

        char ch[1];
        size_t len = 0;
        char *string_to_read = malloc(sizeof(char));
        if (string_to_read == 0)
        {
            fprintf(stderr, "%s\n", "Error while allocating memory");
            exit(1);
        }
        string_to_read[len] = '\0';
        while (read(to_my_program_pipe[0], ch, 1) == 1)
        {
            //fprintf(stderr, "%3zu: got %3d [%c]\n", len, ch[0], ch[0]); fflush(stderr);
            string_to_read[len++] = ch[0];
            char *new_space = realloc(string_to_read, len + 1); // KEY CHANGE is " + 1"
            //if (new_space != string_to_read)
            //    fprintf(stderr, "Move: len %zu old %p vs new %p\n", len, (void *)string_to_read, (void *)new_space);
            if (new_space == 0)
            {
                fprintf(stderr, "Error while allocating %zu bytes memory\n", len);
                exit(1);
            }
            string_to_read = new_space;
            string_to_read[len] = '\0';
        }
        close(to_my_program_pipe[0]);

        printf("Output: %zu (%zu) [%s]\n", len, strlen(string_to_read), string_to_read);
        free(string_to_read);

        wait(&rv);
        if (rv != 0)
        {
            fprintf(stderr, "%s %d\n", "phantomjs exit status ", rv);
            exit(1);
        }
    }
    else
    {
        dup2(to_ext_program_pipe[0], 0);
        dup2(to_my_program_pipe[1], 1);
        close(to_ext_program_pipe[0]);
        close(to_ext_program_pipe[1]);
        close(to_my_program_pipe[0]);
        close(to_my_program_pipe[1]);
        execlp("ext_program", "ext_program", "/dev/stdin", NULL);
        fprintf(stderr, "execlp Error!\n");
        exit(1);
    }
    return 0;
}
It was tested on a program which wrote 5590 bytes out for 27 bytes of input. That isn't as massive a multiplier as in your program, but it proves a point.
I still think you'd do better not reallocating a single extra byte at a time — the scanning loop should use a buffer of, say, 1 KiB and read up to 1 KiB at a time, and allocate the extra space all at once. That's a much less intensive workout for the memory allocation system.
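For concreteness, a reading loop along those lines might look like the following. This is only a sketch, not code from the answer above; it reads from standard input purely to stay self-contained, whereas in the programs above the descriptor would be to_my_program_pipe[0]:
/* Sketch: read in 1 KiB chunks and grow the buffer by whole chunks,
 * instead of one byte per realloc(). */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    char chunk[1024];
    char *str = malloc(1);
    size_t len = 0;
    ssize_t n;

    if (str == NULL) {
        fprintf(stderr, "Out of memory\n");
        return 1;
    }
    str[0] = '\0';

    while ((n = read(STDIN_FILENO, chunk, sizeof(chunk))) > 0) {
        char *new_space = realloc(str, len + (size_t)n + 1);
        if (new_space == NULL) {
            fprintf(stderr, "Out of memory\n");
            free(str);
            return 1;
        }
        str = new_space;
        memcpy(str + len, chunk, (size_t)n);   /* copy only what read() returned */
        len += (size_t)n;
        str[len] = '\0';
    }

    printf("read %zu bytes\n", len);
    free(str);
    return 0;
}
Copying with memcpy() and the byte count returned by read() also sidesteps the strcat() problem in the Edit 2 code, where the chunk is treated as a string even though read() does not null-terminate it.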
Problems continuing on 2018-02-05
Taking the code from the Edit 2 and changing only the function definition from int main() { to int main(void) { (because the compilation options I use don't allow old-style non-prototype function declarations or definitions, and without the void, that is not a prototype), the code is
working fine for me. I created a surrogate phantomjs program (from another I already have lying around), like this:
#include <stdio.h>

int main(int argc, char **argv, char **envp)
{
    for (int i = 0; i < argc; i++)
        printf("argv[%d] = <<%s>>\n", i, argv[i]);
    for (int i = 0; envp[i] != 0; i++)
        printf("envp[%d] = <<%s>>\n", i, envp[i]);

    FILE *fp = fopen(argv[argc - 1], "r");
    if (fp != 0)
    {
        int c;
        while ((c = getc(fp)) != EOF)
            putchar(c);
        fclose(fp);
    }
    else
        fprintf(stderr, "%s: failed to open file %s for reading\n",
                argv[0], argv[argc-1]);
    return(0);
}
This code echoes the argument list, the environment, and then opens the file named as the last argument and copies that to standard output. (It is highly specialized because of the special treatment for argv[argc-1], but the code before that is occasionally useful for debugging complex shell scripts.)
When I run your program with this 'phantomjs', I get the output I'd expect:
argv[0] = <<phantomjs>>
argv[1] = <<--ssl-protocol=TLSv1>>
argv[2] = <</dev/stdin>>
envp[0] = <<MANPATH=/Users/jleffler/man:/Users/jleffler/share/man:/Users/jleffler/oss/share/man:/Users/jleffler/oss/rcs/man:/usr/local/mysql/man:/opt/gcc/v7.3.0/share/man:/Users/jleffler/perl/v5.24.0/man:/usr/local/man:/usr/local/share/man:/usr/share/man:/opt/gnu/share/man>>
envp[1] = <<IXH=/opt/informix/12.10.FC6/etc/sqlhosts>>
…
envp[49] = <<HISTFILE=/Users/jleffler/.bash.jleffler>>
envp[50] = <<_=./pipe31>>
var page=require('webpage').create();page.onInitialized=function(){page.evaluate(function(){delete window._phantom;delete window.callPhantom;});};page.onResourceRequested=function(requestData,request){if((/http:\/\/.+?\\.css/gi).test(requestData['url'])||requestData.headers['Content-Type']=='text/css'){request.abort();}};page.settings.loadImage=false;page.settings.userAgent='Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2228.0 Safari/537.36';page.open('https://stackoverflow.com',function(status){if(status!=='success'){phantom.exit(1);}else{console.log(page.content);phantom.exit();}});
At this point, I have to point the finger at phantomjs in your environment; it doesn't seem to behave as expected when you do the equivalent of:
echo "$JS_PROG" | phantomjs /dev/stdin | cat
Certainly, I cannot reproduce your problem any more.
You should take my surrogate phantomjs code and use that instead of the real phantomjs and see what you get.
If you get output analogous to what I showed, then the problem is with the real phantomjs.
If you don't get output analogous to what I showed, then maybe there is a problem with your code from the update to the question.
Later: Note that because the printf() uses %s to print the data, it would not notice the extraneous null byte being sent to the child.
In the pipe(7) man page it is written that you should read from the pipe as soon as possible:
If a process attempts to write to a full pipe (see below), then write(2) blocks until sufficient data has been read from the pipe to allow the write to complete. Nonblocking I/O is possible by using the fcntl(2) F_SETFL operation to enable the O_NONBLOCK open file status flag.
and
A pipe has a limited capacity. If the pipe is full, then a write(2) will block or fail, depending on whether the O_NONBLOCK flag is set (see below). Different implementations have different limits for the pipe capacity. Applications should not rely on a particular capacity: an application should be designed so that a reading process consumes data as soon as it is available, so that a writing process does not remain blocked.
In your code you write, wait, and only then read:
write(to_ext_program_pipe[1], string_to_write, strlen(string_to_write) + 1);
close(to_ext_program_pipe[1]);
wait(&rv);
//...
while(read(to_my_program_pipe[0], ch, 1) == 1) {
//...
Maybe the pipe is full, or ext_program is waiting for its output to be read; you should wait() only after the read.

Sending special characters to pseudo-terminal?

I'm trying to write a C program that creates a pseudo-terminal running a new bash instance, and records all the input and output that goes through it. The eventual goal would be to asynchronously send this to a server, where somebody else could view your terminal activity in real time.
I've completed the pseudo-terminal creation step, and I can start a new bash instance and log "most" of the input and output. My issue right now is that the pseudo-terminal isn't properly recognizing arrow keys. They get printed to the screen as literal escape sequences (^[[A, ^[[B, ^[[C, ^[[D) instead of moving the cursor around the command line.
Here's the slave portion of the pty, which will run bash:
if(pid == 0){ //child
    struct termios term_settings;

    close(ptyfds.master);

    rc = tcgetattr(ptyfds.slave, &term_settings);
    cfmakeraw(&term_settings);
    tcsetattr(ptyfds.slave, TCSANOW, &term_settings);

    //replace stdin,out,err with the slave filedesc
    close(0);
    close(1);
    close(2);
    dup(ptyfds.slave);
    dup(ptyfds.slave);
    dup(ptyfds.slave);

    //We can close original fd and use 0,1,2
    close(ptyfds.slave);

    //Make this process the session lead
    setsid();

    //Slave side of PTY becomes the new controlling terminal
    ioctl(0, TIOCSCTTY, 1);

    char ** child_argv = (char **) malloc(argc * sizeof(char*));
    int i;
    for(i=1; i<argc; i++){
        child_argv[i-1] = strdup(argv[i]); //could be bash, bc, python
    }
    child_argv[i-1] = NULL;

    rc = execvp(child_argv[0], child_argv);
}
And here's the master side of the pty, sending input to the slave and capturing its output.
if(pid != 0){ //parent
    fd_set fd_in;

    close(ptyfds.slave);

    FILE *logFile = fopen("./log", "w");

    while(1){
        //Add stdin and master fd to object
        FD_ZERO(&fd_in);
        FD_SET(0,&fd_in);
        FD_SET(ptyfds.master, &fd_in);

        //intercept data from stdin or from slave out (which is redirected to master)
        rc = select(ptyfds.master+1, &fd_in, NULL,NULL,NULL);
        switch(rc){
        case -1:
            fprintf(stderr, "Error %d on select()\n", errno);
            exit(1);
        default:
            if (FD_ISSET(0, &fd_in)){ //There's data on stdin
                rc = read(0, input, sizeof(input));
                if(rc > 0){
                    input[rc] = '\0';
                    write(ptyfds.master, input, rc);//send to master -> slave
                    fputs(input, logFile);
                }
                else if(rc < 0){
                    fprintf(stderr, "Error %d on stdin\n", errno);
                    exit(1);
                }
            }
            if(FD_ISSET(ptyfds.master, &fd_in)){ //There's data from slave
                rc = read(ptyfds.master, input, sizeof(input)-1);
                if(rc > 0){
                    input[rc] = '\0';
                    write(1, input, rc);//send to stdout
                    fputs(input, logFile);
                }
                else if (rc < 0){
                    fprintf(stderr, "Error %d on read master pty\n", errno);
                    exit(1);
                }
            }
        }//switch
    }//while
}//end parent
I've tried messing around with the termios flags here, but there are none that specify arrow keys.
What do I need to do?
Much of this code came from here.
I think there was a mistake in the example program.
I was able to fix it by moving:
rc = tcgetattr(ptyfds.slave, &term_settings);
cfmakeraw(&term_settings);
tcsetattr(ptyfds.slave, TCSANOW, &term_settings);
into the master section and replacing ptyfds.slave with STDIN_FILENO.
(This sets stdin to raw mode, rather than the slave.)
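A minimal sketch of that change (the helper names stdin_raw_mode/stdin_restore are illustrative, not from the answer): the parent would call stdin_raw_mode() once before entering its select() loop and stdin_restore() just before exiting, so arrow-key escape sequences reach the slave's bash untouched instead of being echoed back literally:
/* Sketch, meant for the master (parent) side of the program above. */
#include <termios.h>
#include <unistd.h>

static struct termios orig_settings;    /* saved so the terminal can be restored on exit */

static void stdin_raw_mode(void)
{
    struct termios raw;
    tcgetattr(STDIN_FILENO, &orig_settings);
    raw = orig_settings;
    cfmakeraw(&raw);                    /* no echo, no line editing on the real terminal */
    tcsetattr(STDIN_FILENO, TCSANOW, &raw);
}

static void stdin_restore(void)
{
    tcsetattr(STDIN_FILENO, TCSANOW, &orig_settings);
}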

How to write and read at the same file using "popen" in C

I'm using Intel Edison and SensorTag. In order to get temperature data via BLE, there are a bunch of commands. When I define popen as:
popen(command,"w");
The code works fine most of the time. (It crashes other times, due to delay issues I assume, as I don't control the responses.)
However, when I want to react to the command/console responses (such as stepping to the next line once the Bluetooth connection is established, and trying to connect again if it isn't), I cannot read the responses. My "data" variable is not changed.
I also tried other modes of "popen" but they give run-time errors.
Here is the code I'm using:
#include <stdlib.h>
#include <stdio.h>
#include <string.h>   // for strlen()/strcmp()
#include <unistd.h>

int endsWith (char* base, char* str) {
    int blen = strlen(base);
    int slen = strlen(str);
    return (blen >= slen) && (0 == strcmp(base + blen - slen, str));
}

FILE* get_popen(char* command, int close, int block) {
    FILE *pf;
    char data[512];

    // Setup our pipe for reading and execute our command.
    pf = popen(command,"w");
    // Error handling

    if (block == 1) {
        // Get the data from the process execution
        char* result;
        do {
            result = fgets(data, 512, stderr);
            if (result != NULL) {
                printf("Data is [%s]\n", data);
            }
        } while (result != NULL);
        // the data is now in 'data'
    }

    if (close != 0) {
        if (pclose(pf) != 0)
            fprintf(stderr," Error: Failed to close command stream \n");
    }
    return pf;
}

FILE* command_cont_exe(FILE* pf, char* command, int close, int block) {
    char data[512];

    // Error handling
    if (pf == NULL) {
        // print error
        return NULL;
    }

    fwrite(command, 1, strlen(command), pf);
    fwrite("\r\n", 1, 2, pf);

    if (block == 1) {
        // Get the data from the process execution
        char* result;
        do {
            result = fgets(data, 512, stderr);
            if (result != NULL) {
                printf("Data is [%s]\n", data);
            }
        } while (result != NULL);
    }
    // the data is now in 'data'

    if (close != 0) {
        if (pclose(pf) != 0)
            fprintf(stderr," Error: Failed to close command stream \n");
    }
    return pf;
}

int main()
{
    char command[50];

    sprintf(command, "rfkill unblock bluetooth");
    get_popen(command, 1, 0);
    printf("Working...(rfkill)\n");
    sleep(2);

    sprintf(command, "bluetoothctl 2>&1");
    FILE* pf = get_popen(command, 0, 1);
    printf("Working...(BT CTRL)\n");
    sleep(3);

    sprintf(command, "agent KeyboardDisplay");
    command_cont_exe(pf, command, 0, 1);
    printf("Working...(Agent)\n");
    sleep(3);
    //Main continues...
You cannot do this with popen, but you can build it using fork, exec and pipe. The last of these opens two related file descriptors: one end for the parent to use and one for the child. To make a two-way connection to a child process, you must use two calls to pipe.
The file-descriptors opened by pipe are not buffered, so you would use read and write to communicate with the child (rather than fgets and fprintf).
For examples and discussion, see
Does one end of a pipe have both read and write fd?
Read / Write through a pipe in C
UNIX pipe() : Example Programs
pipe(7) - Linux man page
6.2.2 Creating Pipes in C
Unfortunately, you can use popen() in one direction only. To get bidirectional communication, you need to create two anonymous pipes with pipe(), one for stdin and one for stdout, and assign them to file descriptors 0 and 1 with dup2().
See http://tldp.org/LDP/lpg/node11.html for more details.
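To make the two-pipe approach concrete, here is a minimal self-contained sketch (not taken from either answer; bc is used as a stand-in child command so the example can actually be run, since bluetoothctl is interactive):
/* Sketch: bidirectional "popen"-style setup with two pipes, fork(),
 * dup2() and exec. The child command "bc" is only a stand-in. */
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int to_child[2], from_child[2];
    if (pipe(to_child) < 0 || pipe(from_child) < 0) {
        perror("pipe");
        return 1;
    }

    pid_t pid = fork();
    if (pid < 0) {
        perror("fork");
        return 1;
    }

    if (pid == 0) {                       /* child: wire the pipes to stdin/stdout */
        dup2(to_child[0], STDIN_FILENO);
        dup2(from_child[1], STDOUT_FILENO);
        close(to_child[0]);  close(to_child[1]);
        close(from_child[0]); close(from_child[1]);
        execlp("bc", "bc", (char *)NULL);
        perror("execlp");
        _exit(1);
    }

    close(to_child[0]);                   /* parent keeps only the ends it uses */
    close(from_child[1]);

    const char *cmd = "2+3\n";
    write(to_child[1], cmd, strlen(cmd));
    close(to_child[1]);                   /* EOF tells the child there is no more input */

    char buf[256];
    ssize_t n;
    while ((n = read(from_child[0], buf, sizeof(buf) - 1)) > 0) {
        buf[n] = '\0';
        printf("child said: %s", buf);
    }
    close(from_child[0]);
    waitpid(pid, NULL, 0);
    return 0;
}
Note that genuinely interactive tools such as bluetoothctl often detect that stdin is not a terminal and may buffer or behave differently, so in practice a pseudo-terminal (openpty()/forkpty()) may be needed instead of plain pipes.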

read() function not allowing anything to be printed after newline character

I have a function that reads a file from a server and returns the data:
int readMessageFromServer(int fileDescriptor) {
    char buffer[MAXMSG];
    int nOfBytes;

    nOfBytes = read(fileDescriptor, buffer, MAXMSG);
    if(nOfBytes < 0) {
        perror("Could not read data from server\n");
        exit(EXIT_FAILURE);
    }
    else
        if(nOfBytes == 0)
            return(-1);
        else
            printf("Server Message: %s\n", buffer);
    return(0);
}
The problem is with the line
printf("Server Message: %s\n", buffer);
If I change this line to
printf("Server Message: %s\n>", buffer);
It refuses to print the '>' sign until it gets more data.
Is this a known limitation or am I doing something wrong?
I should probably add that the call to this function looks like this:
while(readMessageFromServer(sock) > 0) {continue;};
Besides the fact that you probably wanted to write the > inside the quotes, you'll need to flush the output buffer by calling fflush(stdout). The buffers are usually only flushed after newlines.
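For example (a trivial sketch, with a made-up message):
/* A prompt without a trailing newline stays in stdout's buffer on a
 * line-buffered terminal until it is flushed explicitly. */
#include <stdio.h>

int main(void)
{
    printf("Server Message: %s\n>", "hello");
    fflush(stdout);          /* force the '>' prompt out immediately */
    /* ... then go back to blocking in read() for more data ... */
    return 0;
}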
nOfBytes = read(fileDescriptor, buffer, MAXMSG);
There is no guarantee how many bytes you read or whether they constitute a null terminated string. At a minimum you should change to something like this:
int readMessageFromServer(int fileDescriptor) {
    char buffer[MAXMSG];
    int nOfBytes;

    nOfBytes = read(fileDescriptor, buffer, MAXMSG - 1);
    if(nOfBytes < 0) {
        perror("Could not read data from server\n");
        exit(EXIT_FAILURE);
    }
    else
        if(nOfBytes == 0)
            return(-1);
        else
        {
            buffer[nOfBytes] = '\0';
            printf("Server Message: %s\n", buffer);
            return(0);
        }
}
printf uses stdout, which is buffered.
Changing printf to fprintf(stderr, ...) should solve your problem.

Unix Shell Implementing Cat in C - File Descriptor Issue

I've just about got my practice implementation of a Unix shell done, except I'm having an issue with implementing cat when its output goes to a file, i.e. cat foo.txt > bar.txt, outputting foo's contents to bar.
Let's start with the main function, and then I'll define the submethods:
int main(int argc, char **argv)
{
    printf("[MYSHELL] $ ");
    while (TRUE) {
        user_input = getchar();
        switch (user_input) {
        case EOF:
            exit(-1);
        case '\n':
            printf("[MYSHELL] $ ");
            break;
        default:
            // parse input into cmd_argv - store # commands in cmd_argc
            handle_user_input();
            //determine input and execute foreground/background process
            execute_command();
        }
        background = 0;
    }
    printf("\n[MYSHELL] $ ");
    return 0;
}
handle_user_input just populates the cmd_argv array from the user input; it removes the > and sets an output flag if the user wants to redirect output to a file. This is the meat of that method:
while (buffer_pointer != NULL) {
    cmd_argv[cmd_argc] = buffer_pointer;
    buffer_pointer = strtok(NULL, " ");

    if(strcmp(cmd_argv[cmd_argc], ">") == 0){
        printf("\nThere was a '>' in %s # index: %d for buffer_pointer: %s \n", *cmd_argv,cmd_argc,buffer_pointer);
        cmd_argv[cmd_argc] = strtok(NULL, " ");
        output = 1;
    }
    cmd_argc++;

    if(output){
        filename = buffer_pointer;
        printf("The return of handling input for filename %s = %s + %s \n", buffer_pointer, cmd_argv[0], cmd_argv[1]);
        return;
    }
}
execute_command is then called, interpreting the now-populated cmd_argv, just to give you an idea of the big picture. For cat, none of these cases match and the create_process method is called:
int execute_command()
{
    if (strcmp("pwd", cmd_argv[0]) == 0){
        printf("%s\n",getenv("PATH"));
        return 1;
    }
    else if(strcmp("cd", cmd_argv[0]) == 0){
        change_directory();
        return 1;
    }
    else if (strcmp("jobs", cmd_argv[0]) == 0){
        display_job_list();
        return 1;
    }
    else if (strcmp("kill", cmd_argv[0]) == 0){
        kill_job();
    }
    else if (strcmp("EOT", cmd_argv[0]) == 0){
        exit(1);
    }
    else if (strcmp("exit", cmd_argv[0]) == 0){
        exit(-1);
    }
    else{
        create_process();
        return;
    }
}
Pretty straightforward, right?
create_process is where I'm having issues.
void create_process()
{
    status = 0;
    int pid = fork();
    background = 0;

    if (pid == 0) {
        // child process
        if(output){
            printf("Output set in create process to %d\n",output);
            output = 0;
            int output_fd = open(filename, O_RDONLY);
            printf("Output desc = %d\n",output_fd);
            if (output_fd > -1) {
                dup2(output_fd, STDOUT_FILENO);
                close(output_fd);
            } else {
                perror("open");
            }
        }
        printf("Executing command, but STDOUT writing to COMMAND PROMPT instead of FILE - as I get the 'open' error above \n");
        execvp(*cmd_argv,cmd_argv);
        // If an error occurs, print error and exit
        fprintf (stderr, "unknown command: %s\n", cmd_argv[0]);
        exit(0);
    } else {
        // parent process, waiting on child process
        waitpid(pid, &status, 0);
        if (status != 0)
            fprintf (stderr, "error: %s exited with status code %d\n", cmd_argv[0], status);
    }
    return;
}
My printed output_fd is -1, and I get the perror("open") inside the else, stating: open: No such file or directory. It then prints that it's "writing to COMMAND PROMPT instead of FILE", as I display to the console, and then executes execvp, which handles cat foo.txt but prints the output to the console instead of the file.
I realize it shouldn't work at this point, as output_fd = -1 isn't usable and open should be returning another value; but I can't figure out how to use file descriptors correctly in order to open a new or existing file with cat foo.txt > bar.txt and write to it, as well as get back to the command line's stdin.
I have managed to output to the file, but then I lose getting back the correct stdin. Could someone please direct me here? I feel like I'm going in circles over something silly I'm doing wrong or overlooking.
Any help is greatly GREATLY appreciated.
Why do you use O_RDONLY if you want to write to the file? My guess is that you should use something like:
int output_fd = open(filename, O_WRONLY|O_CREAT, 0666);
(The 0666 is to set up the access rights when creating).
And obviously, if you can't open the redirected file, you shouldn't launch the command.
First, the obvious thing I notice is that you've opened the file O_RDONLY. That's not going to work so well for output!
Second, the basic process for redirecting the output is:
open file for writing
dup stdout so you can keep a copy if needed. same with stderr if redirecting.
fcntl your duplicate to CLOEXEC (alternatively, use dup3)
dup2 file to stdout
exec the command
And finally, are you really passing around command names as global variables? I think this will come back to haunt you once you try to implement cat foo | ( cat bar; echo hi; cat ) > baz or some such.
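Putting those steps together, here is a minimal self-contained sketch (not from the answer; foo.txt and bar.txt are illustrative names, and O_TRUNC is added so that > truncates as a shell would). Because the redirection happens in the forked child, the parent shell's own stdin/stdout are untouched, which also answers the "getting back to the command line" concern; saving a duplicate of stdout and marking it CLOEXEC is mainly needed if you redirect inside the shell process itself (for example, for built-in commands):
/* Sketch: run "cat foo.txt" with stdout redirected to bar.txt. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();
    if (pid < 0) {
        perror("fork");
        return 1;
    }

    if (pid == 0) {                                /* child */
        int out = open("bar.txt", O_WRONLY | O_CREAT | O_TRUNC, 0666);
        if (out < 0) {
            perror("open bar.txt");
            _exit(1);                              /* don't run the command if the redirect failed */
        }
        dup2(out, STDOUT_FILENO);                  /* stdout now points at bar.txt */
        close(out);                                /* the duplicate on fd 1 is enough */
        execlp("cat", "cat", "foo.txt", (char *)NULL);
        perror("execlp");                          /* only reached if exec failed */
        _exit(1);
    }

    waitpid(pid, NULL, 0);                         /* parent's own stdin/stdout are untouched */
    return 0;
}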
