pppd popen hanging in C

I am launching pppd using popen in my program to make obtaining the IP address and interface name a little bit easier. My code runs fine independently and is a pretty typical implementation. The problem begins when it runs in the full program (too large to post)... the loop seems to hang for quite a while at the fgets() line. The popen is launched in its own thread that is then managed based on the output.
The popen/pppd code is essentially the following.
int main(void){
    pthread_create(&thread, NULL, pppd, (char *)NULL);
    pthread_join(thread, NULL);
    return 0;
}

void *pppd(char *args){
    char* command = malloc(32);
    sprintf(command, "pppd %s call %s", dev, provider);
    pppd_stream = popen(command, "r");
    if(pppd_stream == NULL){
        pppd_terminated = TRUE;
        return;
    }
    free(command);
    while(fgets(buffer, 128, d->pppd_stream) != NULL){
        //handle_output
    }
}
CPU usage isn't a problem; the system and the other parts of the program are still responsive and running as expected.
Any thoughts on what could be causing this slowdown?

Ensure that your command is a null-terminated string:
#define COMMAND_BUFFER_SIZE 256 /* Modify this if you need */
snprintf(command, COMMAND_BUFFER_SIZE, "pppd %s call %s", dev, provider);
command[COMMAND_BUFFER_SIZE - 1] = '\0';
pppd_stream = popen(command, "r");
EDIT:
Check your fgets:
while(fgets(buffer, 128, d->pppd_stream) != NULL){
You may want this:
while(fgets(buffer, 128, pppd_stream) != NULL){

Related

Linux popen and system can not be used in Multithreading?

I want to use popen or system in multithreading to execute my program, pass a parameter to the program, and get its output. But I found that calling it from multiple threads takes the same (very long) time as a single-threaded call. Is there any good way?
void * _multi_threaded_call(void *arg){
    char *param = (char *)arg;
    call_my_program(param);
    return NULL;
}

//buff is the parameter that my program takes
if(pthread_create(&tid[thread_id], NULL, &_multi_threaded_call, &buff) == -1){
    printf("fail to create pthread");
    exit(1);
}
if(pthread_join(tid[thread_id] , NULL) == -1){
    printf("fail to join pthread");
    exit(1);
}
Finally, I use popen, but I can't stand the long wait. Do you have any good ways? Thanks :)
FILE *fp = popen(cmd, "r");

C - How to pipe to a program that reads only from a file

I want to pipe a string to a program that reads input only from a file, not from stdin. From bash, I can do something like
echo "hi" | program /dev/stdin
and I wanted to replicate this behaviour from C code. What I did is this:
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <string.h>

int main() {
    pid_t pid;
    int rv;
    int to_ext_program_pipe[2];
    int to_my_program_pipe[2];

    if(pipe(to_ext_program_pipe)) {
        fprintf(stderr,"Pipe error!\n");
        exit(1);
    }
    if(pipe(to_my_program_pipe)) {
        fprintf(stderr,"Pipe error!\n");
        exit(1);
    }
    if( (pid=fork()) == -1) {
        fprintf(stderr,"Fork error. Exiting.\n");
        exit(1);
    }

    if(pid) {
        close(to_my_program_pipe[1]);
        close(to_ext_program_pipe[0]);

        char string_to_write[] = "this is the string to write";
        write(to_ext_program_pipe[1], string_to_write, strlen(string_to_write) + 1);
        close(to_ext_program_pipe[1]);

        wait(&rv);
        if(rv != 0) {
            fprintf(stderr, "%s %d\n", "phantomjs exit status ", rv);
            exit(1);
        }

        char *string_to_read;
        char ch[1];
        size_t len = 0;
        string_to_read = malloc(sizeof(char));
        if(!string_to_read) {
            fprintf(stderr, "%s\n", "Error while allocating memory");
            exit(1);
        }
        while(read(to_my_program_pipe[0], ch, 1) == 1) {
            string_to_read[len]=ch[0];
            len++;
            string_to_read = realloc(string_to_read, len*sizeof(char));
            if(!string_to_read) {
                fprintf(stderr, "%s\n", "Error while allocating memory");
            }
            string_to_read[len] = '\0';
        }
        close(to_my_program_pipe[0]);

        printf("Output: %s\n", string_to_read);
        free(string_to_read);
    } else {
        close(to_ext_program_pipe[1]);
        close(to_my_program_pipe[0]);

        dup2(to_ext_program_pipe[0],0);
        dup2(to_my_program_pipe[1],1);

        if(execlp("ext_program", "ext_program", "/dev/stdin" , NULL) == -1) {
            fprintf(stderr,"execlp Error!");
            exit(1);
        }
        close(to_ext_program_pipe[0]);
        close(to_my_program_pipe[1]);
    }
    return 0;
}
It is not working.
EDIT
I don't get the ext_program output that should be saved in string_to_read. The program just hangs. I can see that ext_program is executed, but I don't get anything back.
I would like to know if there is an error, or if what I want cannot be done. I also know that the alternative is to use named pipes.
EDIT 2: more details
As I still can not get my program working, I post the complete code
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/wait.h>

int main() {
    pid_t pid;
    int rv;
    int to_phantomjs_pipe[2];
    int to_my_program_pipe[2];

    if(pipe(to_phantomjs_pipe)) {
        fprintf(stderr,"Pipe error!\n");
        exit(1);
    }
    if(pipe(to_my_program_pipe)) {
        fprintf(stderr,"Pipe error!\n");
        exit(1);
    }
    if( (pid=fork()) == -1) {
        fprintf(stderr,"Fork error. Exiting.\n");
        exit(1);
    }

    if(pid) {
        close(to_my_program_pipe[1]);
        close(to_phantomjs_pipe[0]);
char jsToExectue[] = "var page=require(\'webpage\').create();page.onInitialized=function(){page.evaluate(function(){delete window._phantom;delete window.callPhantom;});};page.onResourceRequested=function(requestData,request){if((/http:\\/\\/.+\?\\\\.css/gi).test(requestData[\'url\'])||requestData.headers[\'Content-Type\']==\'text/css\'){request.abort();}};page.settings.loadImage=false;page.settings.userAgent=\'Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2228.0 Safari/537.36\';page.open(\'https://stackoverflow.com\',function(status){if(status!==\'success\'){phantom.exit(1);}else{console.log(page.content);phantom.exit();}});";
        write(to_phantomjs_pipe[1], jsToExectue, strlen(jsToExectue) + 1);
        close(to_phantomjs_pipe[1]);

        int read_chars;
        int BUFF=1024;
        char *str;
        char ch[BUFF];
        size_t len = 0;

        str = malloc(sizeof(char));
        if(!str) {
            fprintf(stderr, "%s\n", "Error while allocating memory");
            exit(1);
        }
        str[0] = '\0';
        while( (read_chars = read(to_my_program_pipe[0], ch, BUFF)) > 0)
        {
            len += read_chars;
            str = realloc(str, (len + 1)*sizeof(char));
            if(!str) {
                fprintf(stderr, "%s\n", "Error while allocating memory");
            }
            strcat(str, ch);
            str[len] = '\0';
            memset(ch, '\0', BUFF*sizeof(ch[0]));
        }
        close(to_my_program_pipe[0]);

        printf("%s\n", str);
        free(str);

        wait(&rv);
        if(rv != 0) {
            fprintf(stderr, "%s %d\n", "phantomjs exit status ", rv);
            exit(1);
        }
    } else {
        dup2(to_phantomjs_pipe[0],0);
        dup2(to_my_program_pipe[1],1);

        close(to_phantomjs_pipe[1]);
        close(to_my_program_pipe[0]);
        close(to_phantomjs_pipe[0]);
        close(to_my_program_pipe[1]);

        execlp("phantomjs", "phantomjs", "--ssl-protocol=TLSv1", "/dev/stdin" , (char *)NULL);
    }
    return 0;
}
What I am trying to do is pass phantomjs a script to execute through a pipe and then read the resulting HTML as a string. I modified the code as suggested, but phantomjs still does not read from stdin.
I tested the script string by creating a dumb program that writes it to a file, then ran phantomjs on that file normally, and that works.
I also tried executing execlp("phantomjs", "phantomjs", "--ssl-protocol=TLSv1", "path_to_script_file", (char *)NULL); and that works too; the output HTML is shown.
It does not work when using a pipe.
An Explanation At Last
Some experimentation with PhantomJS shows that the problem is writing a null byte at the end of the JavaScript program sent to PhantomJS.
This highlights two bugs:
The program in the question sends an unnecessary null byte.
PhantomJS 2.1.1 (on a Mac running macOS High Sierra 10.13.3) hangs when an otherwise valid program is followed by a null byte
The code in the question contains:
write(to_phantomjs_pipe[1], jsToExectue, strlen(jsToExectue) + 1);
The + 1 means that the null byte terminating the string is also written to phantomjs. And writing that null byte causes phantomjs to hang. That is tantamount to a bug — it certainly isn't clear why PhantomJS hangs without detecting EOF (there is no more data to come), and without giving an error, etc.
Change that line to:
write(to_phantomjs_pipe[1], jsToExectue, strlen(jsToExectue));
and the code works as expected — at least with PhantomJS 2.1.1 on a Mac running macOS High Sierra 10.13.3.
Initial analysis
You aren't closing enough file descriptors in the child.
Rule of thumb: If you dup2() one end of a pipe to standard input or standard output, close both of the original file descriptors returned by pipe() as soon as possible. In particular, you should close them before using any of the exec*() family of functions.
The rule also applies if you duplicate the descriptors with either dup() or fcntl() with F_DUPFD.
The child code shown is:
} else {
    close(to_ext_program_pipe[1]);
    close(to_my_program_pipe[0]);
    dup2(to_ext_program_pipe[0],0);
    dup2(to_my_program_pipe[1],1);
    if(execlp("ext_program", "ext_program", "/dev/stdin" , NULL) == -1) {
        fprintf(stderr,"execlp Error!");
        exit(1);
    }
    close(to_ext_program_pipe[0]);
    close(to_my_program_pipe[1]);
}
The last two close() statements are never executed; they need to appear before the execlp().
What you need is:
} else {
    dup2(to_ext_program_pipe[0], 0);
    dup2(to_my_program_pipe[1], 1);
    close(to_ext_program_pipe[0]);
    close(to_ext_program_pipe[1]);
    close(to_my_program_pipe[0]);
    close(to_my_program_pipe[1]);
    execlp("ext_program", "ext_program", "/dev/stdin" , NULL);
    fprintf(stderr, "execlp Error!\n");
    exit(1);
}
You can resequence it splitting the close() calls, but it is probably better to regroup them as shown.
Note that there is no need to test whether execlp() failed. If it returns, it failed. If it succeeds, it does not return.
There could be another problem. The parent process waits for the child to exit before reading anything from the child. However, if the child tries to write more data than will fit in the pipe, the process will hang, waiting for some process (which will have to be the parent) to read the pipe. Since they're both waiting for the other to do something before they will do what the other is waiting for, it is (or, at least, could be) a deadlock.
You should also revise the parent process to do the reading before the waiting.
if (pid) {
    close(to_my_program_pipe[1]);
    close(to_ext_program_pipe[0]);

    char string_to_write[] = "this is the string to write";
    write(to_ext_program_pipe[1], string_to_write, strlen(string_to_write) + 1);
    close(to_ext_program_pipe[1]);

    char *string_to_read;
    char ch[1];
    size_t len = 0;
    string_to_read = malloc(sizeof(char));
    if(!string_to_read) {
        fprintf(stderr, "%s\n", "Error while allocating memory");
        exit(1);
    }
    while (read(to_my_program_pipe[0], ch, 1) == 1) {
        string_to_read[len] = ch[0];
        len++;
        string_to_read = realloc(string_to_read, len*sizeof(char));
        if (!string_to_read) {
            fprintf(stderr, "%s\n", "Error while allocating memory\n");
            exit(1);
        }
        string_to_read[len] = '\0';
    }
    close(to_my_program_pipe[0]);

    printf("Output: %s\n", string_to_read);
    free(string_to_read);

    wait(&rv);
    if (rv != 0) {
        fprintf(stderr, "%s %d\n", "phantomjs exit status ", rv);
        exit(1);
    }
} …
I'd also rewrite the code to read in big chunks (1024 bytes or more). Just don't copy more data than the read returns, that's all. Repeatedly using realloc() to allocate one more byte to the buffer is ultimately excruciatingly slow. It won't matter much if there's only a few bytes of data; it will matter if there are kilobytes or more data to process.
Later: Since the PhantomJS program generates over 90 KiB of data in response to the message it was sent, this was a factor in the problems — or would have been were it not for the hang-on-null-byte bug in PhantomJS.
Still having problems 2018-02-03
I extracted the code, as amended, into a program (pipe89.c, compiled to pipe89). I got inconsistent crashes when the space allocated changed. I eventually realized that you're reallocating one byte too little space — it took a lot longer than it should have done (but it would help if Valgrind was available for macOS High Sierra — it isn't yet).
Here's the fixed code, with the debugging information commented out:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid;
    int rv;
    int to_ext_program_pipe[2];
    int to_my_program_pipe[2];

    if (pipe(to_ext_program_pipe))
    {
        fprintf(stderr, "Pipe error!\n");
        exit(1);
    }
    if (pipe(to_my_program_pipe))
    {
        fprintf(stderr, "Pipe error!\n");
        exit(1);
    }
    if ((pid = fork()) == -1)
    {
        fprintf(stderr, "Fork error. Exiting.\n");
        exit(1);
    }

    if (pid)
    {
        close(to_my_program_pipe[1]);
        close(to_ext_program_pipe[0]);
        char string_to_write[] = "this is the string to write";
        write(to_ext_program_pipe[1], string_to_write, sizeof(string_to_write) - 1);
        close(to_ext_program_pipe[1]);

        char ch[1];
        size_t len = 0;
        char *string_to_read = malloc(sizeof(char));
        if (string_to_read == 0)
        {
            fprintf(stderr, "%s\n", "Error while allocating memory");
            exit(1);
        }
        string_to_read[len] = '\0';
        while (read(to_my_program_pipe[0], ch, 1) == 1)
        {
            //fprintf(stderr, "%3zu: got %3d [%c]\n", len, ch[0], ch[0]); fflush(stderr);
            string_to_read[len++] = ch[0];
            char *new_space = realloc(string_to_read, len + 1); // KEY CHANGE is " + 1"
            //if (new_space != string_to_read)
            //    fprintf(stderr, "Move: len %zu old %p vs new %p\n", len, (void *)string_to_read, (void *)new_space);
            if (new_space == 0)
            {
                fprintf(stderr, "Error while allocating %zu bytes memory\n", len);
                exit(1);
            }
            string_to_read = new_space;
            string_to_read[len] = '\0';
        }
        close(to_my_program_pipe[0]);

        printf("Output: %zu (%zu) [%s]\n", len, strlen(string_to_read), string_to_read);
        free(string_to_read);

        wait(&rv);
        if (rv != 0)
        {
            fprintf(stderr, "%s %d\n", "phantomjs exit status ", rv);
            exit(1);
        }
    }
    else
    {
        dup2(to_ext_program_pipe[0], 0);
        dup2(to_my_program_pipe[1], 1);
        close(to_ext_program_pipe[0]);
        close(to_ext_program_pipe[1]);
        close(to_my_program_pipe[0]);
        close(to_my_program_pipe[1]);
        execlp("ext_program", "ext_program", "/dev/stdin", NULL);
        fprintf(stderr, "execlp Error!\n");
        exit(1);
    }
    return 0;
}
It was tested on a program which wrote 5590 bytes out for 27 bytes of input. That isn't as massive a multiplier as in your program, but it proves a point.
I still think you'd do better not reallocating a single extra byte at a time — the scanning loop should use a buffer of, say, 1 KiB and read up to 1 KiB at a time, and allocate the extra space all at once. That's a much less intensive workout for the memory allocation system.
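That chunked approach can be sketched like this (read_all is a hypothetical helper name, not part of the question's code): read up to 1 KiB per read() call and grow the buffer geometrically, rather than one byte per character.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

/* Read everything from fd into a malloc'd, null-terminated string.
 * Grows the buffer by doubling instead of one byte at a time, so
 * the allocator is exercised O(log n) times rather than O(n). */
static char *read_all(int fd, size_t *out_len)
{
    size_t len = 0, cap = 1024;
    char *buf = malloc(cap);

    if (buf == NULL)
        return NULL;

    for (;;) {
        ssize_t n = read(fd, buf + len, cap - len - 1);  /* keep room for '\0' */
        if (n <= 0)
            break;                       /* EOF or error: stop reading */
        len += (size_t)n;
        if (cap - len < 512) {           /* running low: double the buffer */
            char *tmp = realloc(buf, cap * 2);
            if (tmp == NULL) {
                free(buf);
                return NULL;
            }
            buf = tmp;
            cap *= 2;
        }
    }
    buf[len] = '\0';
    if (out_len)
        *out_len = len;
    return buf;
}
```

With the roughly 90 KiB PhantomJS produces here, this does about 100 reads and a handful of reallocations instead of ~90,000 of each.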
Problems continuing on 2018-02-05
Taking the code from Edit 2 and changing only the function definition from int main() { to int main(void) { (because the compilation options I use don't allow old-style non-prototype function declarations or definitions, and without the void, that is not a prototype), the code is working fine for me. I created a surrogate phantomjs program (from another I already have lying around), like this:
#include <stdio.h>

int main(int argc, char **argv, char **envp)
{
    for (int i = 0; i < argc; i++)
        printf("argv[%d] = <<%s>>\n", i, argv[i]);
    for (int i = 0; envp[i] != 0; i++)
        printf("envp[%d] = <<%s>>\n", i, envp[i]);
    FILE *fp = fopen(argv[argc - 1], "r");
    if (fp != 0)
    {
        int c;
        while ((c = getc(fp)) != EOF)
            putchar(c);
        fclose(fp);
    }
    else
        fprintf(stderr, "%s: failed to open file %s for reading\n",
                argv[0], argv[argc-1]);
    return(0);
}
This code echoes the argument list, the environment, and then opens the file named as the last argument and copies that to standard output. (It is highly specialized because of the special treatment for argv[argc-1], but the code before that is occasionally useful for debugging complex shell scripts.)
When I run your program with this 'phantomjs', I get the output I'd expect:
argv[0] = <<phantomjs>>
argv[1] = <<--ssl-protocol=TLSv1>>
argv[2] = <</dev/stdin>>
envp[0] = <<MANPATH=/Users/jleffler/man:/Users/jleffler/share/man:/Users/jleffler/oss/share/man:/Users/jleffler/oss/rcs/man:/usr/local/mysql/man:/opt/gcc/v7.3.0/share/man:/Users/jleffler/perl/v5.24.0/man:/usr/local/man:/usr/local/share/man:/usr/share/man:/opt/gnu/share/man>>
envp[1] = <<IXH=/opt/informix/12.10.FC6/etc/sqlhosts>>
…
envp[49] = <<HISTFILE=/Users/jleffler/.bash.jleffler>>
envp[50] = <<_=./pipe31>>
var page=require('webpage').create();page.onInitialized=function(){page.evaluate(function(){delete window._phantom;delete window.callPhantom;});};page.onResourceRequested=function(requestData,request){if((/http:\/\/.+?\\.css/gi).test(requestData['url'])||requestData.headers['Content-Type']=='text/css'){request.abort();}};page.settings.loadImage=false;page.settings.userAgent='Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2228.0 Safari/537.36';page.open('https://stackoverflow.com',function(status){if(status!=='success'){phantom.exit(1);}else{console.log(page.content);phantom.exit();}});
At this point, I have to point the finger at phantomjs in your environment; it doesn't seem to behave as expected when you do the equivalent of:
echo "$JS_PROG" | phantomjs /dev/stdin | cat
Certainly, I cannot reproduce your problem any more.
You should take my surrogate phantomjs code and use that instead of the real phantomjs and see what you get.
If you get output analogous to what I showed, then the problem is with the real phantomjs.
If you don't get output analogous to what I showed, then maybe there is a problem with your code from the update to the question.
Later: Note that because the printf() uses %s to print the data, it would not notice the extraneous null byte being sent to the child.
In the pipe(7) man page it is written that you should read from the pipe as soon as possible:
If a process attempts to write to a
full pipe (see below), then write(2) blocks until sufficient data has
been read from the pipe to allow the write to complete. Nonblocking
I/O is possible by using the fcntl(2) F_SETFL operation to enable the
O_NONBLOCK open file status flag.
and
A pipe has a limited capacity. If the pipe is full, then a write(2)
will block or fail, depending on whether the O_NONBLOCK flag is set
(see below). Different implementations have different limits for the
pipe capacity. Applications should not rely on a particular
capacity: an application should be designed so that a reading process
consumes data as soon as it is available, so that a writing process
does not remain blocked.
In your code you write, wait, and only then read:
write(to_ext_program_pipe[1], string_to_write, strlen(string_to_write) + 1);
close(to_ext_program_pipe[1]);
wait(&rv);
//...
while(read(to_my_program_pipe[0], ch, 1) == 1) {
    //...
Maybe the pipe is full, or ext_program is blocked waiting for its data to be read; you should call wait() only after the read.
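For completeness, the fcntl(2) F_SETFL operation the man page mentions can be wrapped like this (a sketch; make_nonblocking is an illustrative helper name):

```c
#include <fcntl.h>
#include <unistd.h>

/* Switch an already-open descriptor (e.g. one end of a pipe) to
 * non-blocking mode.  With O_NONBLOCK set, write(2) to a full pipe
 * fails with EAGAIN instead of blocking the writer.
 * Returns 0 on success, -1 on error. */
static int make_nonblocking(int fd)
{
    int flags = fcntl(fd, F_GETFL, 0);   /* fetch current flags first */

    if (flags == -1)
        return -1;
    return fcntl(fd, F_SETFL, flags | O_NONBLOCK);
}
```

Non-blocking I/O is only one option; restructuring the parent to read before wait(), as above, avoids the deadlock without it.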

Test for standard streams piping failed

After creating a function to grab stdin, stdout & stderr, I wanted to test it.
Here is the test code:
int fd[3];
char *buf = calloc(200, sizeof(char));
FILE *stream;
pid_t pid;

pid = opencmd(fd, "/bin/echo", (char *[]){"/bin/echo", "hello!"});

stream = fdopen(fd[2], "r");
while (fgets(buf, 200, stream) != NULL)
    printf("stderr: %s\n", buf);
fclose(stream);

stream = fdopen(fd[1], "r");
while (fgets(buf, 200, stream) != NULL)
    printf("stdout: %s\n", buf);
fclose(stream);

free(buf);
closecmd(pid, fd);
This does not work as intended. I spent an hour debugging and could not trace the problem, but as far as I got, I realized that using fdopen to read the descriptors through streams does not work (for some reason), while functions that work directly with file descriptors (such as write(2) and read(2)) work fine.
What might be the possible reason for this ?
this excerpt: (char *[]){"/bin/echo", "hello!"}); is missing the final NULL parameter – user3629249

Buffer is not reading in string properly

void download(char *file)
{
    int size = getsize(file);
    printf("Got size %d\n", size);

    sprintf(buff, "GET %s\n", file);
    send(sockfd, buff, strlen(buff), 0);

    rsize = recv(sockfd, buff, 1000, 0);
    sscanf(buff, "%d", &resultcode);
    printf("%s", buff);
    if (strcmp(buff, "+OK\n") != 0)
    {
        printf("download failed\n");
    }

    FILE *dlfile = NULL;
    if ((dlfile = fopen(file, "r")) != NULL)
    {
        dlfile = fopen(file, "w");
        do
        {
            rsize = recv(sockfd, buff, 1000, 0);
            for (int i = 0; i < rsize; i++)
            {
                fprintf(dlfile, "%c", buff[i]);
            }
            size = size - rsize;
        } while (size != 0);
    }
    fclose(dlfile);
}
I am trying to make the download function print out contents of file user typed, then save it to their current directory. I did a debug line printf("%s", buff); and it prints out +OK\n(filename). It is supposed to print out +OK\n. It also prints out download failed then a segmentation fault error. What am I missing?
Several things going on here. First, recv and send basically operate on arrays of bytes so they do not know about line endings and such. Also note that recv is not guaranteed to fill the buffer - it generally reads what is available up to the limit of the buffer. For your strcmp against "+OK\n", you could use strncmp with a length of 4 but that is a bit direct (see below). Next note that the buff string is not null terminated by recv so your printf could easily crash.
When you go in to your loop, the buffer already has part of the rest of your I/O in it. May include other fields or parts of the file. You need to process it as well. It is not clear to me what getsize does - but using that size to drive your loop seems off. Also, your loop to fprintf the values can be replaced by a call to fwrite.
Overall, you need to properly buffer and then parse the incoming stream of data. If you want to do it yourself, you could look at fdopen to get a FILE object.
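A sketch of the first two points (null-terminating after recv() before any string function, and replacing the per-character fprintf loop with one fwrite() call); sockfd, buff and dlfile stand in for the question's variables:

```c
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>

/* Receive one chunk, null-terminate it (recv() does NOT do that),
 * and test only the status prefix rather than the whole buffer.
 * Returns 1 if the reply starts with "+OK\n", 0 if not, -1 on error. */
static int recv_status(int sockfd, char *buff, size_t size)
{
    ssize_t rsize = recv(sockfd, buff, size - 1, 0);  /* keep room for '\0' */

    if (rsize < 0)
        return -1;
    buff[rsize] = '\0';                  /* now safe for printf/strcmp */
    return strncmp(buff, "+OK\n", 4) == 0;
}

/* Write exactly rsize received bytes in one call, replacing the
 * per-character fprintf() loop. */
static size_t save_chunk(FILE *dlfile, const char *buff, size_t rsize)
{
    return fwrite(buff, 1, rsize, dlfile);
}
```

Note this still doesn't address the buffering issue: the first recv() may already contain part of the file after the status line, and that remainder must be saved too.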

Run a shell script from C application

I am writing an application (CLI Based) in C and I want to be able to run a shell script to do system level commands, (its an OSX Specific app). Is there a way to do this?
I tried system() but it says it's not valid as of C99.
if (response == 'Y' || response == 'y') {
    system("Support/script.sh");
    system("Support/deps.sh");
    printf("Success");
} else {
    printf("Good Bye!\n\n");
}
Check your current working directory. It looks like the Support folder doesn't exist in the pwd. Mac OS X, being Objective-C based, should work with the system calls.
Here is my sample program using popen, in case you require it. (Just a snippet of my code, not complete.)
char unix_script[1000];
memset(unix_script, '\0', sizeof(unix_script));
snprintf(unix_script,
         sizeof(unix_script),
         "ksh /usr/mahesh/sessioN.ksh %s %s %s %s %s",
         userId,
         password,
         database,
         sbcr_id,
         session_id);

char *COMMAND = unix_script;
char readLine[256];
char *tmp, *commandResult = NULL;
FILE *fp;
int status;

fp = popen(COMMAND, "r");   /* "r", since we read the command's output */
if (fp == NULL) {
    perror("Command execution failed");
    exit(1);
}
//printf("Printing the command output....");
while (fscanf(fp, "%255s", readLine) == 1) {
    tmp = realloc(commandResult, strlen(readLine) + 1);
    if (tmp == NULL)
        break;
    commandResult = tmp;
    strcpy(commandResult, readLine);
}
printf("\n output =\n %s\n", commandResult ? commandResult : "");
status = pclose(fp);
//printf ("Command %s exit status code = %d\n", COMMAND, status);
return status;
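Coming back to the working-directory point: before handing a relative path like Support/script.sh to system(), a quick check like this confirms what the process can actually see (script_is_reachable is an illustrative helper, not part of the question):

```c
#include <stdio.h>
#include <unistd.h>

/* Print where the program is actually running from and test whether
 * a script path resolves to an executable file before handing it to
 * system().  Returns 1 if the script is reachable, 0 otherwise. */
static int script_is_reachable(const char *path)
{
    char cwd[1024];

    if (getcwd(cwd, sizeof(cwd)) != NULL)
        printf("running from: %s\n", cwd);

    /* X_OK: the file exists and is executable by this process */
    return access(path, X_OK) == 0;
}
```

If this returns 0 for Support/script.sh, either cd to the right directory first or build an absolute path to the script.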
