Node.js exec for a C compiled binary displays stderr on stdout?

I have a C compiled binary that writes error information to stderr when an error is encountered during execution. This binary is wrapped by a Node.js script, which invokes it via child_process exec. But upon error, even though the C code writes to stderr, I still receive the information in Node.js on stdout, not on stderr: console.log(stdout); prints the error information, while console.log(stderr); prints nothing. Does anyone have any idea why this happens, and whether I need to redirect the information through a different channel so the Node.js script sees the right data on stdout and stderr?
I created a test version of the code and it seems to display the information correctly on stderr and stdout:
#include <stdio.h>

int main(){
    fprintf(stderr, "Whoops, this is stderr");
    fprintf(stdout, "Whoops, this is stdout");
    return 0;
}
and the corresponding Node.js code:
#!/usr/bin/env node
var exec = require('child_process').exec,
    path = require('path');

var bin = path.join(__dirname, 'a.out');

var proc = exec(bin, function (error, stdout, stderr) {
    console.log('stdout:', stdout);
    console.log('stderr:', stderr);
});

proc.stdout.on('data', function (dat) { console.log(dat); });
and this is the output I get:
Whoops, this is stdout
stdout: Whoops, this is stdout
stderr: Whoops, this is stderr
I'm not sure why it happens in my real code. Maybe it's because I am writing a lot of information to stdout and stderr simultaneously, or some module I have included is interfering. The actual code is too big to post here, so it seems I have to investigate where it goes wrong.

I seem to have figured out the problem. The legacy C code that writes the information never used the FILE * passed to it, which is why everything appeared on stdout and nothing on stderr. I fixed the API to take a FILE * argument and write to that stream, and now it works.
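For reference, a minimal sketch of the shape of that fix; the helper name log_error is hypothetical, not the actual legacy API:

#include <stdio.h>

/* Write diagnostics to whichever stream the caller passes in,
 * instead of a hard-coded stdout. */
static void log_error(FILE *out, const char *msg)
{
    fprintf(out, "error: %s\n", msg);
}

int main(void)
{
    log_error(stderr, "this lands on stderr");
    log_error(stdout, "this lands on stdout");
    return 0;
}

With the stream passed explicitly, the Node.js wrapper sees the error text in the stderr argument of exec's callback, as expected.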

Related

Writing to the executable while running the program

I'm writing a C program and I would like to be able to store data inside the executable file itself.
I made a function to write a single byte at the end of the file, but it looks like it can't open the file: execution reaches the printf and then crashes with a segmentation fault.
void writeByte(char c){
    FILE *f;
    f = fopen("game","wb");
    if(f == 0)
        printf("\nFile not found\n");
    fseek(f,-1,SEEK_END);
    fwrite(&c,1,sizeof(char),f);
    fclose(f);
}
The file is in the correct directory and the name is correct. When I try to read the last byte instead of writing it, everything works without problems.
Edit: I know I should abort the program instead of trying to write anyway, but my main problem is that the program can't open the file despite it being in the same directory.
There are several unrelated problems here, both in your code and in the problem you're trying to solve.
First, you lack proper error handling. If any function that can fail (e.g. fopen) fails, you should act accordingly. If, for example, you did
#include <error.h>
#include <errno.h>
...
f = fopen("game","wb");
if (f == NULL) {
    error(1, errno, "File could not be opened");
}
...
you would have received a useful error message like
./game: File could not be opened: Text file busy
You printed a message that is not even correct (the file not being able to be opened is something different from it not being found) and then continued the program, which resulted in a segmentation fault because you dereferenced the NULL pointer stored in f after fopen failed.
Second, as the message tells us (at least on my Linux machine), the file is busy. That means the operating system does not allow me to open the executable I'm currently running in write mode. The answers to this question list numerous explanations of this error message. There might be ways to get around this and open a running executable in write mode, but I doubt it is easy, and I doubt it would solve your problem, because:
Third, executable files are stored in a special binary format (usually ELF on Linux). They are not designed to be manually modified. I don't know what happens if you just append data to one, but you could run into serious problems if you're not very careful and don't know exactly what you're doing.
If you just want to store data, use a separate, plain, fresh file, as in the sketch below. If you're hoping to append code to an executable, you really should gather some background information on ELF files (e.g. from man elf) before continuing.
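A minimal sketch of the plain-file route (the file name game.dat is just an example), appending one byte to a separate save file with proper error handling:

#include <stdio.h>
#include <stdlib.h>

void writeByte(char c){
    FILE *f = fopen("game.dat", "ab");  /* separate data file, append mode */
    if (f == NULL) {
        perror("game.dat");
        exit(EXIT_FAILURE);  /* abort instead of writing through NULL */
    }
    fwrite(&c, sizeof(char), 1, f);
    fclose(f);
}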

fprintf() is not working in Ubuntu

I'm trying to learn file I/O concepts in the C programming language. I'm using GNU/Linux (Ubuntu 16.04 LTS) and my IDE is Eclipse 3.8. When I try to write to a file with fprintf(), it doesn't create any file, or if the file does get created, nothing is written to it. I tried to fix the problem by using fflush() or setbuf(file_pointer, NULL) as suggested here, but still no change. I guess I'm writing the path of the file the wrong way.
Here is the code:
#include <stdio.h>
#include <stdlib.h>

int main(void){
    FILE *file_pointer;
    file_pointer = fopen("~/.textsfiless/test.txt","w+");
    setbuf(file_pointer,NULL);
    fprintf(file_pointer,"Testing...\n");
    fclose(file_pointer);
    return EXIT_SUCCESS;
}
Can someone explain what's wrong here?
On Linux, the ~ in ~/.textsfiless/test.txt is not expanded by the C library's fopen. When you use ~ on the command line, it is expanded into your home directory by your shell, not by the program the shell starts (via some execve(2)); this expansion happens alongside the shell's globbing, see glob(7). You are very unlikely to have a directory literally named ~.
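You can see this for yourself with a few lines; a quick sketch using the asker's path:

#include <stdio.h>

int main(void){
    FILE *f = fopen("~/.textsfiless/test.txt", "w+");
    if (f == NULL)
        perror("fopen");  /* typically: No such file or directory */
    return 0;
}

fopen receives the literal string starting with ~, looks for a directory with that name in the current working directory, and fails.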
You should read Advanced Linux Programming
So you should check whether fopen failed (it very likely did). If you want a file in the home directory, you'd better use getenv(3) with "HOME" (or perhaps getpwuid(3) & getuid(2)...). See environ(7)
Perhaps better code might be:
char *homedir = getenv("HOME");
if (!homedir) { perror("getenv HOME"); exit(EXIT_FAILURE); }
char pathbuf[512]; /* or perhaps PATH_MAX instead of 512 */
snprintf(pathbuf, sizeof(pathbuf),
         "%s/.textsfiless/test.txt", homedir);
FILE *file_pointer = fopen(pathbuf, "w+");
if (!file_pointer) { perror(pathbuf); exit(EXIT_FAILURE); }
and so on.
Notice that you should check most C standard library (and POSIX) functions for failure. The perror(3) function is useful for reporting errors to the user on stderr.
(Pedantically, we should even test that snprintf(3) returns a length below sizeof(pathbuf), or use asprintf(3) instead and test it for failure; I leave that test as an exercise to the reader.)
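For completeness, here is one way that exercise might look, as a fragment that drops into the snippet above (pathbuf and homedir as defined there):
int len = snprintf(pathbuf, sizeof(pathbuf),
                   "%s/.textsfiless/test.txt", homedir);
if (len < 0 || (size_t)len >= sizeof(pathbuf)) {
    fprintf(stderr, "path too long or encoding error\n");
    exit(EXIT_FAILURE);
}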
More generally, read the documentation of every external function that you are using.
Beware of undefined behavior (your code probably has some, e.g. fprintf to a NULL stream). Compile your code with all warnings and debug info (so gcc -Wall -g) and use the gdb debugger. Read What every C programmer should know about undefined behavior.
BTW, look into strace(1) and try it on your original (faulty) program. You'll learn a lot about the system calls used in it.
Most likely your call to fopen() fails. You don't have any checking in your program to ensure fopen even worked, and it may not have, for a variety of reasons: spelling the path wrong, wrong file or process permissions, etc.
To see what really happened, you should check fopen's return value:
#include <stdio.h>
#include <stdlib.h>

int main(void){
    FILE *file_pointer;
    file_pointer = fopen("~/.textsfiless/test.txt","w+");
    if (file_pointer == NULL) {
        printf("Opening the file failed.");
        return EXIT_FAILURE;
    }
    setbuf(file_pointer,NULL);
    fprintf(file_pointer,"Testing...\n");
    fclose(file_pointer);
    return EXIT_SUCCESS;
}
Edit: Given your comment, getting the path wrong is almost certainly what happened. If your file is called "test.txt" and lives in a folder called "textsfiless" inside your current working directory, then you'd call fopen with a relative path, like this:
file_pointer = fopen("textsfiless/test.txt","w+");

Stdout of process not redirected properly via DLL injection

I'm injecting a C DLL (running as its own thread) into a running console process and trying to redirect the process's stdout to a file. My DLL also writes to the console via wprintf.
As a simple redirection test I made the DLL call TestRedirect at startup:
#include <windows.h>
#include <process.h>
#include <stdio.h>

FILE *file = NULL;

unsigned __stdcall Test1(void *param);

void TestRedirect()
{
    /* Rebind the CRT's stdout to a file. */
    file = freopen("C:\\temp\\test1.txt", "w", stdout);
    _beginthreadex(NULL, 0, Test1, (void*)file, 0, NULL);
}

unsigned __stdcall Test1(void *param)
{
    FILE *file = (FILE*)param;
    while (1)
    {
        wprintf(L"Test1\n");
        fflush(file);
        Sleep(200);
    }
}
Every time my DLL writes to stdout via wprintf, regardless of the thread, the text appears in the file. However, my test console program (also a C program) doesn't write to the file. All the console program does is call wprintf every second to print something. When my DLL is injected into the process, the exe's text stops appearing in the console window, but it doesn't get written to the file either. The same happens if I try it on other programs, e.g. ping.exe.
However, if I put the same code directly into my test console program and make that program call TestRedirect, then the program's output is written to the file (without injecting the DLL).
What am I doing wrong?
I was able to do this by making the injected DLL call FreeConsole and then AttachConsole, passing in the PID of the process owning the console that I wanted the output redirected to.
(I know the question mentioned redirecting to a file, but that was really just an ostensibly easier test case; ultimately I wanted the output redirected to a console. At the end of the day, I can make my own console redirect its stdout (which it receives from the "remote" process) to a file if I want, which should still satisfy the actual question.)
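For reference, a minimal sketch of that approach inside the injected DLL; target_pid (the PID of the process owning the destination console) is a hypothetical parameter supplied by the injector, and error handling is omitted:

#include <windows.h>
#include <stdio.h>

/* Detach from whatever console the host process owns, attach to the
 * console of target_pid, then rebind the CRT's stdout to it. */
void RedirectToConsoleOf(DWORD target_pid)
{
    FreeConsole();
    AttachConsole(target_pid);
    freopen("CONOUT$", "w", stdout);
}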

writing to STDIN of a bash application from a C program

I would like to open a bash application (prog1) and send a command to it from a C program.
I tried the following code:
#include <stdlib.h>
#include <stdio.h>
#include <unistd.h>

int main()
{
    FILE * fp;
    char message[10] = "command1\n";
    fp = popen("sudo prog1","r");
    write(STDIN_FILENO, message, 10);
    pclose(fp);
    execlp("sudo prog1", "sudo prog1", "-l", NULL);
    return 0;
}
This code gives the following output:
Linux:~$ ./prog 2   // running the code
command             // prints "command"
prog1->             // then opens "prog1" (prog1 waits for user commands)
But I want it to be:
Linux:~$ ./prog 2   // running the code
prog1-> command     // opens "prog1" and writes the command (instead of getting it from the user)
It either writes after prog1 quits or before prog1 starts. Please let me know how to write the "command" to prog1 after the application prog1 has opened. Thank you.
Note: if I use
fp = popen("sudo prog1","w"); // to write
it throws the following error:
tcgetattr : Inappropriate ioctl for device
Your main bug is thinking that popen() somehow associates your child process with your STDIN_FILENO. It doesn't: STDIN_FILENO is not connected to your "sudo prog1" child. To wire things up that way you'd have to create a pipe, dup the descriptors onto the child's stdin/stdout, and fork+exec yourself. But since you used popen(), you don't need to do that either.
Instead, you should be writing and reading from fp.
Something like:
fprintf(fp, "%s", message);
fgets(response, 100, fp);
Since fprintf() is buffered, you should end the line with \n, or call fflush().
Also, there is no point in using exec/execlp at the end when you've already called popen(). It looks like you may be mixing two approaches that you've seen in examples.
popen() essentially does the combination of pipe, dup of the child's stdin or stdout, fork, and execl for you, taking care of connecting the child process to a file stream in the parent; a sketch of the write direction follows below. There is no need to reimplement this unless you need different semantics than popen() provides.
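A minimal sketch of driving the child through a write-mode popen(); prog1 is the asker's program, so how it reads commands is an assumption here:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    FILE *fp = popen("sudo prog1", "w");  /* fp feeds the child's stdin */
    if (fp == NULL) {
        perror("popen");
        exit(EXIT_FAILURE);
    }
    fprintf(fp, "%s", "command1\n");  /* trailing \n so the line is sent */
    fflush(fp);
    pclose(fp);  /* waits for prog1 to exit */
    return 0;
}

Note that a standard popen() stream is one-directional ("r" or "w"); reading responses back as well needs two pipes. And if prog1 probes its stdin with tcgetattr, as the error in the question suggests, it wants a real terminal, which is exactly what expect (below) provides via a pseudo-terminal.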
You are technically implementing "expect" functionality, so you might want to look into expect, or the expect modules for various languages. Expect ships with Linux distributions but is usually an optional package.
http://expect.sourceforge.net/
http://search.cpan.org/~rgiersig/Expect-1.21/Expect.pod
And not to mention, Perl has a Sudo module already.
http://search.cpan.org/~landman/Sudo-0.21/lib/Sudo.pm

using STDOUT from within gdb

I have a function that pretty-prints a data structure; its prototype is:
void print_mode(FILE *fp, Mode *mode);
The FILE * allows you to redirect the output anywhere you want, e.g. stdout, stderr, a file, etc. Mode is the data structure.
I am trying to call this function from within gdb and want the output directed to the gdb console window (stdout).
I have tried:
(gdb) p print_mode(STDOUT,fragment_mode)
No symbol "STDOUT" in current context.
(gdb) p print_mode(stdout,fragment_mode)
$17 = void
Neither of these works.
Any ideas how I can get the output of the function to display in the gdb console?
I should add that I am using gdb within Emacs 24.2.1 under Linux.
STDOUT seems to be a macro, which is not known to GDB, since macros are handled by the preprocessor prior to compilation.
Using stdout should do the job.
However, the function print_mode() simply does not seem to print anything.
In terms of what's being printed to the console by the program being debugged, GDB's commands print and call should not make a difference.
For details on this you might like to read: https://sourceware.org/gdb/onlinedocs/gdb/Calling.html
One issue might be that stdout is line buffered by default, so output does not appear until a linefeed is seen, and print_mode() perhaps does not emit one (\n).
To test this, just use stderr as the output file, since stderr is unbuffered:
p print_mode(stderr, fragment_mode)
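Alternatively, flush the stream by hand from the gdb prompt after the call (this assumes the program links against the C library as usual, so fflush is callable from the debugger):
(gdb) call fflush(stdout)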
Oh dear, silly mistake. You're right, stdout does do the job.
I forgot that, having upgraded from Emacs 23 to 24, the way gdb works has changed: it now opens a separate buffer, *input/output of program-name*, to which it redirects the output of the program being debugged. In the prior version of Emacs it was all displayed in the same, single gdb buffer.
So my second attempt was actually working; I was just looking in the wrong place and didn't see the output.
