I have some code where I redirect stdout to a logfile (the usual dup, open, and dup2) and then restore stdout. This works fine as long as only C code runs in between;
when I call system() to execute some shell/Perl scripts, I see that the logfile gets removed at the end of execution! (The scripts being invoked don't know the logfile's name, and don't do any unlink.)
I can see the logfile being written while the scripts are getting executed.
The code block is like this:
/* redirect stdout to logfile */
system(scripts);
/* reset stdout */
I want to capture all the messages to stdout into logfile.
Any help to debug further, or hints greatly appreciated.
It's hard to tell what's wrong without having the source code.
It seems that your reverting of stdout is not complete before you call system().
What comes to mind is using fork() plus the exec family instead of system(), because then you control exactly which file descriptors the child inherits when the external program is invoked. (Though in this case the file descriptor is intended to be closed.)
Related
I've got a command-line app in C under Linux that has to run another process; the problem is that the child process prints a lot to the command line and the whole app gets messy.
Is it possible to prevent the child process from printing anything to the command line, from the parent process? It would be very helpful, for example, to be able to define a command that allows or disallows printing by a child process.
There's the time-honoured tradition of just redirecting the output to the bit bucket(a), along the lines of:
system("runChild >/dev/null 2>&1");
Or, if you're doing it via fork/exec, simply redirect the file handles using dup2 between the fork and exec.
It won't stop a determined child from outputting to your standard output, but the child would have to be very tricky to do that.
(a) I'm not usually a big fan of that, just in case something goes wrong. I'd prefer to redirect it to a real file which can be examined later if need be (and deleted eventually if not).
Read Advanced Linux Programming then syscalls(2).
On recent Linux, every executable is in ELF format and runs in a process started by fork(2) (or clone(2)...) and execve(2), the exceptions being init or systemd; play with pstree(1) or proc(5).
You might cleverly use dup2(2) with open(2) to redirect STDOUT_FILENO to /dev/null (see null(4), stdout(3), fileno(3)).
I've got a command-line app in C under Linux that has to run another process; the problem is that the child process prints a lot to the command line
I would instead provide a way to selectively redirect the child process' output. You could use program arguments or environment variables (see getenv(3) and/or environ(7)) to provide such an option to your user.
An example of such a program that starts subprocesses and redirects their output is your GCC compiler (see gcc(1); it runs cc1, as(1), and ld(1)...). Consider downloading and studying its source code.
Study also, for inspiration, the source code of some shell (e.g. sash), or write your own.
My program launches a helper program using fork() / execvp() and I'd like to show the helper program's output in my program's GUI. The helper's output should be shown line by line in a listview widget embedded in my program's GUI. Of course, I could just redirect the output to a file, wait until the helper has finished, and then read the whole file and show it. But that's not an optimal solution. Ideally, I'd like to show the helper's output as it is sent to stdout, i.e. line by line, while the helper is still working.
What is the suggested way of doing this?
Off the top of my head, what comes to mind is the following solution, but I'm not sure whether it will work at all because one process will write to the file while the other is trying to read from it:
1) Start the helper like this using execvp() after a fork():
./helper > tmpfile
2) After that, my program tries to open "tmpfile" using open() and then uses select() to wait until there's something to read from that file. Once my program has obtained a line of output, it sends it to my GUI's listview widget.
Is this how it should be done or am I totally on the wrong track here?
Thanks!
You should open a pipe and monitor the progress of the child process using select. You can also use popen if you only need one-way communication; in that case you get the file descriptor by calling fileno on the returned FILE*.
See:
pipe
popen
select
fileno
I wanted to know where stderr dumps its content.
Does it dump to syslog?
Please explain.
stderr is just another output stream, just like stdout.
Where it's hooked up depends on how the application is called.
For example if I run foo.exe 2> errors.txt then stderr will write to the specified file.
Stderr output goes wherever you decide to redirect it.
If you run a program in a GUI environment, by clicking on an icon or something, look for .xsession-errors in your $HOME.
If you run a program from a shell and don't redirect stderr, you just see it on your terminal (and it is not saved anywhere else).
That depends on the environment.
By default, stderr is typically connected to the same place as stdout, i.e. the current terminal.
Otherwise, you wouldn't see the errors which would be kind of annoying.
Here is a blog post about redirecting stderr to the system's logging mechanism.
stderr is a stream. It's managed by, and written to by, the owning process. Where it 'goes' is defined by how the process is invoked. The parent process can gather it and write it to a file, ignore it, redirect it to /dev/null (essentially binning it), etc.
Whilst stderr can be redirected elsewhere, it typically outputs to the same place as stdout. To make it into syslog, you'd definitely have to work a bit. This link:
With bash, how can I pipe standard error into another process?
shows how you pipe stderr to a pipe, and if the other process is a process that writes to "syslog", it would do what you want. But for most cases, it's probably easier to just add syslog functionality to your own program.
I'm trying to write a shell that will eventually take advantage of concurrency. Right now I've got a working shell parser but I'm having trouble figuring out how to execute commands. I've looked a bit at exec (execvp etc.) and it looks promising but I have some questions.
Can exec handle file input/output redirection? Can I set up pipes using exec?
I'm also wondering about subshells. What should subshells return; the exit status of its last statement? Can subshells be a part of a pipe?
These might seem like really dumb questions but please bear with my inexperience.
Can exec handle file input/output redirection?
No, you do that with open() and dup() or dup2() (and close()).
Can I set up pipes using exec?
No, you do that with pipe(), dup() or dup2() and lots of close() calls.
I'm also wondering about subshells. What should subshells return, the exit status of its last statement?
That's the normal convention, yes.
Can subshells be a part of a pipe?
Yes. In a normal shell, you can write something like:
(cd /some/where; find . -name '*.png') | sed 's/xyz/prq/' > mapped.namelist
If you want to get scared, you could investigate posix_spawn() and its support functions. Search for 'spawn' at the POSIX 2008 site, and be prepared to be scared. I think it is actually easier to do the mapping work ad hoc than to codify it using posix_spawn() and its supporters.
The standard technique for a shell is to use fork-exec. In this model, to execute an application the shell uses fork to create a new process that is a copy of itself, and then uses one of the exec variants to replace its own code, data, etc. with the information specified by an executable file on disk.
The nice thing about this model is that the shell can do a little extra work with file descriptors (which are not thrown away by exec) before it changes out its address space. So to implement redirection, it changes out the file descriptors 0, 1, and 2 (stdin, stdout, and stderr, respectively) to point to another open file, instead of console I/O. You use dup2 to change out the meaning of one of these file descriptors, and then exec to start the new process.
I am trying to redirect the output of a C program to a file, even when it generates some errors because of problems with the input data. I can send the output to a file, but not the error messages.
Does somebody know how to do it?
From within C source code, you can redirect outputs using freopen():
General outputs:
freopen("myfile.txt", "w", stdout);
Errors:
freopen("myfile_err.txt", "w", stderr);
(This answer applies to bash shell, and similar flavors. You didn't specify your environment and this sort of question needs that detail.)
I assume you know about basic redirection with ">". To also capture STDERR in addition to STDOUT, use the following syntax:
command > file-name 2>&1
For some more background on standard streams and numbers:
http://en.wikipedia.org/wiki/Standard_streams#Standard_input_.28stdin.29
This depends on what you mean and what platform you are using. Very often you can accomplish this from the command line, which has been covered in another answer. If you use that method, be aware that FILE *stderr is typically written immediately (unbuffered) while FILE *stdout may be buffered (usually line buffered), so some of your error messages could appear to have been printed earlier than other messages that were actually printed first but held in a buffer.
From within a C program you can also do something similar within the stdio system using freopen, which will effect the FILE *, so you could make fprintf(stderr, "fungus"); print to something besides what stderr normally would print to.
But if you want to know how to make a program redirect the actual file descriptors under a unix like system you need to learn about the dup and dup2 system calls. They allow you to duplicate a file descriptor.
int fd = open("some_file", O_WRONLY | O_CREAT | O_TRUNC, 0644);
dup2(fd, 2);
close(fd);
This code will make "some_file" the new stderr at the OS level. The dup2 call closes and replaces file descriptor 2 (stderr, which is usually used by FILE *stderr, but not necessarily if you call freopen(x,y,stderr), since that may make FILE *stderr use a different file descriptor).
This is how shell programs redirect the input and output of programs. They open all of the files that the new program will need, fork, then the child uses dup2 to set up the file descriptors for the new program, then it closes any files that the new program won't need (usually just leaving 0, 1, and 2 open), and then uses one of the exec functions to become the program that the shell was told to run. (Some of this isn't entirely accurate, because some shells rely on close-on-exec flags.)
Using a simple Linux command you can save the output into a file. Here is a simple Linux terminal command:
ls > file.txt
The output of this command will be stored in the file.
You can store the output of your program the same way. Suppose the executable is named a; run the following command to save its output in a file:
./a > file.txt