Executing more via exec() corrupts line breaking in bash (C)

I had an exercise to write a program that will do the following pipe processing:
ls -la | grep "^d" | more
After running my program, however, the bash interpreter no longer breaks lines or displays typed commands correctly; the commands still execute and their results are shown. It looks as if keyboard input is no longer being echoed to stdout but going somewhere else, and I can't find the reason for this behavior.
I am using three child processes with stdio redirected to connect the pipes between them.
The program finishes successfully and shows the correct result; no errors are reported. Also, when I use cat instead of more, everything works normally after execution. Is it possible that more changes some terminal settings and does not change them back?

It's likely that more is turning off echo and canonical mode on your TTY (see man 3 termios) and never switching them back on before it exits, either because it gets killed without a chance to, or because it doesn't think it's attached to a TTY. You can attach to more with gdb to find out why that's happening, or you could simply reset the terminal yourself before exiting.

Related

How to avoid C thread blocking by a child process opened with io.popen from lua?

I have a C pthread running a non-blocking event loop which executes Lua scripts. A Lua script runs io.popen to execute a shell script which takes some time to complete.
I am only interested in capturing the first few characters of the script's output, even though the script takes time to complete. As the Lua script reads those first few characters and reaches its end, it returns execution to C as expected. The problem is that the shell script opened with io.popen keeps running in the background until it completes. Calling f:close() in Lua after reading the first few characters does not seem to halt its execution (it appears to only close the read side of the pipe?). Unfortunately, it starts blocking my C thread and displays "write error: broken pipe" messages until the shell script ends. This would make sense if the read end of the pipe is closed (and garbage collected) as the Lua script ends execution, while the write end is still being written to by the shell script. The question is: how do I either kill the shell script, since I am no longer interested in the rest of its output/execution, or silence it in a way that will not block my parent process?
I have tried redirecting with 2>/dev/null on the io.popen command, but that only silenced the "write error: broken pipe" messages; the blocking is still there.
Is there a way to get, from C, the pid of the child process that Lua opens with io.popen, and kill it? (I'm thinking that might also help.)
Any thoughts/suggestions are greatly appreciated!

Disallowing printf in child process

I've got a command-line app in C under Linux that has to run another process. The problem is that the child process prints a lot to the command line and the whole app gets messy.
Is it possible to prevent the child process from printing anything to the command line, controlled from the parent process? It would be very helpful, for example, to be able to define a flag that allows or disallows printing by a child process.
There's the time-honoured tradition of just redirecting the output to the bit bucket(a), along the lines of:
system("runChild >/dev/null 2>&1");
Or, if you're doing it via fork/exec, simply redirect the file handles using dup2 between the fork and exec.
It won't stop a determined child from writing to your standard output, but the child would have to be very tricky to do that.
(a) I'm not usually a big fan of that, just in case something goes wrong. I'd prefer to redirect it to a real file which can be examined later if need be (and deleted eventually if not).
Read Advanced Linux Programming then syscalls(2).
On recent Linux, every executable is in ELF format and (except for init or systemd; play with pstree(1) or proc(5)) runs in a process started by fork(2) (or clone(2)...) and execve(2).
You might cleverly use dup2(2) with open(2) to redirect STDOUT_FILENO to /dev/null (see null(4), stdout(3), fileno(3)).
I've got a command-line app in C under Linux that has to run another process, the problem is that the child process prints a lot to the command line
I would instead provide a way to selectively redirect the child process' output. You could use program arguments or environment variables (see getenv(3) and/or environ(7)) to provide such an option to your user.
An example of such a program that starts subprocesses and redirects their I/O is your GCC compiler (see gcc(1); it runs cc1, as(1), ld(1)...). Consider downloading and studying its source code.
Study also, for inspiration, the source code of some shell (e.g. sash), or write your own.

How can I handle _popen() errors in C?

Good morning;
Right now, I'm writing a program which runs a Monte Carlo simulation of a physical process and then pipes the data generated to gnuplot to plot a graphical representation. The simulation and plotting work just fine; but I'm interested in printing an error message which informs the user that gnuplot is not installed. In order to manage this, I've tried the following code:
#include <stdio.h>
#include <stdlib.h>

FILE *pipe_gnuplot;

int main()
{
    pipe_gnuplot = _popen("gnuplot -persist", "w");
    if (pipe_gnuplot == NULL)
    {
        printf("ERROR. INSTALL gnuplot FIRST!\n");
        exit(1);
    }
    return 0;
}
But, instead of printing my error message, "gnuplot is not recognized as an internal or external command, operable program or batch file" appears (the program runs on Windows). I don't understand what I'm doing wrong. According to _popen documentation, NULL should be returned if the pipe opening fails. Can you help me managing this issue? Thanks in advance and sorry if the question is very basic.
Error handling of popen (or _popen) is difficult.
popen creates a pipe and a process. If this fails, you get a NULL result, but that happens only in rare cases (no more system resources to create a pipe or process, or a wrong second argument).
popen passes your command line to a shell (UNIX) or to the command processor (Windows). I'm not sure if you would get a NULL result if the system cannot execute the shell or command processor respectively.
The command line will be parsed by the shell or command processor and errors are handled as if you entered the command manually, e.g. resulting in an error message and/or a non-zero exit code.
A successful popen means nothing more than that the system could successfully start the shell or command processor. There is no direct way to check for errors executing the command or to get the exit code of the command.
Generally I would avoid using popen if possible.
If you want to program specifically for Windows, check if you can get better error handling from Windows API functions like CreateProcess.
Otherwise you could wrap your command in a script that checks the result and prints specific messages you can read and parse to distinguish between success and error. (I don't recommend this approach.)
Just to piggy-back on @Bodo's answer: on a POSIX-compatible system you can use wait() to wait for a single child process to return and obtain its exit status (which would typically be 127 if the command was not found).
Since you are on Windows you have _cwait(), but this does not appear to be compatible with how _popen is implemented, as it requires a handle to the child process, which _popen does not return or give any obvious access to.
Therefore, it seems the best thing to do is essentially to re-implement popen() manually by creating a pipe yourself and spawning the process with one of the spawn[lv][p][e] functions. In fact, the docs for _pipe() give an example of how one might do this (although in your case you want to redirect the child process's stdin to the read end of your pipe).
I have not tried writing an example though.

Redirect printf/printk message to file

I want to duplicate messages emitted via printf/printk to a file, while keeping the original behavior of printf/printk the same. The environment contains multiple running processes that call printf/printk.
I want to achieve the above with as minimal a change to each binary as possible.
Don't do it in your program; do it in the console when you run your program instead.
You can use the tee program to copy standard output to a file:
./your_program | tee some_file
That will cause the output of your program to be written to both the file and standard output.

Logic to determine whether a "prompt" should be printed out

Seems like a basic idea: I want to print out a prompt for a mini shell I am making in C. Here is an example of what I mean for the prompt:
$ ls
The $ being the "prompt". This little mini shell I am making supports backgrounding a process via the normal bash notation of putting a & symbol on the end of a line. Like $ ls &.
My logic currently is that in my main command loop, if the process is not going to be backgrounded then print out the prompt:
if(isBackground == 0)
prompt();
And then in my signal handler I print out the prompt using write() which covers the case of it being a background process.
This works fine if the background command returns right away, like a quick $ ls &, but with something like $ sleep 10 & the shell looks blocked, since the prompt is not printed until the signal handler runs.
I can't figure out how to fix this, because I don't know when the background process will end. That means the signal handler somehow needs to decide when to print the new prompt: if the background process happened to produce output, that output would appear and there would no longer be a prompt.
How can I resolve this problem? Is there a better way to do this that I'm not thinking of that could resolve my problem?
