How to avoid a C thread being blocked by a child process opened with io.popen from Lua?

I have a C pthread running a non-blocking event loop which executes Lua scripts. A Lua script calls io.popen to execute a shell script which takes some time to complete.
I am only interested in capturing the first few characters of the script's output, even though the script takes longer to finish. Once the Lua script has read those first few characters and reached its end, it returns execution to C as expected. The problem is that the shell script opened with io.popen keeps running in the background until it completes. Calling f:close() in Lua after reading the first few characters does not seem to halt its execution (it appears to only close the read side of the pipe?). Unfortunately, the script then starts blocking my C thread and prints "write error: broken pipe" messages until it ends. This would make sense if the read end of the pipe is closed (and garbage collected) when the Lua script finishes while the write end is still being written to by the shell script. The question is: how do I either kill the shell script, since I am no longer interested in the rest of its output/execution, or silence it in a way that will not block my parent process?
I have tried appending 2>/dev/null to the io.popen command, but that only silenced the "write error: broken pipe" messages; the blocking is still there.
Is there a way to get, from C, the PID of the child process that Lua opens with io.popen, and kill it? I'm thinking that might also help.
Any thoughts/suggestions are greatly appreciated!
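To make the PID idea concrete, here is a rough, untested C sketch of a popen()-style helper that also returns the child's PID, so the host can kill() the script once the first few characters have been read. popen_with_pid and ./slow.sh are illustrative names, not an existing API:

#include <stdio.h>
#include <signal.h>
#include <unistd.h>
#include <sys/wait.h>

/* Like popen(cmd, "r"), but also reports the child's PID. */
static FILE *popen_with_pid(const char *cmd, pid_t *pid_out)
{
    int fds[2];
    if (pipe(fds) == -1)
        return NULL;

    pid_t pid = fork();
    if (pid == -1) {
        close(fds[0]);
        close(fds[1]);
        return NULL;
    }
    if (pid == 0) {                  /* child: send stdout into the pipe */
        dup2(fds[1], STDOUT_FILENO);
        close(fds[0]);
        close(fds[1]);
        execl("/bin/sh", "sh", "-c", cmd, (char *)NULL);
        _exit(127);                  /* exec failed */
    }
    close(fds[1]);                   /* parent keeps only the read end */
    *pid_out = pid;
    return fdopen(fds[0], "r");
}

int main(void)
{
    pid_t pid;
    FILE *f = popen_with_pid("./slow.sh", &pid);
    if (f == NULL)
        return 1;

    char buf[8];
    fread(buf, 1, sizeof buf, f);    /* only the first few characters */

    /* Done reading: stop the script instead of letting it run on.
       Note this signals the sh wrapper; to kill a whole pipeline you
       would setpgid() in the child and kill(-pid, SIGTERM) instead. */
    kill(pid, SIGTERM);
    fclose(f);
    waitpid(pid, NULL, 0);           /* reap the child, avoid a zombie */
    return 0;
}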

Related

How can I handle _popen() errors in C?

Good morning;
Right now, I'm writing a program which runs a Monte Carlo simulation of a physical process and then pipes the generated data to gnuplot to plot a graphical representation. The simulation and plotting work just fine, but I'm interested in printing an error message which informs the user if gnuplot is not installed. In order to manage this, I've tried the following code:
#include <stdio.h>
#include <stdlib.h>

FILE *pipe_gnuplot;

int main()
{
    pipe_gnuplot = _popen("gnuplot -persist", "w");
    if (pipe_gnuplot == NULL)
    {
        printf("ERROR. INSTALL gnuplot FIRST!\n");
        exit(1);
    }
    return 0;
}
But instead of printing my error message, "gnuplot is not recognized as an internal or external command, operable program or batch file" appears (the program runs on Windows). I don't understand what I'm doing wrong. According to the _popen documentation, NULL should be returned if the pipe opening fails. Can you help me manage this issue? Thanks in advance, and sorry if the question is very basic.
Error handling of popen (or _popen) is difficult.
popen creates a pipe and a process. If this fails, you will get a NULL result, but this occurs only in rare cases (no more system resources to create a pipe or process, or a wrong second argument).
popen passes your command line to a shell (UNIX) or to the command processor (Windows). I'm not sure if you would get a NULL result if the system cannot execute the shell or command processor respectively.
The command line will be parsed by the shell or command processor and errors are handled as if you entered the command manually, e.g. resulting in an error message and/or a non-zero exit code.
A successful popen means nothing more than that the system could successfully start the shell or command processor. There is no direct way to check for errors executing the command or to get the exit code of the command.
Generally I would avoid using popen if possible.
If you want to program specifically for Windows, check if you can get better error handling from Windows API functions like CreateProcess.
Otherwise you could wrap your command in a script that checks the result and prints specific messages you can read and parse to distinguish between success and error. (I don't recommend this approach.)
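As a small POSIX-side illustration of this (an untested sketch; there, pclose() reports the shell's wait status, and an exit status of 127 conventionally means the shell could not find the command; probing with gnuplot --version is just an assumption for this example):

#include <stdio.h>
#include <sys/wait.h>

int main(void)
{
    /* popen() succeeds even if the command does not exist; reading
       (rather than writing) avoids a SIGPIPE if it exits at once. */
    FILE *p = popen("gnuplot --version", "r");
    if (p == NULL) {                 /* rare: pipe or fork failure */
        perror("popen");
        return 1;
    }

    char buf[128];
    while (fgets(buf, sizeof buf, p) != NULL)
        ;                            /* drain output so the child can exit */

    int status = pclose(p);          /* wait status of the shell */
    if (WIFEXITED(status) && WEXITSTATUS(status) == 127)
        printf("ERROR. INSTALL gnuplot FIRST!\n");
    return 0;
}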
Just to piggy-back on @Bodo's answer: on a POSIX-compatible system you can use wait() to wait for a single child process to return and obtain its exit status (which would typically be 127 if the command was not found).
Since you are on Windows you have _cwait(), but this does not appear to be usable with _popen, as it requires a handle to the child process, which _popen does not return or give any obvious access to.
Therefore, it seems the best thing to do is essentially to re-implement popen() manually: create a pipe yourself and spawn the process with one of the _spawn[lv][p][e] functions. In fact, the docs for _pipe() give an example of how one might do this (although in your case you want to attach the read end of the pipe to the child process's stdin and write to the other end).
I have not tried writing an example, though a rough sketch of the idea follows.
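Something like the following, based on the example in the _pipe() documentation (untested; the pipe size and the plot command are placeholders):

#include <stdio.h>
#include <io.h>
#include <fcntl.h>
#include <process.h>

int main(void)
{
    int fds[2];                      /* fds[0] = read end, fds[1] = write end */

    /* _O_NOINHERIT: the child must not inherit the parent's own pipe
       ends, only the _dup2'd copy installed as its stdin below. */
    if (_pipe(fds, 4096, _O_BINARY | _O_NOINHERIT) == -1)
        return 1;

    int old_stdin = _dup(_fileno(stdin));    /* remember our real stdin */
    _dup2(fds[0], _fileno(stdin));           /* child reads from the pipe */
    _close(fds[0]);

    /* _P_NOWAIT returns a process handle instead of blocking. */
    intptr_t child = _spawnlp(_P_NOWAIT, "gnuplot", "gnuplot", "-persist", NULL);

    _dup2(old_stdin, _fileno(stdin));        /* restore our stdin */
    _close(old_stdin);

    if (child == -1) {                       /* e.g. gnuplot not on PATH */
        printf("ERROR. INSTALL gnuplot FIRST!\n");
        _close(fds[1]);
        return 1;
    }

    FILE *to_gnuplot = _fdopen(fds[1], "w");
    fprintf(to_gnuplot, "plot sin(x)\n");
    fclose(to_gnuplot);                      /* EOF lets gnuplot finish */

    int status;
    _cwait(&status, child, _WAIT_CHILD);     /* reap the child, get its exit code */
    return 0;
}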

Executing more with the "exec()" function corrupts line breaking in bash

I had an exercise to write a program that performs the following pipeline:
ls -la | grep "^d" | more
After my program executes, however, the bash interpreter no longer breaks lines or displays typed commands correctly; the commands still run, and their results appear after execution. It looks as if console input is not being echoed to stdout but going somewhere else, and I can't find the reason for this behavior.
I am using 3 child processes with stdio redirected to connect the pipes between them.
The program finishes successfully and shows the correct result; no errors are reported. Also, when I use cat instead of more, everything works normally after execution. Is it possible that more changes some terminal settings and does not change them back?
It's likely that more is turning off echo and canonical mode on your TTY (see man 3 termios) and never switching them back on before it exits (either because it gets killed without a chance to, or because it doesn't think it's attached to a TTY). You can attach to more with gdb to find out why that's happening, or you could simply reset the terminal yourself before exiting.
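For the reset-it-yourself route, a minimal sketch (assuming a POSIX TTY): snapshot the terminal attributes before launching the pipeline and restore them afterwards:

#include <termios.h>
#include <unistd.h>

int main(void)
{
    struct termios saved;
    int have_tty = (tcgetattr(STDIN_FILENO, &saved) == 0);

    /* ... fork/exec the ls | grep | more pipeline here and wait for it ... */

    if (have_tty)                    /* undo whatever more left switched off */
        tcsetattr(STDIN_FILENO, TCSANOW, &saved);
    return 0;
}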

log file removed while redirecting stdout

I have some code where I redirect stdout to a logfile (the usual dup, open, and dup2) and then revert stdout back. This works fine as long as only C code runs in between, but when I call system() to execute some shell/perl scripts, I see that the logfile gets removed at the end of execution! (The scripts being invoked don't reference the logfile name and don't unlink anything.)
I can see the logfile being written while the scripts are executing.
The code block is like this:
/* redirect stdout to logfile */
system(scripts);
/* reset stdout */
I want to capture all the messages to stdout into logfile.
Any hints or help debugging this further are greatly appreciated.
It's hard to tell what's wrong without having the source code.
It seems that your "reverting back" of stdout is not complete before you call system().
What comes to my mind is using the exec family instead of system(), because with exec you control the file descriptors that your process holds when invoking an external program. (Though in this case the file descriptor is intended to be closed.)
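For comparison, this is the usual redirect/restore dance around system() as a self-contained, untested sketch; a missing fflush() is a common source of odd behavior here, and "run.log" and the script name are placeholders:

#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    fflush(stdout);                  /* flush before switching targets */
    int saved = dup(STDOUT_FILENO);  /* keep a handle on the real stdout */
    int log = open("run.log", O_WRONLY | O_CREAT | O_APPEND, 0644);
    if (saved == -1 || log == -1)
        return 1;
    dup2(log, STDOUT_FILENO);        /* stdout now goes to the logfile */
    close(log);

    system("./myscript.pl");         /* the child inherits the redirection */

    fflush(stdout);                  /* flush the logfile's buffer */
    dup2(saved, STDOUT_FILENO);      /* revert back to the terminal */
    close(saved);
    return 0;
}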

Batch fork bomb? [duplicate]

If you run a .bat or .cmd file containing %0|%0, your computer starts to use a lot of memory and, after several minutes, restarts. Why does this code block your Windows machine? And what does this code do, programmatically? Could it be considered a "bug"?
This is the Windows version of a fork bomb.
%0 is the name of the currently executing batch file. A batch file that contains just this line:
%0|%0
is going to recursively execute itself forever, quickly creating many processes and slowing the system down.
This is not a bug in Windows, it is just a very stupid thing to do in a batch file.
This is known as a fork bomb.
It keeps splitting itself until there is no option but to restart the system.
http://en.wikipedia.org/wiki/Fork_bomb
What it is:
%0|%0 is a fork bomb. It will spawn another process using a pipe |, which runs a copy of the same program asynchronously. This hogs the CPU and memory, slowing the system to a near-halt (or even crashing it).
How this works:
%0 refers to the command used to run the current program. For example, script.bat
A pipe | symbol makes the output of the first command sequence the input of the second command sequence. In the case of a fork bomb there is no output, so the second command sequence simply runs without any input.
Expanding the example, %0|%0 could mean script.bat|script.bat. This runs the script again, while also creating another process to run the same program (with no input).
%0 will never end, but it never creates more than one process because it instantly transfers control to the 2nd batch script (which happens to be itself).
But a Windows pipe creates a new process for each side of the pipe, in addition to the parent process. The parent process can't finish until each side of the pipe terminates, so even a program with one simple pipe involves three processes. You can see how the bomb quickly gets out of control when each side of the pipe recursively invokes the parent batch file!
It's a fork bomb: it keeps recreating itself and takes up all your CPU resources, overloading your computer with too many processes and forcing it to shut down. If you make a batch file with this in it and start it, you can end it using taskmgr, but you have to do so quickly or your computer will be too slow to do anything.

create process independent of bash

I have written a program which reads the battery level of my laptop. I have also defined a threshold value in the program; whenever the battery level falls below the threshold, I would like to launch another process. I have used system("./invoke.o"), where invoke.o is the program that I have to run. I am running a script which runs the battery-level checker every 5 seconds. Everything works fine, but when I close the bash shell, the automatic invocation of invoke.o stops happening. How can I make invoke.o be invoked regardless of whether bash is closed or not? I am using Ubuntu Linux.
Try running it as: nohup ./myscript.sh, where the nohup command allows you to close the shell without terminating the process.
You could run your script as a cron job. This lets cron set up standard input and output for you, reschedule the job, and it will send you email if it fails.
The alternative is to run a script in the background with all input and output, including standard error output, redirected.
While you could make a proper daemon out of your program, that kind of effort is probably not necessary.
man nohup
man upstart
man 2 setsid (more complex; leads to a longer trail of breadcrumbs on daemon launching; a minimal sketch follows below).
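As a minimal sketch of the setsid() route (untested; the battery check itself is elided, and ./invoke.o is the helper named in the question):

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>

int main(void)
{
    pid_t pid = fork();
    if (pid < 0) {
        perror("fork");
        return 1;
    }
    if (pid > 0)
        return 0;                    /* parent exits back to the shell */

    setsid();                        /* new session: no controlling terminal */

    for (;;) {
        /* ... read the battery level; if it is below the threshold: ... */
        system("./invoke.o");
        sleep(5);                    /* the 5-second poll from the question */
    }
}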
