Batch fork bomb? [duplicate]

If you run a .bat or .cmd file with %0|%0 inside, your computer starts to use a lot of memory and, after several minutes, restarts. Why does this code lock up Windows? What does this code do, programmatically? Could it be considered a "bug"?

This is the Windows version of a fork bomb.
%0 is the name of the currently executing batch file. A batch file that contains just this line:
%0|%0
is going to recursively execute itself forever, quickly creating many processes and slowing the system down.
This is not a bug in Windows; it is just a very stupid thing to do in a batch file.

This is known as a fork bomb.
It keeps splitting itself until there is no option but to restart the system.
http://en.wikipedia.org/wiki/Fork_bomb

What it is:
%0|%0 is a fork bomb. It spawns another process using a pipe |, which runs a copy of the same program asynchronously. This hogs CPU and memory, slowing the system to a near-halt (or even crashing it).
How this works:
%0 refers to the command used to run the current program. For example, script.bat
The pipe symbol | feeds the output of the first command to the second command as its input. In the case of a fork bomb there is no output, so the second command simply runs without any input.
Expanding the example, %0|%0 could mean script.bat|script.bat. This runs the script again, while also creating another process to run the same program (with no input).
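For comparison, the classic Unix fork bomb does the same thing in C with fork(2); a minimal sketch, shown only to illustrate the mechanism (do not run it outside a disposable VM):

#include <unistd.h>

int main(void)
{
    /* WARNING: exponential process creation; this will freeze the machine */
    for (;;)
        fork(); /* every copy keeps forking, so the process count explodes */
    return 0;
}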

A batch file containing just %0 would never end either, but it would never create more than one process, because it instantly transfers control to the second batch script (which happens to be itself).
But a Windows pipe creates a new process for each side of the pipe, in addition to the parent process. The parent process cannot finish until each side of the pipe terminates, so the main program with a simple pipe involves 3 processes. You can see how the bomb quickly gets out of control when each side of the pipe recursively invokes the parent batch file!

It's a fork bomb: it keeps recreating itself and takes up all your CPU resources. It overloads your computer with too many processes and forces it to shut down. If you make a batch file with this in it and start it, you can end it using Task Manager (taskmgr), but you have to do this quickly or your computer will be too slow to do anything.

Related

How to avoid C thread blocking by a child process opened with io.popen from lua?

I have a C pthread running a non-blocking event loop which executes Lua scripts. A Lua script calls io.popen to execute a shell script which takes some time to complete.
I am only interested in capturing the first few characters from the output of the script, even though the script takes time to complete. As the Lua script reads those first few characters and reaches its end, it returns execution to C as expected. The problem is that the shell script that was opened with io.popen keeps running in the background until it completes. Calling f:close() in Lua after reading the first few characters does not seem to halt its execution (it appears to only close the read side of the pipe?). Unfortunately, it starts blocking my C thread and displays "write error: broken pipe" messages until the shell script ends. This would make sense if the read end of the pipe is closed (and garbage collected) as the Lua script ends execution while the write end is still being written to by the shell script. The question is: how do I either kill the shell script, since I am no longer interested in the rest of its output/execution, or silence it in a way that will not block my parent process?
I have tried redirecting 2>/dev/null on the io.popen command but that only silenced the "write error: broken pipe" messages - the blocking is still there.
Is there a way, from C, to get the PID of the child process that Lua opens with io.popen, so that I can kill it? (I'm thinking that might also help.)
Any thoughts/suggestions are greatly appreciated!
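One hedged workaround, assuming you can create the pipe yourself on the C side instead of relying on io.popen, is to fork/exec the command so that its PID is known and it can be killed once enough output has been read. A sketch (popen_with_pid is a made-up name; error handling trimmed):

#include <unistd.h>

/* sketch: run cmd through a pipe we create ourselves, so the child's
   PID is known and the child can be killed early */
pid_t popen_with_pid(const char *cmd, int *out_fd)
{
    int fds[2];
    if (pipe(fds) < 0)
        return -1;
    pid_t pid = fork();
    if (pid < 0)
        return -1;
    if (pid == 0) {
        /* child: the pipe's write end becomes stdout, then run the command */
        close(fds[0]);
        dup2(fds[1], STDOUT_FILENO);
        close(fds[1]);
        execl("/bin/sh", "sh", "-c", cmd, (char *)NULL);
        _exit(127);
    }
    close(fds[1]);
    *out_fd = fds[0]; /* caller reads the first few bytes from this fd ... */
    return pid;       /* ... then kill(pid, SIGTERM) and waitpid() it */
}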

Why does vim crash when it becomes an orphan process?

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    pid_t pid = fork();
    if (pid) {
        /* parent: linger for 5 seconds, then exit and orphan the child */
        sleep(5);
        // wait(NULL); // works fine when we wait for the child
    } else {
        /* child: replace this process image with vim */
        execlp("vim", "vim", (char *)NULL);
    }
}
When I run this code, vim runs normally, then crashes after 5 seconds (i.e. when its parent exits). When I wait for it (i.e. do not let it become an orphan process), the code works totally fine.
Why does becoming an orphan process become a problem here? Is it something specific to vim?
Why is this even visible to vim? I thought that only the parent is notified when its children die. But here the child somehow notices when it gets adopted, and something happens that makes it crash. Do child processes get notified when their parent dies as well?
When I run this code, I get this output after the crash:
Vim: Error reading input, exiting...
Vim: preserving files...
Vim: Finished.
This actually happens because of the shell that is executing the binary that forks Vim!
When the shell runs a foreground command, it creates a new process group and makes it the foreground process group of the terminal attached to the shell. In bash 5.0, you can find the code that transfers this responsibility in give_terminal_to(), which uses tcsetpgrp() to set the foreground process group.
It is necessary to set the foreground process group of a terminal correctly, so that the program running in the foreground can receive signals from the terminal (for example, Ctrl+C sending an interrupt signal, Ctrl+Z sending a terminal stop signal to suspend the process) and also change terminal settings in ways that full-screen programs such as Vim typically do. (The subject of foreground process groups is a bit out of scope for this question; it is mentioned here because it plays a part in the answer.)
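As a rough illustration (not bash's actual code), handing the terminal to a job's process group boils down to something like this in C; the function name mirrors bash's give_terminal_to(), but the body is a simplification:

#include <signal.h>
#include <unistd.h>

/* sketch: roughly what a shell does to put a job in the foreground */
void give_terminal_to_job(pid_t job_pgid)
{
    /* ignore SIGTTOU, or calling tcsetpgrp() from a non-foreground
       process would stop us with that signal */
    signal(SIGTTOU, SIG_IGN);
    /* make job_pgid the foreground process group of the controlling tty */
    tcsetpgrp(STDIN_FILENO, job_pgid);
}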
When the process (more precisely, the pipeline) executed by the shell terminates, the shell will take back the foreground process group, using the same give_terminal_to() code by calling it with the shell's process group.
This is usually fine, because at the time the executed pipeline is finished, there's usually no process left on that process group, or if there are any, they typically don't hold on to the terminal (for example, if you're launching a background daemon from the shell, the daemon will typically close the stdin/stdout/stderr streams to relinquish access to the terminal.)
But that's not really the case with the setup you proposed, where Vim is still attached to the terminal and part of the foreground process group. When the parent process exits, the shell assumes the pipeline is finished and it will set the foreground process group back to itself, "stealing" it from the former foreground process group which is where Vim is. Consequently, the next time Vim tries to read from the terminal, the read will fail and Vim will exit with the message you reported.
One way to see for yourself that the parent process exiting does not, by itself, affect Vim is to run it through strace. For example, with the following command (assuming ./vim-launcher is your binary):
$ strace -f -o /tmp/vim-launcher.strace ./vim-launcher
Since strace is running with the -f option to follow forks, it will also start tracing Vim when it's launched. The shell will be executing strace (not vim-launcher), so its foreground pipeline will only end when strace stops running. And strace will not stop running until Vim exits. Vim will work just fine past the 5 seconds, even though it's been reparented to init.
There also used to be an fghack tool, part of daemontools, that accomplished the same task of blocking until all forked children had exited. It worked by creating a new pipe whose write end was inherited by the process it spawned and, in turn, by all of that process's forked children. That way, it could block until all copies of that pipe file descriptor were closed, which typically happens only when all those processes have exited. (A background process could go out of its way to close all inherited file descriptors, but that essentially declares that it does not want to be tracked, and it would most probably have relinquished its access to the terminal by that point.)
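A minimal sketch of that pipe trick, with ./vim-launcher (from the strace example above) standing in for the spawned program and error handling trimmed:

#include <unistd.h>

int main(void)
{
    int fds[2];
    pipe(fds); /* fds[0]: read end (kept), fds[1]: write end (inherited) */
    pid_t pid = fork();
    if (pid == 0) {
        /* child: the write end stays open across exec and is inherited
           by every process the child forks in turn */
        close(fds[0]);
        execlp("./vim-launcher", "./vim-launcher", (char *)NULL);
        _exit(127);
    }
    close(fds[1]); /* parent must drop its own copy of the write end */
    char c;
    /* read() returns 0 (EOF) only once every copy of the write end is
       closed, i.e. when the child and all its descendants have exited */
    while (read(fds[0], &c, 1) > 0)
        ;
    return 0;
}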

Disallowing printf in child process

I've got a command-line app in C under Linux that has to run another process. The problem is that the child process prints a lot to the command line and the whole app gets messy.
Is it possible to prevent the child process from printing anything to the command line, from the parent process? It would be helpful, for example, to be able to define a command that allows or disallows printing by a child process.
There's the time-honoured tradition of just redirecting the output to the bit bucket(a), along the lines of:
system("runChild >/dev/null 2>&1");
Or, if you're doing it via fork/exec, simply redirect the file handles using dup2 between the fork and exec.
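For instance, a hedged sketch of that fork/dup2/exec route, reusing the runChild name from above (error handling mostly omitted):

#include <fcntl.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    pid_t pid = fork();
    if (pid == 0) {
        /* child: point stdout and stderr at the bit bucket, then exec */
        int fd = open("/dev/null", O_WRONLY);
        dup2(fd, STDOUT_FILENO);
        dup2(fd, STDERR_FILENO);
        close(fd);
        execlp("runChild", "runChild", (char *)NULL);
        _exit(127); /* only reached if exec fails */
    }
    waitpid(pid, NULL, 0); /* parent: wait for the silenced child */
    return 0;
}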
It won't stop a determined child from writing to your standard output, but the child would have to be very tricky to manage that.
(a) I'm not usually a big fan of that, just in case something goes wrong. I'd prefer to redirect it to a real file which can be examined later if need be (and deleted eventually if not).
Read Advanced Linux Programming then syscalls(2).
On recent Linux, every executable is in ELF format, and every process (except init or systemd; play with pstree(1) or proc(5)) is started by fork(2) (or clone(2)...) and execve(2).
You might cleverly use dup2(2) with open(2) to redirect STDOUT_FILENO to /dev/null (see null(4), stdout(3), fileno(3)).
I've got a command-line app in C under Linux that has to run another process; the problem is that the child process prints a lot to the command line
I would instead provide a way to selectively redirect the child process' output. You could use program arguments or environment variables (see getenv(3) and/or environ(7)) to provide such an option to your user.
An example of such a program, starting and redirecting subprocesses, is your GCC compiler (see gcc(1); it runs cc1, as(1), and ld(1)...). Consider downloading and studying its source code.
Study also -for inspiration- the source code of some shell (e.g. sash), or write your own one.

create process independent of bash

I have written a program which calculates the battery level available on my laptop. I have also defined a threshold value in the program. Whenever the battery level falls below the threshold, I would like to start another process. I have used system("./invoke.o"), where invoke.o is the program that I have to run. I am running a script which runs the battery-level checker every 5 seconds. Everything works fine, but when I close the bash shell the automatic invocation of invoke.o stops happening. How should I make invoke.o be invoked irrespective of whether bash is closed or not? I am using Ubuntu Linux.
Try running it as: nohup ./myscript.sh, where the nohup command allows you to close the shell without terminating the process.
You could run your script as a cron job. This lets cron set up standard input and output for you, reschedule the job, and it will send you email if it fails.
The alternative is to run a script in the background with all input and output, including standard error output, redirected.
While you could make a proper daemon out of your program, that kind of effort is probably not necessary.
man nohup
man upstart
man 2 setsid (more complex, leads to longer trail of breadcrumbs on daemon launching).
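If you do want to follow the setsid breadcrumbs, a bare-bones double-fork sketch looks something like this (./invoke.o as in the question; error handling omitted):

#include <fcntl.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    if (fork() > 0)
        exit(0);            /* parent returns control to the shell */
    setsid();               /* new session: no controlling terminal */
    if (fork() > 0)
        exit(0);            /* session leader exits; child can't regain a tty */
    /* detach stdio so the (possibly closed) terminal is not held open */
    int fd = open("/dev/null", O_RDWR);
    dup2(fd, STDIN_FILENO);
    dup2(fd, STDOUT_FILENO);
    dup2(fd, STDERR_FILENO);
    if (fd > STDERR_FILENO)
        close(fd);
    execl("./invoke.o", "invoke.o", (char *)NULL);
    return 1; /* exec failed */
}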

Semantics of Windows batch programming redirection

I have a Windows batch script (my.bat) which has the following line:
DTBookMonitor.exe 2>&1 > log\cmdProcessLog.txt
So, from my understanding, this runs DTBookMonitor, redirects STDERR to STDOUT and then redirects STDOUT to the file log\cmdProcessLog.txt.
I then run my.bat. DTBookMonitor runs for a significant amount of time. When I run my.bat a second time (while the first instance is still running), the second instance exits immediately.
Is this purely because of the redirection to cmdProcessLog?
Better late than never :)
Windows redirection locks the output file so that no other process can open the file for writing at the same time. That is why the second instance fails when it tries to redirect output to the same file.
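You can reproduce that lock from C. This sketch assumes cmd opens redirect targets writable with read-only sharing (no FILE_SHARE_WRITE), so a second writer fails with a sharing violation:

#include <stdio.h>
#include <windows.h>

int main(void)
{
    /* open the log the way cmd's > redirection presumably does:
       writable, with other readers allowed but no other writers */
    HANDLE h = CreateFileA("log\\cmdProcessLog.txt", GENERIC_WRITE,
                           FILE_SHARE_READ, NULL, OPEN_ALWAYS,
                           FILE_ATTRIBUTE_NORMAL, NULL);
    if (h == INVALID_HANDLE_VALUE) {
        /* a second instance lands here: ERROR_SHARING_VIOLATION (32) */
        printf("open failed, error %lu\n", GetLastError());
        return 1;
    }
    printf("holding the file open; press Enter to release it\n");
    getchar();
    CloseHandle(h);
    return 0;
}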
I'd guess it's either due to that, or because DTBookMonitor only allows one instance of it to run at a time. The following test should shed some light on the situation:
Run the first (long) instance of DTBookMonitor
Run a second instance without redirecting any of its output
Alternatively, run a second instance, but redirect the output to a file other than log\cmdProcessLog.txt
Do you get similar results? Different results?
