ConEmu: Send SIGINT to running application

As Ctrl+C copies the current selection rather than killing the current application in ConEmu, I wonder how to do the latter now. I know there is Ctrl+Alt+Break ("Terminate (kill) active process in the current console: Close(1)"), but does this behave the same as pressing Ctrl+C in a plain old cmd.exe window?
AFAIK Ctrl+C usually sends SIGINT (or whatever Windows has instead) before killing the window, so that the application can exit voluntarily.
Thanks!

The solution is to assign the hotkey Ctrl+C to an arbitrary macro (01-32) that is configured to run "Break(1)".
The existing binding of Ctrl+C to the "Copy" command must be removed.

Related

Why does vim crash when it becomes an orphan process?

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main()
{
    int pid = fork();
    if (pid) {
        sleep(5);
        // wait(NULL); // works fine when waited for it.
    } else {
        execlp("vim", "vim", (char *)NULL);
    }
}
When I run this code, vim runs normally, then crashes after the 5 seconds (i.e. when its parent exits). When I wait for it (i.e. don't let it become an orphan process), the code works totally fine.
Why does becoming an orphan process cause a problem here? Is it something specific to vim?
Why is this even visible to vim? I thought only the parent knows when its children die. But here the child somehow notices when it gets adopted, and something happens that makes it crash. Do child processes get notified when their parent dies as well?
When I run this code, I get this output after the crash:
Vim: Error reading input, exiting...
Vim: preserving files...
Vim: Finished.
This actually happens because of the shell that is executing the binary that forks Vim!
When the shell runs a foreground command, it creates a new process group and makes it the foreground process group of the terminal attached to the shell. In bash 5.0, you can find the code that transfers this responsibility in give_terminal_to(), which uses tcsetpgrp() to set the foreground process group.
It is necessary to set the foreground process group of a terminal correctly, so that the program running in foreground can get signals from the terminal (for example, Ctrl+C sending an interrupt signal, Ctrl+Z sending a terminal stop signal to suspend the process) and also change terminal settings in ways that full-screen programs such as Vim typically do. (The subject of foreground process group is a bit out of scope for this question, just mentioning it here since it plays part in the response.)
When the process (more precisely, the pipeline) executed by the shell terminates, the shell will take back the foreground process group, using the same give_terminal_to() code by calling it with the shell's process group.
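As a rough illustration of that hand-off, here is a minimal sketch of what a job-control shell does around a single foreground job. The structure loosely follows the glibc manual's job-control example rather than bash's actual code, and using vim as the child is just for symmetry with the question:

#include <signal.h>
#include <sys/wait.h>
#include <unistd.h>

/* Sketch: give the terminal to a foreground job, then take it back. */
int main(void)
{
    pid_t shell_pgid = getpgrp();          /* the shell's own process group */
    signal(SIGTTOU, SIG_IGN);              /* shells ignore SIGTTOU so tcsetpgrp()
                                              works even from the background */
    pid_t child = fork();
    if (child == 0) {
        setpgid(0, 0);                     /* child: move into its own group */
        tcsetpgrp(STDIN_FILENO, getpid()); /* make that group the foreground one */
        signal(SIGTTOU, SIG_DFL);          /* restore default dispositions before exec */
        execlp("vim", "vim", (char *)NULL);
        _exit(127);
    }
    setpgid(child, child);                 /* parent does the same, avoiding a race */
    tcsetpgrp(STDIN_FILENO, child);        /* give the terminal to the job */

    waitpid(child, NULL, 0);               /* the "pipeline" has finished... */

    tcsetpgrp(STDIN_FILENO, shell_pgid);   /* ...so take the terminal back, like
                                              give_terminal_to(shell_pgid) in bash */
    return 0;
}

If the job had left another process behind in that group, the final tcsetpgrp() is exactly the step that takes the terminal away from it.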
This is usually fine, because at the time the executed pipeline is finished, there's usually no process left on that process group, or if there are any, they typically don't hold on to the terminal (for example, if you're launching a background daemon from the shell, the daemon will typically close the stdin/stdout/stderr streams to relinquish access to the terminal.)
But that's not really the case with the setup you proposed, where Vim is still attached to the terminal and part of the foreground process group. When the parent process exits, the shell assumes the pipeline is finished and it will set the foreground process group back to itself, "stealing" it from the former foreground process group which is where Vim is. Consequently, the next time Vim tries to read from the terminal, the read will fail and Vim will exit with the message you reported.
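For a quick way to observe this from a program's point of view, here is a small check (an illustrative sketch, not something Vim itself does in this form) that reports whether the calling process is still in the terminal's foreground process group:

#include <stdio.h>
#include <unistd.h>

/* Report whether we are in the terminal's foreground process group.
 * A full-screen program in the background cannot safely read the terminal. */
int main(void)
{
    pid_t fg = tcgetpgrp(STDIN_FILENO);
    if (fg == -1)
        perror("tcgetpgrp");
    else if (fg == getpgrp())
        printf("foreground: our group %d owns the terminal\n", (int)fg);
    else
        printf("background: terminal owned by group %d, ours is %d\n",
               (int)fg, (int)getpgrp());
    return 0;
}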
One way to see for yourself that the parent process exiting does not by itself affect Vim is to run it through strace. For example, with the following command (assuming ./vim-launcher is your binary):
$ strace -f -o /tmp/vim-launcher.strace ./vim-launcher
Since strace is running with the -f option to follow forks, it will also start tracing Vim when it's launched. The shell will be executing strace (not vim-launcher), so its foreground pipeline will only end when strace stops running. And strace will not stop running until Vim exits. Vim will work just fine past the 5 seconds, even though it's been reparented to init.
There also used to be an fghack tool, part of daemontools, that accomplished the same task of blocking until all forked children had exited. It did this by creating a new pipe and having the spawned process inherit the write end, which in turn was inherited by any further children it forked. fghack could then block until every copy of that pipe file descriptor was closed, which typically only happens when all the processes have exited (unless a background process goes out of its way to close all inherited file descriptors, but that essentially means it does not want to be tracked, and it would most probably have relinquished its access to the terminal by that point.)
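A rough sketch of that pipe trick follows. It assumes the launched program (and its children) leave inherited descriptors alone, and the code is illustrative rather than daemontools' actual implementation:

#include <stdio.h>
#include <unistd.h>

/* fghack-style: hold a pipe open across fork/exec and block until every
 * process holding a copy of the write end has exited. */
int main(int argc, char *argv[])
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s command [args...]\n", argv[0]);
        return 1;
    }

    int fds[2];
    if (pipe(fds) == -1) {
        perror("pipe");
        return 1;
    }

    pid_t pid = fork();
    if (pid == 0) {
        close(fds[0]);               /* child keeps only the write end; it is */
        execvp(argv[1], &argv[1]);   /* inherited by anything the child forks */
        perror("execvp");
        _exit(127);
    }
    close(fds[1]);                   /* parent keeps only the read end */

    char buf[1];
    while (read(fds[0], buf, 1) > 0) /* EOF arrives only once *all* write ends */
        ;                            /* are closed, i.e. all descendants exited */
    return 0;
}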

Creating my own shell. Handling Ctrl-Z and then sending SIGCONT closes the process instead of continuing it

I am creating my own shell in C. So far I have implemented many features, but the thing I am having problems with is Ctrl-Z handling (SIGTSTP). Let me describe the problem by going through the cases that do work first:
When I execute a program in my shell (like gedit) and then press Ctrl-Z, it executes kill(p_id, SIGTSTP) and stops that process. The shell also adds the process id to the background_processes array so we can reach it later. Then, if I type "fg" in my shell, it brings the process to the foreground and executes kill(p_id, SIGCONT) so we can continue to use the program. The shell also waits for the process to complete by calling waitpid. We close the program by clicking the X button or pressing Ctrl-C. Exactly the same as in a Linux shell. SUCCESSFUL!
If I execute a program in my shell (like gedit) in the background by specifying & (ampersand), it starts the process in the background without waiting for it, but it still adds the process id to the background_processes array so we can reach it later. Then, when I type "fg" in my shell, it brings the process to the foreground and waits for it to complete by calling waitpid. It also doesn't matter if I have more than one process in the background; they will be brought to the foreground one by one. We close the programs by clicking the X button or pressing Ctrl-C. Exactly the same as in a Linux shell. SUCCESSFUL!
Let's execute a process in the foreground, stop it with Ctrl-Z so it goes to the background, and then execute another process in the background. Now we have two processes in the background. If I type "fg", the shell brings the first background process to the foreground and waits for it. If I press the X (close) button, which closes the program, the shell brings the second process to the foreground and waits for it. Going very well, right? That's what we want. So this scenario also works well.
The problem scenario is the same as the previous one in how the processes are created. When I type "fg", the shell brings the first background process to the foreground and waits for it. But then, if I press Ctrl-C, it closes both processes! It should have closed only the first process and kept waiting for the second one!
I searched everywhere and tried everything, but couldn't figure it out. The problem seems to be around line 525: when I send the SIGCONT signal, it closes the process. If I comment out that line it doesn't close, but then I can't use the process either, since it stays stopped!
I have the code in my GitHub repo here : https://github.com/EmreKumas/Myshell
Thanks for reading...
It seems the problem was caused by process groups. I was only creating different process groups for background jobs, and since you cannot reliably change the process group of a child after it has called exec, you should do it at the beginning, before the exec call. Now the problem is solved, thanks to "that other guy" and "John Bollinger".
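For reference, here is a minimal sketch of that fix, with illustrative names rather than the actual code from the linked repository: setpgid() is called in both the parent and the child before the exec, and a foreground job is handed the terminal, so signals from Ctrl-C or a later kill(-pgid, ...) only reach that job and not the whole shell.

#include <signal.h>
#include <sys/wait.h>
#include <unistd.h>

/* Put each job in its own process group *before* exec, in both processes. */
static pid_t launch_job(char *const argv[], int foreground)
{
    pid_t pid = fork();
    if (pid == 0) {
        setpgid(0, 0);                         /* child: new group = its own pid */
        if (foreground)
            tcsetpgrp(STDIN_FILENO, getpid()); /* foreground jobs get the terminal */
        signal(SIGINT, SIG_DFL);               /* restore default signal handling */
        signal(SIGTSTP, SIG_DFL);
        signal(SIGTTOU, SIG_DFL);
        execvp(argv[0], argv);
        _exit(127);
    }
    setpgid(pid, pid);                         /* parent: same call, avoids the race */
    if (foreground)
        tcsetpgrp(STDIN_FILENO, pid);
    return pid;
}

int main(void)
{
    signal(SIGTTOU, SIG_IGN);                  /* the shell itself ignores SIGTTOU */
    char *job[] = { "sleep", "3", NULL };
    pid_t pid = launch_job(job, 1);
    waitpid(pid, NULL, WUNTRACED);             /* WUNTRACED also reports Ctrl-Z stops */
    tcsetpgrp(STDIN_FILENO, getpgrp());        /* take the terminal back */
    return 0;
}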

Terminating a Process from CMD: Softest to Hardest

I was wondering if anyone has experience using the command line to terminate processes with taskkill and WMIC.
Specifically, does anyone know how "hard" a close/terminate each of these commands performs, ordered from the "softest" (least forceful) to the "hardest" (most forceful)?
My guess would be:
Least/Softest
1) taskkill /im processname.exe
2) wmic process where name="processname.exe" call terminate
3) wmic process where name='processname.exe' delete
4) taskkill /f /im processname.exe
Most/Hardest
I am trying to create a batch file and want to know the difference between these, to decide which I should use.
I prefer to attempt a softer close first, check whether the process is still running, then try a harder close, and repeat until the program is successfully closed. Any information on the differences between these commands would be helpful, especially between terminate and delete in WMIC, as I cannot find documentation on them anywhere.
As CatCat mentioned, there are two main ways to terminate a process: WM_CLOSE and TerminateProcess(). I've included two more for completeness' sake.
Sending the window message WM_CLOSE to the main window of the process. This is the same message an application receives when the user clicks the X button to close the window. The app may then shut down gracefully or ask the user for confirmation, for example if some work is unsaved.
taskkill without /f appears to attempt this, but does not always seem to find the correct window to close. If the app is not supposed to have a visible window (for example, if it only shows an icon in the system tray or is a windowless server), it may ignore this message entirely.
If taskkill does not work for you, NirCmd may do a better job: NirCmd.exe closeprocess iexplore.exe
There is also the WM_ENDSESSION message; it is sent by the OS when shutting down (after WM_QUERYENDSESSION). It works much the same way, except that it is sent to the whole application rather than a specific window. Depending on its parameters, apps may be asked to save their work to temporary files because the system needs to restart to apply updates. Some applications react to this message, some don't.
It should be possible to send these messages manually, but I have not seen it done other than to test how an app reacts to shutdown without actually shutting down the OS.
The WM_QUIT message signals that the application as a whole needs to shut down (more precisely, it is sent to a thread). An application normally posts it to itself once its window has finished closing and it is time to end the process.
It is possible to manually post this message to every thread of another process, but that is hackish and rare; it may crash processes that don't expect to receive the message from outside. I'm not sure whether it's a better option than just terminating the process.
TerminateProcess() tells the OS to forcefully terminate the process. This is what happens when you click the End process button on the Processes tab of Task Manager. The process is not notified that it is being closed; it is simply stopped where it was and removed from memory, with no questions, no shutdown, etc.
This may cause corruption if files were being written or data was being transferred at that moment.
That is what the taskkill /f command does. Both wmic process call terminate and wmic process delete appear to do this as well, although I'm not sure.
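As a sketch of that soft-then-hard escalation at the API level (an approximation, not taskkill's actual implementation, and the 5-second timeout is arbitrary), a small C program could post WM_CLOSE to the target's top-level windows and fall back to TerminateProcess() if it doesn't exit:

#include <windows.h>
#include <stdio.h>
#include <stdlib.h>

/* Ask each top-level window of the target process to close. */
static BOOL CALLBACK close_windows_of_pid(HWND hwnd, LPARAM lparam)
{
    DWORD pid = 0;
    GetWindowThreadProcessId(hwnd, &pid);
    if (pid == (DWORD)lparam)
        PostMessageW(hwnd, WM_CLOSE, 0, 0);
    return TRUE;                                      /* keep enumerating */
}

int main(int argc, char *argv[])
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <pid>\n", argv[0]);
        return 1;
    }
    DWORD pid = (DWORD)strtoul(argv[1], NULL, 10);

    HANDLE proc = OpenProcess(SYNCHRONIZE | PROCESS_TERMINATE, FALSE, pid);
    if (proc == NULL) {
        fprintf(stderr, "OpenProcess failed: %lu\n", GetLastError());
        return 1;
    }

    EnumWindows(close_windows_of_pid, (LPARAM)pid);   /* soft: ask nicely via WM_CLOSE */

    if (WaitForSingleObject(proc, 5000) == WAIT_TIMEOUT) {
        TerminateProcess(proc, 1);                    /* hard: force kill */
        WaitForSingleObject(proc, INFINITE);
    }

    CloseHandle(proc);
    return 0;
}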
Using wmic:
Print all running processes where the name of the process is cmd.exe:
wmic process where name="cmd.exe" GET ProcessId, CommandLine, CreationClassName
Then terminate a specific instance of the process by process id (PID):
WMIC PROCESS WHERE "ProcessID=13800" CALL TERMINATE

What does “&” mean at the end of a command line? [duplicate]

I am a system administrator and I have been asked to run a Linux script to clean the system.
The command is this:
perl script.pl > output.log &
So this command ends with an & sign. Is there any special significance to it?
I have basic knowledge of the shell, but I have never seen this before.
The & makes the command run in the background.
From man bash:
If a command is terminated by the control operator &, the shell executes the command in the background in a subshell. The shell does not wait for the command to finish, and the return status is 0.
When not told otherwise, commands take over the foreground. You can only have one "foreground" process running in a single shell session. The & symbol instructs the command to run as a background process and immediately returns you to the command line for additional commands.
sh my_script.sh &
A background process will not stay alive after the shell session is closed: SIGHUP terminates all running processes, by default anyway. If your command is long-running or runs indefinitely (e.g. a microservice), you need to prepend it with nohup so it remains running after you disconnect from the session:
nohup sh my_script.sh &
EDIT: There does appear to be a gray area regarding the closing of background processes when & is used. Just be aware that the shell may close your process depending on your OS and local configuration (particularly on CentOS/RHEL):
https://serverfault.com/a/117157.
In addition, you can use the "&" sign to run many processes over a single ssh connection, keeping the number of open terminals to a minimum. For example, I have one process that listens for messages in order to extract files, and a second that listens for messages in order to upload files. Using "&" I can run both services in one terminal, over a single ssh connection to my server.
Processes started with "&" like this will also "stay alive" after the ssh session is closed. Pretty neat and useful if the ssh connection to the server is interrupted and no terminal multiplexer (screen, tmux, byobu) was used.
I don't know for sure, but I'm reading a book right now, and what I am getting is that a program needs to handle its signals (as when I press Ctrl-C). A program can set a signal's disposition to SIG_IGN to ignore it or to SIG_DFL to restore the default action.
Now, if you run $ command &, the process runs in the background and (in shells without job control) simply ignores the interrupt signals coming from the terminal. For foreground processes these signals are not ignored.
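For what it's worth, that matches what shells without job control do for an &-command: before the exec they set SIGINT and SIGQUIT to SIG_IGN in the child (and typically redirect its stdin from /dev/null), so a later Ctrl+C aimed at the foreground job does not reach it. A minimal sketch of the idea:

#include <signal.h>
#include <unistd.h>

/* Sketch: how a shell without job control launches "command &". */
int main(void)
{
    if (fork() == 0) {
        signal(SIGINT, SIG_IGN);     /* background child ignores interrupt/quit */
        signal(SIGQUIT, SIG_IGN);
        execlp("sleep", "sleep", "60", (char *)NULL);
        _exit(127);
    }
    return 0;                        /* the shell does not wait for the job */
}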
Suppose you have a command that does not finish and give you back the prompt quickly.
For example:
The command gedit launches the gedit editor UI.
The command eclipse launches the Eclipse IDE.
Such commands keep printing their activity logs in the terminal and don't return the command prompt.
The question is how to run such commands in the background, so that we get the terminal back and can use it for other tasks.
The answer is: by appending & after the command.
user@mymachine:~$ <command> &
Examples:
user@mymachine:~$ gedit &
user@mymachine:~$ eclipse &

Why does the system(cmd) function need the & background parameter in the command string on Linux ARM?

I'm going to run a regular program on a Linux ARM embedded device.
I tried to use the system(cmd) function to run a Linux shell cmd from my program.
cmd would be an audio playing command: "aplay -N sound.wav".
If cmd is as above, no sound comes out of my Linux device, and the process of the program ends up in the T state (traced or stopped).
If cmd is set to "aplay -N sound.wav &", things work just fine.
My question is what causes this: why does the "&" background parameter matter in this case?
Thanks.
If aplay allows for STDIN to act as a controller, running it in the foreground may not provide the control input it expects. Backgrounding may detach STDIN and have aplay revert to its default "play once until finished" mode. Do you have a man page for aplay?
I think I got why.
I'm running my Qt program in the background (with '&'), so I guessed that in any system(cmd), that cmd must also contain a '&'.
I tried running my Qt program without the '&'; after that, the cmd without '&' worked fine.
So I guess the cause is that you cannot run a foreground child process from a background parent process.
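One way to reproduce the same stopped (T) state, assuming the launcher itself was started with & the way the Qt program was: a background process whose system() child tries to use the controlling terminal gets stopped by the terminal's job control. This is only an illustration of the symptom, not a confirmation that aplay fails for exactly this reason:

#include <stdio.h>
#include <stdlib.h>

/* Run this as "./demo &": the child spawned by system() tries to read the
 * controlling terminal from a background process group and gets stopped
 * (it shows up as "T" in ps), while system() keeps waiting for it. */
int main(void)
{
    if (system("cat /dev/tty > /dev/null") == -1)
        perror("system");
    return 0;
}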

Resources