Logic to determine whether a "prompt" should be printed out - c

Seems like a basic idea: I want to print out a prompt for a mini shell I am making in C. Here is an example of what I mean for the prompt:
$ ls
The $ is the "prompt". This mini shell supports backgrounding a process via the normal bash notation of putting an & at the end of a line, like $ ls &.
My logic currently is that in my main command loop, if the process is not going to be backgrounded then print out the prompt:
if (isBackground == 0)
    prompt();
And then in my signal handler I print out the prompt using write() which covers the case of it being a background process.
This works fine if the background command returns right away, as with a quick $ ls &, but with something like $ sleep 10 & the shell looks as if it is blocked, because the prompt is not printed until the signal handler runs.
I can't figure out how to fix this. I don't know when the background process will end, so the signal handler somehow has to be the one that decides when to print a new prompt: if the background process produces output, that output appears and then there is no longer a prompt on the line.
How can I resolve this problem? Is there a better approach that I'm not thinking of?
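For what it's worth, here is a minimal, self-contained sketch of one common way to arrange this (not your code; the "[done]" message, the "$ " string and the 256-byte line buffer are made up for illustration): print the prompt before every read, regardless of backgrounding, and let the SIGCHLD handler reprint it only when it has just reported a finished background job.

#include <errno.h>
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

/* SIGCHLD handler: reap any finished background children and reprint the
   prompt.  Only async-signal-safe calls (waitpid, write) are used. */
static void on_sigchld(int sig)
{
    (void)sig;
    int saved_errno = errno;
    while (waitpid(-1, NULL, WNOHANG) > 0)
        write(STDOUT_FILENO, "\n[done]\n$ ", 10);
    errno = saved_errno;
}

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = on_sigchld;
    sa.sa_flags = SA_RESTART;            /* don't let the handler abort fgets() */
    sigaction(SIGCHLD, &sa, NULL);

    char line[256];
    for (;;) {
        /* Print the prompt before every read, whether or not the previous
           job was backgrounded -- "sleep 10 &" must not delay it. */
        write(STDOUT_FILENO, "$ ", 2);
        if (!fgets(line, sizeof line, stdin))
            break;
        /* ... parse the line, fork/exec the command; call waitpid() here
           only for foreground jobs and skip the wait when '&' was given ... */
    }
    return 0;
}

The key point is that the main loop never waits on a background child, so the prompt appears immediately even for sleep 10 &, and the handler's write() puts a fresh prompt back whenever a background job's completion would otherwise leave the line bare.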

Related

Linux compiled binary getting wrong exit code if Ctrl+C entered from a shell script launched by the binary

I've got what I think is a strange one here. I have the following environment.
A Linux compiled binary which sets up a signal handler to disable things like Ctrl+C, Ctrl+Z, etc. This is done by calling signal() on SIGINT, SIGTSTP and SIGQUIT. The signal handler simply prints an error message saying that the user is not allowed to abort the program.
After setting up the signal handler, the binary calls an interactive ash script.
This interactive ash script ALSO disables all methods of breaking out of the script. It does this with "trap '' INT TSTP" at the very beginning. This works and if one enters Ctrl+C, etc it simply echoes the control character to the terminal but does not exit.
Individually both the binary and ash script prevent user from exiting.
However, notice what happens below:
Allow control to return to the binary by letting the interactive shell script complete normally. Once control returns to the binary, Ctrl+C is caught by the handler and does not let the user break out of the program. This is the proper behavior.
Where it is wrong is:
Type a few Ctrl+C's while the interactive shell script is running; once control returns to the binary, the exit code is something other than what the shell script actually exited with.
Here is an example:
In C code, let's say I have:
#include <stdio.h>
#include <signal.h>
#include <stdlib.h>

void sigintHandler(int sig_num)
{
    (void)sig_num;
    /* Note: fprintf() is not async-signal-safe; kept here as in the original. */
    fprintf(stderr, "You are not allowed to exit this program.\n");
}

int main(void)
{
    signal(SIGINT, sigintHandler);

    int ret = system("/etc/scripts/test.sh");
    printf("test.sh returned: %d exit status.\n", ret);
    return 0;
}
And in test.sh I have:
#!/bin/ash
# Disable interrupts so that one cannot exit the shell script.
trap '' INT TSTP

echo -n "Do you want to create abc file? (y/n): "
read answer
if [ "$answer" = "y" ]; then
    touch /tmp/abc
fi

if [ -f /tmp/abc ]; then
    echo "Returning 1"
    exit 1
else
    echo "Returning 2"
    exit 2
fi
If I run the C binary normally, I get the correct exit status (1 or 2) depending on whether the file exists. Strictly speaking I get 256 or 512, which shows the exit code is being stored in the second byte of the returned status. The point is that this works consistently every time.
But now suppose I hit Ctrl+C while the shell script is running (before answering the question presented) and then answer "n", which should give an exit code of 2. In the C binary the code I get back is sometimes 2 (not 512, indicating the exit code is now in the LOWER byte), but more often I get back a code of 0! This happens even though I see the message "Returning 2" echoed by the shell script.
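As an aside, the 256/512 values are the raw wait status that system() returns; the macros in <sys/wait.h> are the portable way to pull the pieces apart. A small sketch (the script path is the one from the question):

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>

int main(void)
{
    int ret = system("/etc/scripts/test.sh");   /* path taken from the question */

    if (ret == -1)
        perror("system");
    else if (WIFEXITED(ret))
        printf("script exited normally with code %d\n", WEXITSTATUS(ret));
    else if (WIFSIGNALED(ret))
        printf("script was killed by signal %d\n", WTERMSIG(ret));
    return 0;
}

Decoded this way, a raw 512 is a normal exit with code 2, while a raw 2 is "terminated by signal 2 (SIGINT)", which would suggest the interrupt is reaching the script's process rather than only your handler.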
This is driving me nuts trying to figure out why a simple exit code is being messed up.
Can anyone provide some suggestions?
Thanks much
Allen
I found the issue.
Previously I was using trap '' INT TSTP to disable interrupts in the shell script. That does prevent the shell script from being aborted, but it led to the issue in this post. I suspect that when interrupts are disabled this way the calling process is not aware of it; all it knows is that Ctrl+C (or whatever) was pressed, so it reports SIGINT in the status regardless of what the shell script itself exited with.
The solution is to use:
stty -isig
at the beginning of the shell script.
This not only disables interrupts; because the terminal then never generates the signal in the first place, the calling process is unaware that Ctrl+C was ever pressed.
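For reference only (this is not part of the fix above): stty -isig just clears the ISIG flag in the terminal settings, so the same effect can be had from the C side with termios if you ever want the binary itself to suppress and then restore interrupt generation. A hedged sketch:

#include <stdio.h>
#include <stdlib.h>
#include <termios.h>
#include <unistd.h>

int main(void)
{
    struct termios saved, t;

    /* Clear ISIG: Ctrl+C / Ctrl+Z stop generating SIGINT / SIGTSTP at the
       terminal driver, the C equivalent of `stty -isig`. */
    if (tcgetattr(STDIN_FILENO, &t) == -1) {
        perror("tcgetattr");
        return 1;
    }
    saved = t;
    t.c_lflag &= ~ISIG;
    tcsetattr(STDIN_FILENO, TCSANOW, &t);

    int ret = system("/etc/scripts/test.sh");   /* script path from the question */
    printf("raw status: %d\n", ret);

    /* Always put the terminal back the way it was (`stty isig`). */
    tcsetattr(STDIN_FILENO, TCSANOW, &saved);
    return 0;
}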
I found this information on the following page:
https://unix.stackexchange.com/questions/80975/preventing-propagation-of-sigint-to-parent-process
Thanks everyone,
Allen

Executing more with "exec()" function corrupts line breaking in bash

I had an exercise to write a program that will do the following pipe processing:
ls -la | grep "^d" | more
After my program runs, however, bash no longer breaks lines or displays typed commands correctly; commands still execute and their results are shown afterwards, but it looks as if the console input is not being echoed to stdout any more, and I can't find the reason for this behavior.
I am using three child processes with their stdio redirected to connect the pipe between them.
The program finishes successfully and shows the correct result, with no errors reported. Also, when I use cat instead of more, everything works normally after execution. Is it possible that more changes some terminal settings and does not change them back?
It's likely that more is turning off echo and canonical mode on your TTY (see man 3 termios) and never switching them back on before it exits (either because it gets killed without a chance to, or because it doesn't think it's attached to a TTY). You can attach to more with gdb to find out why that's happening, or you could simply reset the terminal yourself before exiting.
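A sketch of that last suggestion, under the assumption that the pipeline is spawned from C as described: grab the terminal settings before forking the children and put them back after the last child has been reaped (the pipe/dup2/exec plumbing is elided, since it is whatever your program already does):

#include <sys/wait.h>
#include <termios.h>
#include <unistd.h>

int main(void)
{
    struct termios saved;
    int have_tty = (tcgetattr(STDIN_FILENO, &saved) == 0);  /* remember the settings */

    /* ... set up the pipes, fork the three children, exec
       "ls -la", "grep ^d" and "more" as before ... */

    while (wait(NULL) > 0)          /* reap all children */
        ;

    /* If more left the terminal in raw / no-echo mode, restore the
       settings that were saved before the pipeline ran. */
    if (have_tty)
        tcsetattr(STDIN_FILENO, TCSADRAIN, &saved);
    return 0;
}

Running reset or stty sane by hand after your program exits repairs an already-garbled terminal in the same way.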

Create two processes when another terminal window is opened?

The topic might sound weird, but here's what I want to achieve:
In Terminal A, type command line as following:
./create proA
The first process proA is created. It outputs something like
This is process A.
Open another terminal window (called Terminal B). In Terminal B, type the following line:
./create proB
The second process proB is created. It outputs:
This is process B.
UPDATED:
I'm trying to create two processes that communicate with each other. Before going into more detail, I just want to see whether I can create a second process that has some relationship with the first process when another terminal window is opened.
Is it possible to achieve something like this? If so, can someone give any tip for how to start in c? Thanks!
The terminals don't matter for inter-process communication. There are so many ways to communicate between processes that it doesn't make sense to highlight any of them here.
As for giving each process its own terminal:
(xterm -e "${COMMANDLINE1}" &) ; (xterm -e "${COMMANDLINE2}" &)
If you want to see only errors from a process, discard its standard output:
./process > /dev/null
and if you want to silence everything, errors included:
./process > /dev/null 2>&1
If 2>&1 looks unfamiliar, the file descriptor numbers are:
0 — STDIN, 1 — STDOUT and 2 — STDERR
so 2>&1 means "send stderr to wherever stdout is currently pointing".
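Coming back to the communication part of the question: if you just want something concrete to start from in C, a named pipe (FIFO) is about the simplest channel between two processes started in different terminals. The sketch below is only illustrative (the path /tmp/demo_fifo is made up; proA writes one line and proB reads it):

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>
#include <unistd.h>

#define FIFO_PATH "/tmp/demo_fifo"   /* made-up path for the example */

int main(int argc, char *argv[])
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s proA|proB\n", argv[0]);
        return 1;
    }
    mkfifo(FIFO_PATH, 0666);          /* ok if it already exists */

    if (strcmp(argv[1], "proA") == 0) {
        printf("This is process A.\n");
        int fd = open(FIFO_PATH, O_WRONLY);     /* blocks until proB opens it */
        write(fd, "hello from A\n", 13);
        close(fd);
    } else {
        printf("This is process B.\n");
        char buf[64];
        int fd = open(FIFO_PATH, O_RDONLY);
        ssize_t n = read(fd, buf, sizeof buf - 1);
        if (n > 0) {
            buf[n] = '\0';
            printf("B received: %s", buf);
        }
        close(fd);
    }
    return 0;
}

Run ./create proA in one terminal and ./create proB in the other; whichever side opens the FIFO first simply blocks in open() until its peer shows up.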

Is it possible to run a program from terminal and have it continue to run after you close the terminal?

I have written a program which I run after connecting to the box over SSH. It has some user interaction, such as selecting options when prompted, and usually I wait for the processing it carries out to finish before logging out, which closes the terminal and ends the program. But now the processing is quite lengthy and I don't want to stay logged in while it runs, so how could I implement a workaround for this in C?
You can run a program in the background by following the command with "&"
wget -m www.google.com &
Or, you could use the "screen" program, which allows you to attach and detach sessions:
screen wget -m www.google.com
(press Ctrl+A then D to detach)
screen -r (to re-attach)
http://linux.die.net/man/1/screen
The process is sent the HUP signal when the shell exits. All you have to do is install a signal handler that ignores SIGHUP.
Or just run the program using nohup.
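In C that amounts to one line near the top of main(); a minimal sketch (the endless sleep loop only stands in for your lengthy processing):

#include <signal.h>
#include <unistd.h>

int main(void)
{
    signal(SIGHUP, SIG_IGN);   /* ignore the hangup sent when the terminal goes away */

    /* ... interactive prompts here, then the long-running work ... */
    for (;;)
        sleep(60);             /* placeholder for the lengthy processing */
}

As the nohup answer below notes, you will still want stdin/stdout/stderr pointed somewhere other than the terminal that is about to go away.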
The traditional way to do this is using the nohup(1) command:
nohup mycmd < /dev/null >& output.log &
Of course if you don't care about the output you can send it to /dev/null too, or you could take input from a file if you wanted.
Doing it this way will protect your process from a SIGHUP that would normally cause it to exit. You'll also want to redirect stdin/stdout/stderr like above, as you'll be ending your ssh session.
Syntax shown above is for bash.
You can use the screen command (there are plenty of tutorials for it online). Note that you might need to install it on your system first.
There are many options :-) TIMTOWTDI… However, for your purposes, you might look into running a command-line utility such as dtach or GNU screen.
If you actually want to implement something in C, you could re-invent that wheel, but from your description of the problem, I doubt it should be necessary…
The actual C code to background a process is trivial:
// do interactive stuff...
if (fork() != 0)
    exit(0);        // parent exits; the child keeps running in the background
// cool, I've been "daemonized" (strictly: backgrounded and re-parented to init)
If you know the code will never wind up on a non-linux-or-BSD machine, you could even use daemon()
// interactive...
daemon(0, 0);   // 0, 0: chdir to "/" and redirect stdin/stdout/stderr to /dev/null
// background...

Opening a new terminal window & executing a command

I've been trying to open a new terminal window from my application and execute a user-specified command in this second window. I've built some debugging software and I would like to execute the user's program in a separate window so my debugging output doesn't get intermixed with the program's output.
I am using fork() and exec(). The command I am executing is gnome-terminal -e 'the program to be executed'.
I have 2 questions:
Calling gnome-terminal means the user has to be running a gnome graphical environment. Is there a more cross-platform command to use (I am only interested in Linux machines though)?
After the command finishes executing the second terminal also finishes executing and closes. Is there any way to pause it, or just let it continue normal operation by waiting for input?
You probably want something like xterm -hold.
1) gnome-terminal should work reasonably well even without the whole GNOME environment; in any case, plain old "xterm" is enough.
2) you can execute a short bash script that launches your program and reads a line at the end:
bash -c 'my program ... ; read a'
(or also 'xterm -e ...')
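Putting the two answers together from C might look like the sketch below (xterm and sh are assumed to be available; run_in_terminal and the ls -la stand-in are made-up names for illustration). The -hold option keeps the window open after the command finishes, which addresses the second question:

#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

/* Launch `cmd` in its own terminal window and wait for that window to close.
   xterm is assumed here; -hold keeps the window open after cmd finishes. */
static int run_in_terminal(const char *cmd)
{
    pid_t pid = fork();
    if (pid == -1) {
        perror("fork");
        return -1;
    }
    if (pid == 0) {
        execlp("xterm", "xterm", "-hold", "-e", "sh", "-c", cmd, (char *)NULL);
        perror("execlp");        /* only reached if xterm could not be started */
        _exit(127);
    }
    int status;
    waitpid(pid, &status, 0);    /* or skip this to leave the window running */
    return status;
}

int main(void)
{
    run_in_terminal("ls -la");   /* stand-in for the user's program */
    return 0;
}

Dropping the waitpid() call leaves the new window running independently while the debugger carries on.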
