What does "&" mean at the end of a command line? [duplicate]

I am a system administrator and I have been asked to run a Linux script to clean the system.
The command is this:
perl script.pl > output.log &
This command ends with an & sign. Does it have any special significance?
I have basic knowledge of the shell but I have never seen this before.

The & makes the command run in the background.
From man bash:
If a command is terminated by the control operator &, the shell executes the command in the background in a subshell. The shell does not wait for the command to finish, and the return status is 0.

When not told otherwise, commands take over the foreground. You can have only one "foreground" process running in a single shell session. The & symbol instructs the command to run as a background process and immediately returns you to the command line for additional commands.
sh my_script.sh &
A background process will not stay alive after the shell session is closed; SIGHUP terminates all of its running processes (by default, anyway). If your command is long-running or runs indefinitely (e.g. a microservice), you need to prepend it with nohup so it remains running after you disconnect from the session:
nohup sh my_script.sh &
EDIT: There does appear to be a gray area regarding the closing of background processes when & is used. Just be aware that the shell may close your process depending on your OS and local configuration (particularly on CentOS/RHEL):
https://serverfault.com/a/117157.
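Under the hood, & essentially just means the shell forks the command and does not wait for it. As a rough illustration (my own sketch, not part of the answer above; "sleep 10" is just a placeholder command):
/* What a shell roughly does for "cmd &": fork, exec the command in the
   child, and skip the waitpid() it would perform for a foreground command. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();
    if (pid < 0) {
        perror("fork");
        return 1;
    }
    if (pid == 0) {
        /* child: become the background command */
        execlp("sleep", "sleep", "10", (char *)NULL);
        perror("execlp");
        _exit(127);
    }
    /* parent: no waitpid() here -- print the job's PID and go straight
       back to the prompt, like the shell does for "cmd &" */
    printf("[1] %d\n", (int)pid);
    return 0;
}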

In addition, you can use the "&" sign to run many processes through one (1) ssh connection, in order to keep the number of terminals to a minimum. For example, I have one process that listens for messages in order to extract files, and a second process that listens for messages in order to upload files. Using the "&" I can run both services in one terminal, through a single ssh connection to my server.
These processes started with "&" will also "stay alive" after the ssh session is closed. Pretty neat and useful if the ssh connection to the server is interrupted and no terminal multiplexer (screen, tmux, byobu) was used.

I don't know for sure, but I'm reading a book right now and what I gather is that a program needs to handle its signals (e.g. the one generated when I press Ctrl-C). A program can use SIG_IGN to ignore a signal or SIG_DFL to restore the default action.
Now if you do $ command &, the process runs as a background process and simply does not get the keyboard-generated signals (such as SIGINT from Ctrl-C). For foreground processes these signals are not ignored.
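To make the SIG_IGN / SIG_DFL part concrete, here is a minimal sketch (my own illustration, not from the book): while SIGINT is set to SIG_IGN, pressing Ctrl-C does nothing; once SIG_DFL is restored, Ctrl-C terminates the program again.
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    signal(SIGINT, SIG_IGN);    /* ignore Ctrl-C (SIGINT) */
    puts("Ctrl-C is ignored for 5 seconds...");
    sleep(5);

    signal(SIGINT, SIG_DFL);    /* restore the default action: terminate */
    puts("Ctrl-C terminates the program again.");
    sleep(5);
    return 0;
}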

If you have a command which, once launched, doesn't quickly return control of the prompt.
For example:
The command gedit launches the gedit editor UI.
The command eclipse launches the Eclipse IDE.
Such commands keep writing their activity logs to the terminal and don't give the command prompt back.
The question is how to run such commands in the background, so that we get the terminal back and can use it for other tasks.
The answer is: by appending & after such a command.
user@mymachine:~$ <command> &
Examples:
user@mymachine:~$ gedit &
user@mymachine:~$ eclipse &

Related

Can a process request a running shell process to run some command and receive the standard output of the command?

In terms of the Linux API, we can use the exec* functions or the system function to start a shell which then runs some command.
Is it possible for a process to request an already running shell process (e.g. a bash process) to run a command and then receive the standard output of the command? For example, the commands I would like to run in a running shell process are those which generate shell-specific state information, e.g. dirs and jobs.
Can the above be done in C, and in bash using some utilities (see here and here)?
For example, I would like to have a C program or shell script which can get the output of running dirs and jobs in an existing shell process.
Thanks.
If you evaluate the following during shell startup:
dump_shell_data() {
  mkdir -p -- "$HOME/.shell-state" || return
  jobs >"$HOME/.shell-state/$$.jobs" 2>&1
  dirs >"$HOME/.shell-state/$$.dirs" 2>&1
}
trap dump_shell_data SIGUSR1
...then sending a SIGUSR1 to shells which have run the above code will instruct those shells to dump their jobs and dirs output to files named after their PID.
Note that there are substantial caveats. The user of a shell may run a 3rd-party script that redefines this trap, and a shell which is blocked waiting for a command to exit will not execute the trap until after that command has in fact completed.
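To drive this from C, a minimal sketch could look like the following (assuming the target shell has installed the trap above; the $HOME/.shell-state/<pid>.jobs path comes from the shell function, and the one-second sleep is a crude way to wait for the dump):
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <shell-pid>\n", argv[0]);
        return 1;
    }
    pid_t shell = (pid_t)atoi(argv[1]);

    /* ask the shell to run its SIGUSR1 trap and dump its state */
    if (kill(shell, SIGUSR1) == -1) {
        perror("kill");
        return 1;
    }
    sleep(1);   /* crude: give the shell a moment to write the files */

    const char *home = getenv("HOME");
    if (!home) {
        fprintf(stderr, "HOME not set\n");
        return 1;
    }

    char path[4096];
    snprintf(path, sizeof path, "%s/.shell-state/%d.jobs", home, (int)shell);

    FILE *f = fopen(path, "r");
    if (!f) {
        perror(path);
        return 1;
    }
    int c;
    while ((c = fgetc(f)) != EOF)
        putchar(c);
    fclose(f);
    return 0;
}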

Cloning command `script` and PTY background job problems: terminal messed up

I'm trying to recode the UNIX command script (as it is on OS X). This is part of an exercise for school to help students learn the UNIX APIs. We are only allowed to use system calls, more specifically only those documented in the man(2) pages on Mac OS X (since that's our OS at school).
I have a 'first version' that kind of works. Running a program such as ls prints the right output to the screen and to an output file.
The problem scenario
I run bash from within the script clone. The first issue is that I get the following error:
bash: no job control in this shell
I have tried forcing the bash process into the foreground with setpgrp and setpgid, but that didn't change anything, so I concluded that was not the problem.
I also tried to understand why the real script command uses cfmakeraw (at least on Linux), as seen here, but I don't get it. The man page is not very helpful.
The real script also dup2s STDIN on the slave, as seen here, but when I do that, it seems like input isn't read anymore.
However, the bash still runs, and I can execute commands inside of it.
But if I run vim inside it, and then hit Ctrl-Z to put vim to the background, the terminal is messed up (which does not happen when I'm in my regular terminal).
So I guess I must have done something wrong. I'd appreciate any advice/help.
Here's the source code:
https://github.com/conradkleinespel/unix-command-script/tree/2587b07e7a36dc74bf6dff0e82c9fdd33cb40411
You can compile by doing: make (it builds on OSX 10.9, hopefully on Linux as well)
And run by doing: ./ft_script
I don't know if it makes more sense to have all the source code on StackOverflow, as it would crowd the page. If needed, I can replace the Git link with the source.
I don't use OS X, so I can't directly test your code, but I'm currently writing a toy terminal emulator and had similar troubles.
about "bash: no job control in this shell"
In order to perform job control, a shell needs to be a session leader and the controlling process of its terminal. By default, your program inherits the controlling terminal of your own shell, which runs your script program and is itself a session leader. Here is how to make the forked child a session leader with the slave pts as its controlling terminal:
/* we don't need the inherited master fd */
close(master);
/* discard the previous controlling tty */
ioctl(0, TIOCNOTTY, 0);
/* make a new session: the forked child becomes a session leader */
setsid();
/* replace existing stdin/out/err with the slave pts */
dup2(slave, 0);
dup2(slave, 1);
dup2(slave, 2);
/* discard the extra file descriptor for the slave pts */
close(slave);
/* make the pts our controlling terminal (requires being a session leader) */
ioctl(0, TIOCSCTTY, 0);
At this point, the forked process has stdin/out/err bound to the new pts, the pts became its controlling terminal, and the process is a session leader. The job control should now work.
about raw tty
When you run a program inside a normal terminal, it looks like this:
(term emulator, master side) <=> /dev/pts/42 <=> (program, slave side)
If you press ^Z, the terminal emulator will write the ASCII character 0x1A to the pts. It is a control character, so it won't be sent to the program; instead the kernel will issue SIGTSTP to the program and suspend it. The process of transforming characters into something else is called "line cooking" and has various settings that can be adjusted for each tty.
Now let's look at the situation with script:
term emulator <=> /dev/pts/42 <=> script <=> /dev/pts/43 <=> program
With normal line settings, what happens when you press ^Z? It will be transformed into SIGTSTP by /dev/pts/42 and script will be suspended. But that's not what we want; instead we'd like the 0x1A character produced by our ^Z to go as-is through /dev/pts/42, then be passed by script to /dev/pts/43, and only then be transformed into SIGTSTP to suspend the program.
This is the reason why the pts between your terminal and script must be configured as "raw", so that all control characters reach the pts between script and the program, as if you were directly working with it.
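In C this typically looks like the sketch below (my own illustration): save the original termios settings of the outer terminal, switch it to raw mode with cfmakeraw, and restore the settings on exit. cfmakeraw is available on both Linux and OS X.
#include <stdio.h>
#include <stdlib.h>
#include <termios.h>
#include <unistd.h>

static struct termios saved;

static void restore_tty(void)
{
    tcsetattr(STDIN_FILENO, TCSAFLUSH, &saved);
}

int main(void)
{
    struct termios raw;

    if (tcgetattr(STDIN_FILENO, &saved) == -1) {
        perror("tcgetattr");
        return 1;
    }
    atexit(restore_tty);        /* always put the terminal back on exit */

    raw = saved;
    cfmakeraw(&raw);            /* no line cooking, no signal characters */
    if (tcsetattr(STDIN_FILENO, TCSAFLUSH, &raw) == -1) {
        perror("tcsetattr");
        return 1;
    }

    /* ... relay bytes between the real terminal and the pty master ... */
    return 0;
}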

Why does the system(cmd) function need the command string to have the & background parameter on Linux-ARM?

I'm running a regular program on a Linux-ARM embedded device.
I tried to use the system(cmd) function to run a Linux shell cmd from my program.
cmd would be an audio-playing command: "aplay -N sound.wav"
If cmd is as above, no sound comes out of my Linux device, and the program's process ends up in the T state (traced or stopped).
If cmd is set to "aplay -N sound.wav &", things work just fine.
My question is what causes this; why does the "&" background parameter matter in this case?
Thanks.
If aplay allows STDIN to act as a controller, running it in the foreground may not provide the control input it expects. Backgrounding it may detach STDIN and have aplay revert to its default "play once until finished" mode. Do you have a man page for aplay?
I think I got why.
I'm running my Qt program with '&', so I guess any cmd passed to system(cmd) must then also contain a '&'.
I tried running my Qt program without the '&'; after that, the cmd without '&' worked fine.
So my guess is that the cause is: you cannot fork a foreground child process from a background parent process.
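If the problem really is terminal access from a backgrounded process (that's an assumption on my part; I can't test it on your device), then detaching the command's stdin and output from the terminal, as the first answer hints, should sidestep the stop regardless of how the parent was launched. A hedged sketch:
/* Run aplay via system() from a program that may itself be backgrounded.
   Redirecting stdin and output away from the terminal avoids the
   stop-on-terminal-access that shows up as the T state.
   "sound.wav" is the file from the question. */
#include <stdlib.h>

int main(void)
{
    /* variant that stalled when the parent ran in the background: */
    /* system("aplay -N sound.wav"); */

    int rc = system("aplay -N sound.wav </dev/null >/dev/null 2>&1 &");
    return (rc == -1) ? 1 : 0;
}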

Is it possible to run a program from terminal and have it continue to run after you close the terminal?

I have written a program which I run after connecting to the box over SSH. It has some user interaction, such as selecting options when prompted, and I usually wait for the processes it carries out to finish before logging out, which closes the terminal and ends the program. But now the process is quite lengthy and I don't want to stay logged in while waiting, so how could I implement a workaround for this in C?
You can run a program in the background by following the command with "&":
wget -m www.google.com &
Or you could use the "screen" program, which allows you to attach and detach sessions:
screen wget -m www.google.com
(press Ctrl-A then D to detach)
screen -r (to reattach)
http://linux.die.net/man/1/screen
The process is sent the HUP signal when the shell exits. All you have to do is install a signal handler that ignores SIGHUP.
Or just run the program using nohup.
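In C, the first option (ignoring SIGHUP from inside the program) looks roughly like this minimal sketch; the sleep is a placeholder for the real long-running work:
#include <signal.h>
#include <unistd.h>

int main(void)
{
    /* don't die when the terminal / ssh session goes away */
    signal(SIGHUP, SIG_IGN);

    /* ... interactive prompts here, then the lengthy processing ... */
    sleep(3600);    /* placeholder for the real work */
    return 0;
}
You will still want to redirect output to a file, as the nohup answer below points out, because the terminal's file descriptors go away with the session.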
The traditional way to do this is using the nohup(1) command:
nohup mycmd < /dev/null >& output.log &
Of course if you don't care about the output you can send it to /dev/null too, or you could take input from a file if you wanted.
Doing it this way will protect your process from a SIGHUP that would normally cause it to exit. You'll also want to redirect stdin/stdout/stderr like above, as you'll be ending your ssh session.
Syntax shown above is for bash.
You can use the screen command; here is a tutorial. Note that you might need to install it on your system.
There are many options :-) TIMTOWTDI… However, for your purposes, you might look into running a command-line utility such as dtach or GNU screen.
If you actually want to implement something in C, you could re-invent that wheel, but from your description of the problem, I doubt it's necessary…
The actual C code to background a process is trivial:
//do interactive stuff...
if (fork())
    exit(0);
//cool, I've been daemonized.
If you know the code will never wind up on a non-linux-or-BSD machine, you could even use daemon()
//interactive...
daemon(0, 0);
//background...
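On systems without daemon(), a hedged sketch of roughly what daemon(0, 0) does is the classic fork + setsid + redirect sequence:
/* Roughly what daemon(0, 0) does: fork so the parent can exit, start a new
   session to drop the controlling terminal, chdir to /, and point the
   standard streams at /dev/null. Error handling trimmed for brevity. */
#include <fcntl.h>
#include <stdlib.h>
#include <unistd.h>

static void detach(void)
{
    if (fork() > 0)
        exit(0);                /* parent returns control to the shell */
    setsid();                   /* new session: no controlling terminal */
    (void)chdir("/");

    int fd = open("/dev/null", O_RDWR);
    if (fd != -1) {
        dup2(fd, STDIN_FILENO);
        dup2(fd, STDOUT_FILENO);
        dup2(fd, STDERR_FILENO);
        if (fd > STDERR_FILENO)
            close(fd);
    }
}

int main(void)
{
    /* ... interactive part while still attached to the terminal ... */
    detach();
    /* ... long-running work continues after logout ... */
    return 0;
}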

Opening a new terminal window & executing a command

I've been trying to open a new terminal window from my application and execute a command in this second window, as specified by the user. I've built some debugging software and I would like to execute the user's program in a separate window so my debugging output doesn't get intermixed with the program's output.
I am using fork() and exec(). The command I am executing is gnome-terminal -e 'the program to be executed'.
I have 2 questions:
Calling gnome-terminal means the user has to be running a gnome graphical environment. Is there a more cross-platform command to use (I am only interested in Linux machines though)?
After the command finishes executing the second terminal also finishes executing and closes. Is there any way to pause it, or just let it continue normal operation by waiting for input?
You probably want something like xterm -hold.
1) gnome-terminal should work reasonably well even without the whole GNOME environment; in any case the plain old "xterm" is enough.
2) You can execute a short bash script that launches your program and at the end reads a line:
bash -c 'my program ... ; read a'
(or also 'xterm -e ...')
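Combining the two in C (a sketch: xterm's -hold and -e flags as mentioned above; the command string is a placeholder for whatever program the user wants debugged):
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    const char *user_cmd = "./the_program_to_debug";    /* placeholder */
    pid_t pid = fork();

    if (pid < 0) {
        perror("fork");
        return 1;
    }
    if (pid == 0) {
        /* child: open a new terminal window that stays open after the
           command exits (-hold), and run the user's program inside it */
        execlp("xterm", "xterm", "-hold", "-e", user_cmd, (char *)NULL);
        perror("execlp xterm");
        _exit(127);
    }
    /* parent: the debugger keeps its own terminal for its output */
    waitpid(pid, NULL, 0);
    return 0;
}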
