Create a process independent of bash - C

I have written a program which calculates the battery level available on my laptop. I have also defined a threshold value in the program; whenever the battery level falls below the threshold, I would like to invoke another process. I have used system("./invoke.o"), where invoke.o is the program that I have to run. I am running a script which runs the battery level checker every 5 seconds. Everything works fine, but when I close the bash shell, the automatic invocation of invoke.o stops. How can I make invoke.o be invoked regardless of whether bash is closed or not? I am using Ubuntu Linux.

Try running it as: nohup ./myscript.sh, where the nohup command allows you to close the shell without terminating the process.

You could run your script as a cron job. cron sets up standard input and output for you, reschedules the job, and will send you email if it fails.
The alternative is to run the script in the background with all input and output, including standard error, redirected.
While you could make a proper daemon out of your program, that kind of effort is probably not necessary.
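For example, a minimal crontab entry (assuming the checker script lives at /home/user/battery-check.sh; install it with crontab -e):
* * * * * /home/user/battery-check.sh >> /home/user/battery.log 2>&1
Note that cron's finest scheduling granularity is one minute, so the five-second polling loop would have to live inside the script itself.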

man nohup
man upstart
man 2 setsid (more complex; leads to a longer trail of breadcrumbs on daemon launching).
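For the setsid() route, here is a minimal sketch of the classic double-fork daemonization (error handling omitted; the daemon's actual work is a placeholder):

#include <stdlib.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/stat.h>

int main(void)
{
    if (fork() > 0)
        exit(0);             /* parent exits; child is reparented to init */
    setsid();                /* new session: no controlling terminal */
    if (fork() > 0)
        exit(0);             /* second fork: can never reacquire a terminal */
    umask(0);
    chdir("/");
    int fd = open("/dev/null", O_RDWR);
    dup2(fd, STDIN_FILENO);  /* detach stdio from the old terminal */
    dup2(fd, STDOUT_FILENO);
    dup2(fd, STDERR_FILENO);
    /* ... the battery-check loop would go here ... */
    return 0;
}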

Related

Launching program w/posix_openpt() call from script

I have a program that relies on a pseudo terminal and uses
term = posix_openpt(O_RDWR);
grantpt(term);
unlockpt(term);
slave = open(ptsname(term), O_RDWR);
to open the pseudo terminal. I'll call this program psuedoTerminal.bin
I want to open psuedoTerminal.bin from a bash script, in the following fashion (or similar)
#!/bin/bash
/bin/sh -c psuedoTerminal.bin &
The problem is that when my program reaches the posix_openpt() call, the behaviour is erratic; sometimes CPU consumption reaches 100%, and it never works. I believe ssh, telnet, etc. suffer from the same problems when opening pseudo terminals from inside scripts.
How can I run this program from inside a script? Thanks for your help.
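For reference, the standard sequence for opening the master/slave pair looks like this (a minimal, self-contained sketch; error handling abbreviated):

#define _XOPEN_SOURCE 600   /* for posix_openpt(), grantpt(), unlockpt(), ptsname() */
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    int master = posix_openpt(O_RDWR | O_NOCTTY);
    if (master < 0 || grantpt(master) < 0 || unlockpt(master) < 0) {
        perror("pty setup");
        return 1;
    }
    char *slavename = ptsname(master);   /* path of the slave side, e.g. /dev/pts/3 */
    int slave = open(slavename, O_RDWR);
    if (slave < 0) {
        perror("open slave");
        return 1;
    }
    printf("slave side: %s\n", slavename);
    close(slave);
    close(master);
    return 0;
}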

Is it possible to run a program from terminal and have it continue to run after you close the terminal?

I have written a program which I run after connecting to the box over SSH. It has some user interaction, such as selecting options when prompted, and usually I wait for the processes it carries out to finish before logging out, which closes the terminal and ends the program. But now the process is quite lengthy and I don't want to stay logged in while it runs, so how could I implement a workaround for this in C, please?
You can run a program in the background by following the command with "&"
wget -m www.google.com &
Or, you could use the "screen" program, which allows you to attach and detach sessions:
screen wget -m www.google.com
(press Ctrl+A, then D, to detach)
screen -r (to reattach)
http://linux.die.net/man/1/screen
The process is sent the HUP signal when the shell exits. All you have to do is install a signal handler that ignores SIGHUP.
Or just run the program using nohup.
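In C, ignoring the hangup signal is a one-liner (a minimal sketch):

#include <signal.h>

int main(void)
{
    signal(SIGHUP, SIG_IGN);   /* ignore the HUP sent when the controlling terminal closes */
    /* ... long-running work continues after the terminal goes away ... */
    return 0;
}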
The traditional way to do this is using the nohup(1) command:
nohup mycmd < /dev/null >& output.log &
Of course if you don't care about the output you can send it to /dev/null too, or you could take input from a file if you wanted.
Doing it this way will protect your process from a SIGHUP that would normally cause it to exit. You'll also want to redirect stdin/stdout/stderr like above, as you'll be ending your ssh session.
Syntax shown above is for bash.
You can use the screen command (see the man page linked above). Note that you might need to install it on your system.
There are many options :-) TIMTOWTDI… However, for your purposes, you might look into running a command-line utility such as dtach or GNU screen.
If you actually want to implement something in C, you could re-invent that wheel, but from your description of the problem, I doubt it should be necessary…
The actual C code to background a process is trivial:
// do interactive stuff...
if (fork())
    exit(0);   // parent exits; the child keeps running in the background
// cool, I've been daemonized.
If you know the code will never wind up on a non-Linux-or-BSD machine, you could even use daemon():
// interactive...
daemon(0, 0);   // (0, 0): chdir to "/" and redirect stdin/stdout/stderr to /dev/null
// background...

Opening a new terminal window & executing a command

I've been trying to open a new terminal window from my application and execute a command in that second window, as specified by the user. I've built some debugging software and I would like to execute the user's program in a separate window so my debugging output doesn't get intermixed with the program's output.
I am using fork() and exec(). The command I am executing is gnome-terminal -e 'the program to be executed'.
I have 2 questions:
Calling gnome-terminal means the user has to be running a gnome graphical environment. Is there a more cross-platform command to use (I am only interested in Linux machines though)?
After the command finishes executing the second terminal also finishes executing and closes. Is there any way to pause it, or just let it continue normal operation by waiting for input?
You probably want something like xterm -hold.
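Building on that, a fork()/exec() sketch along those lines (a minimal sketch; ./users_program is a placeholder for whatever command the user supplies, and -hold keeps the window open after it exits):

#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    pid_t pid = fork();
    if (pid == 0) {
        /* child: open a terminal window running the user's program */
        execlp("xterm", "xterm", "-hold", "-e", "./users_program", (char *)NULL);
        perror("execlp");   /* reached only if exec fails */
        _exit(127);
    }
    waitpid(pid, NULL, 0);  /* debugger keeps its own stdout; wait for the window to close */
    return 0;
}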
1) gnome-terminal should work reasonably well even without the whole GNOME environment; anyway, plain old "xterm" is enough.
2) You can execute a short bash script that launches your program and at the end reads a line:
bash -c 'my program ... ; read a'
(or also 'xterm -e ...')

Semantics of Windows batch programming redirection

I have a Windows batch script (my.bat) which has the following line:
DTBookMonitor.exe 2>&1 > log\cmdProcessLog.txt
So, from my understanding, this runs DTBookMonitor, redirects STDERR to STDOUT and then redirects STDOUT to the file log\cmdProcessLog.txt.
I then run my.bat. DTBookMonitor runs for a significant amount of time, and when I run my.bat a second time (while it is already running), it immediately exits from the second instance of my.bat.
Is this purely because of the redirection to cmdProcessLog?
Better late than never :)
Windows redirection locks the output file so that no other process can open the file for writing at the same time. That is why the second instance fails when it tries to redirect output to the same file.
I'd guess it's either due to that, or because DTBookMonitor only allows one instance of it to run at a time. The following test should shed some light on the situation:
Run the first (long) instance of DTBookMonitor
Run a second instance without redirecting any of its output
Alternatively, run a second instance, but redirect the output to a file other than log\cmdProcessLog.txt
Do you get similar results? Different results?

System call time out?

I'm using Unix system() calls to gunzip and gzip files. With very large files these sometimes get aborted (i.e. on the cluster compute nodes), while other times (i.e. on the login nodes) they go through. Is there some soft limit on the time a system call may take? What else could it be?
The calling thread should block indefinitely until the task you initiated with system() completes. If what you are observing is that the call returns but the file operation has not completed, that is an indication that the spawned operation failed for some reason.
What does the return value indicate?
Almost certainly not a problem with use of system(), but with the operation you're performing. Always check the return value, but even more so, you'll want to see the output of the command you're calling. For non-interactive use, it's often best to write stdout and stderr to log files. One way to do this is to write a wrapper script that checks for the underlying command, logs the commandline, redirects stdout and stderr (and closes stdin if you want to be careful), then execs the commandline. Run this via system() rather than the OS command directly.
My bet is that the failing machines have limited disk space, or are missing either the target file or the actual gzip/gunzip commands.
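As a concrete version of the "always check the return value" advice: system() returns -1 if it couldn't spawn the child at all, and otherwise a wait()-style status that the W* macros decode (a minimal sketch; the gunzip path is a placeholder):

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>

int main(void)
{
    int status = system("gunzip /path/to/file.gz");  /* placeholder command */
    if (status == -1)
        perror("system");                        /* fork/exec itself failed */
    else if (WIFEXITED(status))
        printf("exit status: %d\n", WEXITSTATUS(status));
    else if (WIFSIGNALED(status))
        printf("killed by signal: %d\n", WTERMSIG(status));
    return 0;
}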
"I'm using unix system() calls to gunzip and gzip files."
Probably silly question: why not use zlib directly from your application?
And system() isn't a system call. It is a wrapper for fork()/exec()/wait(). Check the system() man page. If it doesn't unblock, it might be that your application interferes somehow with wait() - e.g. do you have a SIGCHLD handler?
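That interference is easy to demonstrate: if SIGCHLD is set to SIG_IGN, children are reaped automatically, so the wait() inside system() has nothing to collect; on Linux/glibc it then typically fails with ECHILD. A minimal sketch:

#include <stdio.h>
#include <stdlib.h>
#include <signal.h>

int main(void)
{
    signal(SIGCHLD, SIG_IGN);        /* children are reaped automatically... */
    int status = system("true");     /* ...so system() cannot collect the exit status */
    printf("status = %d\n", status); /* typically -1 (ECHILD) with SIGCHLD ignored */
    return 0;
}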
If it's a Linux system I would recommend using strace to see what's going on and which syscall blocks.
You can even attach strace to already running processes:
# strace -p $PID
Sounds like I'm running into the same intermittent issue, indicating a timeout of some kind. My script runs every day. I'm starting to believe gzip has a timeout.
gzip -vd filename.txt.gz 2>> tmp/errorcatch.txt 1>> logfile.log
stderr: Error for filename.txt.gz
The script moves on to the next command, 'cp filename* new/directory/', resulting in the still-zipped version of filename in the new directory.
stdout from the earlier gzip shows a successful unzip of the SAME file:
filename.txt.gz: 95.7% -- replaced with filename.txt
The successfully unzipped output file from gzip is not in the source or the new directory.
Following the alerts, a manual run of 'gzip -vd filename.txt.gz' never fails.
Details:
Only one call in the script unzips that file
The call to unzip is inside a function (for more robust logging and alerting)
Unable to strace in production
Unable to replicate locally
In occurrences over the last month, found no consistency in file size, only …
I'll simply be working around it with retry logic and general scripting improvements, but I want the next googler to know they're not crazy. This is happening to other people!
