How to avoid running a program twice - C

I'm wondering if there is a way to prevent the user from launching the program multiple times, to avoid some problems. I start my program with
/etc/init.d/myprog start
The next time the user executes the same command, it should not run again.

The best way is for the launcher to attempt a launch, capturing the PID of the launched process into a file under /var/run.
Then on subsequent launches, you read the PID file and do a process listing (ps) to see if a process with that PID is running. If so, the subsequent launch reports that the process is already running and does nothing.
Read up on pid and lock files to get an idea of what is considered standard under the init.d system.
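A hedged sketch of that check (not the exact launcher code): instead of parsing ps output, kill(pid, 0) can probe whether a process with that PID still exists. The path /var/run/myprog.pid and the function name already_running are assumptions for illustration.
#include <errno.h>
#include <signal.h>
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

/* Return non-zero if the PID recorded in pidfile refers to a live process. */
int already_running(const char *pidfile)
{
    FILE *f = fopen(pidfile, "r");
    long pid;

    if (f == NULL)
        return 0;                            /* no PID file: nothing running */
    if (fscanf(f, "%ld", &pid) != 1 || pid <= 1) {
        fclose(f);
        return 0;                            /* stale or corrupt PID file */
    }
    fclose(f);
    /* Signal 0 delivers nothing but still performs the existence check. */
    return kill((pid_t)pid, 0) == 0 || errno == EPERM;
}

int main(void)
{
    if (already_running("/var/run/myprog.pid")) {
        fprintf(stderr, "myprog is already running\n");
        return 1;
    }
    /* ... write our own PID into the file, then do the real work ... */
    return 0;
}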

You need to open a .lock file and take an exclusive lock on it with flock. Keep the descriptor open for the lifetime of the program; the lock is released automatically when the process exits.
#include <fcntl.h>      /* open  */
#include <sys/file.h>   /* flock */

int fd = open("path/to/file.lock", O_RDWR | O_CREAT, 0640);
if (fd == -1) {
    /* error opening the lock file, abort */
}
if (flock(fd, LOCK_EX | LOCK_NB) == -1) {
    /* another instance already holds the lock, abort */
}

The Linux Standard Base supports a start_daemon function that delivers this feature. Use it from your init script.
The start_daemon, killproc and pidofproc functions shall use this algorithm for determining the status and the pid(s) of the specified program. They shall read the pidfile specified or otherwise /var/run/basename.pid and use the pid(s) herein when determining whether a program is running. The method used to determine the status is implementation defined, but should allow for non-binary programs. Compliant implementations may use other mechanisms besides those based on pidfiles, unless the -p pidfile option has been used. Compliant applications should not rely on such mechanisms and should always use a pidfile. When a program is stopped, it should delete its pidfile. Multiple pid(s) shall be separated by a single space in the pidfile and in the output of pidofproc.
This runs the specified program as a daemon. start_daemon shall check if the program is already running using the algorithm given above. If so, it shall not start another copy of the daemon unless the -f option is given. The -n option specifies a nice level. See nice(1). start_daemon should return the LSB defined exit status codes. It shall return 0 if the program has been successfully started or is running and not 0 otherwise.

How can I handle _popen() errors in C?

Good morning;
Right now, I'm writing a program which makes a Montecarlo simulation of a physical process and then pipes the data generated to gnuplot to plot a graphical representation. The simulation and plotting work just fine; but I'm interested in printing an error message which informs the user that gnuplot is not installed. In order to manage this, I've tried the following code:
#include <stdio.h>
#include <stdlib.h>

FILE *pipe_gnuplot;

int main()
{
    pipe_gnuplot = _popen("gnuplot -persist", "w");
    if (pipe_gnuplot == NULL)
    {
        printf("ERROR. INSTALL gnuplot FIRST!\n");
        exit(1);
    }
    return 0;
}
But, instead of printing my error message, "gnuplot is not recognized as an internal or external command, operable program or batch file" appears (the program runs on Windows). I don't understand what I'm doing wrong. According to the _popen documentation, NULL should be returned if the pipe opening fails. Can you help me manage this issue? Thanks in advance, and sorry if the question is very basic.
Error handling of popen (or _popen) is difficult.
popen creates a pipe and a process. If this fails, you will get a NULL result, but that occurs only in rare cases (no more system resources to create a pipe or process, or a wrong second argument).
popen passes your command line to a shell (UNIX) or to the command processor (Windows). I'm not sure if you would get a NULL result if the system cannot execute the shell or command processor respectively.
The command line will be parsed by the shell or command processor and errors are handled as if you entered the command manually, e.g. resulting in an error message and/or a non-zero exit code.
A successful popen means nothing more than that the system could successfully start the shell or command processor. There is no direct way to check for errors executing the command or to get the exit code of the command.
Generally I would avoid using popen if possible.
If you want to program specifically for Windows, check if you can get better error handling from Windows API functions like CreateProcess.
Otherwise you could wrap your command in a script that checks the result and prints specific messages you can read and parse to distinguish between success and error. (I don't recommend this approach.)
Just to piggy-back on @Bodo's answer: on a POSIX-compatible system you can use wait() to wait for a single child process to return, and obtain its exit status (which would typically be 127 if the command was not found).
Since you are on Windows you have _cwait(), but this does not appear to be compatible with how _popen is implemented, as it requires a handle to the child process, which _popen does not return or give any obvious access to.
Therefore, it seems the best thing to do is to essentially re-implement popen() manually, by creating a pipe yourself and spawning the process with one of the spawn[lv][p][e] functions. In fact the docs for _pipe() give an example of how one might do this (although in your case you want to connect the child process's stdin to the read end of your pipe).
I have not tried writing an example though.
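Building on that, here is an untested, hedged sketch of the approach, following the pattern from the _pipe() documentation: create the pipe yourself, make its read end the child's stdin, and start gnuplot with _spawnlp so a missing executable shows up as a -1 return value. The buffer size and the plot command are illustrative assumptions.
#include <fcntl.h>      /* _O_BINARY, _O_NOINHERIT */
#include <io.h>         /* _pipe, _dup, _dup2, _close */
#include <process.h>    /* _spawnlp, _cwait, _P_NOWAIT */
#include <stdio.h>

int main(void)
{
    int fds[2];         /* fds[0] = read end, fds[1] = write end */
    if (_pipe(fds, 4096, _O_BINARY | _O_NOINHERIT) == -1)
        return 1;

    /* Make the read end of the pipe the child's stdin. */
    int old_stdin = _dup(_fileno(stdin));
    _dup2(fds[0], _fileno(stdin));
    _close(fds[0]);

    /* _spawnlp returns -1 if gnuplot cannot be found on the PATH. */
    intptr_t child = _spawnlp(_P_NOWAIT, "gnuplot", "gnuplot", "-persist", (char *)NULL);

    /* Restore our own stdin. */
    _dup2(old_stdin, _fileno(stdin));
    _close(old_stdin);

    if (child == -1) {
        _close(fds[1]);
        printf("ERROR. INSTALL gnuplot FIRST!\n");
        return 1;
    }

    /* Feed plot commands to gnuplot through the write end of the pipe. */
    FILE *to_child = _fdopen(fds[1], "w");
    fprintf(to_child, "plot sin(x)\n");
    fclose(to_child);

    /* Wait for gnuplot and collect its exit status. */
    int status = 0;
    _cwait(&status, child, 0);
    return status;
}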

How can I replace the system function?

code sample from server:
dup2( client, STDOUT_FILENO ); /* duplicate socket on stdout */
dup2( client, STDERR_FILENO ); /* duplicate socket on stderr too */
char * msgP = NULL;
int len = 0;
while (len == 0) {
    ioctl(client, FIONREAD, &len);
}
if (len > 0) {
    msgP = malloc(len + 1);        /* +1 for the terminating NUL */
    len = read(client, msgP, len);
    if (len > 0) {
        msgP[len] = '\0';          /* system() expects a C string */
        system(msgP);
    }
    fflush(stdout);
    fflush(stderr);
}
When I send a command from the client I call the system function. This function is sufficient for many commands but not for all. I tried several different commands and had problems with a few (e.g. nano). The problem I'm facing is that after I call the system function I cannot send any more input to that command (if it needs any). I can still send other commands.
My question is: how can I solve this problem?
P.S. I did some tests, and the cd command also doesn't work. Can anyone explain why?
Thanks for the help!
The test and cd commands are built into command-line shells: The shells do not execute them as external programs. They read those commands and process them by making changes inside the shell program itself.
When you execute a program with system or a routine from the exec family, it creates a separate process that runs the program. A separate process can read input, write output, change files, and communicate on the network, but it cannot change things inside the process that created it (except that it can send some information to that process, by providing a status code when it exits or by various means of interprocess communication). This is why cd cannot be executed with system: A separate process cannot change the working directory of another process. In order to execute a cd command, you must call chdir or fchdir to change the working directory for your own process.
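As a hedged illustration of that point (the helper name run_command is hypothetical, and the command string is assumed to be NUL-terminated with any trailing newline stripped), you could treat cd specially before falling back to system():
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

static void run_command(const char *cmd)
{
    if (strncmp(cmd, "cd ", 3) == 0) {
        if (chdir(cmd + 3) != 0)     /* change this process's own directory */
            perror("cd");
    } else {
        system(cmd);                 /* everything else runs in a child process */
    }
}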
There is a separate test command, but some shells choose to implement it internally instead of using the external program. Regarding nano, I do not know why it is not working for you. It works for me when I use system("nano") or system("nano xyz"). You would have to provide more information about the specific problem you are seeing with nano.
The way that ssh provides remote command execution is that it executes a shell process on the server. A shell is a program that reads commands from its input and executes them. Some of the commands, like cd, it executes internally. Other commands it executes by calling external programs. To provide a similar service, you could either write your own shell or execute one of the existing shells. On Unix systems, standard shells may be found in /bin with names ending in sh, such as /bin/bash and /bin/csh. (Not everything ending in sh is necessarily a shell, though.)
Even if you execute a shell, there are a number of details to doing it properly, including:
Ensuring that the standard input, standard output, and standard error streams of the shell are connected the way you want them to be.
Passing the desired environment and command-line arguments to the shell.
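A minimal sketch of the "execute an existing shell" approach (the function name serve_shell and the choice of /bin/sh -i are assumptions, and error handling is omitted for brevity):
#include <sys/wait.h>
#include <unistd.h>

static void serve_shell(int client)       /* client: a connected socket */
{
    pid_t pid = fork();
    if (pid == 0) {                       /* child: becomes the shell */
        dup2(client, STDIN_FILENO);       /* shell reads its commands from the socket */
        dup2(client, STDOUT_FILENO);      /* and writes its output back to it */
        dup2(client, STDERR_FILENO);
        close(client);
        execl("/bin/sh", "sh", "-i", (char *)NULL);
        _exit(127);                       /* only reached if exec fails */
    }
    close(client);                        /* parent: the shell owns the connection now */
    waitpid(pid, NULL, 0);                /* wait for the shell to exit */
}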

Controlling terminal and GDB

I have a Linux process running in the background. I want to take over its stdin/out/err over SSH and also be the terminal controller. The "original" file descriptors are pseudo terminals, too.
I have tried Reptyr and dupx. Reptyr fails around vfork, but dupx works very well. The GDB script it generated:
attach 123
set $fd=open("/dev/pts/14", 0)
set $xd=dup(0)
call dup2($fd, 0)
call close($fd)
call close($xd)
set $fd=open("/dev/pts/14", 1089)
set $xd=dup(1)
call dup2($fd, 1)
call close($fd)
call write($xd, "Remaining standard output of 123 is redirected to /dev/pts/14\n", 62)
call close($xd)
set $fd=open("/dev/pts/14", 1089)
set $xd=dup(2)
call dup2($fd, 2)
call close($fd)
call write($xd, "Remaining standard error of 123 is redircted to /dev/pts/14\n", 60)
call close($xd)
As soon as the dupx command finishes, the shell does not return and the target app immediately receives my input (via pts/14).
Now I want to achieve the same result using my standalone binary application. I've ported the same syscalls (dup/dup2/close, etc.) that gdb executed under the script generated by dupx:
int fd; int xd;
char* s = "Remaining standard output is redirected to new terminal\n";

fd = open(argv[1], O_RDONLY);
xd = dup( STDIN_FILENO);
dup2(fd, STDIN_FILENO );
close(fd);
close(xd);

fd = open(argv[1], O_WRONLY|O_CREAT|O_APPEND, 0600); /* O_CREAT requires a mode argument */
xd = dup( STDOUT_FILENO);
dup2(fd, STDOUT_FILENO);
close(fd);
write(xd, s, strlen(s));
close(xd);

fd = open(argv[1], O_WRONLY|O_CREAT|O_APPEND, 0600);
xd = dup( STDERR_FILENO);
dup2(fd, STDERR_FILENO);
close(fd);
write(xd, s, strlen(s));
close(xd);
Running the snippet above is done by injecting a shared library into the remote process via sigstop/ptrace attach/dlopen/etc. (using a tool similar to hotpatch). Let's consider this part of the problem safe and working reliably: after doing all this, the file descriptors of the target process are changed as I wanted. I can verify it by simply checking /proc/$(pidof target)/fd.
However, the shell returns and it still receives all my input, not the target app.
I noticed that if I simply attach/detach with gdb after this point (i.e. after the fds were changed by the injected C code), without actually changing anything, the desired behavior is accomplished (meaning: the shell does not return and the target app starts receiving my input). The command is:
gdb --pid=`pidof target` --batch --ex=quit
And now my question is: how? What happens in the background? How can I do the same without gdb? I've tried stracing gdb to get some hints, and also tried playing with the tty ioctl APIs, without any luck.
Please note that obtaining controlling-terminal status (if that is the key to this problem at all) via the fork/setsid approach that Reptyr uses is not acceptable for me: I want to avoid forking.
Additionally, I can't control how the target is started, so "why don't you run it in screen" is not an answer here.
I've SSH access; that's where pts/14 was coming from. The shell and the target app might be competing, but I've never experienced such behaviour; dupx always did what I wanted in this scenario.
Well, sitting and wondering why the known problem happened not to show up in the past won't solve it, even if that point were clarified. The way to go is to make it work by design rather than by accident. For this purpose it is necessary for your standalone binary application not to return to the shell (to avoid the concurrent reading of input) while the input is supposed to go to the target app.
See e.g. also Redirect input from one terminal to another and Why does tapping a TTY device only capture every other character?

Less Hacky Way Than Using System() Call?

So I have this old, nasty piece of C code that I inherited on this project from a software engineer that has moved on to greener pastures. The good news is... IT RUNS! Even better news is that it appears to be bug free.
The problem is that it was designed to run on a server with a set of start up parameters input on the command line. Now, there is a NEW requirement that this server is reconfigurable (didn't see that one coming...). Basically, if the server receives a command over UDP, it either starts this program, stops it, or restarts it with new start up parameters passed in via the UDP port.
Basically the code that I'm considering using to run the obfuscated program is something like this (sorry I don't have the actual source in front of me, it's 12:48AM and I can't sleep, so I hope the pseudo-code below will suffice):
//my "bad_process_manager"
int manage_process_of_doom() {
while(true) {
if (socket_has_received_data) {
int return_val = ParsePacket(packet_buffer);
// if statement ordering is just for demonstration, the real one isn't as ugly...
if (packet indicates shutdown) {
system("killall bad_process"); // process name is totally unique so I'm good?
} else if (packet indicates restart) {
system("killall bad_process"); // stop old configuration
// start with new parameters that were from UDP packet...
system("./my_bad_process -a new_param1 -b new_param2 &");
} else { // just start
system("./my_bad_process -a new_param1 -b new_param2 &");
}
}
}
So as a result of the system() calls that I have to make, I'm wondering if there's a neater way of doing so without all the system() calls. I want to make sure that I've exhausted all possible options without having to crack open the C file. I'm afraid that actually manipulating all these values on the fly would result in having to rewrite the whole file I've inherited since it was never designed to be configurable while the program is running.
Also, in terms of starting the process, am I correct to assume that throwing the "&" in the system() call will return immediately, just like I would get control of the terminal back if I ran that line from the command line? Finally, is there a way to ensure that stderr (and maybe even stdout) gets printed to the same terminal screen that the "manager" is running on?
Thanks in advance for your help.
What you need from the server:
Ideally, the server process you're controlling should create some sort of PID file. Also ideally, it should hold an exclusive lock on the PID file for as long as it is running. That lets us know whether the PID file is still valid or the server has died.
Receive shutdown message:
Try to get a lock on the PID file. If that succeeds, you have nothing to kill (the server has died; if you proceeded with the kill regardless, you might kill the wrong process), so just remove the old PID file.
If the lock fails, read the PID file, call kill() on the PID, and remove the old PID file.
Receive start message:
You'll need to fork() a new process, then choose your flavor of exec() to start the new server process (a sketch of the start and shutdown calls follows at the end of this answer). The server itself should of course recreate its PID file and take a lock on it.
Receive restart message:
Same as Shutdown followed by Start.
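A hedged sketch of those start and shutdown steps, not the asker's actual code: the program name my_bad_process and its -a/-b flags come from the question, while the path /var/run/bad_process.pid is an assumption, and the PID-file locking and error handling are trimmed for brevity.
#include <signal.h>
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

#define PIDFILE "/var/run/bad_process.pid"

static void start_server(const char *param1, const char *param2)
{
    pid_t pid = fork();
    if (pid == 0) {                                  /* child */
        execlp("./my_bad_process", "my_bad_process",
               "-a", param1, "-b", param2, (char *)NULL);
        _exit(127);                                  /* exec failed */
    }
    /* The parent returns immediately, like "cmd &" in a shell. */
}

static void stop_server(void)
{
    FILE *f = fopen(PIDFILE, "r");
    long pid;

    if (f && fscanf(f, "%ld", &pid) == 1 && pid > 1)
        kill((pid_t)pid, SIGTERM);                   /* ask the server to exit */
    if (f)
        fclose(f);
    unlink(PIDFILE);                                 /* drop the stale PID file */
}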

A Linux Daemon and the STDIN/STDOUT

I am working on a Linux daemon and having some issues with stdin/stdout. Normally, because of the nature of a daemon, you do not have any stdin or stdout. However, I do have a function in my daemon that is called when the daemon runs for the first time, to specify different parameters that are required for the daemon to run successfully. When this function is called, the terminal becomes so sluggish that I have to launch a separate shell and kill the daemon with top to get a responsive prompt back. Now I suspect that this has something to do with the forking process closing stdin/stdout, but I am not quite sure how I could work around this. If you guys could shed some light on the situation, that would be most appreciated. Thanks.
Edit:
int main(int argc, char *argv[]) {
    /* setup signal handling */
    /* check command line arguments */
    pid_t pid, sid;

    pid = fork();
    if (pid < 0) {
        exit(EXIT_FAILURE);
    }
    if (pid > 0) {
        exit(EXIT_SUCCESS);
    }
    sid = setsid();
    if (sid < 0) {
        exit(EXIT_FAILURE);
    }
    umask(027);
    /* set up syslogging */
    /* do some logic to determine whether we are running the daemon for the
       first time and, if we are, call the one-time function which uses
       fgets() to receive some input */
    while (1) {
        /* do required work */
    }
    /* do some clean up procedures and exit */
    return 0;
}
You guys mention using a config file. This is exactly what I do to store the parameters received via input. However, I still initially need to get these from the user via stdin. The logic for determining whether we are running for the first time is based on the existence of the config file.
Normally, the standard input of a daemon should be connected to /dev/null, so that if anything is read from standard input, you get an EOF immediately. Normally, standard output should be connected to a file - either a log file or /dev/null. The latter means all writes will succeed, but no information will be stored. Similarly, standard error should be connected to /dev/null or to a log file.
All programs, including daemons, are entitled to assume that stdin, stdout and stderr are appropriately opened file streams.
It is usually appropriate for a daemon to control where its input comes from and outputs go to. There is seldom occasion for input to come from other than /dev/null. If the code was written to survive without standard output or standard error (for example, it opens a standard log channel, or perhaps uses syslog(3)) then it may be appropriate to close stdout and stderr. Otherwise, it is probably appropriate to redirect them to /dev/null, while still logging messages to a log file. Alternatively, you can redirect both stdout and stderr to a log file - beware continuously growing log files.
Your sluggish-to-impossible response time might be because your program is not paying attention to EOF in a read loop somewhere. It might be prompting for user input on /dev/null, reading a response from /dev/null, and, not getting a 'y' or 'n' back, trying again, which chews up your system horribly. Of course, the code is flawed in not handling EOF and in not counting the number of times it gets an invalid response and stopping being silly after a reasonable number of attempts (16, 32, 64). The program should shut up shop sanely and safely if it expects meaningful input and continues not to get it.
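A minimal sketch of that redirection (the helper name detach_stdio is hypothetical, and it assumes the open() calls succeed):
#include <fcntl.h>
#include <unistd.h>

/* Point the daemon's standard streams at /dev/null and a log file. */
static void detach_stdio(const char *logfile)
{
    int devnull = open("/dev/null", O_RDWR);
    int log = open(logfile, O_WRONLY | O_CREAT | O_APPEND, 0640);

    dup2(devnull, STDIN_FILENO);     /* reads now see EOF immediately */
    dup2(log, STDOUT_FILENO);        /* stdout goes to the log file */
    dup2(log, STDERR_FILENO);        /* stderr goes to the log file too */

    if (devnull > STDERR_FILENO) close(devnull);
    if (log > STDERR_FILENO) close(log);
}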
You guys mention using a config file. This is exactly what I do to store the parameters received via input. However, I still initially need to get these from the user via stdin. The logic for determining whether we are running for the first time is based on the existence of the config file.
Instead of reading stdin, have the user write the config file themselves; check for its existence before forking, and exit with an error if it doesn't. Include a sample config file with the daemon, and document its format in your daemon's manpage. You do have a manpage, yes? Your config file is textual, yes?
Also, your daemonization logic is missing a key step. After forking, but before calling setsid, you need to close fds 0, 1, and 2 and reopen them to /dev/null (do not attempt to do this with fclose and fopen). That should fix your sluggish terminal problem.
Your design is wrong. Daemon processes should not take input via stdin or deliver output to stdout/stderr. You'll close those descriptors as part of the daemonizing phase. Daemons should take configuration parameters from the command line, a config file, or both. If runtime-input is required you'll have to read a file, open a socket, etc., but the point of a daemon is that it should be able to run and do its thing without a user being present at the console.
If you want to run your program detached, use the shell: (setsid <command> &). Do not fork() inside your program; that will cause a sysadmin nightmare.
Don't use syslog() or redirect stdout or stderr.
Better yet, use a daemon manager such as daemontools, runit, OpenRC, or systemd to daemonize your program for you.
Use a config file. Do not use STDIN or STDOUT with a daemon. Daemons are meant to run in the background with no user interaction.
If you insist on using stdin/keyboard input to fire up the daemon (e.g. to get some magic passphrase you wouldn't want to store in a file) then handle all I/O before the fork().
