How to release Linux lock files for unique daemon process and multiple users - c

I have a daemon of which only one instance should be running at a time. The daemon is part of a larger application. I implemented this as follows:
open() /tmp/prog.pid with O_CREAT | O_RDWR and permissions 0666. The permissions actually end up as 0664, presumably because of the umask (?)
flock() on the file descriptor returned by open(), with LOCK_EX | LOCK_NB
This is all I had at first. My daemon exits on SIGTERM and SIGINT, but it turned out that the lock was not released upon exit. I realized with the help of man 1 flock (strangely, not man 2 flock) that manual unlocking may be necessary if "the enclosed command group may have forked a background process which should not be holding the lock". This is exactly my case, since I am working on a daemon, so I now unlock manually at exit.
Now to my problem: there are several users who might be running the daemon.
If user1 is running the daemon, I want user2 to be able to kill it and restart it as themselves.
The locked file /tmp/prog.pid has permissions 0664, owner user1, group user1.
A stop script prog_stop kills all the processes involved in the application (it requires superuser rights; I'm ok with that). It also kills the daemon. When user2 runs prog_stop, the lock is released (I believe), but user2 cannot start their own daemon process, because they are neither the owner of the lock file nor in its group.
Several possible solutions:
make the lock file 0666, writeable to all. Dangerous.
create a group in which users need to be in order to run the application. This requires that all users start the application with this group, probably with help of newgrp. Easy to forget, not easy to enforce that people do this. Possibly set the current group in the scripts used to start the application?
completely delete the lock file in prog_stop. Drawback: I open the file from a C file, where the path string is defined. I need to write (and maintain!) the exact same file name with path in the stop script.
Lock files for daemons must be very common. What is the standard way to deal with this problem?

The standard way for lock files is to turn the daemon into a service and require sudo (or becoming root by other means) to start and stop it.
Now you can give the file a certain group; users in this group can then modify it. They can use newgrp, but it's better to add them to the group with usermod --append --groups=foo bar (to add user bar to the group foo; the user keeps their original GID and all other groups they had). After a relog, you can verify this with id bar.
This is all very tedious. When I need something like that, I create a socket. Sockets are closed along with the process that created them (so no cleanup is necessary). Sockets can also be used to communicate with the running daemon (give me your status, shut down, maybe even restart, ...).
I'm using a default port number which I compile into the application but I also use an environment variable to override the default.
Just make sure that you create a socket which listens on localhost; otherwise anyone on the Internet might be able to talk to it.

Related

Capabilities to run start-stop-daemon

I would like to stop a process (proc2), which was started by root, from my unprivileged process (proc1).
My process proc1 calls execl("/bin/sh", "sh", "-c", "/etc/init.d/proc2 restart", nullptr), and /etc/init.d/proc2 restart calls start-stop-daemon, which fails for lack of the capabilities needed to kill proc2 (setuid root).
What capabilities have to be set on the unprivileged process proc1 so that it can run start-stop-daemon (and kill proc2)?
I will rewrite your question as: how is it possible to trigger an administrative task (requiring root privileges) from a user-level process?
The common way is to set up a privileged relay that accepts activation from a non-privileged task. There are two classical ways to do that in the Unix/Linux world:
the legacy way: an executable owned by root with the setuid bit set, executable only by a group of users allowed to perform the privileged task. But setuid executables come with a high risk, because any bug can lead to serious consequences. The well-known sudo is just an example of such a root setuid executable, but it has been extensively tested.
the daemon way: a privileged daemon waits for some event and executes the privileged task. The interface with the unprivileged world is only the event, so the risk is commonly seen as lower. The event is commonly the presence of a file in a directory, a message written to a FIFO, or a network packet.
Either way, you must consider the security question: how to ensure only legitimate triggering of the privileged task.

Purpose of the saved user ID

Running a program with the setuid bit set is the same as running it as the owner of that program. Programs usually exit after execution, so why do we have to switch back to the real user ID?
Also, the wikipedia article states that:
The saved user ID (suid) is used when a program running with elevated privileges needs to do some unprivileged work temporarily.
Why is that though? Why would a privileged process lower its privileges, I can't wrap my head around this.
Expanding on the purpose of the saved user ID, it also allows a process that was running with elevated privileges and subsequently dropped them to go back to the elevated privileges if needed.
Suppose a user with ID 1001 runs a setuid-root program. At program startup, the various user IDs are set as follows:
real UID: 1001
effective UID: 0
saved UID: 0
Setting the saved user ID to the effective user ID at startup allows the user to go back to this user ID whenever it is needed.
At this point the program has root privileges. The program can then do the following:
// perform privileged commands
seteuid(1001); // drop privileges, euid is now 1001
// perform unprivileged commands
seteuid(0); // raise privileges, euid is now 0, allowed because saved UID is 0
// perform more privileged commands
seteuid(1001); // drop privileges, euid is now 1001
// perform more unprivileged commands
Why would a privileged process lower its privileges, I can't wrap my head around this.
Several reasons:
First and foremost: so that the process knows what user launched it! Otherwise, it'd have no way of knowing -- there is no system call to explicitly get the UID of another process. (There is procfs, but that's nonstandard and unreliable.)
As others have mentioned, for safety. Dropping privileges limits the damage that can be done by a malfunctioning setuid-root program. (For instance, a web server will typically bind to a socket on port 80 while running as root, then set its UID/GID to a service user before serving any content.)
So that files created by the setuid-root process are created as owned by the user that launched the process, not by root.
In some situations, root can have less capabilities than non-root user IDs. One example is on NFS -- UID 0 is typically mapped to "nobody" on NFS servers, so processes running as root do not have the ability to read or modify all files on an NFS share.
It is best to do as much as possible with lower privileges. That way the OS protects you from doing stupid things.
Do as little as possible as root. Many people have logged in as root and really messed things up.

Restarting inetd should effect instances of all inetd controlled processes

When I send a HUP signal to inetd so that it rereads the new inetd.conf file, I want the processes controlled by inetd to restart as well, so that they pick up the new command-line parameters added to inetd.conf as part of the change.
I know I can search for the running process and kill it, but is there a standard way to do this? I could not find anything on the Internet.
The standard inetd included in NetBSD does not manage the processes it starts (except for single-threaded services, i.e. those with "wait" flags) -- it just starts them. Each child process services one active connection and then exits when done (i.e. when the connection is closed). In the general case it would be very unwise to kill such processes early without very good reason -- for example consider the case where your current login session (where you tell inetd to reload) was opened to a service controlled by inetd (e.g. sshd).
If you really want to kill processes handling active current connections then you will have to write some helper script of your own to do that, though perhaps pkill will suffice.

How to restrict write access to a Linux directory by process attributes?

We've got a situation where it would be advantageous to limit write access to a logging directory to a specific subset of user processes. These particular processes (say, for example, telnet and the like) have been modified by us to generate a logging record whenever a significant user action takes place (like a remote connection, etc). What we do not want is for the user to manually create these records by copying and editing existing logging records.
syslog comes close but still allows the user to generate spurious records; SELinux seems plausible but has a terrible reputation of being an unmanageable beast.
Any insight is appreciated.
Run a local logging daemon as root. Have it listen on a Unix domain socket (typically /var/run/my-logger.socket or similar).
Write a simple logging library, where event messages are sent to the locally running daemon via the Unix domain socket. With each event, also send the process credentials via an ancillary message. See man 7 unix for details.
When the local logging daemon receives a message, it checks for the ancillary message, and if none, discards the message. The uid and gid of the credentials tell exactly who is running the process that has sent the logging request; these are verified by the kernel itself, so they cannot be spoofed (unless you have root privileges).
Here comes the clever bit: the daemon also checks the PID in the credentials and, based on its value, /proc/PID/exe. It is a symlink to the actual binary being executed by the process that sent the message, something the user cannot fake. To fake a message, they'd have to overwrite the actual binaries with their own, and that should require root privileges.
(There is a possible race condition: a user may craft a special program that does the same, and immediately exec()s a binary they know to be allowed. To avoid that race, you may need to have the daemon respond after checking the credentials, and the logging client send another message (with credentials), so the daemon can verify the credentials are still the same and the /proc/PID/exe symlink has not changed. I would personally use this round trip to check the message's veracity: the logger asks for confirmation of the event with a random cookie, and the requester responds with both the cookie and the event checksum. Including the random cookie makes it impossible to stuff the confirmation into the socket queue before the exec().)
With the PID you can also do further checks. For example, you can trace the process's parentage to see how the human user connected, by walking up the parents until you detect a login via ssh or the console. It's a bit tedious, since you'll need to parse /proc/PID/stat or /proc/PID/status files, and it's nonportable. OSX and the BSDs have a sysctl call you can use to find the parent process ID, so you can make it portable by writing a platform-specific parent_process_of(pid_t pid) function.
This approach will make sure your logging daemon knows exactly 1) which executable the logging request came from, and 2) which user (and how connected, if you do the process tracing) ran the command.
As the local logging daemon is running as root, it can log the events to file(s) in a root-only directory, and/or forward the messages to a remote machine.
Obviously, this is not exactly lightweight, but assuming you have fewer than a dozen events per second, the logging overhead should be completely negligible.
Generally there are two ways of doing this. One: run these processes as root and write-protect the directory (mentioned mainly for historical purposes); then no one but root can write there. The second, and more secure, is to run them as another user (not root) and give that user, but no one else, write access to the log directory.
The approach we went with was to use a setuid binary to allow write access to the logging directory, the binary was executable by all users but would only allow a log record to be written if the parent process path as defined by /proc/$PPID/exe matched the subset of modified binary paths we placed on the system.

How to ensure only one copy of the application is running? [duplicate]

Possible Duplicate:
Preventing multiple process instances on Linux
I have a multi-threaded application which can be run either as a daemon process or one-shot with input parameters.
I want to ensure that if the application is already running as a daemon process, the user is not allowed to run it again.
EDIT: After you all suggested flock, I tried it and deployed it on the server. I now have a weird problem: when the servers are bounced, they delete all the files, including the lock file :(. What now?
The easiest way is to bind to a port (it could be a Unix domain socket in a "private" directory). Only one process can bind to a given port, so if the port is bound, the process is running. If the process exits, the kernel automatically closes the file descriptor. It does cost your process an (otherwise unused?) file descriptor, but normally a daemon process needs some listen socket anyway.
You can try using file locks. Upon starting the process, you can open a file, lock it, and check it for a value (e.g. the size of the file). If it's not the desired value, the process can exit; if it is, change the file to an undesired value.
I implemented a similar thing using shell scripts to start and stop the daemon.
The start script checks, before invoking the executable, whether it is already running; if it is, the new process is not started.
