Restrict a process from reading the "/etc/resolv.conf" file in Linux - C

In my process, I create a child process and run a binary with the execl() API. The parent calls waitpid() and waits for the child to exit. This binary opens "/etc/resolv.conf" and tries to connect to the DNS IP. If the DNS IP is not reachable, the child process blocks for a long time, and as a result the parent process times out. I do not have the source code of the binary, and I do not want to change anything in /etc/resolv.conf because that file is used by other processes.
Is there any way I can remove or restrict access to '/etc/resolv.conf' for my child process?

It is not easy to prevent access to /etc/resolv.conf. But you can tell the resolver how many attempts to make at DNS name resolution through the environment variable RES_OPTIONS. Even zero attempts is a valid value there, and it causes name resolution to fail instantly.
See for example:
RES_OPTIONS="attempts:0" telnet www.google.de
telnet: could not resolve www.google.de/telnet: Temporary failure in name resolution
This means that in your program you could do
...
putenv("RES_OPTIONS=attempts:0");
execl(...);
...
This should cause the resolving to fail instantly and your process should proceed.
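A slightly more complete sketch of the same idea, with fork(), execl() and waitpid() as described in the question (the binary path /usr/bin/some_dns_binary and its arguments are placeholders, not the actual program):

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();
    if (pid < 0) {
        perror("fork");
        return EXIT_FAILURE;
    }
    if (pid == 0) {
        /* Child: tell the glibc resolver to give up on DNS lookups immediately. */
        putenv("RES_OPTIONS=attempts:0");
        /* Placeholder binary and arguments - replace with the real ones. */
        execl("/usr/bin/some_dns_binary", "some_dns_binary", (char *)NULL);
        perror("execl"); /* only reached if execl() fails */
        _exit(127);
    }
    /* Parent: wait for the child to exit, as in the original setup. */
    int status;
    if (waitpid(pid, &status, 0) < 0) {
        perror("waitpid");
        return EXIT_FAILURE;
    }
    printf("child exited with status %d\n", WEXITSTATUS(status));
    return 0;
}

Note that RES_OPTIONS only affects programs that use the glibc resolver; a binary that does its own DNS handling may ignore it.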

What are the ways to find the session leader or the controlling TTY of a process group in Linux?

This is not a language specific question, although I am using golang at the moment.
I am writing a command line program, and I want to find the real UID of the program. (By real UID, I mean: if the user did a sudo, the effective UID changes, but the real UID would be the same as the user's.)
I've read that finding the owner of the controlling tty is one way to find this, and on Linux we can use the "tty" command, which prints the file name of the terminal connected to standard input. Checking its ownership is one way.
Another way is to find the session leader process, and who owns it.
I tried the first way, using
var cmdOut []byte
cmdOut, _ = exec.Command("tty").Output()
but it returns the output "not a tty" when I run the program from my shell. Chances are that this might be getting executed in a separate forked shell that is detached from a tty (again, just a wild guess).
I tried the second way using os.Getppid() to get the parent PID, but in reality, when running under sudo, it forks again, and I get the parent PID of the sudo process (16031 in the case below, whereas I am looking to grab 3393 instead). (Pasting the process hierarchy from the pstree output:)
/usr/bin/termin(3383)-+-bash(3393)---sudo(16031)---Myprogram(16032), so effectively I am not able to get the session leader process, but just the parent pid.
Can someone guide me on how to implement this functionality using either of these methods?
Edit:
sudo sets the $SUDO_USER environment variable, but it will only help with one level of sudo; i.e., if there was something like sudo sudo -u nobody your-program, $SUDO_USER will be set to "root". And there is $SUDO_UID too.
Old answer: how about exec.Command("who", "am", "i").Output()? (Won't work, it still needs a tty.)
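The question mentions Go, but the environment-variable approach is language-agnostic, so here is a minimal sketch in C (to match the other examples on this page). Falling back to getuid() when $SUDO_UID is absent is my assumption, not part of the answer above, and as noted it only sees one level of sudo.

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <unistd.h>

/* Return the invoking user's UID: $SUDO_UID if sudo set it,
   otherwise the process's own real UID. */
static uid_t invoking_uid(void) {
    const char *s = getenv("SUDO_UID");
    if (s != NULL) {
        char *end = NULL;
        long uid = strtol(s, &end, 10);
        if (end != s && *end == '\0' && uid >= 0)
            return (uid_t)uid;
    }
    return getuid();
}

int main(void) {
    printf("invoking uid: %ld\n", (long)invoking_uid());
    return 0;
}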

Less Hacky Way Than Using System() Call?

So I have this old, nasty piece of C code that I inherited on this project from a software engineer who has moved on to greener pastures. The good news is... IT RUNS! Even better news is that it appears to be bug free.
The problem is that it was designed to run on a server with a set of start-up parameters input on the command line. Now there is a NEW requirement that this server be reconfigurable (didn't see that one coming...). Basically, if the server receives a command over UDP, it either starts this program, stops it, or restarts it with new start-up parameters passed in via the UDP port.
Basically the code that I'm considering using to run the obfuscated program is something like this (sorry I don't have the actual source in front of me, it's 12:48AM and I can't sleep, so I hope the pseudo-code below will suffice):
//my "bad_process_manager"
int manage_process_of_doom() {
while(true) {
if (socket_has_received_data) {
int return_val = ParsePacket(packet_buffer);
// if statement ordering is just for demonstration, the real one isn't as ugly...
if (packet indicates shutdown) {
system("killall bad_process"); // process name is totally unique so I'm good?
} else if (packet indicates restart) {
system("killall bad_process"); // stop old configuration
// start with new parameters that were from UDP packet...
system("./my_bad_process -a new_param1 -b new_param2 &");
} else { // just start
system("./my_bad_process -a new_param1 -b new_param2 &");
}
}
}
So as a result of the system() calls that I have to make, I'm wondering if there's a neater way of doing so without all the system() calls. I want to make sure that I've exhausted all possible options without having to crack open the C file. I'm afraid that actually manipulating all these values on the fly would result in having to rewrite the whole file I've inherited since it was never designed to be configurable while the program is running.
Also, in terms of starting the process, am I correct to assume that throwing the "&" in the system() call will return immediately, just like I would get control of the terminal back if I ran that line from the command line? Finally, is there a way to ensure that stderr (and maybe even stdout) gets printed to the same terminal screen that the "manager" is running on?
Thanks in advance for your help.
What you need from the server:
Ideally, the server process you're controlling should create some sort of PID file. Also ideally, it should hold an exclusive lock on the PID file for as long as it is running. This allows us to know whether the PID file is still valid or the server has died.
Receive shutdown message:
Try to get a lock on the PID file. If the lock succeeds, you have nothing to kill (the server has died; if you proceed with the kill regardless, you may kill the wrong process), so just remove the old PID file.
If the lock fails, read the PID from the file, do a kill() on that PID, and then remove the old PID file.
Receive start message:
You'll need to fork() a new process, then choose your flavor of exec() to start the new server process. The server itself should of course recreate its PID file and take a lock on it.
Receive restart message:
Same as Shutdown followed by Start.
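As a rough illustration of the above, here is a minimal sketch of the stop and start handling using a PID file, kill() and fork()/execl() instead of system(). The PID-file path /var/run/bad_process.pid and the parameters are placeholders, and the lock handling is simplified to a non-blocking flock() on the PID file:

#include <fcntl.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/file.h>
#include <sys/types.h>
#include <unistd.h>

#define PID_FILE "/var/run/bad_process.pid" /* placeholder path */

/* Stop the server only if the PID file is still locked by a live process. */
static void stop_server(void) {
    int fd = open(PID_FILE, O_RDONLY);
    if (fd < 0)
        return; /* no PID file: nothing to stop */
    if (flock(fd, LOCK_EX | LOCK_NB) == 0) {
        /* We got the lock, so the server is already gone; just clean up. */
        unlink(PID_FILE);
    } else {
        char buf[32] = {0};
        if (read(fd, buf, sizeof(buf) - 1) > 0) {
            pid_t pid = (pid_t)atol(buf);
            if (pid > 0)
                kill(pid, SIGTERM); /* ask the server to shut down */
        }
        unlink(PID_FILE);
    }
    close(fd);
}

/* Start the server with new parameters instead of system("... &"). */
static void start_server(const char *param1, const char *param2) {
    pid_t pid = fork();
    if (pid == 0) {
        execl("./my_bad_process", "my_bad_process",
              "-a", param1, "-b", param2, (char *)NULL);
        perror("execl"); /* only reached if execl() fails */
        _exit(127);
    }
    /* The parent returns immediately; the child keeps running. */
}

In a real manager you would also reap stopped children (waitpid(), or signal(SIGCHLD, SIG_IGN)) so they don't linger as zombies, and have the new server write and lock its own PID file as described above.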

Using inotify to track in_open & in_close events for a specific PID

I am writing a program in ANSI C that takes a PID as an argument and needs to print information about a file name to stdout every time that given PID opens or closes any file.
Basically, we know that the /proc/PID/fd directory contains symlinks to the files used by a PID.
By readdir()'ing that directory in a while loop and readlink()'ing each element, I can get the file names of all files currently opened by a PID and print them to stdout.
But that doesn't fully solve my original task: I need to print to stdout only events of changes in the open file descriptor table for a PID. Moreover, I need to catch not only when a new file is opened, but also when its FD is closed.
So, I need some mechanism to catch file access events for a given PID in user space.
I also tried to use the inotify mechanism to catch IN_OPEN / IN_CLOSE, but that only works for regular directories, not for /proc (procfs)! When I add an inotify watch for the /proc/PID/fd directory, it simply doesn't catch any events (most likely due to the nature of procfs).
Could you please suggest a mechanism to solve my task?
P.S. Sorry for my bad English.
If you need a Linux-specific solution, you can use fanotify. See this example: http://git.infradead.org/users/eparis/fanotify-example.git. You can subscribe to global notifications and then filter out only those that are for the PID you're interested in.
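A minimal sketch of that idea follows. It marks the whole mount containing "/" for open and close events, filters on the target PID, and resolves file names through /proc/self/fd. fanotify requires CAP_SYS_ADMIN (i.e. run it as root), and watching the root mount with FAN_OPEN | FAN_CLOSE is just one possible setup:

#include <fcntl.h>
#include <limits.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/fanotify.h>
#include <unistd.h>

int main(int argc, char *argv[]) {
    if (argc != 2) {
        fprintf(stderr, "usage: %s <pid>\n", argv[0]);
        return EXIT_FAILURE;
    }
    pid_t target = (pid_t)atol(argv[1]);

    /* Notification-only fanotify group; event fds are opened read-only. */
    int fan = fanotify_init(FAN_CLASS_NOTIF, O_RDONLY);
    if (fan < 0) {
        perror("fanotify_init");
        return EXIT_FAILURE;
    }

    /* Watch every object on the mount that contains "/". */
    if (fanotify_mark(fan, FAN_MARK_ADD | FAN_MARK_MOUNT,
                      FAN_OPEN | FAN_CLOSE, AT_FDCWD, "/") < 0) {
        perror("fanotify_mark");
        return EXIT_FAILURE;
    }

    struct fanotify_event_metadata buf[256];
    for (;;) {
        ssize_t len = read(fan, buf, sizeof(buf));
        if (len <= 0)
            break;
        struct fanotify_event_metadata *md = buf;
        while (FAN_EVENT_OK(md, len)) {
            if (md->pid == target && md->fd >= 0) {
                /* Resolve the file name through our own copy of the fd. */
                char link[64], path[PATH_MAX];
                snprintf(link, sizeof(link), "/proc/self/fd/%d", md->fd);
                ssize_t n = readlink(link, path, sizeof(path) - 1);
                if (n > 0) {
                    path[n] = '\0';
                    printf("%s: %s\n",
                           (md->mask & FAN_OPEN) ? "open" : "close", path);
                }
            }
            if (md->fd >= 0)
                close(md->fd); /* always close the fd delivered with the event */
            md = FAN_EVENT_NEXT(md, len);
        }
    }
    return 0;
}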

A Linux Daemon and the STDIN/STDOUT

I am working on a Linux daemon and having some issues with stdin/stdout. Normally, because of the nature of a daemon, you do not have any stdin or stdout. However, I do have a function in my daemon that is called when the daemon runs for the first time to specify different parameters that are required for the daemon to run successfully. When this function is called, the terminal becomes so sluggish that I have to launch a separate shell and kill the daemon with top to get a responsive prompt back. Now I suspect that this has something to do with the forking process closing stdin/stdout, but I am not quite sure how I could work around this. If you guys could shed some light on the situation, that would be most appreciated. Thanks.
Edit:
int main(int argc, char *argv[]) {
    /* set up signal handling */
    /* check command line arguments */
    pid_t pid, sid;
    pid = fork();
    if (pid < 0) {
        exit(EXIT_FAILURE);
    }
    if (pid > 0) {
        exit(EXIT_SUCCESS);
    }
    sid = setsid();
    if (sid < 0) {
        exit(EXIT_FAILURE);
    }
    umask(027);
    /* set up syslogging */
    /* do some logic to determine whether we are running the daemon for the first time and, if we are, call the one-time function which uses fgets() to receive some input */
    while (1) {
        /* do required work */
    }
    /* do some clean-up procedures and exit */
    return 0;
}
You guys mention using a config file. This is exactly what I do to store the parameters received via input. However, I still initially need to get these from the user via stdin. The logic for determining whether we are running for the first time is based on the existence of the config file.
Normally, the standard input of a daemon should be connected to /dev/null, so that if anything is read from standard input, you get an EOF immediately. Normally, standard output should be connected to a file - either a log file or /dev/null. The latter means all writes will succeed, but no information will be stored. Similarly, standard error should be connected to /dev/null or to a log file.
All programs, including daemons, are entitled to assume that stdin, stdout and stderr are appropriately opened file streams.
It is usually appropriate for a daemon to control where its input comes from and outputs go to. There is seldom occasion for input to come from other than /dev/null. If the code was written to survive without standard output or standard error (for example, it opens a standard log channel, or perhaps uses syslog(3)) then it may be appropriate to close stdout and stderr. Otherwise, it is probably appropriate to redirect them to /dev/null, while still logging messages to a log file. Alternatively, you can redirect both stdout and stderr to a log file - beware continuously growing log files.
Your sluggish-to-impossible response time might be because your program is not paying attention to EOF in a read loop somewhere. If it is prompting for user input on /dev/null and reading a response from /dev/null, then, not getting a 'y' or 'n' back, it tries again, which chews up your system horribly. Of course, the code is flawed in not handling EOF and in not counting the number of times it gets an invalid response and stopping being silly after a reasonable number of attempts (16, 32, 64). The program should shut up shop sanely and safely if it expects meaningful input and continues not to get it.
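To make that concrete, here is a minimal sketch (my own, not from the code in the question) of a prompt loop that treats EOF and repeated garbage as a reason to give up instead of spinning:

#include <stdio.h>
#include <stdlib.h>

/* Ask a yes/no question; give up after a bounded number of bad answers or on EOF. */
static int ask_yes_no(const char *prompt) {
    char line[64];
    for (int attempts = 0; attempts < 16; attempts++) {
        printf("%s (y/n): ", prompt);
        fflush(stdout);
        if (fgets(line, sizeof(line), stdin) == NULL)
            break; /* EOF (e.g. stdin is /dev/null): stop asking */
        if (line[0] == 'y' || line[0] == 'Y')
            return 1;
        if (line[0] == 'n' || line[0] == 'N')
            return 0;
        /* anything else: try again, but only a limited number of times */
    }
    fprintf(stderr, "no usable answer; giving up\n");
    exit(EXIT_FAILURE);
}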
You guys mention using a config file. This is exactly what I do to store the parameters received via input. However, I still initially need to get these from the user via stdin. The logic for determining whether we are running for the first time is based on the existence of the config file.
Instead of reading stdin, have the user write the config file themselves; check for its existence before forking, and exit with an error if it doesn't. Include a sample config file with the daemon, and document its format in your daemon's manpage. You do have a manpage, yes? Your config file is textual, yes?
Also, your daemonization logic is missing a key step. After forking, but before calling setsid, you need to close fds 0, 1, and 2 and reopen them to /dev/null (do not attempt to do this with fclose and fopen). That should fix your sluggish terminal problem.
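One common way to do that step is dup2() over the three descriptors; a minimal sketch, to be called in the child before setsid() as suggested above:

#include <fcntl.h>
#include <stdlib.h>
#include <unistd.h>

/* Point fds 0, 1 and 2 at /dev/null so reads see EOF and writes are discarded. */
static void detach_stdio(void) {
    int fd = open("/dev/null", O_RDWR);
    if (fd < 0)
        exit(EXIT_FAILURE);
    dup2(fd, STDIN_FILENO);
    dup2(fd, STDOUT_FILENO);
    dup2(fd, STDERR_FILENO);
    if (fd > STDERR_FILENO)
        close(fd); /* keep only the three standard descriptors */
}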
Your design is wrong. Daemon processes should not take input via stdin or deliver output to stdout/stderr. You'll close those descriptors as part of the daemonizing phase. Daemons should take configuration parameters from the command line, a config file, or both. If runtime-input is required you'll have to read a file, open a socket, etc., but the point of a daemon is that it should be able to run and do its thing without a user being present at the console.
If you want to run your program detached, use the shell: (setsid <command> &). Do not fork() inside your program; that will cause a sysadmin nightmare.
Don't use syslog(), and don't redirect stdout or stderr.
Better yet, use a daemon manager such as daemontools, runit, OpenRC, or systemd to daemonize your program for you.
Use a config file. Do not use STDIN or STDOUT with a daemon. Daemons are meant to run in the background with no user interaction.
If you insist on using stdin/keyboard input to fire up the daemon (e.g. to get some magic passphrase you wouldn't want to store in a file) then handle all I/O before the fork().

Query if service is running

How can I query if a service (dnsmasq) is running, in C?
According to the dnsmasq man page, by default it writes a PID file to /var/run/dnsmasq.pid. This is a text file containing an integer, the process ID. Open the file, read the integer, and call kill(pid, 0) to see whether a process is alive at that PID. (Checking for PID existence isn't guaranteed to be correct, since some other process may be running at that PID by now, but it's usually good enough.)
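A minimal sketch of that check, assuming the default /var/run/dnsmasq.pid path:

#include <errno.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>

/* Returns 1 if dnsmasq appears to be running, 0 otherwise. */
static int dnsmasq_running(void) {
    FILE *f = fopen("/var/run/dnsmasq.pid", "r");
    if (f == NULL)
        return 0; /* no PID file: assume it isn't running */
    long pid = 0;
    int ok = (fscanf(f, "%ld", &pid) == 1);
    fclose(f);
    if (!ok || pid <= 0)
        return 0;
    /* Signal 0 checks existence/permission without sending anything. */
    if (kill((pid_t)pid, 0) == 0)
        return 1;
    return (errno == EPERM); /* a process exists but belongs to someone else */
}

int main(void) {
    printf("dnsmasq is %srunning\n", dnsmasq_running() ? "" : "not ");
    return 0;
}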
