Nagios: run a process after samples are taken

I'm trying to set up Nagios so that, after it samples my servers, it runs a process. I want this process to write the resulting data to a log file, something I can post to another process (like Splunk, but NOT Splunk). Basically, I want to take each sample returned and send it to another URL.
What's the best way to do this?

If you want the events sent to another log in standard syslog format (e.g. for Splunk), just set use_syslog=1 in nagios.cfg (http://nagios.sourceforge.net/docs/nagioscore/3/en/configmain.html). You can then configure syslog to send the messages to a separate log file and/or another syslog host.
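For example, with rsyslog you could route just the Nagios messages like this (a sketch; the program name Nagios logs under and the destination host are assumptions to verify on your system):

    # /etc/rsyslog.d/nagios.conf (hypothetical) -- split Nagios events out
    :programname, isequal, "nagios"    /var/log/nagios-events.log
    :programname, isequal, "nagios"    @loghost.example.com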

There's probably a better way to do this, but I would just use a tool like SEC (http://simple-evcorr.sourceforge.net/) or write your own handler that tails the Nagios log file. It has all the raw info you need in a standard format.
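For instance, here is a minimal sketch of such a handler in C, assuming libcurl is available; the log path and target URL are hypothetical, and a real version would also need to handle log rotation:

    /* Tail the Nagios log and POST each new line to a collector URL. */
    #include <stdio.h>
    #include <unistd.h>
    #include <curl/curl.h>

    #define LOGFILE  "/usr/local/nagios/var/nagios.log"    /* adjust */
    #define POST_URL "http://collector.example.com/ingest" /* hypothetical */

    int main(void)
    {
        char line[4096];
        FILE *f = fopen(LOGFILE, "r");
        CURL *curl;

        if (!f) { perror(LOGFILE); return 1; }
        fseek(f, 0, SEEK_END);                /* only new entries from now on */

        curl_global_init(CURL_GLOBAL_DEFAULT);
        curl = curl_easy_init();

        for (;;) {
            if (fgets(line, sizeof line, f)) {
                curl_easy_setopt(curl, CURLOPT_URL, POST_URL);
                curl_easy_setopt(curl, CURLOPT_POSTFIELDS, line);
                if (curl_easy_perform(curl) != CURLE_OK)
                    fprintf(stderr, "POST failed: %s", line);
            } else {
                clearerr(f);                  /* hit EOF: wait for more data */
                sleep(1);
            }
        }
    }

Build with cc handler.c -lcurl and run it alongside Nagios.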

Related

How does inetd know which process to send incoming data to?

I'm trying to replace an inetd/xinetd service with a standalone one.
What is the simplest way to do this? Is there some standard code to get started?
How does inetd know which process to send incoming data to?
It spawns it (typically using fork() followed by exec*()), so it has the process-id. The accepted connection is attached to the child's stdin and stdout before the exec, so incoming data reaches the right process automatically.
Is there some standard code to get started?
What about xinetd's sources?
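If you want a starting point smaller than xinetd's sources, here is a minimal sketch of a standalone forking server that mimics inetd's behaviour; the port and the service binary path are hypothetical:

    /* Accept, fork, attach the socket to stdin/stdout, exec the service --
       essentially what inetd does for "stream tcp nowait" entries. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <signal.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    int main(void)
    {
        struct sockaddr_in addr;
        int one = 1;
        int srv = socket(AF_INET, SOCK_STREAM, 0);

        memset(&addr, 0, sizeof addr);
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(9999);                /* hypothetical port */

        setsockopt(srv, SOL_SOCKET, SO_REUSEADDR, &one, sizeof one);
        if (bind(srv, (struct sockaddr *)&addr, sizeof addr) < 0 ||
            listen(srv, 16) < 0) { perror("bind/listen"); return 1; }

        signal(SIGCHLD, SIG_IGN);                   /* auto-reap children */

        for (;;) {
            int conn = accept(srv, NULL, NULL);
            if (conn < 0) continue;
            if (fork() == 0) {                      /* child */
                dup2(conn, 0);                      /* socket becomes stdin */
                dup2(conn, 1);                      /* ...and stdout */
                close(conn);
                close(srv);
                execl("/usr/local/bin/myservice", "myservice", (char *)NULL);
                _exit(127);                         /* exec failed */
            }
            close(conn);                            /* parent keeps listening */
        }
    }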

Apache Camel: How do I signal to other processes that I moved/renamed a file?

I am trying to develop a file receipt process using Camel. What I am trying to do seems simple enough:
receive a file
invoke a web service which will look at that file and generate some metadata
move the file to a new location based on that metadata
invoke subsequent process(es) which will act on the file in its new location
I have tried several different approaches, but none work exactly as I would like. My main issue is that since the file is not moved/renamed until the route is completed, I cannot signal to any downstream process from within that route that the file is available.
I need to invoke web services to determine the new name and location, but once I do that the body is changed, and I can no longer use a file producer to move the file from within the route.
I would really appreciate hearing any other solutions.
You can signal the processing routes and then have them poll using the doneFile functionality of the file component.
Your first process will copy the files, signal the processing routes, and when it is done copying the file it will write a done file. Once the done file has been written the file consumers in your processing routes will pick up the file you want to process. This guarantees that the file is written before it is processed.
Check out the "Using done files" section of the file component.
http://camel.apache.org/file2.html
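For illustration, the same done-file option appears on both endpoints (the directory name here is hypothetical): the producing route's to() writes the done file after the payload, and the consuming route's from() will not pick the file up until the done file exists:

    file:handoff?doneFileName=${file:name}.done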
Using other components, you could have used the onCompletion DSL syntax to trigger a post-route message for further processing.
However, with the file component this is not really doable, since the move/done-file handling happens in parallel with that onCompletion trigger, and you can't be sure that the file is really done.
You might have some luck with the Unit of Work API which can register post route execution logic (this is how the File component fires of the move when the route is done).
However, do you really need this logic?
I see that you might want to send a wakeup call to some file consumer, but does the file really have to be ready that very millisecond? Can't you just start to poll for the file and grab it once it's ready, after you receive the trigger message? That's usually how you do things with file-based protocols (or just ignore the trigger and poll every now and then).

How to restrict write access to a Linux directory by process attributes?

We've got a situation where it would be advantageous to limit write access to a logging directory to a specific subset of user processes. These particular processes (say, for example, telnet and the like) have been modified by us to generate a logging record whenever a significant user action takes place (like a remote connection, etc). What we do not want is for the user to manually create these records by copying and editing existing logging records.
syslog comes close, but still allows the user to generate spurious records; SELinux seems plausible, but has a terrible reputation for being an unmanageable beast.
Any insight is appreciated.
Run a local logging daemon as root. Have it listen on a Unix domain socket (typically /var/run/my-logger.socket or similar).
Write a simple logging library, where event messages are sent to the locally running daemon via the Unix domain socket. With each event, also send the process credentials via an ancillary message. See man 7 unix for details.
When the local logging daemon receives a message, it checks for the ancillary message, and if none, discards the message. The uid and gid of the credentials tell exactly who is running the process that has sent the logging request; these are verified by the kernel itself, so they cannot be spoofed (unless you have root privileges).
Here comes the clever bit: the daemon also checks the PID in the credentials and, based on its value, /proc/PID/exe. That is a symlink to the actual binary being executed by the process that sent the message, something the user cannot fake. To fake a message, they'd have to overwrite the actual binaries with their own, and that should require root privileges.
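A sketch of that check in C; it uses SO_PEERCRED, which returns the same kernel-verified credentials as an SCM_CREDENTIALS ancillary message, and a hypothetical allowlist of binaries:

    /* Verify who is on the other end of a Unix domain socket and which
       binary they are running. Linux-specific. */
    #define _GNU_SOURCE          /* for struct ucred */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>

    static const char *allowed[] = { "/usr/sbin/in.telnetd", NULL };

    int peer_is_allowed(int conn)
    {
        struct ucred cred;
        socklen_t len = sizeof cred;
        char path[64], exe[4096];
        ssize_t n;
        int i;

        if (getsockopt(conn, SOL_SOCKET, SO_PEERCRED, &cred, &len) < 0)
            return 0;

        /* /proc/PID/exe is maintained by the kernel and cannot be faked. */
        snprintf(path, sizeof path, "/proc/%d/exe", (int)cred.pid);
        n = readlink(path, exe, sizeof exe - 1);
        if (n < 0)
            return 0;
        exe[n] = '\0';

        for (i = 0; allowed[i]; i++)
            if (strcmp(exe, allowed[i]) == 0)
                return 1;        /* cred.uid/cred.gid identify the user */
        return 0;
    }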
(There is a possible race condition: a user may craft a special program that does the same and then immediately exec()s a binary they know to be allowed. To avoid that race, you may need to have the daemon respond after checking the credentials, and the logging client send another message (with credentials), so the daemon can verify that the credentials are still the same and that the /proc/PID/exe symlink has not changed. I would personally use this round trip to verify the message itself: the daemon asks for confirmation of the event, supplying a random cookie, and the requester responds with the event checksum and the cookie. Including the random cookie makes it impossible to stuff the confirmation into the socket queue before the exec().)
With the PID you can also do further checks. For example, you can trace the process parentage to see how the human user connected, by walking up the parents until you detect a login via ssh or console. It's a bit tedious, since you'll need to parse /proc/PID/stat or /proc/PID/status files, and it's nonportable: OS X and the BSDs have a sysctl call you can use to find the parent process ID instead, so you can make it portable by writing a platform-specific parent_process_of(pid_t pid) function.
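On Linux, that helper might look like the following sketch, parsing /proc/PID/stat (the field after the parenthesised command name and the single state letter is the parent PID):

    #include <stdio.h>
    #include <string.h>
    #include <sys/types.h>

    /* Return the parent PID of 'pid', or -1 on error. Linux only. */
    pid_t parent_process_of(pid_t pid)
    {
        char path[64], buf[512], *p;
        int ppid = -1;
        FILE *f;

        snprintf(path, sizeof path, "/proc/%d/stat", (int)pid);
        f = fopen(path, "r");
        if (!f)
            return -1;
        if (fgets(buf, sizeof buf, f)) {
            /* comm may contain spaces, so scan from the last ')' */
            p = strrchr(buf, ')');
            if (!p || sscanf(p + 1, " %*c %d", &ppid) != 1)
                ppid = -1;
        }
        fclose(f);
        return (pid_t)ppid;
    }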
This approach will make sure your logging daemon knows exactly 1) which executable the logging request came from, and 2) which user (and how connected, if you do the process tracing) ran the command.
As the local logging daemon is running as root, it can log the events to file(s) in a root-only directory, and/or forward the messages to a remote machine.
Obviously, this is not exactly lightweight, but assuming you have fewer than a dozen events per second, the logging overhead should be completely negligible.
Generally there are two ways of doing this. One: run these processes as root and write-protect the directory (mentioned mainly for historical purposes); then no one but root can write there. The second, and more secure, is to run them as another user (not root) and give that user, but no one else, write access to the log directory.
The approach we went with was to use a setuid binary to allow write access to the logging directory; the binary was executable by all users but would only allow a log record to be written if the parent process path, as given by /proc/$PPID/exe, matched the subset of modified binary paths we placed on the system.
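A minimal sketch of the core of such a setuid helper (the allowed path and log location are hypothetical, and error handling is trimmed):

    /* Append a log record only if the *calling* process is one of our
       modified binaries, as identified by /proc/PPID/exe. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    static const char *allowed[] = { "/usr/local/bin/telnet-audit", NULL };

    int main(int argc, char *argv[])
    {
        char path[64], exe[4096];
        ssize_t n;
        int i;
        FILE *log;

        snprintf(path, sizeof path, "/proc/%d/exe", (int)getppid());
        n = readlink(path, exe, sizeof exe - 1);
        if (n < 0)
            return 1;
        exe[n] = '\0';

        for (i = 0; allowed[i]; i++)
            if (strcmp(exe, allowed[i]) == 0)
                break;
        if (!allowed[i])
            return 1;                         /* caller not on the allowlist */

        log = fopen("/var/log/audited/actions.log", "a"); /* root-only dir */
        if (!log)
            return 1;
        fprintf(log, "%s\n", argc > 1 ? argv[1] : "(empty record)");
        fclose(log);
        return 0;
    }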

Request for hints: possibilities to log files from a router to a server

Here is the situation:
I have written a C program doing some wireless measurements on a WRT54GL router (OpenWRT White Russian, BusyBox 1.00, Dropbear client v0.49). Please note that I cannot use a more up-to-date version of the operating system on the router or install additional packages (just scripts or small programs are allowed).
Up to now, I log my measurement results every 15 minutes from the router to the server via a

    cat localfile | ssh target_address cat ">" remotefile

which I call from my C program (via system()) for every logfile that is created or present at the moment the log starts. What I don't like is that the system call opens a new shell for every single call, causing some overhead. The good thing is that this way the data is encrypted, and because I make a connection per file, I get direct per-file feedback from the server, so I can remove the logs from the router. (Other approaches, calling scripts on the server from the router which then return values for the logging, did not work, as the Dropbear ssh client does not support this return.)
So what I'm asking is: what could be a more elegant way to do this and to reduce the overhead? So far I've read a few tutorials about how to use TLS / TCP sockets (so I can send the data encrypted to the server). Another possibility could be an HTTP PUT or POST, but there I am not sure how I could get feedback for the data being sent. I would just like to hear your opinions and how you would tackle this.
Best regards
Since you're talking about log files, this sounds like a job for the syslog protocol.
I am pretty sure OpenWRT supports it out of the box.
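If BusyBox's logger/syslogd isn't usable on your build and you end up rolling the client yourself, the protocol is tiny; here is a sketch of the sending side in C (RFC 3164 framing over UDP; the tag is made up). Note the trade-off against your per-file ssh approach: plain syslog over UDP gives you neither encryption nor delivery feedback.

    /* Send one measurement line to a remote syslog server over UDP.
       <134> = facility local0 (16 * 8) + severity info (6). */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int send_syslog(const char *server_ip, const char *msg)
    {
        char pkt[1024];
        struct sockaddr_in dst;
        int fd = socket(AF_INET, SOCK_DGRAM, 0);

        if (fd < 0)
            return -1;
        memset(&dst, 0, sizeof dst);
        dst.sin_family = AF_INET;
        dst.sin_port = htons(514);            /* standard syslog port */
        inet_pton(AF_INET, server_ip, &dst.sin_addr);

        snprintf(pkt, sizeof pkt, "<134>wrt54gl measure: %s", msg);
        sendto(fd, pkt, strlen(pkt), 0, (struct sockaddr *)&dst, sizeof dst);
        close(fd);
        return 0;
    }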

Runtime information in C daemon

The user, administrators and support staff need detailed runtime and monitoring information from a daemon developed in C.
In my case this information includes, e.g.:
the current system health, like throughput (MB/s), already written data, ...
the current configuration
I would use JMX in the Java world and the procfs (or sysfs) interface for a kernel module. A log file doesn't seem to be the best way.
What is the best way for such a information interface for a C daemon?
I thought about opening a socket and implementing a bare-metal http or xmlrpc server, but that seems to be overkill. What are alternatives?
You can use a signal handler in your daemon that reacts to, say, SIGUSR1 and dumps information to the screen/log/net. This way, you can just send the process a SIGUSR1 signal whenever you need the info.
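A minimal sketch: the handler only sets a flag, because stdio is not async-signal-safe, and the main loop does the actual dump:

    #include <signal.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    static volatile sig_atomic_t dump_requested = 0;

    static void on_usr1(int sig)
    {
        (void)sig;
        dump_requested = 1;                /* just note the request */
    }

    int main(void)
    {
        struct sigaction sa;
        unsigned long bytes_written = 0;   /* example metric */

        memset(&sa, 0, sizeof sa);
        sa.sa_handler = on_usr1;
        sigemptyset(&sa.sa_mask);
        sigaction(SIGUSR1, &sa, NULL);

        for (;;) {                         /* daemon main loop */
            if (dump_requested) {
                dump_requested = 0;
                fprintf(stderr, "status: %lu bytes written\n", bytes_written);
            }
            /* ... real work goes here ... */
            bytes_written++;
            sleep(1);
        }
    }

Then kill -USR1 <pid> triggers a dump.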
You could listen on a Unix-domain socket and regularly write the current status (say, once a second) to anyone who connects to it. You don't need to implement a protocol like HTTP or XML-RPC; since the communication is one-way, just write a single line of plain text containing the state.
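A sketch of the accepting side (the socket path is hypothetical; to keep it single-threaded, each client gets one status snapshot per connection instead of a once-a-second stream):

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <sys/un.h>

    int main(void)
    {
        struct sockaddr_un addr;
        char line[256];
        unsigned long requests = 0;       /* example metric */
        int srv = socket(AF_UNIX, SOCK_STREAM, 0);

        memset(&addr, 0, sizeof addr);
        addr.sun_family = AF_UNIX;
        strncpy(addr.sun_path, "/var/run/mydaemon.status",
                sizeof addr.sun_path - 1);
        unlink(addr.sun_path);            /* remove a stale socket */

        if (bind(srv, (struct sockaddr *)&addr, sizeof addr) < 0 ||
            listen(srv, 4) < 0) { perror("bind/listen"); return 1; }

        for (;;) {
            int conn = accept(srv, NULL, NULL);
            if (conn < 0) continue;
            snprintf(line, sizeof line, "requests=%lu\n", requests++);
            write(conn, line, strlen(line));
            close(conn);
        }
    }

A client can read it with socat or any netcat that supports Unix sockets.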
If you are using a relational database anyway, create another table and fill it with the current status as frequently as necessary. If you don't have a relational database, write the status to a file, and implement some rotation scheme to avoid overwriting a file that somebody is reading at that very moment.
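Instead of a rotation scheme, a simpler trick is an atomic replace: write to a temporary file and rename() it over the real one, so a reader sees either the old or the new snapshot, never a mix. A sketch (the status fields are made up):

    #include <stdio.h>

    /* Atomically replace the status file; rename(2) is atomic
       within one filesystem. */
    int write_status(const char *path, unsigned long bytes, double mbs)
    {
        char tmp[4096];
        FILE *f;

        snprintf(tmp, sizeof tmp, "%s.tmp", path);
        f = fopen(tmp, "w");
        if (!f)
            return -1;
        fprintf(f, "bytes_written=%lu\nthroughput_mbs=%.2f\n", bytes, mbs);
        fclose(f);
        return rename(tmp, path);     /* old or new, never half-written */
    }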
Write to a file. Use a file locking protocol to force atomic reads and writes. Anything you agree on will work. There's probably a UUCP locking library floating around that you can use. In a previous life I found one for Linux. I've also implemented it from scratch. It's fairly trivial to do that too.
Check out the lockdev(3) library on Linux. It's for devices, but it may work for plain files too.
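As a sketch, cooperative locking with flock(2) is one such agreed-on protocol; it only protects readers and writers that all take the lock:

    #include <fcntl.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/file.h>

    int locked_write(const char *path, const char *status)
    {
        int fd = open(path, O_WRONLY | O_CREAT, 0644);
        if (fd < 0)
            return -1;
        flock(fd, LOCK_EX);           /* block until we own the file */
        ftruncate(fd, 0);             /* truncate only after locking */
        write(fd, status, strlen(status));
        flock(fd, LOCK_UN);
        close(fd);
        return 0;
    }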
I like the socket idea best. There's no need to support HTTP or any RPC protocol. You can create a simple application specific protocol that returns requested information. If the server always returns the same info, then handling incoming requests is trivial, though the trivial approach may cause problems down the line if you ever want to expand on the possible queries. The main reason to use a pre-existing protocol is to leverage existing libraries and tools.
Speaking of leveraging, another option is to use SNMP and access the daemon as a managed component. If you need to query/manage the daemon remotely, this option has its advantages, but otherwise can turn out to be greater overkill than an HTTP server.
