C: multiple write access to a log file (Linux env)

I have a set of independent programs that I wrote in C, and I would like all of them to write their logs to the same file. This obviously raises the issue of access control: two or more of them could end up writing simultaneously.
What is the most pragmatic way to achieve this?
I came across solutions using pthreads/mutexes/etc., but that sounds like an overkill implementation for something like this.
I am also looking at syslog, but I wonder whether it is really intended for what I need to do.
I feel that I need a daemon service that takes the messages and controls when they are written. I wonder if that already exists.

I am also looking at syslog, but I wonder whether it is really intended for what I need to do.
Yes
I feel that I need a daemon service that takes the messages and controls when they are written. I wonder if that already exists.
It exists in the Unix derivatives (including Linux) and is called... syslogd
More seriously, the syslog function is intended to pass a message to a syslogd daemon that will route it according to its configuration file. The most common uses include writing it to a file or to the system console (especially for panic-level messages, when nobody can be sure whether the file system is still accessible). The syslog system may come with more features than you are asking for, but it is an extremely robust and extensively tested piece of software. In addition, it is almost certainly already active on your system, so you would need a strong reason to roll your own instead of using it.
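For the concurrent-writers case in the question, a minimal sketch (the ident string and messages are made up): each program opens its own connection with openlog(), syslogd serializes the lines, and LOG_PID stamps each entry with the writing process, so no extra locking is needed on your side.
#include <syslog.h>

int main(void) {
    /* each independent program passes its own ident string; LOG_PID
       stamps every line with the writer's process id, so interleaved
       messages from several programs stay attributable */
    openlog("prog-a", LOG_PID, LOG_DAEMON);
    syslog(LOG_INFO, "started");
    syslog(LOG_WARNING, "queue depth: %d", 42);
    closelog();
    return 0;
}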

You have two ways:
First: use something that already exists.
For the logging part, syslog (and syslog-ng) are well known and widely used.
From there, you can configure syslog-ng to listen on an IP connection and to scan a directory for new files.
When your programs want to log, they can either connect to syslog-ng directly and send the message, or, if the connection fails, write a new file into the directory that syslog-ng watches (sketched below).
That way you don't lose log messages if syslog-ng is interrupted for one reason or another.
Second: develop something really similar to syslog-ng.
In that case, it's up to you.
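Here is a rough sketch of the first option's fallback logic, assuming syslog-ng listens on TCP port 514 on localhost and watches /var/spool/mylogs for files (both values are made up; adjust them to your configuration):
#include <stdio.h>
#include <string.h>
#include <time.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

/* try to hand the message to syslog-ng over TCP */
static int send_to_syslogng(const char *msg)
{
    struct sockaddr_in sa = { 0 };
    sa.sin_family = AF_INET;
    sa.sin_port = htons(514);                 /* assumed listener port */
    inet_pton(AF_INET, "127.0.0.1", &sa.sin_addr);

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;
    if (connect(fd, (struct sockaddr *)&sa, sizeof sa) < 0 ||
        write(fd, msg, strlen(msg)) < 0) {
        close(fd);
        return -1;
    }
    close(fd);
    return 0;
}

/* fallback: drop a uniquely named file into the watched directory */
static void spool_fallback(const char *msg)
{
    char path[256];
    snprintf(path, sizeof path, "/var/spool/mylogs/%ld.%d.log",
             (long)time(NULL), (int)getpid());
    int fd = open(path, O_CREAT | O_WRONLY | O_EXCL, 0644);
    if (fd >= 0) {
        write(fd, msg, strlen(msg));
        close(fd);
    }
}

int main(void)
{
    const char *msg = "<14>myprog: hello from a fallback-aware logger\n";
    if (send_to_syslogng(msg) != 0)
        spool_fallback(msg);
    return 0;
}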

Related

Get data as pfiles within kernel module code in Solaris

When I execute on Solaris 11.0:
pfiles /proc/PROCESSID
the result is process information; the small chunk of output that interests me is:
4: S_IFSOCK mode:0666 dev:543,0 ino:46228 uid:0 gid:0 size:0
O_RDWR|O_NONBLOCK
SOCK_STREAM
SO_REUSEADDR,SO_KEEPALIVE,SO_SNDBUF(49152),SO_RCVBUF(128480)
sockname: AF_INET6 ::ffff:10.10.50.28 port: 22
peername: AF_INET6 ::ffff:10.16.6.150 port: 55504
The process I ran pfiles on is the sshd session itself.
My question is: given a process ID, how can I get this (pfiles) information from within kernel code?
When looking in struct proc I could not find anything that gives me this data.
Is there a pointer on the process's proc to a struct that holds all the open files occupied by the process?
I also executed truss pfiles /proc/PROCESSID but could not find the exact call.
If you look in /usr/include/sys/user.h you'll see the open file information can be found in the p_user.u_finfo structure of the current process.
Walking that structure is not trivial. Just look at what the proc filesystem code has to do to look up the attributes of just one open file descriptor. There's lots of locking needed - you can't simply walk the data structures while things are running.
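To make that concrete, here is a rough sketch assuming the illumos/Solaris layout of uf_info_t and uf_entry_t from <sys/user.h>; the field names and the exact locking protocol must be verified against your kernel version:
#include <sys/proc.h>
#include <sys/user.h>
#include <sys/file.h>
#include <sys/vnode.h>
#include <sys/cmn_err.h>

static void
dump_open_files(proc_t *p)
{
    uf_info_t *fip = P_FINFO(p);        /* &p->p_user.u_finfo */
    int fd;

    mutex_enter(&fip->fi_lock);         /* freeze the fd table */
    for (fd = 0; fd < fip->fi_nfiles; fd++) {
        uf_entry_t *ufp = &fip->fi_list[fd];

        mutex_enter(&ufp->uf_lock);
        if (ufp->uf_file != NULL) {
            vnode_t *vp = ufp->uf_file->f_vnode;
            cmn_err(CE_NOTE, "fd %d: vnode type %d", fd, (int)vp->v_type);
            /* for v_type == VSOCK, the pfiles-style socket details
               live in the sonode behind this vnode */
        }
        mutex_exit(&ufp->uf_lock);
    }
    mutex_exit(&fip->fi_lock);
}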
And, the following is beyond the scope of the question, but it's important...
For what it's worth, what you're doing can't work. It's fundamentally flawed - technically and legally.
What you're trying to do - track users who share a user account - is worthless anyway. You will never be able to prove that, just because a certain login session executed some code, the user who logged into that session purposely ran it. Any of the users with access to that account can modify the environment of the shared account so that malware is run by someone else. And they can make it look just like a typed-in command.
Shared credentials and accounts violate nonrepudiation. That's your insurmountable legal flaw in using any data your custom kernel tracking may produce - even if you manage to produce a system that's foolproof, which isn't likely.
If I'm logged into a shared account, you can never prove that the code I ran was run intentionally.
Well, that's not entirely true - if you have perfect auditing where you can trace everything a user does, down to the bytes modified on disk, you can. And "perfect" in this case means those users have no access whatsoever to change any part of the auditing system.
But if you already have perfect auditing in place, you don't need to write kernel modules to try and implement it.
Of course, it's impossible to prove you have perfect auditing in place because you can't prove that you don't have holes in it.
See the problem?
We're right back to "You CAN'T prove I did it intentionally."
You'd be much better off just using the OS-provided auditing services. Whatever you come up with isn't going to be useful in proving "who did it" against any intelligent bad actor - like someone who figures out a way to insert malicious code into another user's session. And the OS auditing will be sufficient to catch anyone who has no clue how to cover their tracks.
But you won't be able to provably catch any bad actor who knows what he's doing when shared accounts are involved. And if you can't prove it, you might not even be able to do anything at all to someone you suspect. Because someone who really knows what they're doing will be able to pin the apparent blame on someone who's innocent - if they can't hide or destroy the evidence of the bad act[s] in the first place.
What are you going to do if you find the shared .profile file has a line in it that after a certain date emails sensitive data to a throwaway email account, but only when the login comes from a certain IP address?
Any one of the users who share that one account could have put it in there.
No auditing system in the world can solve that problem unless it's perfect and tracks every file change.
If the data you're trying to protect is important, whoever is tasking you to solve the problem by writing custom kernel modules needs to grow a brain and solve the real problem - shared user accounts. Get rid of them.
There's a reason why every security guide says not to use shared accounts, and every security audit I've ever seen will fail anyone using shared accounts.

What is the correct usage of the console WinEvents?

While "deploying" the solution from this question on a number of machines, I noticed some single core machines have terrible results (for example, the solution fails spectacularly on an intel atom).
What is happening here is the following. I call SetWinEventHook(...) to get callbacks for console changes. Due to the out-of-contex notification there is no synchronisation between the processing of the events and further changes on the console, so while multi core machines do well (not perfect btw), single core machines make a mess of this.
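For reference, a stripped-down version of that out-of-context setup (the callback body is illustrative only):
#include <windows.h>
#include <stdio.h>

static void CALLBACK WinEventProc(HWINEVENTHOOK hHook, DWORD event,
    HWND hwnd, LONG idObject, LONG idChild, DWORD idEventThread,
    DWORD dwmsEventTime)
{
    /* out-of-context: this runs asynchronously in our process, so the
       console may already have changed again by the time we get here */
    printf("console event 0x%lx hwnd=%p\n", (unsigned long)event, (void *)hwnd);
}

int main(void)
{
    HWINEVENTHOOK hook = SetWinEventHook(
        EVENT_CONSOLE_CARET, EVENT_CONSOLE_END_APPLICATION,
        NULL, WinEventProc, 0, 0, WINEVENT_OUTOFCONTEXT);

    MSG msg;                            /* the hook needs a message loop */
    while (GetMessage(&msg, NULL, 0, 0)) {
        TranslateMessage(&msg);
        DispatchMessage(&msg);
    }
    UnhookWinEvent(hook);
    return 0;
}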
So I proceeded to turn on the in-context notification, since this should be synchronous according to MSDN. In C#, this is like asking for the Infinite Improbability Drive, so I created a simple DLL in C that can do the dirty work, and I talk to the DLL from C#. So far so good.
As it turns out, the callbacks happen in conhost.exe, as opposed to the process owning the console. Now this presents a problem, since in the callback I can't find a way to access the console output buffer in the context of conhost.exe. Or, more precisely, I can't seem to find a way to obtain a handle to it. Here is what is available: a handle to the console window, the process IDs of both the console application and conhost.exe, and a pipe to the console application.
And here is what I have tried so far:
1. using GetStdHandle(...): results in invalid handles (makes sense in the context of conhost.exe)
2. using CreateFile("CONOUT$"...): ditto
3. using the pipe to have the console application read from the output buffer: results in deadlock. I suspect a locking mechanism preventing reads while writing; that would make sense.
4. duplicating the output buffer handle and passing it via the pipe: no joy, because console handles cannot be duplicated to an external process.
5. attaching the conhost.exe process to the console of the console process it is serving and then doing the CreateFile thing. OK, this was my favorite one, but it also doesn't work, since AttachConsole(...) blocks, similar to 3.
Does anyone have ideas on what to try next? My C/C++/WinAPI skills are intermediate at best, so it is very possible I overlooked something. OK, an obvious option would be to throw the whole thing overboard and just poll the output buffer for changes, but I would consider that a last resort. I'm assuming that MS was smart enough to make sure that either in-context or out-of-context events would actually be usable, and as such I must be missing something.

Wait for file to be unlocked - Windows

I'm writing a TFTP server program for university which needs exclusive access to the files it opens for reading. It can therefore be configured so that, if a file is locked by another process, it waits for the file to become unlocked.
Is there any way on Win32 to wait for a file become unlocked without creating a handle for it first?
The reason I ask is that if another process calls CreateFile() with a dwShareMode that is incompatible with the one my process uses, I won't even be able to get a file handle to use for waiting on the lock with LockFileEx().
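To illustrate the bind: without a handle there is nothing to wait on, so the naive fallback is to poll CreateFile() until the sharing violation clears (the path and retry interval below are made up):
#include <windows.h>

HANDLE open_when_unlocked(const char *path)
{
    for (;;) {
        HANDLE h = CreateFileA(path, GENERIC_READ,
                               0 /* exclusive: no sharing */, NULL,
                               OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
        if (h != INVALID_HANDLE_VALUE)
            return h;
        if (GetLastError() != ERROR_SHARING_VIOLATION)
            return INVALID_HANDLE_VALUE;  /* a real error, give up */
        Sleep(250);                       /* no handle yet, nothing to wait on */
    }
}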
Thanks for your help in advance!
If you take a look at the Stack Overflow questions What Win32 API can be used to find the process that has a given file open? and SYSTEM_HANDLE_INFORMATION structure, you will find links to code that can be used to enumerate processes and all open handles of each running process. This information can be used to obtain a HANDLE to the process that has the file open as well as its HANDLE for the file. You would then use DuplicateHandle() to create a copy of the file HANDLE, but in the TFTP process' handle table. The duplicated HANDLE could then be used by the TFTP process with LockFileEx().
This solution relies on an internal function, NtQuerySystemInformation(), and an undocumented system information class value that can be used to enumerate open handles. Note that this feature of NtQuerySystemInformation() "may be altered or unavailable in future versions of Windows". You might want to use an SEH handler to guard against access violations were that to happen.
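Under those caveats, a sketch of the enumeration step; the information class value (16) and the structure layout are undocumented assumptions taken from the linked questions, and the program must link against ntdll:
#include <windows.h>
#include <winternl.h>
#include <stdio.h>
#include <stdlib.h>

/* undocumented: layout as described in the linked questions */
typedef struct _SYSTEM_HANDLE_ENTRY {
    USHORT UniqueProcessId;
    USHORT CreatorBackTraceIndex;
    UCHAR  ObjectTypeIndex;
    UCHAR  HandleAttributes;
    USHORT HandleValue;
    PVOID  Object;
    ULONG  GrantedAccess;
} SYSTEM_HANDLE_ENTRY;

typedef struct _SYSTEM_HANDLE_INFO {
    ULONG NumberOfHandles;
    SYSTEM_HANDLE_ENTRY Handles[1];
} SYSTEM_HANDLE_INFO;

#define SystemHandleInformationClass 16   /* undocumented class value */

int main(void)
{
    ULONG len = 1 << 20;
    SYSTEM_HANDLE_INFO *info = NULL;
    NTSTATUS st;

    do {                                  /* grow the buffer until it fits */
        free(info);
        info = malloc(len *= 2);
        st = NtQuerySystemInformation(
            (SYSTEM_INFORMATION_CLASS)SystemHandleInformationClass,
            info, len, NULL);
    } while (st == (NTSTATUS)0xC0000004L);  /* STATUS_INFO_LENGTH_MISMATCH */

    if (st >= 0) {
        for (ULONG i = 0; i < info->NumberOfHandles; i++) {
            SYSTEM_HANDLE_ENTRY *h = &info->Handles[i];
            /* next steps: OpenProcess(h->UniqueProcessId), then
               DuplicateHandle() on h->HandleValue and query its name */
            printf("pid %u handle 0x%x\n",
                   (unsigned)h->UniqueProcessId, (unsigned)h->HandleValue);
        }
    }
    free(info);
    return 0;
}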
As MS tools like OH (oh.exe) and Process Explorer show, it is definitely possible to get all the handles opened by a process. From there to waiting on what you'd like, the road is still long, but it is a beginning :)
If you have no success with the Win32 API, one place to look is certainly the NT Native API: http://en.wikipedia.org/wiki/Native_API
You can start from here http://msdn.microsoft.com/en-us/library/windows/desktop/ms724509%28v=vs.85%29.aspx and see if it works with the SystemProcessInformation flag.
Also look here for a start: http://nsylvain.blogspot.com/2007/09/how-list-all-open-handles.html
The Native API is poorly documented, but you can find resources online (like here: http://www.osronline.com/article.cfm?id=91).
As a disclaimer, I should add that the Native API is somewhat "internal" and therefore subject to change in future versions. Some functions, however, are also exposed publicly in the DDK, at kernel level, so the likelihood of these functions changing is low.
Good luck!

Display processes that access a folder

I am trying to write a simple program, preferably in C, that will watch a given directory. Whenever a process accesses that directory, I just want to print out the name of that process. It seems simple, but I am coming up short on solutions from MSDN. Does anyone know which library calls I will need for this, or have any helpful advice? I have considered repeatedly querying for which processes have handles on the given directory and just watching for additions to that list. This approach just seems very intensive, and I am hoping there is an easier way. Thanks.
I'm not sure there's an easier way, but one way is to use a file system filter driver, or, easier, a file system minifilter driver.
You can filter, log, track, control, ... all I/O.
There is no supported way to do this from user mode. You can use the FindFirstChangeNotification API to tell when a file or directory has changed, but that doesn't tell you who did it. You might be able to hook some things to obtain this information... but that is of course not supported.
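For completeness, a short sketch of that supported-but-limited route (the watched path is hypothetical): the handle becomes signaled when something in the directory changes, but nothing identifies the process responsible:
#include <windows.h>
#include <stdio.h>

int main(void)
{
    HANDLE h = FindFirstChangeNotificationA("C:\\watched\\dir", FALSE,
        FILE_NOTIFY_CHANGE_FILE_NAME | FILE_NOTIFY_CHANGE_LAST_WRITE);
    if (h == INVALID_HANDLE_VALUE)
        return 1;

    for (;;) {
        if (WaitForSingleObject(h, INFINITE) != WAIT_OBJECT_0)
            break;
        printf("directory changed (no process info available)\n");
        if (!FindNextChangeNotification(h))   /* re-arm for the next change */
            break;
    }
    FindCloseChangeNotification(h);
    return 0;
}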
If you can use a driver, you can use Event Tracing for Windows for this information; this is what Sysinternals ProcMon uses. But installing a driver is a very invasive process, bugs in your driver cause BSODs, and installing one requires administrative rights. Something to keep in mind.

Daemon logging in Linux

So I have a daemon running on a Linux system, and I want to have a record of its activities: a log. The question is, what is the "best" way to accomplish this?
My first idea is to simply open a file and write to it.
FILE* log = fopen("logfile.log", "w");
/* daemon works...needs to write to log */
fprintf(log, "foo%s\n", (char*)bar);
/* ...all done, close the file */
fclose(log);
Is there anything inherently wrong with logging this way? Is there a better way, such as some framework built into Linux?
Unix has long had a special logging framework called syslog. Type this in your shell:
man 3 syslog
and you'll get the help for the C interface to it.
An example:
#include <stdio.h>
#include <unistd.h>
#include <syslog.h>

int main(void) {
    openlog("slog", LOG_PID|LOG_CONS, LOG_USER);
    syslog(LOG_INFO, "A different kind of Hello world ...");
    closelog();
    return 0;
}
This is probably going to be a one-horse race, but yes, the syslog facility, which exists in most if not all Un*x derivatives, is the preferred way to go. There is nothing wrong with logging to a file, but it does leave a number of tasks on your shoulders:
is there a file system at your logging location with room to save the file?
what about buffering (for performance) vs. flushing (to get logs written before a system crash)?
if your daemon runs for a long time, what do you do about the ever-growing log file?
Syslog takes care of all this, and more, for you. The API is similar to the printf clan, so you should have no problem adapting your code.
One other advantage of syslog in larger (or more security-conscious) installations: The syslog daemon can be configured to send the logs to another server for recording there instead of (or in addition to) the local filesystem.
It's much more convenient to have all the logs for your server farm in one place rather than having to read them separately on each machine, especially when you're trying to correlate events on one server with those on another. And when one gets cracked, you can't trust its logs any more... but if the log server stayed secure, you know nothing will have been deleted from its logs, so any record of the intrusion will be intact.
I spit a lot of daemon messages out to daemon.info and daemon.debug when I am unit testing. A line in your syslog.conf can stick those messages in whatever file you want.
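For example, classic syslog.conf lines of that sort (the file names are made up) - selector on the left, destination on the right, separated by tabs:
# facility.priority <Tab> action
daemon.info	/var/log/mydaemon.log
daemon.debug	/var/log/mydaemon-debug.log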
http://www.linuxjournal.com/files/linuxjournal.com/linuxjournal/articles/040/4036/4036s1.html has a better explanation of the C API than the man page, imo.
Syslog is a good option, but you may wish to consider looking at log4c. The log4[something] frameworks work well in their Java and Perl implementations, and allow you to - from a configuration file - choose to log to either syslog, console, flat files, or user-defined log writers. You can define specific log contexts for each of your modules, and have each context log at a different level as defined by your configuration. (trace, debug, info, warn, error, critical), and have your daemon re-read that configuration file on the fly by trapping a signal, allowing you to manipulate log levels on a running server.
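A minimal log4c sketch (the category name is arbitrary; levels and destinations come from the log4crc configuration file rather than from the code):
#include <log4c.h>

int main(void)
{
    if (log4c_init())                     /* loads the log4crc config file */
        return 1;

    log4c_category_t *cat = log4c_category_get("mydaemon.net");
    log4c_category_log(cat, LOG4C_PRIORITY_INFO, "hello %s", "world");

    return log4c_fini();
}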
As stated above, you should look into syslog. But if you want to write your own logging code, I'd advise you to use the "a" (append) mode of fopen.
A few drawbacks of writing your own logging code are: log rotation handling, locking (if you have multiple threads), and synchronization (do you want to wait for the logs to be written to disk?). One drawback of syslog is that the application doesn't know whether the logs have been written to disk (they might have been lost).
If you use threading and you use logging as a debugging tool, you will want to look for a logging library that uses some sort of thread-safe but unlocked ring buffers: one buffer per thread, with a global lock only when strictly needed.
This avoids logging causing serious slowdowns in your software, and it avoids creating heisenbugs which change when you add debug logging.
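A bare-bones sketch of that idea, assuming GCC-style __thread thread-local storage; a real library would add timestamps per entry, a registry of all threads' rings, and a proper flush path:
#include <pthread.h>
#include <stdio.h>
#include <time.h>

#define RING_SLOTS 1024
#define MSG_LEN    128

typedef struct {
    unsigned head;               /* next slot; only the owning thread writes */
    char     msg[RING_SLOTS][MSG_LEN];
} ring_t;

static __thread ring_t tls_ring; /* one ring per thread: no lock on this path */
static pthread_mutex_t flush_lock = PTHREAD_MUTEX_INITIALIZER;

void ring_log(const char *text)
{
    ring_t *r = &tls_ring;
    snprintf(r->msg[r->head % RING_SLOTS], MSG_LEN, "%ld %s",
             (long)time(NULL), text);
    r->head++;                   /* old entries are overwritten, never blocked */
}

void ring_flush(FILE *out)
{
    /* the global lock is taken only here, e.g. at shutdown or from a
       crash handler, never on the per-message fast path */
    pthread_mutex_lock(&flush_lock);
    ring_t *r = &tls_ring;       /* flushing our own ring for simplicity */
    unsigned n = r->head < RING_SLOTS ? r->head : RING_SLOTS;
    for (unsigned i = 0; i < n; i++)
        fprintf(out, "%s\n", r->msg[(r->head - n + i) % RING_SLOTS]);
    pthread_mutex_unlock(&flush_lock);
}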
If it has a high-speed compressed binary log format that doesn't waste time with format operations during logging and some nice log parsing and display tools, that is a bonus.
I'd provide a reference to some good code for this but I don't have one myself. I just want one. :)
Our embedded system doesn't have syslog, so the daemons I write do debugging to a file using the "a" open mode, similar to how you've described it. I have a function that opens a log file, spits out the message and then closes the file (I only do this when something unexpected happens). However, I also had to write code to handle log rotation as other commenters have mentioned, which consists of 'tail -c 65536 logfile > logfiletmp && mv logfiletmp logfile'. It's pretty rough and maybe should be called "log frontal truncation", but it stops our small RAM-disk-based filesystem from filling up with log files.
There are a lot of potential issues: for example, if the disk is full, do you want your daemon to fail? Also, you will be overwriting your file every time. Often a circular file is used so that you have space allocated on the machine for your file, but you can keep enough history to be useful without taking up too much space.
There are tools like log4c that can help you. If your code is C++, then you might consider log4cxx in the Apache project (apt-get install liblog4cxx9-dev on Ubuntu/Debian), but it looks like you are using C.
So far nobody has mentioned the Boost.Log library, which has a nice and easy way to redirect your log messages to files, to a syslog sink, or even to the Windows event log.
