Managing log file size - C

I have a program which logs its activity.
I want to implement a mechanism to keep the log file under a certain size, let's say 10 MB.
The log file itself just holds commands the program executed; those commands are variable length.
Right now, the program runs in a Windows environment, but I'm likely to port it to UNIX soon.
I've come up with two methods for managing the log files:
1. Keep multiple smaller files, and when writing the next command would push the current file past its size limit, truncate the oldest file to zero size and start writing there (see the sketch after this list).
2. Keep a header in the file which holds metadata about the first (oldest) command in the file and the next position to write to, effectively treating the file as a ring buffer. I think each command would then also need to hold metadata about its own length.
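To illustrate, here is a rough sketch of method 1 in C (the file names log.0 through log.4, the 2 MB per-file cap, and the absent error handling are all placeholders):

#include <stdio.h>
#include <string.h>

#define NUM_FILES 5
#define MAX_SIZE  (2L * 1024 * 1024)  /* 5 files x 2 MB = 10 MB total */

static FILE *logf;       /* current log file, initially opened as "log.0" */
static int current = 0;  /* index of the file currently being written */

void log_command(const char *cmd)
{
    /* Would this command push the current file past the cap? */
    if (ftell(logf) + (long)strlen(cmd) + 1 > MAX_SIZE) {
        char name[32];
        fclose(logf);
        current = (current + 1) % NUM_FILES;  /* next slot holds the oldest data */
        snprintf(name, sizeof name, "log.%d", current);
        logf = fopen(name, "w");              /* "w" truncates it to zero size */
    }
    fprintf(logf, "%s\n", cmd);
}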
My questions are as follows:
In terms of efficiency which of these methods would you use, and why?
Is there a UNIX command / function to do this easily?
Thanks a lot for your help,
Nihil.

On UNIX/Linux platforms there's a logrotate program that manages log files. Details can be found, for example, here:
http://linuxcommand.org/man_pages/logrotate8.html
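A minimal configuration for your case might look like this (the log path is made up; size and rotate are standard logrotate directives):

/var/log/myprog.log {
    size 10M
    rotate 5
    compress
    missingok
    notifempty
}

This rotates the file whenever it exceeds 10 MB and keeps the five most recent rotations, compressed.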

Related

ProFTPD Extended Log - Use a subset of command classes instead of whole command class

I am building a log parser for ProFTPD and have a question regarding the ExtendedLog config directive.
Official ProFTPD documentation has the following ExtendedLog spec:
ExtendedLog [ filename [[command-classes] format-nickname]]
There are a couple of valid command-classes, but they mostly consist of groups of commands. For me this is a problem because when a user uploads a large file, a WRITE command appears in the extended log for each portion of the actual upload, meaning that for a large file WRITE occurs many times. With many users and many uploads this can fill up the log space fairly easily. By comparison, the STOR command appears only once, at the end of the actual file upload.
I can't explicitly find WRITE listed as one of the commands in the write command class, but I was wondering if there is a way to omit this specific WRITE command from the log, as I'm only interested in a subset of commands from the write command class. The only commands I'm interested in logging are STOR, DELE and RMD.
Many thanks.
In the end I did not find any flags in ProFTPD that could handle this, so I implemented log rotation instead.
The log rotation restarts ProFTPD and sends an interrupt to the log parser. The log parser detects the interrupt, reads the rest of the current log file, and then stops processing. The log rotation program then empties out the original log file.
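The parser side of that handoff can be as simple as a flag set from a signal handler. A rough sketch in C (the signal number and log path are just the choices in my setup, nothing ProFTPD-specific):

#include <signal.h>
#include <stdio.h>
#include <unistd.h>

static volatile sig_atomic_t interrupted = 0;

static void on_interrupt(int sig)
{
    (void)sig;
    interrupted = 1;
}

int main(void)
{
    char line[4096];
    FILE *log = fopen("/var/log/proftpd/extended.log", "r");
    if (log == NULL)
        return 1;

    signal(SIGUSR1, on_interrupt);

    while (!interrupted) {
        if (fgets(line, sizeof line, log) == NULL) {
            clearerr(log);  /* clear EOF so appended data can be read */
            sleep(1);       /* wait for new lines */
        }
        /* else: parse the line and store the record */
    }
    while (fgets(line, sizeof line, log) != NULL)
        ;                   /* drain what's left, then stop */
    fclose(log);
    return 0;
}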

Unknown Master List .dat file, issues retrieving information

I come to you completely stumped. I do some side work for a company that uses an old DOS-based program to input and retrieve data. This is a legacy piece of software, and they have since moved to either QuickBooks or Outlook for all of their address and billing related needs. However, there have been some changes made, and they work with this database fairly regularly. Since the computer this software is on runs XP (and none of the other computers in the office can run it), they're looking to phase this software out for when the computer inevitably explodes.
TLDR; I have an old .csv file (roughly two years old) that has a good chunk of information in it, but again, it's two years old. I have another file called ml.dat (I'm assuming masterlist.dat) that's in the same folder as this legacy software. I open it with Notepad and Excel and am presented with information like this:
S;Û).;PÃS;*p(â'a,µ,
Most of the file is even less recognizable than the chunk above within Notepad or Excel; it's a lot more of the unrecognized squares.
Some of the information is actually readable, however. I can, for example, read the occasional town name or person's name, but I'm unable to get all of the information since there's a lot missing. Perhaps the data isn't in Unicode or something? I have no idea. Any suggestions? I'm ultimately trying to take this information and toss it into either QuickBooks or Outlook.
Please help!
Thanks
Edit: I'm guessing the file might be encrypted, since .dat files are usually clear text? Any thoughts?
.DAT files can be anything; they are usually just application data. Since there is readable text, it is very unlikely that this file is encrypted. Instead, you are seeing ASCII representations of the bytes of the other (binary) content. http://www.asciitable.com/ Assuming single-byte values, the number 77 might appear in the file somewhere as the letter M.
Your options:
1. Search for some utility to load and translate the .dat file for that application.
2. Set up an appropriate DOS emulator so you can run this application on another box, or even in a virtual machine running FreeDOS or something.
3. Figure out the file format and then write a program to translate the data.
For #3, you can attach a debugger to the application to trace how the file is read and written. Alternatively, you can try to figure out record boundaries (if all the records are the same size, things are a little easier: the file size should then be a multiple of the record size). Then you can use known values to try to find field boundaries. If you can find (or reverse-compile) the source code, that could also give you insight into the file format.
#1 is your best bet, and #2 will buy you some time so that you don't need that original machine anymore. #3 would likely be something to outsource.
If you can find the source or file format, then you just recreate whatever data structure was dumped to the file and read the file into it.
To find which exe opens it, you can do something like this from the application's directory:
for %f in (*.exe) do find /c "ml.dat" %f
Assuming the original application was written in C, there would be code something like this to read the first record from the file:

#include <stdio.h>

/* The field names and types here are invented; the real layout of the
   record is exactly what you need to figure out. */
struct SecretData
{
    int first;
    double money;
    char city[10];
};

FILE* input;
struct SecretData secretdata;

input = fopen("ml.dat", "rb");
fread(&secretdata, sizeof(secretdata), 1, input);
fclose(input);

(The file would have been written with fwrite.) Basically you need to figure out the innards of the SecretData structure to be able to read the file. Keep in mind that compilers may insert padding between structure members, so the on-disk record size is not necessarily the sum of the field sizes.
There likely wasn't a separate utility used to make the file; dumping data to a file and reading it back is relatively easy in most languages.

Safely writing to and reading from the same file with multiple processes on Linux and Mac OS X

I have three processes designed to run constantly in both Linux and Mac OS X environments. One process (the Downloader) downloads and stores a local copy of a large XML file every 30 seconds. Two other processes (the Workers) use the stored XML file for input. Each Worker starts and runs at random times. Since the XML file is big, it takes a long time to download. The Workers also take a long time to read and parse it.
What is the safest way to set up the processes so the Downloader doesn't clobber the stored file while the Workers are trying to read it?
For Linux and Mac OS X machines that use inode-based file systems, use temporary files to store the data while it's being downloaded (and is in an incomplete state). Once the download is complete, move the temporary file into its final location with an atomic action.
For a little more detail, there are two main things to watch out for when one process (e.g. Downloader) writes a file that's actively read by other processes (e.g. Workers):
Make sure the Workers don't try to read the file before the Downloader has finished writing it.
Make sure the Downloader doesn't alter the file while the Workers are reading it.
Using temporary files accommodates both of these points.
For a more specific example, while the Downloader is actively pulling the XML file, have it write to a temporary location (e.g. 'data-storage.tmp') on the same device/disk* where the final file will be stored. Once the file is completely downloaded and written, have the Downloader move it to its final location (e.g. 'data-storage.xml') via an atomic rename, such as the mv command, which uses the rename(2) system call when source and destination are on the same filesystem.
* Note that the reason the temporary file needs to be on the same device as the final file location is that the rename can only be done atomically within a single filesystem; across devices, mv falls back to copying and deleting, which is not atomic.
This methodology ensures that while the file is being downloaded/written the Workers won't see it, since it's in the .tmp location. Because of the way renaming works with inodes, it also makes sure that any Worker that opened the file continues to see the old content even if a new version of the data-storage file is put in place.
The Downloader will point 'data-storage.xml' at a new inode number when it does the rename, but a Worker that already opened the file will continue to access the previous inode number, thereby continuing to work with the file in that state. At the same time, any Worker that opens 'data-storage.xml' afresh after the Downloader has done the rename will see the contents of the new inode, since that is now what the filename references in the file system. So two Workers can be reading from the same filename (data-storage.xml), but each will see a different (and complete) version of the contents, based on which inode the filename pointed to when the file was first opened.
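For illustration, here is a rough C sketch of the write-then-rename pattern, using the example filenames from above (error handling mostly trimmed):

#include <stdio.h>

int store_download(const char *data, size_t len)
{
    /* Write to a temp file on the same filesystem as the final name. */
    FILE *tmp = fopen("data-storage.tmp", "wb");
    if (tmp == NULL)
        return -1;
    fwrite(data, 1, len, tmp);
    fclose(tmp);   /* the data is complete but still invisible to the Workers */

    /* rename() points the final name at the new inode in one atomic step;
       Workers that already have the old file open keep reading the old,
       complete contents. */
    return rename("data-storage.tmp", "data-storage.xml");
}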
To see this in action, I created a simple set of example scripts that demonstrate this functionality on GitHub. They can also be used to test/verify that the temporary file solution works in your environment.
An important note is that it's the file system on the particular device that matters. If you are using a Linux or Mac machine but working with a FAT file system (for example, a USB thumb drive), this method won't work.

Following multiple log files efficiently

I'm intending to create a programme that can permanently follow a large, dynamic set of log files to copy their entries over to a database for easier near-realtime statistics. The log files are written by diverse daemons and applications, but their format is known so they can be parsed. Some of the daemons write logs into one file per day, like Apache with cronolog, which creates files like access.20100928. Those files appear with each new day and may disappear when they're gzipped away the next day.
The target platform is an Ubuntu Server, 64 bit.
What would be the best approach to efficiently reading those log files?
I could think of scripting languages like PHP that either open the files themselves and read new data or use system tools like tail -f to follow the logs, or other runtimes like Mono. Bash shell scripts probably aren't well suited for parsing the log lines and inserting them into a database server (MySQL), not to mention easy configuration of my app.
If my programme reads the log files itself, I'd think it should stat() each file once a second or so to get its size and open the file when it has grown. After reading the file (which should hopefully only return complete lines) it could call tell() to get the current position, and next time seek() directly to the saved position to continue reading. (These are C function names, but actually I wouldn't want to do that in C. And Mono/.NET or PHP offer similar functions as well.)
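In rough C terms (which, again, I wouldn't actually use), the loop I have in mind looks something like this, with the file name and interval as placeholders:

#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

void follow(const char *path)
{
    long saved_pos = 0;
    char line[4096];
    struct stat st;

    for (;;) {
        /* Only open the file when it has grown past the saved position. */
        if (stat(path, &st) == 0 && st.st_size > saved_pos) {
            FILE *f = fopen(path, "r");
            if (f != NULL) {
                fseek(f, saved_pos, SEEK_SET);   /* resume where we left off */
                while (fgets(line, sizeof line, f) != NULL)
                    ;                            /* parse line, insert into DB */
                saved_pos = ftell(f);            /* remember for the next round */
                fclose(f);
            }
        }
        sleep(1);
    }
}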
Is that constant stat()ing of the files and subsequent opening and closing a problem? How would tail -f do that? Can I keep the files open and be notified about new data with something like select()? Or does it always return at the end of the file?
In case I'm blocked in some kind of select() or external tail, I'd need to interrupt that every 1, 2 minutes to scan for new or deleted files that shall (no longer) be followed. Resuming with tail -f then is probably not very reliable. That should work better with my own saved file positions.
Could I use some kind of inotify (file system notification) for that?
If you want to know how tail -f works, why not look at the source? In a nutshell, you don't need to periodically interrupt or constantly stat() to scan for changes to files or directories. That's what inotify does.
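On Linux, a minimal inotify loop looks roughly like this (the watched directory is made up):

#include <stdio.h>
#include <sys/inotify.h>
#include <unistd.h>

int main(void)
{
    /* Buffer aligned for struct inotify_event, as recommended by inotify(7). */
    char buf[4096] __attribute__((aligned(__alignof__(struct inotify_event))));
    int fd = inotify_init();
    if (fd < 0)
        return 1;

    /* Watch the log directory for appended data, new files and deletions. */
    inotify_add_watch(fd, "/var/log/myapp", IN_MODIFY | IN_CREATE | IN_DELETE);

    for (;;) {
        ssize_t len = read(fd, buf, sizeof buf);  /* blocks until events arrive */
        for (char *p = buf; p < buf + len; ) {
            struct inotify_event *ev = (struct inotify_event *) p;
            if (ev->len > 0)
                printf("event mask 0x%x on %s\n", (unsigned) ev->mask, ev->name);
            p += sizeof(struct inotify_event) + ev->len;
        }
    }
}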

Get `df` to show updated information on FreeBSD

I recently ran out of disk space on a drive on a FreeBSD server. I truncated the file that was causing problems but I'm not seeing the change reflected when running df. When I run du -d0 on the partition it shows the correct value. Is there any way to force this information to be updated? What is causing the output here to be different?
In BSD a directory entry is simply one of many references to the underlying file data (called an inode). When a file is deleted with the rm(1) command, only the reference count is decreased. If the reference count is still positive (e.g. the file has other directory entries due to hard links), then the underlying file data is not removed.
Users new to BSD often don't realize that a program that has a file open is also holding a reference. This prevents the underlying file data from going away while the process is using it. When the process closes the file, the file space is marked as available if the reference count has fallen to zero. This scheme is used to avoid the Microsoft Windows-style issue where the system won't let you delete a file because some unspecified program still has it open.
An easy way to observe this is to do the following:
cp /bin/cat /tmp/cat-test
/tmp/cat-test &
rm /tmp/cat-test
Until the background process is terminated, the file space used by /tmp/cat-test will remain allocated and unavailable as reported by df(1), but the du(1) command will not be able to account for it as it no longer has a filename.
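To see which process is holding a deleted file open, fstat(1) from the FreeBSD base system can help, as can lsof from ports; for example:

fstat -f /tmp     # open files on the filesystem containing /tmp
lsof +L1          # open files with a link count of zero, i.e. deleted but still held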
Note that if the system should crash without the process closing the file, the file data will still be present but unreferenced; an fsck(8) run will be needed to recover the filesystem space.
Processes holding files open is one reason why the newsyslog(8) command sends signals to syslogd or other logging programs to inform them they should close and re-open their log files after it has rotated them.
Softupdates can also affect filesystem free space, as the actual inode space recovery can be deferred; the sync(8) command can be used to encourage this to happen sooner.
This probably centres on how you truncated the file. du and df report different things, as this post on unix.com explains. Just because space is not used does not necessarily mean that it's free...
Does df --sync work?
