I'm writing scheduling software; I have hundreds of datasets described in text files.
I'm using dirent.h with a loop to read the text files. For each file I build a schedule and append the results (CPU time, dataset name, tardiness, ...) to another text file that is common to all schedules.
I open/close the result file just once (fopen() before the loop, fclose() after the loop when all the schedules are done).
I have no problem on Windows 7, but under Linux the file seems to get closed by the system, as if there were some kind of timeout. Only 9-10 datasets get scheduled (~2 hours), and after that the program is stuck because it can't write to the result file :/
Has anyone run into this kind of trouble and found a solution?
Linux does not close the file automatically. Something is wrong in your code.
Try running your program using "strace" and identify where the close() happens.
strace -f -o 1.txt ./my_best_app_ever
Open the 1.txt file using a text editor (or less) and see what your app is doing.
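It also helps to check the return value of every write to the shared result file, so the first failing call and its errno become visible instead of the program silently getting stuck. A minimal sketch, with placeholder names for your own fields and functions:

#include <errno.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical helper: every write to the common result file is checked,
 * so the exact failure point (and errno) is reported immediately. */
int append_result(FILE *results, const char *dataset, double cpu_time, double tardiness)
{
    if (fprintf(results, "%s %f %f\n", dataset, cpu_time, tardiness) < 0) {
        fprintf(stderr, "write to result file failed: %s\n", strerror(errno));
        return -1;
    }
    if (fflush(results) != 0) {   /* flush so earlier results survive even if a later run hangs */
        fprintf(stderr, "flush of result file failed: %s\n", strerror(errno));
        return -1;
    }
    return 0;
}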
I have a program running on Linux which generates thousands of text files. I want these files to be packed into a single (compressed) file.
The compressed file will later be opened by a C program, which needs to access specific files inside that container, in a random fashion.
The whole thing is working as follows:
Linux program generates thousands of small files
zip -9 out.zip *
C program with libzip accessing specific files inside the .zip, depending on what the user requests. These reads are done in memory (no writing of decompressed files to disk).
Works great. However, it takes about 20 minutes for the compression to finish. Because this compression runs on a 40-core server, I have been experimenting with lbzip2, with excellent results in terms of both compression ratio and speed. I have also used zip -0 to pack all the .bz files into a single .zip container, which I assume is a better option than tar because of random access.
So my question is, how can I read .bz files compressed inside a .zip file? As far as I can tell, gzopen takes a file path as first argument.
You could just stick with your current zip format for random access. Run a separate zip command on each text file individually to turn them into many single-entry zip files. Launch all of those at once, and your 40 cores will be kept busy until done. Once done, use zipmerge to combine them all into a single zip file.
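For the in-memory reads described in the question, a rough libzip sketch might look like the following (the archive and entry names are placeholders, and zip_t/ZIP_RDONLY assume a 1.x libzip). Note that if the entries are themselves bzip2 files stored with zip -0, the returned buffer still holds bzip2 data and would need a second in-memory decompression step, e.g. with libbz2's BZ2_bzBuffToBuffDecompress().

#include <stdio.h>
#include <stdlib.h>
#include <zip.h>

/* Sketch: read one entry from a zip archive entirely into memory with libzip.
 * Error handling is minimal; caller frees the returned buffer. */
char *read_entry(const char *archive_path, const char *entry_name, size_t *out_len)
{
    int err = 0;
    zip_t *za = zip_open(archive_path, ZIP_RDONLY, &err);
    if (!za) return NULL;

    zip_stat_t st;
    if (zip_stat(za, entry_name, 0, &st) != 0) { zip_close(za); return NULL; }

    char *buf = malloc(st.size);
    zip_file_t *zf = zip_fopen(za, entry_name, 0);
    if (!buf || !zf || zip_fread(zf, buf, st.size) != (zip_int64_t)st.size) {
        free(buf);
        buf = NULL;
    }
    if (zf) zip_fclose(zf);
    zip_close(za);
    if (buf && out_len) *out_len = st.size;
    return buf;   /* NULL on error */
}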
This is on Windows 7.
I have a batch script that executes a C++ program and redirects all of its output to a file with ">".
The program takes input from servers and displays everything. We need all this information, so we log the output to a file. But after a short while the program stops writing to the file, even though it continues running.
The file size also shows 0 bytes (does the OS not update it until the file is closed?), yet we can see the content of the file with Notepad++; it just does not seem to update any longer.
The file is about 250,000 lines long, and our data simply gets cut off at the end. For example, where a table of data should list 123 567 436 975, we only see 123 567 43. The last line isn't even finished.
There is a lot to write and a lot of network traffic. Does the program simply give up on output when there is too much data? Is there a way around this?
Try disabling buffering: setbuf(stdout, NULL);
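A minimal sketch of where that call belongs (at the very top of main(), before any output is produced); alternatively you can keep buffering and call fflush(stdout) after the writes you care about:

#include <stdio.h>

int main(void)
{
    /* Make stdout unbuffered so every write reaches the redirected file immediately. */
    setbuf(stdout, NULL);
    /* Line buffering is often a good compromise instead:
     * setvbuf(stdout, NULL, _IOLBF, 0); */

    printf("this line is written to the log right away\n");
    return 0;
}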
Anyway, in newer versions of Windows, when a file is being created and data is being written (the classic >file scenario), the growth of the file is not always visible.
In this case, the dir command shows a 0-byte file, or stops showing increasing values.
Try reading the file with type file >nul and then dir file. This "should" refresh the file size information, but it isn't needed: the file is growing, it just isn't being shown.
After doing tons of research and not being able to find a solution to my problem, I decided to post here on Stack Overflow.
Well, my problem is kind of unusual, so I guess that's why I wasn't able to find an answer:
I have a program that is recording stuff to a file. Then I have another one that is responsible for transferring that file. Finally I have a third one that gets the file and processes it.
My problem is:
The file transfer program needs to send the file while it's still being recorded. The problem is that when the file transfer program reaches end of file, that doesn't mean the file is actually complete, since it is still being recorded.
It would be nice to have a way to check whether the recorder still has the file open or has already closed it, so I can judge whether the end of file really is the end of the file, or whether there simply is no further data to read yet.
Hope you can help me out with this one. Maybe you have another idea on how to solve this problem.
Thank you in advance.
GeKod
Simply put, you can't without using filesystem notification mechanisms; Windows, Linux and OS X all have flavors of this. I forget how Windows does it off the top of my head, but Linux has 'inotify' and OS X has 'kqueue'.
The easy way to handle this is to record to a temp file and, when the recording is done, move the file into the 'ready-to-transfer' directory. If you do this so that the files stay on the same filesystem, the move will be atomic and instant, ensuring that any time your transfer utility 'sees' a new file, it'll be wholly formed and ready to go.
Or just give your temp files no extension, and when recording is done rename the file to an extension that the transfer agent is polling for.
Have you considered using a stream interface between the recorder program and the one that grabs the recorded data/file? If you have access to a stream interface (say, an OS/stack service) that also provides a reliable end-of-stream signal/primitive, you could consider using it to replace the file interface.
There are no functions/libraries in C to do this directly. But a simple alternative is to rename the file once each stage is over. For example, the recorder can open the file as file.record and, once done recording, rename it to file.transfer; the transfer program looks for file.transfer and, once the transfer is done, renames it to file.read; the reader reads that and finally renames it to file.done!
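A minimal sketch of that hand-off, using placeholder file names; rename() is atomic as long as source and destination are on the same filesystem, so the next program never sees a half-written file under the name it is polling for:

#include <stdio.h>

/* Hypothetical recorder side of the rename protocol described above. */
int finish_recording(void)
{
    FILE *f = fopen("data.record", "w");
    if (!f) return -1;

    fprintf(f, "recorded data...\n");   /* placeholder for the real recording */
    fclose(f);

    /* Only now does the name the transfer program polls for appear. */
    return rename("data.record", "data.transfer");
}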
You can check whether the file is open as follows:
FILE_NAME="filename"
FILE_OPEN=$(lsof | grep "$FILE_NAME")
if [ -z "$FILE_OPEN" ]; then
    echo "File NOT open"
else
    echo "File open"
fi
Refer to http://linux.about.com/library/cmd/blcmdl8_lsof.htm
I think an advisory lock will help. If one process tries to use a file that another program is working on, it will block or get an error; you can still force access, but the result is unpredictable. To maintain consistency, all processes that want to access the file should obey the advisory locking convention. I think that will work.
When the file is closed, the lock is released too, and other processes can then try to acquire it.
I intend to create a program that can continuously follow a large, dynamic set of log files and copy their entries over to a database for easier near-realtime statistics. The log files are written by diverse daemons and applications, but their format is known, so they can be parsed. Some of the daemons write logs into one file per day, like Apache's cronolog, which creates files like access.20100928. Those files appear with each new day and may disappear when they're gzipped away the next day.
The target platform is an Ubuntu Server, 64 bit.
What would be the best approach to efficiently reading those log files?
I can think of scripting languages like PHP that either open the files themselves and read new data or use system tools like tail -f to follow the logs, or other runtimes like Mono. Bash shell scripts probably aren't well suited for parsing the log lines and inserting them into a database server (MySQL), not to mention easy configuration of my app.
If my program reads the log files itself, I'd think it should stat() each file once a second or so to get its size and open the file when it has grown. After reading the file (which should hopefully only return complete lines), it could call tell() to get the current position and next time seek() directly to the saved position to continue reading. (These are C function names, but I actually wouldn't want to do this in C. Mono/.NET and PHP offer similar functions as well.)
Is that constant stat()ing of the files and subsequent opening and closing a problem? How does tail -f do it? Can I keep the files open and be notified about new data with something like select()? Or does it always return at the end of the file?
If I'm blocked in some kind of select() or an external tail, I'd need to interrupt it every minute or two to scan for new or deleted files that should (no longer) be followed. Resuming with tail -f is then probably not very reliable. That should work better with my own saved file positions.
Could I use some kind of inotify (file system notification) for that?
If you want to know how tail -f works, why not look at the source? In a nutshell, you don't need to periodically interrupt or constantly stat() to scan for changes to files or directories. That's what inotify does.
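A bare-bones sketch of the inotify approach, with a placeholder log directory; a real follower would keep its saved per-file offsets and read the new bytes whenever IN_MODIFY arrives:

#include <stdio.h>
#include <unistd.h>
#include <sys/inotify.h>

int main(void)
{
    /* Buffer aligned for struct inotify_event, as recommended by inotify(7). */
    char buf[4096] __attribute__((aligned(8)));

    int fd = inotify_init();
    if (fd < 0) { perror("inotify_init"); return 1; }

    /* New daily files, appends, and rotated-away files all generate events.
     * "/var/log/myapp" is a placeholder for the real log directory. */
    if (inotify_add_watch(fd, "/var/log/myapp",
                          IN_CREATE | IN_MODIFY | IN_DELETE | IN_MOVED_FROM) < 0) {
        perror("inotify_add_watch");
        return 1;
    }

    for (;;) {
        ssize_t len = read(fd, buf, sizeof buf);   /* blocks until something happens */
        if (len <= 0) break;
        for (char *p = buf; p < buf + len; ) {
            struct inotify_event *ev = (struct inotify_event *)p;
            printf("event 0x%x on %s\n", (unsigned)ev->mask,
                   ev->len ? ev->name : "(watched dir)");
            p += sizeof(*ev) + ev->len;
        }
    }
    close(fd);
    return 0;
}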
How can I tell if a file is open in C? I think the more technical question would be: how can I retrieve the number of references to an existing file and use that information to determine whether it is safe to open?
The idea I am implementing is a file queue. You dump some files, my code processes the files. I don't want to start processing until the producer closes the file descriptor.
Everything is being done in linux.
Thanks,
Chenz
Digging out that info is a lot of work (you'd have to search through /proc/*/fd).
You'd be better off with any of:
Save to a temp location, then rename. Write your files to a temporary filename or directory and, when you're done writing, rename them into the directory where your app reads them. Renaming is atomic, so when the file is present you know it's safe to read.
Or a variant of the above: when you're done writing the file foo, create an empty file named foo.finished. Look for the presence of *.finished when processing files.
Lock the files while writing; that way reading the file will just block until the writer unlocks it. See the flock()/lockf() functions. They're advisory locks, though, so both the reader and the writer have to take and honor the locks (see the sketch after this list).
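A minimal sketch of that last option from the reader's side, assuming the writer takes LOCK_EX on the same file for as long as it is writing:

#include <stdio.h>
#include <sys/file.h>

/* Hypothetical consumer: block on a shared lock until the writer releases
 * its exclusive flock().  Advisory only; both sides must cooperate. */
FILE *open_when_writer_done(const char *path)
{
    FILE *f = fopen(path, "r");
    if (!f) return NULL;

    /* Blocks until the writer's LOCK_EX is released (it is dropped automatically on close). */
    if (flock(fileno(f), LOCK_SH) != 0) {
        fclose(f);
        return NULL;
    }
    return f;
}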
I don't think there is any way to do this in pure C (it wouldn't be cross platform).
If you know what files you are using ahead of time, you can use inotify to be notified when they open.
Use the lsof command. (List Open Files).
C has facilities for handling files, but not much for getting information on them. In portable C, about the only thing you can do is try to open the file in the desired way and see if it works.
Generally you can't do that, for various reasons (e.g. you cannot tell whether the file is opened by another user).
If you can control the processes that open the file, you can try to avoid collisions by locking the file (there are many ways to do that on Linux).
If you are in control of both producer and consumer, you could use lockf() or flock() to lock the file.
There is an lsof command on most distros, which shows all currently open files; you can of course grep its output if your files are in the same directory or have some recognizable name pattern.