I know how to do basic file operations in C, but what I'd like to do, if it's possible, is somehow create a variable that represents a live, running file (for example, an access_log from Apache that is updated every second). Then I want to be able to read the last line entered into the file as it happens, regardless of whether any other process currently has the file open.
I'm thinking of code like this:
int main(){
    FILE *live = fattachto("/path/to/apaches/access_log");
    long lastupdated = live->lastwrite();
    while(1){
        if(live->lastwrite() != lastupdated){ printf("File was just updated now\n"); }
        sleep(1);
    }
}
Yes, I did put a sleep in my code because I want to make sure it doesn't overheat the CPU.
The code above won't compile as-is; I'm looking for the correct set of functions to use to produce this end result.
Any idea?
Why don't you just set line buffering or use a small read buffer, once you've seeked to the end of the log file? Then simply do blocking reads on the lines from the file as they're added?
Then when the read returns, you can signal, set flags or initiate processing, on any interesting input.
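A minimal sketch of that seek-to-the-end-then-read approach (plain C plus POSIX sleep(); note that reads on a regular file return EOF immediately rather than block, so the sketch clears the EOF flag and retries):

#include <stdio.h>
#include <unistd.h>   /* sleep() */

int main(void) {
    FILE *live = fopen("/path/to/apaches/access_log", "r");
    if (!live) { perror("fopen"); return 1; }

    fseek(live, 0, SEEK_END);                /* start at the current end of the log */

    char line[4096];
    while (1) {
        if (fgets(line, sizeof line, live)) {
            printf("New line: %s", line);    /* a freshly appended line */
        } else {
            clearerr(live);                  /* clear EOF so later fgets calls see new data */
            sleep(1);                        /* avoid spinning the CPU */
        }
    }
}

Partial lines are possible if the reader catches the writer mid-line, so a robust version would buffer input until it sees a newline.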
I'm trying to add a log file to my code, but I have some restrictions and don't know how to work around them.
My code runs for several days and loops every 1 minute. I want to write in the log file at the end of every loop, so the log file will have thousands of lines. So, my two main points about this are:
I would like to be able to open and close the file at every loop (after I finish the operations, I open the file, write what I want and then close it). This way I can open the log file anytime to check how the code is going.
Each line of the log file will have a different length depending on what happened in the loop. Since the file will have thousands of lines, I would like to be able to go to the next line without having to read all the previously existing lines.
I've tried to use the fseek function like this:
fseek(fp,-1,SEEK_END);
but had no success (I ended up writing over the already existing line).
It's important to say that I'm writing this code on Linux, but I would like it to be portable.
Everything I found in other questions shows people reading through the existing lines, and I don't need to read or store them.
I just want to open the file and write a new line. Does anyone know how I can do this?
Open the file in append mode ("a" in fopen). That way all writes will go to the end of the file; no seeking required.
Also, there is no point in opening/closing the same file repeatedly. Just open the file once, before the loop starts. Keeping a file open does not prevent others from reading it. If you're concerned with delays caused by buffering, you can just fflush() the handle after every line of output.
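A minimal sketch of that suggestion (the file name and loop body are placeholders; the real loop would run once a minute for days):

#include <stdio.h>
#include <unistd.h>   /* sleep() */

int main(void) {
    FILE *fp = fopen("run.log", "a");   /* append mode: every write goes to the end */
    if (!fp) { perror("fopen"); return 1; }

    for (int i = 0; i < 10; i++) {      /* stands in for the long-running loop */
        /* ... do the real work of one iteration here ... */
        fprintf(fp, "loop %d finished\n", i);
        fflush(fp);                     /* make the line visible to other readers immediately */
        sleep(1);                       /* stands in for the one-minute wait */
    }

    fclose(fp);
    return 0;
}

With this structure the file can be opened in another viewer at any time, and fflush() ensures each finished line is already visible from the reader's point of view.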
I am developing an application for WinCE 7. The application includes a serial COM port and file I/O operations. There is an embedded device connected to the serial port. I need to check the status of the inputs on the device and save their details in a file. Let's say input 1 is high; then I need to write "Input 1 HIGH" to the serial port and save the same in the file.
To write data to the file I am using the fopen and fprintf functions. The code looks like this:
main()
{
    // some code to initialize serial port
    FILE *fp;
    fp = fopen("Logs.txt", "w+");            //-------> create a file named Logs.txt
    while(1)
    {
        if(Input1 == TRUE)
        {
            serialPort.Send("Input 1 HIGH");
            fprintf(fp, "%s", "Input 1 HIGH");   //-------> saving data in file
        }
        if(Input2 == TRUE)
        {
            serialPort.Send("Input 2 HIGH");
            fprintf(fp, "%s", "Input 2 HIGH");   //-------> saving data in file
        }
        // same goes for the rest of the inputs
    }
    fclose(fp);                              //----------> closing the file
}
After writing data to the file with fprintf, we need to call fclose() to close the file. But since I have to continuously monitor the input status, I use while(1), so control never reaches fclose(fp). Thus the file is never closed and it becomes corrupted; when I open it to see the saved data, I get an error.
How can I properly use fclose() and fprintf() to write data to the file?
There is nothing wrong with opening the log file and closing it each time you want to log something. It might cause problems if your while loop runs at a very high frequency, since it may slow down your application. You should also think about a way to shut your application down cleanly. As I understand it, you currently copy the log file while your application is running; this can cause all sorts of problems, among them that the file appears corrupted.
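As a minimal sketch of that open-append-close-per-write pattern (the Logs.txt name comes from the question; log_line is a hypothetical helper, not code from the question):

#include <stdio.h>

/* Open, append one line, close. */
void log_line(const char *msg)
{
    FILE *fp = fopen("Logs.txt", "a");   /* append mode, opened just for this write */
    if (fp == NULL)
        return;                          /* could not open; skip this entry */
    fprintf(fp, "%s\n", msg);
    fclose(fp);                          /* the file is closed (and flushed) after every write */
}

int main(void)
{
    log_line("Input 1 HIGH");            /* would be called from the monitoring loop */
    return 0;
}

Because the file is closed between writes, copying or opening it from outside never catches it half-written.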
By the way, you can also use the Windows CE native logging API, CeLog; see here:
https://msdn.microsoft.com/en-us/library/ee479601.aspx
https://msdn.microsoft.com/en-us/library/ee488587.aspx
When would your loop start, and when would it end? I mean, not programmatically, but conceptually? The simplest thing seems to be to modify the while condition; as it stands, the code is stuck in the while loop forever.
Let's assume that your program runs for as long as your device remains connected. Then you need to monitor some message or event that signifies that the device has disconnected or stopped (or whatever is appropriate). This event or situation can be used as a flag (e.g., TRUE while the device is connected and active, FALSE when it is disconnected or inactive), and that flag can be the condition for opening, writing to, and closing the file.
Furthermore, instead of a loop, you could use a timer or a periodic event to poll your device inputs. You can set the polling rate as high as you like, depending on the accuracy of your timer. This ensures that your code doesn't get stuck in a loop: each time the tick event fires, the inputs are read, and you can check the device's connection state before calling the polling function.
The flow would go something like this:
Start->Device connected, flag TRUE->fOpen->Start timer
=>timer tick->flag == TRUE?->poll device->write to file=> (repeats for each tick unless the below scenario happens.)
Device disconnected, flag FALSE
=>timer tick->flag == TRUE? NO->fClose->Stop timer->Stop
This would assume that you had some way of detecting the device connect/active vs disconnect/inactive situations. Maybe the device SDK has some form of message or event that signifies this.
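A minimal sketch of that flag-driven structure (device_connected() is a stub standing in for whatever connect/disconnect notification the device SDK provides, and the fprintf stands in for the real polling and logging):

#include <stdio.h>
#include <windows.h>   /* Sleep() on Windows / Windows CE */

/* Stub: replace with the SDK's real connected/disconnected check. */
static int device_connected(void)
{
    static int polls_left = 5;
    return polls_left-- > 0;     /* pretend the device goes away after 5 polls */
}

int main(void)
{
    FILE *fp = fopen("Logs.txt", "w+");
    if (fp == NULL)
        return 1;

    while (device_connected())           /* flag condition instead of while(1) */
    {
        fprintf(fp, "Input 1 HIGH\n");   /* stands in for polling the inputs and logging */
        fflush(fp);                      /* keep the file readable even before fclose */
        Sleep(1000);                     /* poll once per second; acts like the timer tick */
    }

    fclose(fp);   /* now reachable: runs when the device disconnects */
    return 0;
}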
I have a Fortran code that updates the content of a text file in real time by appending new measurements at the very bottom of it. This text file is used (read only) both by a fluid dynamics code (which runs continuously in real time) and by another executable built from MATLAB code (which does the plotting). I want to add a line to the Fortran code that says: update the text file ONLY IF it is not opened by any other program. I tried using INQUIRE:
do
    INQUIRE(FILE = filename, OPENED = ISopen)
    if (.not.ISopen) then
        ! ADD NEW MEASUREMENTS HERE
        exit
    endif
enddo
and before running this Fortran program I opened the file with TextPad. However, the variable ISopen is false, so I guess INQUIRE only detects whether the file is open within the Fortran program itself. In fact, if I add at the beginning of the above snippet:
OPEN (33,FILE = filename)
then ISopen is true. I then created an executable from a Fortran code containing only:
OPEN (33,FILE = filename)
pause
CLOSE(33)
and I ran it and left it paused. I then ran the first code I posted above, and ISopen is still false. Any idea how to test from within Fortran whether a file is open in any other program? My operating system is Windows 7.
thanks
In the end I solved it this way. I could not find a way to know whether the file is open, and even if there were a way, some other program could still open it right between the moment I check and the moment I modify it. Therefore, I just create a temporary copy of the file, modify that temporary copy, and then move it back, overwriting the original. The latter operation only succeeds if the file is not locked (i.e. no other program has it open to read the data), so I keep retrying the move until it succeeds. I tested it in many situations and it works. The code is:
USE IFPORT
IMPLICIT NONE
character*256 :: DOScall
logical :: keepTRYING
integer :: resul

DOScall = 'copy D:\myfile.txt D:\myfile_TMP.txt'   ! create temporary copy
resul = SYSTEM(DOScall)

open(15, file = 'D:\myfile_TMP.txt', form = 'formatted')
! .... perform here some writing operations on myfile_TMP ........
close(15)

do
    resul = SYSTEM('move D:\myfile_TMP.txt D:\myfile.txt')
    if (resul == 0) then
        exit
    else
        pause(10)
    endif
enddo
Note that this works perfectly fine with multiple programs reading and one program writing the file (as in my case, where I have two programs reading and only one writing). If multiple programs need to write to the same file, I guess some other synchronization technique has to be used.
I do not think it is possible to check whether a file is open in an external process. Some programming languages let you check whether the file is locked, but that is still far from telling you whether it is open: both programs must acquire and release a system lock for that to really work. To my knowledge, standard Fortran does not have that feature directly, but you can use semaphores from C through the interoperability features.
Most user applications (mostly editors), before updating a file, check whether the content on disk has changed since they captured a copy, and alert the user; they also do this when they lose and regain focus. If you restrict your goal to updating only when the content has not changed since you opened it, you can do the same, or simply open, append, and close any time you want to add a new entry. A good editor on the other side will notify the user that the content has been changed by another process.
An alternative is to simulate a lock yourself and buffer the data in Fortran. By buffering I mean collecting some new data (say 100, 1000, or whatever number is convenient) and sending it to the file all at once. For each update, you open, update, and close the file.
Your lock can be made of two simple files (empty ones, for example), one created by the reader (MATLAB) and the other by the writer (the Fortran program). Let's name them reading.ongoing for the reader and writing.ongoing for the writer.
On the Fortran side, you do the following any time you have collected enough data to write (a sketch of this handshake follows after the description below):
1. Check for the existence of reading.ongoing (using the INQUIRE function); proceed only if it does not exist.
2. Create writing.ongoing.
3. Check for the existence of reading.ongoing again; if it exists, delete writing.ongoing and go back to step 1. If it does not exist, proceed.
4. Open the data file, write the data, and close it.
5. Delete writing.ongoing.
On the MATLAB side, do a similar thing, inverting the roles of reading.ongoing and writing.ongoing.
In an exceptional race condition you could be blocked because both sides try at the same time. In that case, you could modify step 1 on the MATLAB side to make it wait a few milliseconds before proceeding. This will get you going as long as neither program gets killed between step 1 and step 5.
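For concreteness, here is a minimal sketch of that handshake written in C (the same logic maps to Fortran INQUIRE/OPEN/CLOSE calls); the lock-file names come from above, myfile.txt stands for the shared data file, and sleep() would be Sleep() in a pure Windows build:

#include <stdio.h>
#include <unistd.h>   /* sleep(); use Sleep() from <windows.h> on Windows */

static int file_exists(const char *path)
{
    FILE *f = fopen(path, "r");
    if (f == NULL)
        return 0;
    fclose(f);
    return 1;
}

/* Append one batch of buffered data, following the two-lock-file handshake. */
static void write_batch(const char *data)
{
    for (;;) {
        if (file_exists("reading.ongoing")) {   /* step 1: reader is busy, wait */
            sleep(1);
            continue;
        }
        FILE *lock = fopen("writing.ongoing", "w");   /* step 2: announce the write */
        if (lock != NULL)
            fclose(lock);
        if (file_exists("reading.ongoing")) {   /* step 3: reader got in first */
            remove("writing.ongoing");
            continue;                           /* back to step 1 */
        }
        FILE *fp = fopen("myfile.txt", "a");    /* step 4: write the data */
        if (fp != NULL) {
            fputs(data, fp);
            fclose(fp);
        }
        remove("writing.ongoing");              /* step 5: release the lock */
        return;
    }
}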
You can also use semaphores through the C interoperability features in Fortran; see the following post for an example. You could do something similar on the MATLAB side, but I do not have an example of that. This will lower your headache and let the system manage the lock for you.
I have a C program which takes a file as an argument, cleans up the file, and writes the cleansed data to a new temp file. It then accepts some stdin, cleans it up, and sends it to stdout.
I have a second program which performs operations on this temp file and on the stdin again.
./file_cleanse <file1.txt> | ./file_operation <temp.txt>
I either get no stdout or nonsensical stdout from ./file_operation, and I believe this is because it is reading from a file that is still being written to or doesn't exist yet.
Is there any way to make ./file_operation wait until ./file_cleanse has returned a value in bash?
What about:
./file_cleanse <file1.txt> > /tmp/temporaryFile
./file_operation <temp.txt> < /tmp/temporaryFile
As I understand the question, file_operation needs to read the standard output of file_cleanse after processing the temporary file, but it should not process the temporary file until file_cleanse has written some data to its standard output (the standard input of file_operation).
If that's correct, then a simple synchronization is for file_operation to read (a byte or any convenient larger amount of data) from its standard input. When this is successful, the file_cleanse must have finished with the temporary file; file_operation can therefore process the temporary file, and then read the rest of its standard input and process that appropriately.
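A minimal sketch of what the file_operation side of that synchronization might look like (temp.txt and the processing steps are placeholders):

#include <stdio.h>

int main(void)
{
    /* Block until file_cleanse has produced at least one byte on our stdin;
       by then it has finished writing the temporary file. */
    int first = getchar();
    if (first == EOF)
        return 1;                      /* upstream produced no output at all */

    FILE *tmp = fopen("temp.txt", "r");   /* placeholder temp file name */
    if (tmp != NULL) {
        /* ... process the temporary file here ... */
        fclose(tmp);
    }

    /* Now handle the rest of stdin (including the first byte read above). */
    putchar(first);
    int c;
    while ((c = getchar()) != EOF)
        putchar(c);                    /* placeholder: echo the cleaned stdin */

    return 0;
}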
I'm writing an inotify watcher in C for a Minecraft server. Basically, it watches server.log, gets the latest line, parses it, and if it matches a regex, performs some actions.
The program works fine when I test it with "echo string matching the regex >> server.log"; it parses the line and does what it should. However, when the string is written to the file by the Minecraft server itself, nothing happens until I shut down the server or (sometimes) log out.
I would post code, but I'm wondering whether it has something to do with ext4 flushing data to disk, or something along those lines: a filesystem problem. It would be odd if that were the case, though, because "tail -f server.log" updates whenever the file does.
Solved my own problem. It turned out the server was writing to the log file faster than the watcher could read from it; so the watcher was getting out of sync.
I fixed it by adding a check after it processes the event saying "if the number of lines currently in the log file is more than the recorded length of the log, reprocess the file until the two are equal."
Thanks for your help!
Presumably that is because you are watching for IN_CLOSE events, which may not occur until the server shuts down (and closes the log file handle). See inotify(7) for the valid mask parameters for the inotify_add_watch() call; I expect you'll want IN_MODIFY.
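A minimal sketch of such a watch set up with IN_MODIFY (the log path is a placeholder; a real watcher would read and parse the newly appended lines when an event arrives):

#include <stdio.h>
#include <sys/inotify.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
    int fd = inotify_init();
    if (fd < 0) { perror("inotify_init"); return 1; }

    /* Fire an event every time the file is written to, not only when it is closed. */
    int wd = inotify_add_watch(fd, "server.log", IN_MODIFY);
    if (wd < 0) { perror("inotify_add_watch"); return 1; }

    char buf[4096];
    for (;;) {
        ssize_t len = read(fd, buf, sizeof buf);   /* blocks until an event arrives */
        if (len <= 0)
            break;
        /* buf now holds one or more struct inotify_event records; for a single-file
           watch on IN_MODIFY it is enough to know the file changed, so this is where
           the watcher would read the newly appended line(s) from server.log. */
        printf("server.log was modified\n");
    }

    close(fd);
    return 0;
}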
Your theory is more than likely correct: the log output is being buffered, and the log writer never flushes that buffer, so everything stays in the buffer until the file is closed or the buffer fills up. A fast way to test this is to run the server to the point where you know it should have written events to the log, then kill it forcibly so it cannot close the handle; if the log is empty, it is definitely the buffer. If you can get hold of the file handle/descriptor in the writer, you can use setbuf to remove buffering, at the cost of performance.
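If the writer were a C program using stdio (the Minecraft server is not, so this is only an illustration of the setbuf suggestion), turning buffering off for the log stream would look like this:

#include <stdio.h>

int main(void)
{
    FILE *log = fopen("server.log", "a");
    if (log == NULL)
        return 1;

    setbuf(log, NULL);   /* unbuffered: must be called before any I/O on the stream */
    /* Alternatively, line buffering flushes on every newline:
       setvbuf(log, NULL, _IOLBF, 0); */

    fprintf(log, "example log line\n");   /* visible to tail -f or inotify right away */
    fclose(log);
    return 0;
}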