Write to and replace a file multiple times in Fortran - loops

I'm trying to run a code that takes a particularly long time. In order for it to complete, I've separated the time-step loops as follows, so that the data can be dumped and then re-read for the next loop:
do 10 n1 = 1, 10
   OPEN(unit=11, file='Temperature', status='replace')
   if (n1.eq.1) then
      (set initial conditions)
   elseif (n1.gt.1) then
      READ(11,*) (reads the T values from 11)
   endif
   do 20 n = 1, 10000
      (all the calculations for new T values)
      WRITE(11,*) (overwrites the T values in 11 - the file isn't empty to begin with)
20 continue
10 continue
My issue is that this only works for two n1 time steps: after it has replaced file 11 once, it no longer replaces it and just keeps reusing the values already in there.
Is there something wrong with the open statement? Is there a way to replace file 11 more than once in the same code?

Your program will execute the open statement 10 times, each time with status = 'replace'. On the first go round presumably the file does not exist so the open statement causes the creation of a new, empty, file. On the second go round the file does exist so the open statement causes the file to be deleted and a new, empty, file of the same name to be created. Any attempt to read from that file is likely to cause issues.
I would lift the initial file opening out of the loop and restructure the code along these lines:
open(unit=11, file='Temperature', status='replace')
(set initial conditions)
(write first data set into file)
do n1 = 2, 10
   rewind(11)
   read(11,*) (reads the T values from 11)
   ! do stuff
   close(11)   ! Not strictly necessary but aids comprehension of intent
   ! Now re-open the file and replace it with a new, empty one
   open(unit=11, file='Temperature', status='replace')
   do n = 1, 10000
      (all the calculations for new T values)
      write(11,*) (writes the new T values to 11, which is empty after the replace)
   end do
end do
But there are any number of other ways to restructure the code; choose one that suits you.
As an aside, passing data from one iteration to the next by writing and re-reading a file is likely to be very slow; I'd only use it for checkpointing to support restarting a failed execution.

Related

Is it possible to scan a file in reverse from the last line in C?

I am running a simulation and want to add an option to continue evolving from the last iteration of a previous run. In order to do so, I need to read the last 2 lines of data from a file. Is there any way to do this without using fscanf to scan from the beginning of the file?
Have the previous run record, in another file, the ftell() values of the last few lines, along with some metadata about the data file (e.g. its date-time-modified).
A subsequent run can use this info to begin where the prior run left off.
If this side file is missing or does not agree with the current state of things, walk the files with fgetc(), fgets(), etc. to find where to begin again.
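As a rough illustration, here is a minimal C sketch of that side-file idea. The file names "data.txt" and "data.txt.pos" are made up for the example, and error handling is reduced to "fall back to scanning the whole file" when the side file is missing or inconsistent:

#include <stdio.h>

/* Writer side: after appending a line, record the ftell() offsets of the
   start of the last two lines in a small side file. */
void record_offsets(long second_last_start, long last_start)
{
    FILE *side = fopen("data.txt.pos", "w");
    if (side) {
        fprintf(side, "%ld %ld\n", second_last_start, last_start);
        fclose(side);
    }
}

/* Reader side: jump straight to the second-to-last line using the side file.
   Returns 0 on success, -1 if the caller should fall back to a full scan. */
int read_last_two(char *line1, char *line2, int n)
{
    long off1 = -1, off2 = -1;     /* off2, the start of the last line, can double as a consistency check */
    FILE *side = fopen("data.txt.pos", "r");
    FILE *data = fopen("data.txt", "r");
    int ok = 0;

    if (side && data && fscanf(side, "%ld %ld", &off1, &off2) == 2
        && fseek(data, off1, SEEK_SET) == 0      /* seek to second-to-last line */
        && fgets(line1, n, data) != NULL
        && fgets(line2, n, data) != NULL)
        ok = 1;

    if (side) fclose(side);
    if (data) fclose(data);
    return ok ? 0 : -1;
}

A real version would also store and check the data file's modification time (e.g. from stat()) so that any mismatch triggers the fgetc()/fgets() fallback described above.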

External Merge Sort

I am trying to implement external merge sort for my DBMS project. I have 3 files, each with 20 pages, and my buffer size is 20 pages.
I have now sorted each of these, so all three files of 20 pages are sorted. While merging, I need to bring in 6 pages of each file (6x3 = 18 pages) and 1 page to write the sorted output, and this has to be done 4 times to get the whole file completely sorted.
But I am finding it difficult to merge all these files. What are the steps to perform a merge of 3 files while making sure that every page fits in the buffer? Is a recursive function needed?
All the files' contents are stored in an array in a[fileno][pageno] format,
e.g. a[1][20] = 5 means page 20 of file 1 holds the value 5.
Assume each page of a file holds a single integer.
Assuming you do a 3-way merge, that's 3 inputs and 1 output, and it only has to be done once. Divide the buffer into 4 parts of 5 pages each. Start by reading the first 5 pages of each of the 3 files, each into its own 5-page buffer. Start a 3-way merge by comparing the first records in each of the 3 buffers and moving the smallest to the output buffer. When the output buffer is filled (5 pages), write it out and continue. When an input buffer is emptied, read in the next 5 pages for that file.
When the end of one of the three input files is reached, the code switches to a 2-way merge. To simplify the code, copy the file-related parameters into the parameters for file 0 and file 1. If file 2 goes empty first, nothing needs to be done. If file 1 goes empty first, copy file 2's parameters to file 1. If file 0 goes empty first, copy file 1's parameters to file 0, then file 2's parameters to file 1. Then do the 2-way merge using file 0 and file 1.
When the end of one of the two input files is reached, the code switches to just copying the remaining file. Again, if file 0 goes empty first, copy file 1's parameters to file 0, so that the copy code always works with file 0.
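The question doesn't say which language the project uses, so here is a rough C sketch of this buffered 3-way merge, under the question's simplifying assumption that each page holds a single integer. The run file names, the 5-page buffer size and the one-int-per-line file format are all made up for illustration; the "copy the last live run's parameters down into the emptied slot" step mirrors the description above:

#include <stdio.h>
#include <limits.h>

#define BUF_PAGES 5                     /* 5 "pages" per input buffer and for the output */

typedef struct {
    FILE *fp;
    int   buf[BUF_PAGES];
    int   pos, count;                   /* next record to consume / records buffered */
} Run;

/* Refill a run's buffer from its file; returns 0 once the run is exhausted. */
static int refill(Run *r)
{
    r->pos = r->count = 0;
    while (r->count < BUF_PAGES && fscanf(r->fp, "%d", &r->buf[r->count]) == 1)
        r->count++;
    return r->count > 0;
}

/* Peek at the run's current record, refilling the buffer when it runs dry. */
static int current(Run *r, int *val)
{
    if (r->pos == r->count && !refill(r))
        return 0;
    *val = r->buf[r->pos];
    return 1;
}

int main(void)
{
    const char *names[3] = { "run0.txt", "run1.txt", "run2.txt" };   /* placeholder names */
    Run run[3];
    int out[BUF_PAGES], outn = 0, live = 3, i;
    FILE *dst = fopen("sorted.txt", "w");

    for (i = 0; i < 3; i++) {           /* error checks on fopen omitted for brevity */
        run[i].fp = fopen(names[i], "r");
        run[i].pos = run[i].count = 0;
    }

    while (live > 0) {
        int best = -1, bestval = INT_MAX, v;

        /* 3-way merge, degrading to 2-way and finally a plain copy. */
        for (i = 0; i < live; i++) {
            if (!current(&run[i], &v)) {
                fclose(run[i].fp);      /* run i is exhausted: drop it by copying   */
                run[i] = run[live - 1]; /* the last live run's parameters into slot i */
                live--;
                i--;                    /* re-examine the slot we just refilled */
            } else if (v < bestval) {
                bestval = v;
                best = i;
            }
        }
        if (best < 0)
            break;                      /* all runs exhausted */

        run[best].pos++;                /* consume the winning record */
        out[outn++] = bestval;
        if (outn == BUF_PAGES) {        /* output buffer full: write it out */
            for (i = 0; i < outn; i++) fprintf(dst, "%d\n", out[i]);
            outn = 0;
        }
    }
    for (i = 0; i < outn; i++) fprintf(dst, "%d\n", out[i]);   /* flush the tail */

    fclose(dst);
    return 0;
}

With 20-page runs and one integer per "page" this is tiny, but the same structure scales to real pages: only the refill and flush steps need to know the page size.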

Limiting # of Lines in a Log/Buffer File

I've got a small program running on an OpenWRT router that logs to a remote MySQL database. In the event that the database becomes unavailable the program writes to a buffer file (/var/buffer) to protect against data loss. Thing is, since it's being stored on the router itself, there's a chance of running out of room fairly quickly if the database is down for too long.
I figure that if I keep the file to a max of 20,000 lines, discarding the oldest ones as new ones are written (once the max size has been reached), I can minimize my data losses and not have to worry about running out of storage space (a bit of loss isn't the end of the world, and I'd rather keep the newest stuff than the oldest).
From my research I understand that the first line of a file cannot be removed without rewriting the entire file (no good, too time-consuming), and every time I think I'm close to another solution it falls apart.
Is there a better way? Or is re-writing the 20k-line file every time I have a new line to add my only option?
You can have a log_LastLineNo variable which stores the line number of the last line written to the log at that instant (at the very start it will be 0).
Keep writing to the file until you have written 20,000 lines, and keep updating log_LastLineNo.
After that, start overwriting the file from the start, and set a variable log_full = 1.
Now:
Case 1: log_full = 0 and log_LastLineNo = [some value < 20000]
In this case, read from the start up to log_LastLineNo.
Case 2: log_full = 1 and log_LastLineNo = [some value < 20000]
In this case, start reading from log_LastLineNo + 1 up to line 20000, and then again from the start up to log_LastLineNo.
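One caveat: overwriting "from the start" in place only works cleanly if every line occupies the same number of bytes, so that line N always starts at offset N x record-length. Below is a minimal C sketch of the scheme under that fixed-width assumption; the file path, the 128-byte record width, and keeping log_LastLineNo/log_full in memory (rather than persisted) are all simplifications for illustration:

#include <stdio.h>
#include <string.h>

#define MAX_LINES 20000
#define REC_LEN   128                  /* fixed record width in bytes, including '\n' */

/* In a real program these two would be persisted (e.g. in a tiny side file)
   so that a restart knows where the ring currently ends. */
static int log_LastLineNo = 0;         /* number of lines written in the current pass */
static int log_full = 0;

/* Append one entry, wrapping to the start of the file after MAX_LINES lines. */
void buffer_write(const char *path, const char *msg)
{
    char rec[REC_LEN];
    int len;
    FILE *f = fopen(path, "r+");       /* open for in-place update... */
    if (!f) f = fopen(path, "w+");     /* ...or create it the first time */
    if (!f) return;

    if (log_LastLineNo == MAX_LINES) { /* wrap: start overwriting the oldest lines */
        log_LastLineNo = 0;
        log_full = 1;
    }

    /* Build a fixed-width, newline-terminated record: truncate or pad with spaces. */
    len = snprintf(rec, sizeof rec, "%s", msg);
    if (len < 0) len = 0;
    if (len > REC_LEN - 2) len = REC_LEN - 2;
    memset(rec + len, ' ', (REC_LEN - 1) - len);
    rec[REC_LEN - 1] = '\n';

    fseek(f, (long)log_LastLineNo * REC_LEN, SEEK_SET);
    fwrite(rec, 1, REC_LEN, f);
    log_LastLineNo++;
    fclose(f);
}

/* Replay the buffered entries oldest-to-newest (cases 1 and 2 above). */
void buffer_replay(const char *path)
{
    char rec[REC_LEN + 1];
    int start = log_full ? log_LastLineNo : 0;       /* oldest surviving line */
    int total = log_full ? MAX_LINES : log_LastLineNo;
    int i;
    FILE *f = fopen(path, "r");
    if (!f) return;

    for (i = 0; i < total; i++) {
        int line = (start + i) % MAX_LINES;
        fseek(f, (long)line * REC_LEN, SEEK_SET);
        if (fread(rec, 1, REC_LEN, f) != REC_LEN) break;
        rec[REC_LEN] = '\0';
        /* ...re-send rec to the database here... */
    }
    fclose(f);
}

In practice you would also persist log_LastLineNo and log_full after each write, so a reboot of the router doesn't lose track of where the ring ends.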

How to read from a specific line from a text file in VHDL

I am writing a program in VHDL to read and write data. My program has to read data from a line, process it, and then save the new value in the old position. My code looks something like this:
WRITE_FILE: process (CLK)
   variable VEC_LINE : line;
   file VEC_FILE : text is out "results";
begin
   if CLK = '0' then
      write(VEC_LINE, OUT_DATA);
      writeline(VEC_FILE, VEC_LINE);
   end if;
end process WRITE_FILE;
If I want to read line 15, how can I specify that? Then I want to clear line 15 and write new data there. LINE is an access type; will it accept integer values?
Russell's answer, using two files, is the way to go.
There isn't a good way to find the 15th line (seek) but for VHDL's purpose, reading and discarding the first 14 lines is perfectly adequate. Just wrap it in a procedure named "seek" and carry on!
If you're on the 17th line already, you can't seek backwards, or rewind to the beginning. What you can do is flush the output file (save the open line, copy the rest of the input file to it, close both files and reopen them. Naturally, this requires VHDL-93 not VHDL-87 syntax for file operations). Just wrap that in a procedure called "rewind", and carry on!
Keep track of the current line number, and now you can seek to line 15, wherever you are.
It's not pretty and it's not fast, but it'll work just fine. And that's good enough for VHDL's purposes.
In other words you can write a text editor in VHDL if you must, (ignoring the problem of interactive input, though reading stdin should work) but there are much better languages for the job. One of them even looks a lot like an object-oriented VHDL...
Use 2 files, an input file and an output file.
file_open(vectors, "stimulus/input_vectors.txt", read_mode);
file_open(results, "stimulus/output_results.txt", write_mode);

while not endfile(vectors) loop
   readline(vectors, iline);
   read(iline, a_in);
   -- etc. for all your input data...
   write(oline, <output data>);
   writeline(results, oline);
end loop;

file_close(vectors);
file_close(results);

Is there a way to clear a file's contents after you've written data to it?

I have this program where I open a file and write data points to it, but the problem is I have to do that inside a loop. It goes:
file1 = importdata('myfile.txt', '%s');
for k = 1:1:128
    fid = fopen('myfile2.txt', 'w+');   % write input to this file and pass it to my exe
    fprintf(fid, 'input1');
    fprintf(fid, 'input2');
    fprintf(fid, 'input3');
    % the 4th input (input4) is taken from a different text file (file1)
    input4 = sscanf(file1{k}, '%s');
    Val = str2double(input4);
    fprintf(fid, '%.3f', Val);
    fclose(fid);
    [status, result] = system('command<myfile2.txt');
    M = sscanf(result, '%s');
    more_result = [Val M];
    Fid2 = fopen('myfile3.txt', 'w+');
    fprintf(Fid2, '%s', more_result);
end
This is a vague idea of the code.
Then I sscanf the result to get a specific value (M) that I want.
I want to write Val and Z to another file, but I only get the last value of each in the file because the fopen(...,'w+') inside the loop keeps truncating the file. Using 'a+' doesn't help either: it keeps appending and never resets after the program is done running.
Right now I am using 'a+' and then manually deleting the contents of that file after I'm done running; writing outside the loop gives me an error.
Is there a way I can clear the file after each run?
I think if you open the file just once with 'w+', you can write to it multiple times. The first write will be at the beginning of the file, and after that data will be appended.
Put the opening and closing outside the loop. Before the loop, put:
fid=fopen('myfile2.txt','w+')
Fid2=fopen('myfile3.txt','w+')
And you can put the closing after the loop:
fclose(fid)
fclose(Fid2)
If this really does not work, then I suggest opening (and closing) the file once before the loop with 'w', which will empty its contents. Then you can use 'a+' inside the loop to append data. You will have a clean file each time.
