How does fread() in C work inside a for loop?

I am new to C programming, but I need it to read a binary file, which I describe below.
The India Meteorological Department (IMD) has provided historical weather data in .GRD files on their website. They have also provided sample C code to read those files. From their sample C code, I have written the following code, which extracts the daily minimum temperatures on 15 April 1980 recorded on a 31x31 grid over India.
/* This program reads binary data for 365/366 days and writes it to an ASCII file. */
#include <stdio.h>

int main() {
    float t[31][31];
    int i, j, k;
    FILE *fin, *fout;

    fin = fopen("C:\\New folder\\Mintemp_MinT_1980.GRD", "rb"); /* input file */
    fout = fopen("C:\\New folder\\MINT15APR1980.TXT", "w");     /* output file */
    if (fin == NULL) {
        printf("Can't open input file");
        return 0;
    }
    if (fout == NULL) {
        printf("Can't open output file");
        return 0;
    }
    fprintf(fout, "Daily Minimum Temperature for 15 April 1980\n");

    for (k = 0; k < 366; k++) {
        fread(&t, sizeof(t), 1, fin);   /* reads one day's 31x31 grid per iteration */
        if (k == 105) {                 /* zero-based day index of 15 April 1980 */
            for (i = 0; i < 31; i++) {
                fprintf(fout, "\n");
                for (j = 0; j < 31; j++)
                    fprintf(fout, "%6.2f", t[i][j]);
            }
        }
    }

    fclose(fin);
    fclose(fout);
    return 0;
}
/* end of main */
The file Mintemp_MinT_1980.GRD can be downloaded from the IMD website by selecting the year as 1980 against Minimum Temperature.
What I don't understand is how the fread() function actually works in the line fread(&t, sizeof(t), 1, fin) within the loop for (k = 0; k < 366; k++). At first sight, the arguments of fread() here do not depend on the loop variable k, so it should read the same data into the matrix t[31][31] for every k. However, I have checked that, surprisingly, the data extracted by this program differ for different values of k in the line if (k == 105); for example, the data extracted for k == 105 and k == 32 are different.
I would very much appreciate it if someone could explain this.

Files contain sequential data. All the file operations are built on the premise that whatever you do to a file, you'll generally be doing it sequentially.
So when you read data, and then read more data, you get successive chunks of the file. Both the FILE datatype and the operating system itself do a number of things for you, including keeping track of your current position in the file and buffering blocks in memory to improve performance.
If you wanted to reread the same data, or skip around in the file, you would need to use fseek() to change the file position before doing your next read.
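To illustrate that last point, here is a minimal sketch (not the IMD sample code) that jumps straight to the record for day k with fseek() instead of reading the preceding days first, assuming each daily record is one 31x31 block of 4-byte floats as in your program:
#include <stdio.h>

int main(void) {
    float t[31][31];
    int k = 105;   /* zero-based record index of 15 April 1980, as in your program */
    FILE *fin = fopen("Mintemp_MinT_1980.GRD", "rb");
    if (fin == NULL) {
        perror("fopen");
        return 1;
    }
    /* Every successful fread() advances the stream position by the number of
     * bytes read, so record k starts at byte offset k * sizeof(t). */
    long offset = (long)k * (long)sizeof(t);   /* 31*31 floats = 3844 bytes per day */
    if (fseek(fin, offset, SEEK_SET) != 0 || fread(&t, sizeof(t), 1, fin) != 1) {
        fprintf(stderr, "Could not read record %d\n", k);
        fclose(fin);
        return 1;
    }
    printf("t[0][0] = %6.2f\n", t[0][0]);
    fclose(fin);
    return 0;
}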

Related

Pack file names and unpack from archive?

I was asked to create an archive file which would be used to pack multiple files into and unpack them as well. I noticed that file names that went over the allotted buffer size of 20 bytes would give me problems. While I did find a working solution, I feel like there were better ways to go about it.
This is the method that I am using to store the file name in the archive and pad to the right with nulls up to 20 bytes
/*
copies the original name using memcpy to preserve it for deletion purposes
calculates the length of string in an if-else statement for padding purposes
truncates the file name in argv[i] if file name is too long ( > 20 bytes )
if the file is not long enough ( != 20 bytes) pads with '\0'
*/
memcpy(original_name,argv[i],strlen(argv[i]));
original_name[strlen(argv[i])] = '\0';
if (strlen(argv[i]) > 20) {
    argv[i][20] = '\0';
    fputs(argv[i], archive);
} else {
    fputs(argv[i], archive);
    for (j = 0; j < 20 - strlen(argv[i]); j++) {
        fputc('\0', archive);
    }
}
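For what it's worth, one simpler way to produce the same fixed, 20-byte, null-padded name field (a sketch on my part, not code from the question; it needs <stdio.h> and <string.h>) is to build it in a zero-initialized buffer and write it with a single fwrite():
/* Sketch only: fills a 20-byte field with the (possibly truncated) name
 * and writes it in one call; the zero initialization provides the padding. */
void write_name_field(FILE *archive, const char *name) {
    char field[20] = {0};
    size_t len = strlen(name);
    if (len > sizeof(field))
        len = sizeof(field);          /* truncate names longer than 20 bytes */
    memcpy(field, name, len);
    fwrite(field, 1, sizeof(field), archive);
}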
and this is the method I am using to extract the file size, which is followed by the file name
/*
copies the first four bytes from the buffer into an int which will be used as the size of the file
opens a new file with the file name taken from the last 20 bytes of the buffer
checks if the file was opened properly
*/
memcpy(&size_of_file,&buff[0],sizeof(size_of_file));
memcpy(file_name,&buff[4],20*sizeof(char));
file_name[20] = '\0';
new_file = fopen(file_name,"wb+");
if_opened(new_file);
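Similarly, the header could be read directly from the archive stream instead of going through buff, something like this (a sketch; I'm assuming the stream is called archive and that the layout is 4 size bytes followed by 20 name bytes, as in the question):
/* Sketch only: read the 4-byte size and the 20-byte name straight from the stream. */
int size_of_file;
char file_name[21];
if (fread(&size_of_file, sizeof(size_of_file), 1, archive) != 1 ||
    fread(file_name, 1, 20, archive) != 20) {
    fprintf(stderr, "truncated archive header\n");
    return 1;
}
file_name[20] = '\0';
new_file = fopen(file_name, "wb+");
if_opened(new_file);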

Writing to an unformatted, direct access binary file in C

I'm trying to use this diehard repo to test a stream of numbers for randomness (https://github.com/reubenhwk/diehard). It gives these specifics for the file type it reads:
Then the command
diehard
will prompt for the name of the file to be tested.
That file must be a form="unformatted",access="direct" binary
file of from 10 to 12 million bytes.
These are file conventions specific to Fortran, no? The problem is, I'm using an lfsr-generator to generate my random binary stream, and that's in C. I tried just doing fputc of the binary stream to a text file or .bin file, but diehard doesn't seem to accept it.
I have no experience with Fortran. Is there any way to create this file type using C? Will I just have to bite the bullet and have C call a Fortran subroutine that creates the file? Here's my C code for reference:
#include <stdio.h>

unsigned int shift_lfsr(unsigned int v);   /* provided by the lfsr-generator output */

int main(void)
{
    FILE *fp;
    fp = fopen("numbers", "wb");
    const unsigned int init = 1;
    unsigned int v = init;
    int counter = 0;
    do {
        v = shift_lfsr(v);
        fputc((((v & 1) == 0) ? '0' : '1'), fp);
        counter += 1;
    } while (counter < 11000000);
}
You're creating the binary file just fine. Your problem is that you're only writing a single random bit per byte (and expanding it into text). Diehard wants every bit to be random. So accumulate 8 bits at a time before you write:
do {
    int b = 0;
    for (int i = 0; i < 8; i += 1) {
        v = shift_lfsr(v);
        b <<= 1;
        b |= (v & 1);
    }
    fputc(b, fp);
    . . .
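For completeness, a minimal sketch of what the whole write loop could look like with the byte packing applied (shift_lfsr() is assumed to come from the lfsr-generator output, as in the question):
#include <stdio.h>

unsigned int shift_lfsr(unsigned int v);   /* from the lfsr-generator output */

int main(void)
{
    FILE *fp = fopen("numbers", "wb");
    if (fp == NULL)
        return 1;

    unsigned int v = 1;
    /* 11,000,000 bytes keeps the file inside diehard's 10-12 million byte range. */
    for (int counter = 0; counter < 11000000; counter++) {
        int b = 0;
        for (int i = 0; i < 8; i++) {      /* pack 8 random bits into one byte */
            v = shift_lfsr(v);
            b = (b << 1) | (v & 1);
        }
        fputc(b, fp);
    }
    fclose(fp);
    return 0;
}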

Create an array of values from different text files in C

I'm working in C on 64-bit Ubuntu 14.04.
I have a number of .txt files, each containing lines of floating point values (1 value per line). The lines represent parts of a complex sample, and they're stored as real(a1) \n imag(a1) \n real(a2) \n imag(a2), if that makes sense.
In a specific scenario there are 4 text files each containing 32768 samples (thus 65536 values), but I need to make the final version dynamic to accommodate up to 32 files (the maximum samples per file would not exceed 32768 though). I'll only be reading the first 19800 samples (depending on other things) though, since the entire signal is contained in those 39600 points (19800 samples).
A common abstraction is to represent the files / samples as a matrix, where columns represent return signals and rows represent the value of each signal at a sampling instant, up until the maximum duration.
What I'm trying to do is take the first sample from each return signal and move it into an array of double-precision floating point values to do some work on, move on to the second sample for each signal (which will overwrite the previous array) and do some work on them, and so forth, until the last row of samples have been processed.
Is there a way in which I can dynamically open files for each signal (depending on the number of pulses I'm using in that particular instance), read the first sample from each file into a buffer and ship that off to be processed? On the next iteration, the file pointers would all be aligned to the second sample; it would then move those into an array and ship it off again, until the desired number of samples (19800 in our hypothetical case) has been reached.
I can read samples just fine from the files using fscanf:
rx_length = 19800;
int x;
float buf;
double *range_samples = calloc(num_pulses, 2 * sizeof(*range_samples)); /* sizeof(*range_samples), not sizeof(range_samples) */
for (i = 0; i < 2 * rx_length; i++) {
    x = fscanf(pulse_file, "%f", &buf);
    *(range_samples) = buf;   /* note: always writes the first element; the destination index should depend on i */
}
All that needs to happen (in my mind) is that I need to cycle both sample# and pulse# (in that order), so when finished with one pulse it would move on to the next set of samples for the next pulse, and so forth. What I don't know how to do is to somehow declare file pointers for all return signal files, when the number of them can vary inbetween calls (e.g. do the whole thing for 4 pulses, and on the next call it can be 16 or 64).
If there are any ideas / comments / suggestions I would love to hear them.
Thanks.
I would make the code you posted a function that takes an array of file names as an argument:
void doPulse( const char **file_names, const int size )
{
    FILE *file = 0;
    // declare your other variables
    for ( int i = 0; i < size; ++i )
    {
        file = fopen( file_names[i], "r" );   // fopen needs a mode argument
        // make sure the file is open
        // do the work on that file
        fclose( file );
        file = 0;
    }
}
What you need is a generator. It would be reasonably easy in C++, but since you tagged C, I can imagine a function taking a custom struct (the state of the object) as a parameter. It could be something like this (pseudo code):
struct GtorState {
    char **files;
    int filesIndex;
    FILE *currentFile;
};

void gtorInit(GtorState *state, char **files) {
    // load the array of files into state, set the index to 0, and open the first file
}

int nextValue(GtorState *state, double *real, double *imag) {
    // read 2 values from currentFile and assign them to real and imag
    // if eof, close currentFile and open files[++filesIndex]
    // if real and imag were found return 0, else 1 on eof of the last file, 2 on error
}
Then your main program could contain:
GtorState state;
// initialize the list of files to process
gtorInit(&state, files);
double real, imag;
int cr;
while (0 == (cr = nextValue(&state, &real, &imag))) {
    // process (real, imag)
}
if (cr == 2) {
    // process (at least display) the error
}
Alternatively, your main program could iterate the values of the different files and call a function with state analog of the above generator that processes the values, and at the end uses the state of the processing function to get the results.
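For concreteness, here is one possible fill-in of those two generator functions (my own sketch, following the pseudo code's names and return codes; files is assumed to be a NULL-terminated array of file names, and GtorState is assumed to be a typedef so the main program above compiles):
#include <stdio.h>

typedef struct {
    char **files;        /* NULL-terminated list of file names */
    int filesIndex;      /* index of the file currently open */
    FILE *currentFile;   /* NULL once every file has been consumed */
} GtorState;

void gtorInit(GtorState *state, char **files) {
    state->files = files;
    state->filesIndex = 0;
    state->currentFile = (files[0] != NULL) ? fopen(files[0], "r") : NULL;
}

/* Returns 0 when a (real, imag) pair was read, 1 at the end of the last file,
 * 2 on a read error. If a file fails to open, the sketch simply stops, as at
 * the end of the data. */
int nextValue(GtorState *state, double *real, double *imag) {
    while (state->currentFile != NULL) {
        if (fscanf(state->currentFile, "%lf %lf", real, imag) == 2)
            return 0;
        if (ferror(state->currentFile)) {
            fclose(state->currentFile);
            return 2;
        }
        /* end of this file: move on to the next one */
        fclose(state->currentFile);
        state->filesIndex++;
        state->currentFile = (state->files[state->filesIndex] != NULL)
                           ? fopen(state->files[state->filesIndex], "r")
                           : NULL;
    }
    return 1;
}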
Tried a slightly different approach and it's working really well.
Instead of reading from the different files each time I want to do something, I read the entire contents of each file into a 2D array range_phase_data[sample_number][pulse_number], and then access different parts of the array depending on which range bin I'm currently working on.
Here's an excerpt:
#define REAL(z,i) ((z)[2*(i)])
#define IMAG(z,i) ((z)[2*(i)+1])

for (i = 0; i < rx_length; i++){
    printf("\t[%s] Range bin %i. Samples %i to %i.\n", __FUNCTION__, i, 2*i, 2*i+1);
    for (j = 0; j < num_pulses; j++){
        REAL(fft_buf, j) = range_phase_data[2*i][j];
        IMAG(fft_buf, j) = range_phase_data[2*i+1][j];
    }
    printf("\t[%s] Range bin %i done, ready to FFT.\n", __FUNCTION__, i);
    // do stuff with the data
}
This alleviates the need to dynamically allocate file pointers and instead just opens the files one at a time and writes the data to the corresponding column in the matrix.
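For reference, that loading step might look roughly like this (a sketch on my part: the file-name pattern is hypothetical, num_pulses and rx_length come from the earlier excerpt, and range_phase_data is assumed to be a 2D array of double):
/* Sketch only: fill column j of range_phase_data with the interleaved
 * real/imag values read from the j-th pulse file. */
for (j = 0; j < num_pulses; j++) {
    char name[64];
    snprintf(name, sizeof(name), "pulse_%d.txt", j);   /* hypothetical naming scheme */
    FILE *f = fopen(name, "r");
    if (f == NULL) {
        perror(name);
        exit(EXIT_FAILURE);
    }
    for (i = 0; i < 2 * rx_length; i++) {              /* real(a1), imag(a1), real(a2), ... */
        if (fscanf(f, "%lf", &range_phase_data[i][j]) != 1)
            break;
    }
    fclose(f);
}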
Cheers.

Adding characters to the middle of a file without overwriting the existing characters in C

I am quite rusty with C and system calls and pointers in general, so this is a good refresher exercise to get back on track. All I need to do is, given a file such as this:
YYY.txt: "somerandomcharacters"
Change it to be like this:
YYY.txt: "somerandomabcdefghijklmnopqrstuvwxyzcharacters"
So all that is done is that some characters are added to the middle of the file. Obviously, this is quite simple, but in C you must keep track of and manage the size of the file before adding the additional characters.
Here is my naive try:
//(Assume a file called YYY.txt exists and an int YYY is the file descriptor.)
char ToBeInserted[26] = "abcdefghijklmnopqrstuvwxyz";

//Determine the current length of YYY
int LengthOfYYY = lseek(YYY, 0, 2);
if (LengthOfYYY < 0)
    printf("Error upon using lseek to get length of YYY");

//Assume we want to insert at position 900 in YYY.txt, and the length of YYY is over 1000.
//1.] Keep track of all characters past position 900 in YYY and store them in a char array.
lseek(YYY, 900, 0); //Seeks to position 900 in YYY, so reading begins there.
char NextChar;
char EverythingPast900[LengthOfYYY - 900];
int i = 0;
while (i < (LengthOfYYY - 900)) {
    int NextRead = read(YYY, &NextChar, 1); //read() needs the address of NextChar
    EverythingPast900[i] = NextChar;
    i++;
}

//2.] Overwrite what used to be at position 900 in YYY:
lseek(YYY, 900, 0); //Moves to position 900.
int WriteToYYY = write(YYY, ToBeInserted, sizeof(ToBeInserted));
if (WriteToYYY < 0)
    printf("Error upon writing to YYY");

//3.] Move to position 900 + length of ToBeInserted, and write the characters that were saved.
lseek(YYY, 926, 0);
int WriteMoreToYYY = write(YYY, EverythingPast900, sizeof(EverythingPast900));
if (WriteMoreToYYY < 0) {
    printf("Error writing the saved characters back into YYY.");
}
I think the logic is sound, mostly, although there are much better ways to do it in C. I need help on my C pointers, basically, as well as the UNIX system calls. Does anyone mind walking me through how to properly implement this in C?
That's the basic idea. If you had to really conserve RAM and the file were a lot bigger, you'd want to copy block by block in reverse order. But the simpler way is to read the entire thing into memory and rewrite the entire file.
Also, I prefer the stream functions fopen, fseek, and fread, but the file descriptor method works too.
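A minimal sketch of that simpler approach using the stream functions (the function name, signature and error handling here are mine, not the poster's; it assumes 0 <= offset <= file size):
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Sketch only: read the whole file, then rewrite it as head + text + tail. */
int insert_into_file(const char *path, long offset, const char *text) {
    FILE *f = fopen(path, "rb");
    if (f == NULL)
        return -1;

    fseek(f, 0, SEEK_END);
    long size = ftell(f);
    rewind(f);

    char *buf = malloc((size_t)size + 1);
    if (buf == NULL || fread(buf, 1, size, f) != (size_t)size) {
        fclose(f);
        free(buf);
        return -1;
    }
    fclose(f);

    f = fopen(path, "wb");                      /* truncate and rewrite */
    if (f == NULL) {
        free(buf);
        return -1;
    }
    fwrite(buf, 1, offset, f);                  /* everything before the insertion point */
    fwrite(text, 1, strlen(text), f);           /* the inserted characters */
    fwrite(buf + offset, 1, size - offset, f);  /* everything that used to follow */
    free(buf);
    fclose(f);
    return 0;
}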

Writing a VTK ASCII Legacy File to draw contours in VisIt

I'm trying to write a legacy .vtk file to be read into VisIt using C. Unfortunately my installed VisIt program refuses to render the VTK file that I have written, reporting 'local host failed'.
Below is the code used to read data from one file and convert it to a legacy VTK file. I use the macros XPIX, YPIX, and ZPIX to describe the dimensions of a pixel grid. Each pixel contains a scalar density value. I have listed the pixels in a 'grid-file' using row-major ordering: i.e.
int list_index(x,y,z) = YPIX * ZPIX * x + ZPIX * y + z;
Every entry in this pixel list is read into an array called grid[] of type double, and written to outfile beneath the legacy VTK header data:
/* Write the vtk header */
fprintf(outfile, "# vtk DataFile Version 3.0\n");
fprintf(outfile, "Galaxy density grid\nASCII\nDATASET STRUCTURED_POINTS\n");
fprintf(outfile, "DIMENSIONS %d %d %d \n", (XPIX+1), (YPIX+1), (ZPIX+1));
fprintf(outfile, "ORIGIN 0 0 0\n");
fprintf(outfile, "SPACING 1 1 1\n"); //or ASPECT_RATIO
fprintf(outfile, "CELL_DATA %d\n", totalpix);
fprintf(outfile, "SCALARS cell_density float 1\n");
fprintf(outfile, "LOOKUP_TABLE default\n");

/* Create memory space to store the pixel grid */
double *grid;
grid = malloc(XPIX * YPIX * ZPIX * sizeof(double));
if (grid == NULL) {
    fprintf(stderr, "Pixel grid of type double failed to initialize\n");
    exit(EXIT_FAILURE);
}
fprintf(stderr, "Pixel grid has been initialized.\n Now reading infile\n");

/* Read infile contents into double grid[], using row-major indexing */
double rho;
char newline;
int i, j, k;
for (i = 0; i < XPIX; i++) {
    for (j = 0; j < YPIX; j++) {
        for (k = 0; k < ZPIX; k++) {
            fscanf(infile, "%lf", &rho);
            grid[getindex(i,j,k)] = rho;
        }
    }
    fprintf(stderr, "%d\n", i);
}
fprintf(stderr, "Finished reading\n");

#if !DEBUG
/* Write out grid contents in row-major order */
fprintf(stderr, "Now writing vtk file");
for (i = 0; i < XPIX; i++) {
    for (j = 0; j < YPIX; j++) {
        for (k = 0; k < ZPIX; k++) {
            fprintf(outfile, "%lf ", grid[getindex(i,j,k)]);
        }
        fprintf(outfile, "\n");
    }
}
fprintf(stderr, "Finished Writing to outfile\n");
#endif
After running the grid data list through this routine, I have XPIX*YPIX lines in the lookup_table, each with ZPIX entries. Is this an incorrect format? VisIt continues to fail reading the input file. I'm aware of the fact that structured_points may use column major indexing, but my first goal of course is to get some sort of result from VisIt. I would like to draw a contour using the scalar cell_density eventually. Is my data set simply too large?
Have you seen the accepted answer to the question vtk data format error? The question is debugging a VTK writer in C++ but it is very similar to your code (and of course should yield the same results).
The key point from the accepted answer is that data is written in column major order, not row major order (you seem to hint at this in your question: "I'm aware of the fact that structured_points may use column major indexing").
Also, it is always helpful (if you can) to compare your code with something which you know works. For example, VisIt provides a small C library for writing legacy VTK file formats called VisItWriterLib. Compare the output from your code and the VisItWriterLib to see where your data files differ. I would recommend using VisItWriterLib for your VTK IO rather than writing your own routines - no need to reinvent the wheel.
Edit: To answer a couple of your other questions:
After running the grid data list through this routine, I have XPIX*YPIX lines in the lookup_table, each with ZPIX entries. Is this an incorrect format?
This is not the correct format. LOOKUP_TABLE should be a list of XPIX*YPIX*ZPIX lines, with one element per line (or alternatively, VisIt will accept one line with XPIX*YPIX*ZPIX elements). See the section Dataset Attribute Format in the VTK File Formats document (www.vtk.org/VTK/img/file-formats.pdf).
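Concretely, the inner write loop from the question could be reordered so that x varies fastest and one value is printed per line, something like this (a sketch reusing the question's grid[], getindex() and the XPIX/YPIX/ZPIX macros; I have not run it against VisIt):
/* Sketch only: z is the slowest index, x the fastest, one scalar per line. */
for (k = 0; k < ZPIX; k++) {
    for (j = 0; j < YPIX; j++) {
        for (i = 0; i < XPIX; i++) {
            fprintf(outfile, "%lf\n", grid[getindex(i,j,k)]);
        }
    }
}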
Is my data set simply too large?
I doubt it. VisIt is designed to handle huge datasets and, AFAIK, can render petabyte data sets. I would be very surprised if your data is that large.
However, if you are concerned about having large files, you can split your data into multiple files and tell VisIt to read these files in parallel. To do this, write a bit of your data into each of several separate files, e.g. domain1.vtk, domain2.vtk, ..., domainN.vtk. Then write a .visit master file, which has the structure
!NBLOCKS N
domain1.vtk
domain2.vtk
...
domainN.vtk
Save this as, for example, mydata.visit and then open this .visit file, rather than the .vtk files, in VisIt.
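If it helps, the .visit master file itself can be generated from C with a few lines (a sketch; N and the domain file naming follow the example above):
/* Sketch only: write the !NBLOCKS master file listing N domain files. */
FILE *mf = fopen("mydata.visit", "w");
if (mf != NULL) {
    fprintf(mf, "!NBLOCKS %d\n", N);
    for (int d = 1; d <= N; d++)
        fprintf(mf, "domain%d.vtk\n", d);
    fclose(mf);
}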
