How to read and concatenate files in Fortran

I have 5 files generated by Fortran code like this:
longP = 8
OPEN(unit=20, FILE="GMt_2.dat", ACTION="write", ACCESS='Direct', RECL=longP)
count1 = 1
do J = K, fact
   READ(10,*) XA, XB, YA, YB, ZA, ZB, rho
   call Grv('f', Nx, Ny, dimg, Dx, Dy, XO, YO, XA, XB, YA, YB, ZA, ZB, rho, G, elev, Svec)
   do I = 1, dimg
      WRITE(UNIT=20, rec=count1) Svec(I)
      count1 = count1 + 1
   end do
   WRITE(*,*) J
end do
dim(2) = J - 1
fact = fact + fact1
call flush(20)
CLOSE(20)
This produced a file format I can't read; my professor said "it's binary, machine code". My goal here is to concatenate the information from those 5 files into one array to perform some processing. How can I achieve this?

The code you show writes the data using unformatted I/O and direct access. You'll need to read it using unformatted I/O as well. You could use direct access or, and this would be my recommendation, stream access (ACCESS='STREAM' in the OPEN statement). Open each file in sequence, read the data, and then write it using the same mechanism to your single file. Your question is too ambiguous to allow a more detailed response.
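If working outside Fortran is easier for you, here is a minimal C sketch of that merge step. It assumes the compiler measured RECL in bytes (gfortran's default; ifort counts RECL in 4-byte words unless -assume byterecl is given), so each record is one raw 8-byte value and the files are nothing but contiguous doubles. The file names other than GMt_2.dat are guesses:

/* Hypothetical sketch: merge five direct-access files into one.
   Assumes RECL was measured in bytes, so each record is one raw
   8-byte value. Only GMt_2.dat is named in the question; the
   other file names are guesses. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const char *inputs[] = { "GMt_1.dat", "GMt_2.dat", "GMt_3.dat",
                             "GMt_4.dat", "GMt_5.dat" };
    FILE *out = fopen("GMt_all.dat", "wb");
    if (!out) { perror("GMt_all.dat"); return EXIT_FAILURE; }

    for (int i = 0; i < 5; i++) {
        FILE *in = fopen(inputs[i], "rb");
        if (!in) { perror(inputs[i]); return EXIT_FAILURE; }

        double v;
        while (fread(&v, sizeof v, 1, in) == 1)  /* one record = one double */
            fwrite(&v, sizeof v, 1, out);
        fclose(in);
    }
    fclose(out);
    return EXIT_SUCCESS;
}

The same loop is only a few lines of Fortran with ACCESS='STREAM'; the point is that the files contain nothing but the raw values in the order they were written, so any unformatted reader can concatenate them.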

Related

How would I know that a file is opened and saved after some writing operation, using C code?

I have a set of configuration files (10 or more). If a user opens any of these files in an editor (e.g. vim, vi, geany, qt, leafpad, ...), how would I come to know, from C code, which file is opened and, if some writing is done, whether it has been saved or not?
For the 1st part of your question, please refer e.g. to How to check if a file has been opened by another application in C++?
One way described there is to use a system tool like lsof and call it via a system() call.
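A minimal sketch of that first part, assuming a Unix-like system with lsof on the PATH; lsof exits with status 0 when at least one process has the file open:

/* Sketch: ask lsof whether any process currently has the file open. */
#include <stdio.h>
#include <stdlib.h>

static int is_open_by_someone(const char *path)
{
    char cmd[512];
    snprintf(cmd, sizeof cmd, "lsof '%s' > /dev/null 2>&1", path);
    return system(cmd) == 0;   /* exit status 0 => at least one match */
}

int main(void)
{
    const char *f = "mylogfile.txt";   /* placeholder file name */
    printf("%s is %s\n", f, is_open_by_someone(f) ? "open" : "not open");
    return 0;
}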
For the 2nd part, about knowing whether a file has been modified, you will have to create a backup file to check against. Most editors already do that, but their naming schemes differ, so you might want to take care of it yourself. How? Just automatically create a (hidden) file .mylogfile.txt, if it does not exist, by simply copying mylogfile.txt. If .mylogfile.txt exists, has an older timestamp than mylogfile.txt, and differs in size and/or hash value (using e.g. md5sum), your file was modified.
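A sketch of that second part using stat(), with the file names from above; a hash comparison (e.g. by shelling out to md5sum) could be added the same way:

/* Sketch: compare a file against its hidden backup copy. */
#include <stdio.h>
#include <sys/stat.h>

/* Returns 1 if 'path' is newer than 'backup' or differs in size,
   0 if apparently unchanged, -1 if either file is missing. */
static int was_modified(const char *path, const char *backup)
{
    struct stat a, b;
    if (stat(path, &a) != 0 || stat(backup, &b) != 0)
        return -1;
    return (a.st_mtime > b.st_mtime) || (a.st_size != b.st_size);
}

int main(void)
{
    int r = was_modified("mylogfile.txt", ".mylogfile.txt");
    if (r < 0)  puts("no backup yet - create one by copying the file");
    else        puts(r ? "modified since backup" : "unchanged");
    return 0;
}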
But before re-implementing this, take a look at How do I make my program watch for file modification in C++?

Trouble Using BackupRead Function

I have been trying to use the BackupRead function in order to read alternate data streams; however, I have been able to read only the file name. I want to read the content of the alternate stream.
I have been using the code from the following link.
What should I do in order to read the contents of the alternate stream?
I tried reading the documentation and looked for examples, but haven't really found anything useful.
The example code your question refers to uses BackupRead to enumerate the names of the alternate streams (if any). It skips the actual content of each stream by calling BackupSeek to move directly to the next header.
You could either modify the code so that it reads the content rather than skipping it, or use the names to open and read from each stream separately.
To open a stream in file a.txt whose stream name is xyzzy, use the filename a.txt:xyzzy.
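A hedged Win32 C sketch of that second route, opening the a.txt:xyzzy stream directly and dumping its contents (the stream name is just the placeholder from above):

/* Sketch: read an alternate data stream via its file:stream name. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    HANDLE h = CreateFileA("a.txt:xyzzy", GENERIC_READ, FILE_SHARE_READ,
                           NULL, OPEN_EXISTING, 0, NULL);
    if (h == INVALID_HANDLE_VALUE) {
        fprintf(stderr, "CreateFile failed: %lu\n", GetLastError());
        return 1;
    }

    char buf[4096];
    DWORD got;
    while (ReadFile(h, buf, sizeof buf, &got, NULL) && got > 0)
        fwrite(buf, 1, got, stdout);   /* dump the stream's contents */

    CloseHandle(h);
    return 0;
}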

Identifying data file type

I have a huge 1.9 GB data file without an extension that I need to open and get some data from. The problem is that I need to know what extension it should have and what software can open it to view the data in a table.
Here is a picture of it: [screenshot of the file omitted]
It's only a 2-line file. I already tried opening it as CSV in Excel, but that did not work. Any help?
I have never used it, but you could try this:
http://mark0.net/soft-tridnet-e.html
explained here:
http://www.labnol.org/software/unknown-file-extensions/20568/
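Tools like TrID identify files by matching their leading bytes against a database of known signatures. A toy C sketch of the same idea, with a small hand-picked signature list:

/* Sketch: guess a file type from its first bytes ("magic numbers"). */
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
    if (argc < 2) { fprintf(stderr, "usage: %s FILE\n", argv[0]); return 1; }

    unsigned char head[8] = {0};
    FILE *f = fopen(argv[1], "rb");
    if (!f) { perror(argv[1]); return 1; }
    size_t n = fread(head, 1, sizeof head, f);
    fclose(f);
    if (n < 4) { puts("file too short to identify"); return 1; }

    if      (!memcmp(head, "%PDF", 4))        puts("looks like PDF");
    else if (!memcmp(head, "\x89PNG", 4))     puts("looks like PNG");
    else if (!memcmp(head, "PK\x03\x04", 4))  puts("looks like ZIP/OOXML");
    else if (!memcmp(head, "\x1f\x8b", 2))    puts("looks like gzip");
    else                                      puts("no known signature");
    return 0;
}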
The third "column" of that line looks, with 99% certainty, like output from PHP's print_r function (with newlines imploded so it can be stored on a single line).
There may not be a "format" or program to open it with if it's just some app's custom debug/output log.
A quick google found a few programs to split large files into smaller units. That may make it easier to load into an editor (Notepad++ or otherwise) for reading.
It shouldn't be too hard to mash out a script to read the lines and reconstitute the session "array" into a more readable format (read: vertical, not inline), but it would have to be a one-off custom job, since no one other than the holder of your file would have a use for it.

How can I use a text file from the internet in my C program?

I'm working on a project, and I made a C program that reads the date, time, and wave height from a .txt file stored on my computer, converts the date and time to GPS time for use at a scientific research institution, and outputs GPS time and wave height to the screen. However, the text file that I am working with is actually stored at http://www.ndbc.noaa.gov/data/realtime2/SPLL1.txt . Is there any way that I could have my C program open the text file from the web address rather than from my local hard drive?
FYI: to access the file on my computer I used fopen, and to work with the data it contains I used a combination of fgets and fscanf.
It is much more involved to get a web resource than to read a file from disk, but you can absolutely do it, for example by using a library such as libcurl.
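For illustration, a small libcurl sketch (link with -lcurl) that downloads the file into a temporary FILE* so the existing fgets/fscanf code can read it unchanged; when no write callback is set, libcurl fwrite()s the body to whatever CURLOPT_WRITEDATA points at:

/* Sketch: fetch the NDBC text file into a tmpfile and read it back. */
#include <stdio.h>
#include <curl/curl.h>

int main(void)
{
    FILE *tmp = tmpfile();   /* deleted automatically on close/exit */
    if (!tmp) { perror("tmpfile"); return 1; }

    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL *curl = curl_easy_init();
    if (!curl) { fprintf(stderr, "curl init failed\n"); return 1; }

    curl_easy_setopt(curl, CURLOPT_URL,
                     "http://www.ndbc.noaa.gov/data/realtime2/SPLL1.txt");
    curl_easy_setopt(curl, CURLOPT_WRITEDATA, tmp);
    CURLcode rc = curl_easy_perform(curl);
    curl_easy_cleanup(curl);
    curl_global_cleanup();

    if (rc != CURLE_OK) {
        fprintf(stderr, "download failed: %s\n", curl_easy_strerror(rc));
        return 1;
    }

    rewind(tmp);             /* now treat it like the local file before */
    char line[256];
    while (fgets(line, sizeof line, tmp))
        fputs(line, stdout); /* the existing fscanf parsing goes here */

    fclose(tmp);
    return 0;
}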
An alternative strategy is to make components and tie them together with bash or other scripting. Your C program could for example read from standard input, and you could make a bash script something like this:
curl http://www.ndbc.noaa.gov/data/realtime2/SPLL1.txt | ./the_program
This way, you could keep your core C program simpler.
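For completeness, a sketch of what the stdin-reading side might look like. The column layout below is an assumption, so check the station file's two header lines for the real order:

/* Sketch: read NDBC-style rows from stdin and print time + wave height.
   Assumed column order: YY MM DD hh mm WDIR WSPD GST WVHT ... */
#include <stdio.h>

int main(void)
{
    char line[256];
    while (fgets(line, sizeof line, stdin)) {
        int yy, mm, dd, hh, mi;
        double wvht;                   /* wave height */
        if (line[0] == '#') continue;  /* skip header lines */
        if (sscanf(line, "%d %d %d %d %d %*s %*s %*s %lf",
                   &yy, &mm, &dd, &hh, &mi, &wvht) == 6)
            printf("%04d-%02d-%02d %02d:%02d  WVHT=%.2f\n",
                   yy, mm, dd, hh, mi, wvht);
    }
    return 0;
}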

Reading a file directly from HDFS into a shell function

I have a shell function that is called from inside my map function. The shell function takes 2 parameters -> an input file and an output file. Something like this
$> unix-binary /pathin/input.txt /pathout/output.txt
The problem is, that these input.txt files reside in HDFS and the output.txt files need to be written back to HDFS. Currently, I first copy the needed file with fs.copyToLocalFile into the local hard drive, call the unix binary and then write the output.txt back to HDFS with fs.copyFromLocalFile.
The problem with this approach is that it is not optimal: it involves a substantial amount of redundant reading and writing to the local disk, which slows down performance. So, my question is: how can I read the HDFS file directly as input and write the results directly to HDFS?
obviously,
$> unix-binary hdfs://master:53410/pathin/input.txt hdfs://master:54310/pathout/output.txt
will not work. Is there any other way around this? Can I treat an HDFS file as a local file somehow?
I have access to the unix-binary source code written in C. Maybe changing the source code would help?
thanks
You can add the file to the DistributedCache and access it from the mapper from the cache. Call your shell function on the local (cached) file, write the output to local disk, and then copy the local output file to HDFS.
However, operations such as calling shell functions, or reading/writing from within a mapper/reducer break the MapReduce paradigm. If you find yourself needing to perform such operations, MapReduce may not be the solution you're looking for. HDFS and MapReduce were designed to perform massive scale batch processing on small numbers of extremely large files.
Since you have access to the unix-binary source code, your best option might be to implement the particular function(s) you want in Java. Feed the input files to your mapper and call the function from the mapper on the data, rather than working with files on HDFS/LocalFS.
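A different route, since you have the unix-binary source: Hadoop Streaming runs any stdin-to-stdout executable as the mapper, piping input splits in from HDFS and collecting stdout back to HDFS, which removes the copyToLocalFile/copyFromLocalFile round trip entirely. It does mean changing the binary to read stdin and write stdout instead of taking file paths. A toy C mapper in that style (the uppercasing stands in for the real logic):

/* Sketch: a Hadoop Streaming style mapper - read stdin, write stdout. */
#include <stdio.h>
#include <ctype.h>

int main(void)
{
    int c;
    while ((c = getchar()) != EOF)
        putchar(toupper(c));   /* placeholder for the real processing */
    return 0;
}

It would then be launched with something like:
hadoop jar hadoop-streaming.jar -input /pathin -output /pathout -mapper ./unix-binary -file ./unix-binary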
