How to name a Matlab output file using input from a text file

I am trying to take an input from a text file in this format:
Processed_kplr010074716-2009131105131_llc.fits.txt
Processed_kplr010074716-2009166043257_llc.fits.txt
Processed_kplr010074716-2009259160929_llc.fits.txt
etc.... (there are several hundred lines)
and use that input to name my output files for a Matlab loop. Each time the loop ends, I would like it to process the results and save them to a file such as:
Matlab_Processed_kplr010074716-2009131105131_llc.fits.txt
This would make it easier to identify which object has been processed, as I can then just look for the ID number and not have to sort through a list of randomly named saved files. I also need to save the plots generated in each loop iteration in a similar fashion.
This is what I have so far:
fileNames = fopen('file_list_1.txt', 'rt');   % open the list of input file names
inText = textscan(fileNames, '%s');           % read one name per line
fclose(fileNames);
outText = inText{1};                          % cell array of name strings
for j = 1:numel(outText)
    %Do Stuff
    % save the processed results under 'Matlab_<original name>'
    save(strcat('Matlab_', outText{j}, '.txt'))
    % save the current plot under a matching name
    print(Plot, '-djpeg', strcat(outText{j}, '.jpg'))
end
Any help is appreciated, thanks.

If you want to use the save command to save to a text file, you need to use the -ascii (and optionally -tabs) option; see the documentation for more details. You might also want to use dlmwrite instead (or even fprintf, but I don't believe you can write the whole matrix at once with fprintf; you have to loop over the rows).

Related

How to get a specific line on a text file directly in C? (without iterating line-by-line)

Is there a way to get a specific line inside a text file without iterating line-by-line in C?
for example I have this text file names.txt it contains the following names below;
John
James
Julia
Jasmine
and I want to access 'Julia' right away without iterating through 'John' and 'James'. Something like just giving the index value of '2' or '3' to access 'Julia' right away.
Is there a way to do this in C?
I just want to know how, because I want to deal with a very large text file, something like 3 billion lines, and I want to access a specific line in it right away; iterating line-by-line is very slow.
You have to iterate through all the lines at least once. In this iteration, before reading each line, you record the current position in the file and save it to an array or to another file (usually called an index file). The index file should have a fixed record size suitable for storing the position of a line in the text file.
Later, when you want to access a given line, you either use the array to get the position (the line number is the array index) or the index file (you seek in it to the offset line number × record size) and read the position. Once you have the position, you can seek into the text file to that position and read the line.
Each time the text file is updated, you must reconstruct the array or index file.
There are other ways to do that, but you would need to better explain the context.
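Roughly, in C this could look like the sketch below (the file name names.txt and the 0-based line index come from the question; the in-memory array of long offsets and the assumption that no line is longer than the read buffer are illustrative, and the answer's index-file variant would write these offsets to a fixed-record file instead):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    FILE *fp = fopen("names.txt", "r");
    if (!fp) { perror("names.txt"); return 1; }

    /* Pass 1: record the offset of the start of every line.
       Assumes every line fits into buf. */
    long *index = NULL;
    size_t nlines = 0, cap = 0;
    char buf[1024];
    long pos = ftell(fp);
    while (fgets(buf, sizeof buf, fp)) {
        if (nlines == cap) {
            cap = cap ? cap * 2 : 64;
            index = realloc(index, cap * sizeof *index);
            if (!index) { perror("realloc"); return 1; }
        }
        index[nlines++] = pos;
        pos = ftell(fp);
    }

    /* Direct access: jump to line 2 (0-based), i.e. 'Julia'. */
    size_t want = 2;
    if (want < nlines) {
        fseek(fp, index[want], SEEK_SET);
        if (fgets(buf, sizeof buf, fp))
            printf("line %zu: %s", want, buf);
    }

    free(index);
    fclose(fp);
    return 0;
}

Note that with about 3 billion lines an in-memory array of long offsets would itself be roughly 24 GB, which is why the answer suggests a fixed-record index file you can seek into instead of an array.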

COBOL Replace the first line in a file without using OPEN I-O and REWRITE

Say that I have a file with the below format
<records count="n">
record line 1
record line 2
.
.
.
record line n
</records>
I'll have to open this file and change the value of n to another value based on some logic. After the change my file should look like:
<records count="m">
record line 1
record line 2
.
.
.
record line n
</records>
I can open the file in OPEN I-O mode and use the REWRITE option to replace the first line. But I don't want to use these methods. Is there a way to achieve the same logic using OPEN INPUT and OPEN OUTPUT modes and replace the line with the WRITE method?
Is there a way to achieve the same logic using OPEN INPUT and OPEN
OUTPUT mode and replace the line with WRITE method[?]
No, that would leave you with only the <records count="m"> in the file. All other records would be lost!
As long as the length of the first record stays the same after changing n to m, REWRITE is the most straightforward way to update that record.
Perhaps, if you explain why you want to use WRITE, there may be something else that could be done.
If the file is not 'too' large, read all the records into memory, change the first record, then write all the records to the file.
If the file is 'too' large, copy the file changing the first record, delete the first file, then rename the copy.
Perhaps less efficient for 'too' large: sort the file, adding a sequence number and changing the first record. This simply uses the sort file to hold the data temporarily. Possibly a poor choice for a program that is to be converted.
You need to define what the limit for 'too' is.
There are non-standard routines for file access in Micro Focus, but those might be more difficult to convert.

Read matrices from multiple .csv files and print matrices in .csv files

So I have to write a C program to read data from .csv files supplied to me by multiple users into matrices, on which I will perform some operations (like matrix addition, or multiplication with the necessary conditions on dimensions, etc.), and then write these matrices (or the output data) into .csv files again.
I also need to dynamically allocate memory to my matrices.
Now, I have zero background in dealing with .csv files. I do not know the code required to read from or write to a .csv file. I have searched for a long time on the Internet, but surprisingly I have not found any program that teaches how to deal with .csv files from the elementary level.
I am lost on this and need a lot of guidance, maybe a sample, well-written C program, as I need a comprehensive example to begin with.
A CSV file is just a plain ASCII text file that contains a grid of values. Think of the file as a set of rows in a database table where each line in the file represents one record and the order of the data in each line is identical. Each item of data is separated using a comma character (hence the name). So to read the file:-
open file
until the end of the file
read line into a string
split the string into substrings where ',' is the delimiter
parse each sub string
Since there is no formatting information in a CSV file, if the values are strings, what do you do if a value has a comma in it? For reading numbers that is not a problem for you.
You could read the file in several passes, the first to determine the amount of data there is (number of columns, number of rows, etc) and the second to actually read the data.
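For purely numeric data, that two-pass approach might look roughly like this in C (a minimal sketch: the file name input.csv, the use of double values, lines shorter than the buffer, and every row having the same number of columns are all assumptions, and error checking is kept to a minimum):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    FILE *fp = fopen("input.csv", "r");
    if (!fp) { perror("input.csv"); return 1; }

    char line[4096];
    size_t rows = 0, cols = 0;

    /* Pass 1: count rows; columns = commas in the first line + 1. */
    while (fgets(line, sizeof line, fp)) {
        if (rows == 0) {
            cols = 1;
            for (char *p = line; *p; p++)
                if (*p == ',') cols++;
        }
        rows++;
    }

    /* Allocate the matrix dynamically. */
    double **m = malloc(rows * sizeof *m);
    for (size_t r = 0; r < rows; r++)
        m[r] = malloc(cols * sizeof **m);

    /* Pass 2: rewind and split each line at the commas. */
    rewind(fp);
    for (size_t r = 0; r < rows && fgets(line, sizeof line, fp); r++) {
        size_t c = 0;
        for (char *tok = strtok(line, ",\n"); tok && c < cols;
             tok = strtok(NULL, ",\n"))
            m[r][c++] = atof(tok);
    }
    fclose(fp);

    printf("read a %zu x %zu matrix, m[0][0] = %g\n", rows, cols, m[0][0]);

    for (size_t r = 0; r < rows; r++) free(m[r]);
    free(m);
    return 0;
}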
Writing the CSV is quite simple:-
open file
for each record to write
for each element to write
write element
if not last element
write a comma
write a new line
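A corresponding write routine could look like this (the function name write_csv, the %g format, and the output file name result.csv are assumptions of this sketch):

#include <stdio.h>

int write_csv(const char *path, double **m, size_t rows, size_t cols)
{
    FILE *fp = fopen(path, "w");
    if (!fp) return -1;

    for (size_t r = 0; r < rows; r++) {
        for (size_t c = 0; c < cols; c++) {
            fprintf(fp, "%g", m[r][c]);
            if (c + 1 < cols)
                fputc(',', fp);   /* comma between elements, not after the last */
        }
        fputc('\n', fp);          /* a newline ends the record */
    }
    return fclose(fp);
}

int main(void)
{
    double r0[] = {1.5, 2.0}, r1[] = {3.0, 4.25};
    double *m[] = {r0, r1};
    return write_csv("result.csv", m, 2, 2) == 0 ? 0 : 1;
}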

Stable text feed for vim through vimserver

I am searching for a highly stable way to feed text (output of a program) into vim through vimserver. Assume that I have started a (g)vim session with gvim --servername vim myfile. The file myfile contains a (unique) line OUT: which marks the position where the text should be pasted. I can straightforwardly achieve this from the command line with vim --servername vim --remote-send ':%s/OUT:/TEXT\\rOUT:/<Enter>'. I can repeatedly feed more text using the same command. Inside a C program I can execute it with system(). However, TEXT, which is dynamic and arbitrary (received as a stream in the C program), needs to be passed on the command line and hence it needs to be escaped. Furthermore, with the replacement command %s, vim will jump to the position where TEXT is inserted. I would like to find a way to paste large chunks of arbitrary text seamlessly into vim. An idea is to have vim read from a posix pipe with :r pipe and to write the string from within the C program to the pipe. Ideally the solution would be such that I can continuously edit the same file manually without noticing that output is added at OUT: as long as this location is outside the visible area.
The purpose of this text feed is to create a command-line based front end for scripting languages. The blocks of input are entered manually by the user in a vim buffer and are sent to the interpreter through a pipe using vim's :! [interpreter] command. The [interpreter] can of course write the output to stdout (preceded by the original lines of input), in which case the input line is replaced by input and output (to be distinguished by some leading key characters, for instance). However, commands might take a long time to produce the actual output, while the user might want to continue editing the file. Therefore my idea is to have [interpreter] return OUT: immediately and to append subsequent lines of output in this place as they become available, using vimserver. However, the output must be inserted in a way which does not disturb or corrupt the edits possibly made by the user at the same time.
EDIT
The proposed solutions seem to work.
However there seem to be at least two caveats:
* if I send text two or more times this way, the `` part of the commands will not take me back to the original cursor position (even if I do it just once, the markers are still modified, which may interrupt the user editing the file manually)
* if the user opens a different buffer (e.g. the online help), the commands will fail (or maybe insert the text into the present buffer)
Any ideas?
EDIT: After actually trying, this should work for you:
vim --servername vim --remote-send \
":set cpo+=C<CR>/OUT:<CR>:i<CR>HELLO<CR>WORLD<CR>.<CR>\`\`"
As far as I can see, the only caveats would be a period on a line of its own, which would terminate :insert instead of being inserted, and <..> sequences that might be interpreted as keypresses. Also, you need to replace any newlines in the text with <CR>. However, you have no worries about regular expressions getting muddled, the input is not the command line, the amount of escaping necessary is minimal, and the jumping is compensated for with the double backticks.
Check out :help 'cpoptions', :help :insert, and :help ''.
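If you do drive this from the C program via system(), a minimal sketch based on the command above could look like the following (the helper name send_to_vim and the fixed buffer size are assumptions; it presumes the text has already been prepared as described: newlines turned into <CR>, no line consisting of a single period, and no <..> sequences that vim would treat as key notation):

#include <stdio.h>
#include <stdlib.h>

/* Hypothetical helper: send pre-escaped text to the vim server named "vim". */
static int send_to_vim(const char *text)
{
    char cmd[8192];
    int n = snprintf(cmd, sizeof cmd,
        "vim --servername vim --remote-send "
        "\":set cpo+=C<CR>/OUT:<CR>:i<CR>%s<CR>.<CR>\\`\\`\"",
        text);
    if (n < 0 || (size_t)n >= sizeof cmd)
        return -1;                /* text too long for this fixed buffer */
    return system(cmd);
}

int main(void)
{
    return send_to_vim("HELLO<CR>WORLD");
}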
Instead of dealing with the escaping, I would rather use lower-level functions. Use let lnum = search('^OUT:$') to locate the marker line, and call setline(lnum, [text, 'OUT:']) to insert the text and the marker line again.

How to replace a line in the middle of a txt file in C?

I am reading info (numbers) from a txt file, and after that I am adding to those numbers others that I had in another file with the same structure.
At the start of each line in the file is a number that identifies a specific product. That code allows me to search for the same product in the other file. In my program I have to add the other "variables" from one file to the other, and then replace the line, in the same place, in one of those files.
I didn't open any of those files with a or a+; I did it with r and r+ because I want to replace the information in lines that may be in the middle of the file, not at the end of it.
The program compiles and runs, but when it comes to replacing the info in the file, it just doesn't do anything.
How should I resolve the problem?
A program can replace (overwrite) text in the middle of the file. But the question is whether or not this should be performed.
In order to insert larger text or smaller text (and close up the gap), a new text file must be written. This is assuming the file is not fixed width. The fundamental rule is to copy all original text before the insertion to a new file. Write the new text. Finally write the remaining original text. This is a lot of work and will slow down even the simplest programs.
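A minimal sketch of that copy-and-rewrite approach (the temporary file name temp.txt, the line buffer size, and addressing the record by line number rather than by the product code are assumptions for illustration):

#include <stdio.h>

int replace_line(const char *path, long target, const char *replacement)
{
    FILE *in = fopen(path, "r");
    FILE *out = fopen("temp.txt", "w");
    if (!in || !out) { if (in) fclose(in); if (out) fclose(out); return -1; }

    char line[1024];
    long lineno = 0;

    while (fgets(line, sizeof line, in)) {
        if (lineno++ == target)
            fprintf(out, "%s\n", replacement);  /* write the new text */
        else
            fputs(line, out);                   /* copy the original text */
    }
    fclose(in);
    fclose(out);

    /* Replace the original file with the rewritten copy. */
    if (remove(path) != 0 || rename("temp.txt", path) != 0)
        return -1;
    return 0;
}

int main(void)
{
    /* Example call: replace line 3 (0-based); file name and record are made up. */
    return replace_line("data.txt", 3, "12345 10 20 30") == 0 ? 0 : 1;
}

If you stick with "r+" and overwrite in place instead, keep in mind that the replacement must be exactly as long as the original line, and that the C standard requires a file-positioning call such as fseek() between a read and a following write on the same stream; forgetting that is a common reason an in-place rewrite appears to do nothing.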
I suggest you design your data layout before you go any further. Also consider using a database, see my post: At what point is it worth using a database?
Your objective is to design the data to minimize duplication and data fetching.
