Reading JLD files
I have encountered this problem when reading a JLD file. I have successfully created the file as follows:
using JLD, HDF5
for i in 1:10
    file = jldopen("/MY PATH/mydata.jld", "w")
    write(file, "A", vector[i] for i in 10)
    close(file)
end
but when I read the file using the following instructions:
file = jldopen("/My PATH/my_tree/mydata.jld", "r")
This first instruction executes correctly, but when I execute the following:
read(file, "A")
I got this error:
WARNING: type Base.Generator{Core.Int64,##1#2} not present in workspace; reconstructing
ERROR: MethodError: no method matching julia_type(::Void)
in _julia_type(::ASCIIString) at /root/.julia/v0.5/JLD/src/JLD.jl:966
in julia_type(::ASCIIString) at /root/.julia/v0.5/JLD/src/JLD.jl:32
in jldatatype(::JLD.JldFile, ::HDF5.HDF5Datatype) at /root/.julia/v0.5/JLD/src/jld_types.jl:672
in reconstruct_type(::JLD.JldFile, ::HDF5.HDF5Datatype, ::ASCIIString) at /root/.julia/v0.5/JLD/src/jld_types.jl:737
in jldatatype(::JLD.JldFile, ::HDF5.HDF5Datatype) at /root/.julia/v0.5/JLD/src/jld_types.jl:675
in read(::JLD.JldDataset) at /root/.julia/v0.5/JLD/src/JLD.jl:381
in read(::JLD.JldFile, ::ASCIIString) at /root/.julia/v0.5/JLD/src/JLD.jl:357
in eval(::Module, ::Any) at ./boot.jl:237
vector[i] for i in 10 creates a generator, which JLD happily writes to the file for you. You probably want an array, so wrap that expression in collect.
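For example, here is a minimal sketch of the corrected write and read, assuming the intended range is 1:10 rather than the single value 10, and using a made-up vector purely for illustration:

using JLD, HDF5

vector = rand(10)                                    # hypothetical data, just for illustration
file = jldopen("/MY PATH/mydata.jld", "w")
write(file, "A", collect(vector[i] for i in 1:10))   # collect materializes the generator into a Vector
close(file)

file = jldopen("/MY PATH/mydata.jld", "r")
A = read(file, "A")                                  # now returns a plain Vector{Float64}
close(file)

Because the stored value is an ordinary array rather than a Base.Generator, read no longer has to reconstruct a type that only existed in the writing session.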
Related
Is there a way to make python print to file for every iteration of a for loop instead of storing all in the buffer?
I am looping over a very large document to try and lemmatise it. Unfortunately, Python does not seem to print to the file for every line, but runs through the whole document before printing, which, given the size of my file, exceeds the memory... Before I chunk my document into more bite-sized chunks, I wondered if there was a way to force Python to print to file for every line. So far my code reads:
import spacy
nlp = spacy.load('de_core_news_lg')
fin = "input.txt"
fout = "output.txt"
#%%
with open(fin) as f:
    corpus = f.readlines()
corpus_lemma = []
for word in corpus:
    result = ' '.join([token.lemma_ for token in nlp(word)])
    corpus_lemma.append(result)
with open(fout, 'w') as g:
    for item in corpus_lemma:
        g.write(f'{item}')
To give credit for the code, it was kindly suggested here: How to do lemmatization on German text?
As described in: How to read a large file - line by line? If you do your lemmatisation inside the with block, Python will handle reading line by line using buffered I/O. In your case, it would look like:
import spacy
nlp = spacy.load('de_core_news_lg')
fin = "input.txt"
fout = "output.txt"
#%%
corpus_lemma = []
with open(fin) as f:
    for line in f:
        result = " ".join(token.lemma_ for token in nlp(line))
        corpus_lemma.append(result)
with open(fout, 'w') as g:
    for item in corpus_lemma:
        g.write(f"{item}")
Read text file as numpy array using np.loadtxt
I am trying to read a text file as a numpy array. For some reason one of the files is read fine, but an error (X = np.array(X, dtype) ValueError: setting an array element with a sequence.) is reported for another. The code is:
freq_chan = np.loadtxt(os.path.join(dirs,fil), skiprows = 6+int(no_nodes))
The row of the file that is read is:
45.000000000000 1.73145123922036E-002 -2.27352994577858E-004 0.0000000000000 0.0000000000000 0.0000000000000 0.0000000000000
and the row of the file that is not read is:
450.00000000000 1.75123936984107E-003 4.99078580749004E-004 -1.01870220257046E-005 -1.25748632064143E-005 4.53694668200015E-004 1.75279359420616E-003 1.06388230080026E-005 1.25165432922695E-005 -1.26393875391086E-003
What might be the reason for this? Thanks
I suspect that there is a problem with your delimiter character, at least in the first file. Try setting the delimiter argument explicitly. Take a look at this explanation.
Save a string vector as CSV in MATLAB
I have the following string array in MATLAB, built the following way:
labels = textread(nome_tecnicas_base, '%s');
for i = 1:size(labels)
    temp_vector = cell(1,10);
    [temp_vector{1:10}] = deal(labels{i});
    final_vector = horzcat(final_vector, temp_vector);
end
I want to save this vector with each string value separated by commas (e.g., as a CSV file) in a text file. I have tried several ways, but when I try to read it back with, for example, the textread function, I get the following error:
a = textread('labels-cpen-R.txt')
Error using dataread
Trouble reading number from file (row 1, field 1) ==> dct,dct,dct,dct,dct,dct,dct,dct,dct,dct,hierar
This is how my file was saved:
dct,dct,dct,dct,dct,dct,dct,dct,dct,dct,hierarch-sift,hierarch-sift,hierarch-sift,hierarch-sift,hierarch-sift,hierarch-sift,hierarch-sift,hierarch sift,hierarch-sift,hierarch sift,zernike,zernike,zernike,zernike,zernike,zernike,zernike,zernike,zernike,zernike,zernike2,zernike2,zernike2,zernike2,zernike2,zernike2,zernike2,zernike2,zernike2,zernike2,kpca,kpca,kpca,kpca,kpca,kpca,kpca,kpca,kpca,kpca,sift,sift,sift,sift,sift,sift,sift,sift,sift,sift,surf,surf,surf,surf,surf,surf,surf,surf,surf,surf,bayesianfusion0,bayesianfusion0,bayesianfusion0,bayesianfusion0,bayesianfusion0,bayesianfusion0,bayesianfusion0,bayesianfusion0,bayesianfusion0,bayesianfusion0,bks-fusion,bks-fusion,bks-fusion,bks-fusion,bks-fusion,bks-fusion,bks-fusion,bks-fusion,bks-fusion,bks-fusion,fusionvoting4,fusionvoting4,fusionvoting4,fusionvoting4,fusionvoting4,fusionvoting4,fusionvoting4,fusionvoting4,fusionvoting4,fusionvoting4,fusionvoting6,fusionvoting6,fusionvoting6,fusionvoting6,fusionvoting6,fusionvoting6,fusionvoting6,fusionvoting6,fusionvoting6,fusionvoting6,multiscale_voting,multiscale_voting,multiscale_voting,multiscale_voting,multiscale_voting,multiscale_voting,multiscale_voting,multiscale_voting,multiscale_voting,multiscale_voting,bks_rf_lvt,bks_rf_lvt,bks_rf_lvt,bks_rf_lvt,bks_rf_lvt,bks_rf_lvt,bks_rf_lvt,bks_rf_lvt,bks_rf_lvt,bks_rf_lvt,bks_svr_lvt,bks_svr_lvt,bks_svr_lvt,bks_svr_lvt,bks_svr_lvt,bks_svr_lvt,bks_svr_lvt,bks_svr_lvt,bks_svr_lvt,bks_svr_lvt,bks_svr_otsu,bks_svr_otsu,bks_svr_otsu,bks_svr_otsu,bks_svr_otsu,bks_svr_otsu,bks_svr_otsu,bks_svr_otsu,bks_svr_otsu,bks_svr_otsu,multiscale_bks_rf_lvt,multiscale_bks_rf_lvt,multiscale_bks_rf_lvt,multiscale_bks_rf_lvt,multiscale_bks_rf_lvt,multiscale_bks_rf_lvt,multiscale_bks_rf_lvt,multiscale_bks_rf_lvt,multiscale_bks_rf_lvt,multiscale_bks_rf_lvt,multiscale_bks_svr_lvt,multiscale_bks_svr_lvt,multiscale_bks_svr_lvt,multiscale_bks_svr_lvt,multiscale_bks_svr_lvt,multiscale_bks_svr_lvt,multiscale_bks_svr_lvt,multiscale_bks_svr_lvt,multiscale_bks_svr_lvt,multiscale_bks_svr_lvt
How can I save this vector and how can I read this file properly?
Try textscan for reading and fprintf for writing. From the MATLAB documentation:
fileID = fopen('data.csv');
C = textscan(fileID,'%f %f %f %f %u8 %f',...
    'Delimiter',',','EmptyValue',-Inf);
So in your case:
textscan(fileID, '%s', 'Delimiter', ',')
Edit: for writing data to a file, you can use fprintf with a file identifier:
fileID = fopen('data.csv', 'w');
fprintf(fileID, '%s,', data{1,1:end-1});
fprintf(fileID, '%s\n', data{1,end});
fclose(fileID)
Inline Python "for" to read lines from file and append to list
I have this code:
self.y_file = with open("y_file.txt", "r") as g:
    y_data.append(line for line in g.readlines())
But it doesn't seem to work, and I am more than sure the problem lies with 1) how I open the file (with) and that for loop. Any way I could make this work?
You can just open and read. If you want auto-close, you need to wrap it in a function.
self.y_file = open('y_file.txt').readlines()
Or:
def read_file(fname):
    with open(fname) as f:
        return f.readlines()

self.y_file = read_file('y_file.txt')
Pass a file name as a command line argument to GNU Octave script
In an executable Octave script, I want to pass the name of a file containing a matrix and make GNU Octave load that file's information as a matrix. How do I do that? Here is what the script should look like:
#! /usr/bin/octave -qf
arg_list = argv()
filename = argv{1} % Name of the file containing the matrix you want to load
load -ascii filename % Load the information
The file passed will contain a matrix of arbitrary size, say 2x3:
1 2 3
5 7 8
At the command line the script should be run as:
./myscript mymatrixfile
where mymatrixfile contains the matrix. This is what I get when I try to execute the script written above with Octave:
[Desktop/SCVT]$ ./octavetinker.m generators.xyz (05-14 10:41)
arg_list =
{
  [1,1] = generators.xyz
}
filename = generators.xyz
error: load: unable to find file filename
error: called from:
error: ./octavetinker.m at line 7, column 1
[Desktop/SCVT]$
where generators.xyz is the file containing the matrix I need.
This should work:
#!/usr/bin/octave -qf
arg_list = argv ();
filename = arg_list{1};
load("-ascii", filename);
When you wrote the line load filename, you told the load function to load the file named "filename"; that is to say, you did the equivalent of load('filename');. In both MATLAB and Octave, a function foo followed by a space and then the word bar means that bar is passed to foo as a string. This is true even if bar is a defined variable in your workspace.