Creative way to create a MATLAB array from a text file with multiple headers

I am trying to parse a molecular dynamics dump file which has headers printed periodically. Between two successive headers I have data in column format (the number of data rows is not guaranteed to be the same between any two successive headers) which I want to store and post-process. Is there a way I can do this without excessive use of for loops?
The basic gist of it is:
ITEM: TIMESTEP
0
ITEM: NUMBER OF ENTRIES
1079
ITEM: BOX BOUNDS xy xz yz ff ff pp
-1e+06 1e+06 0
-1e+06 1e+06 0
-1e+06 1e+06 0
ITEM: ENTRIES index c_1[1] c_1[2] c_2[1] c_2[2] c_2[3] c_2[4] c_2[5]
1 1 94 0.0399999 0 0.171554 -0.00124379 0
2 1 106 0.0399999 0 -0.0638316 0.116503 0
3 1 204 0.0299999 0 -0.124742 0.0290103 0
4 1 675 0.0299999 0 0.0245382 -0.116731 0
5 2 621 0.03 0 0.0328324 0.00185942 0
6 2 656 0.04 0 -0.0315086 0.016237 0
7 2 671 0.04 0 -0.00291159 -0.0169882 0
8 3 76 0.03 0 0.01775 0.0100646 0
9 3 655 0.03 0 0.00434063 -0.00750336 0
.
.
.
.
.
1076 678 692 100000 0 -0.222481 -1.44632e-06 0
1077 679 692 100000 0 -0.00232206 -8.05951e-09 0
1078 682 691 100000 0 0.0753935 -2.89438e-07 0
1079 687 692 100000 0 -0.0153246 -2.51076e-08 0
ITEM: TIMESTEP
1000
ITEM: NUMBER OF ENTRIES
1078
ITEM: BOX BOUNDS xy xz yz ff ff pp
-1e+06 1e+06 0
-1e+06 1e+06 0
-1e+06 1e+06 0
ITEM: ENTRIES index c_1[1] c_1[2] c_2[1] c_2[2] c_2[3] c_2[4] c_2[5]
1 1 94 0.0399997 0 1.3535 -0.00981109 0
2 1 106 0.0399986 0 -6.36969 11.6275 0
3 1 204 0.0299893 0 -236.114 54.9339 0
4 1 675 0.0299998 0 0.148064 -0.704365 0
.
.
.
.
TIA!

You don't need to write a single for loop to parse this file; MATLAB writes them for you:
[headers, tables] = parseTables('tables.txt')
...
function [headers, tables] = parseTables(filename)
    content = fileread(filename);                              % read whole file
    lines = splitlines(content);                               % split lines
    values = cellfun(@str2num, lines, 'UniformOutput', false); % convert lines to float, when possible
    headerLines = cellfun(@isempty, values);                   % lines with no floats
    headers = lines(headerLines);                              % extract headers
    startLines = find(headerLines)+1;                          % indices of first lines of tables
    endLines = [startLines(2:end)-1; length(values)];          % indices of last lines of tables
    tables = arrayfun(@(i, j) cell2mat(values(i:j)), ...
        startLines, endLines, 'UniformOutput', false);         % merge table rows to single matrix
end
The results will be stored in cell arrays:
headers =
8×1 cell array
{'ITEM: TIMESTEP' }
{'ITEM: NUMBER OF ENTRIES' }
{'ITEM: BOX BOUNDS xy xz yz ff ff pp' }
{'ITEM: ENTRIES index c_1[1] c_1[2] c_2[1] c_2[2] c_2[3] c_2[4] c_2[5] '}
{'ITEM: TIMESTEP' }
{'ITEM: NUMBER OF ENTRIES' }
{'ITEM: BOX BOUNDS xy xz yz ff ff pp' }
{'ITEM: ENTRIES index c_1[1] c_1[2] c_2[1] c_2[2] c_2[3] c_2[4] c_2[5] '}
tables =
8×1 cell array
{[ 0]}
{[ 1079]}
{ 3×3 double}
{13×8 double}
{[ 1000]}
{[ 1078]}
{ 3×3 double}
{ 4×8 double}
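If you then want the data grouped per timestep, a possible follow-up (a sketch assuming the four header/table pairs per timestep shown above; the variable names are just illustrative):
nPerStep  = 4;                                           % TIMESTEP, NUMBER OF ENTRIES, BOX BOUNDS, ENTRIES
timesteps = cellfun(@(t) t(1), tables(1:nPerStep:end));  % 0, 1000, ...
entries   = tables(4:nPerStep:end);                      % one N-by-8 data matrix per timestep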

Related

AWK loop to parse file

I have trouble understanding an awk command that I want to change slightly (but can't, because I don't understand the code well enough).
This awk command merges text files that each have 6 columns. In the output file, the first column is a merge of the first columns of all input files. The remaining columns of the output file are the other columns of the input files, with blanks added where needed so they still line up with the first-column values.
First, I would like to parse only some specific columns from these files, not all 6. I couldn't figure out where to specify that in the awk loop.
Secondly, the column headers are no longer the first row of the output file. It would be nice to keep them as the header of the output file as well.
Thirdly, I need to know which file the data comes from. I know that the command takes the files in the order they appear with ls -lh *mosdepth.summary.txt, so I can deduce that the first 6 columns are from file 1, the next 6 from file 2, etc. However, I would like this information to appear automatically in the output file, to reduce the human error I might introduce by inferring the origin of the data.
Here is the awk command
awk -F"\t" -v OFS="\t" 'F!=FILENAME { FNUM++; F=FILENAME }
{ COL[$1]++; C=$1; $1=""; A[C, FNUM]=$0 }
END {
for(X in COL)
{
printf("%s", X);
for(N=1; N<=FNUM; N++) printf("%s", A[X, N]);
printf("\n");
}
}' *mosdepth.summary.txt > Se_combined.coverage.txt
The input files look like this:
cat file1
chrom length bases mean min max
contig_1_pilon 223468 603256 2.70 0 59
contig_2_pilon 197061 1423255 7.22 0 102
contig_6_pilon 162902 1372153 8.42 0 80
contig_19_pilon 286502 1781926 6.22 0 243
contig_29_pilon 263348 1251842 4.75 0 305
contig_32_pilon 291449 1819758 6.24 0 85
contig_34_pilon 51310 197150 3.84 0 29
contig_37_pilon 548146 4424483 8.07 0 399
contig_41_pilon 7529 163710 21.74 0 59
cat file2
chrom length bases mean min max
contig_2_pilon 197061 2098426 10.65 0 198
contig_19_pilon 286502 1892283 6.60 0 233
contig_32_pilon 291449 2051790 7.04 0 172
contig_37_pilon 548146 6684861 12.20 0 436
contig_42_pilon 14017 306188 21.84 0 162
contig_79_pilon 17365 883750 50.89 0 1708
contig_106_pilon 513441 6917630 13.47 0 447
contig_124_pilon 187518 374354 2.00 0 371
contig_149_pilon 1004879 13603882 13.54 0 801
The current (wrong) output looks like this:
contig_149_pilon 1004879 13603882 13.54 0 801
contig_79_pilon 17365 883750 50.89 0 1708
contig_1_pilon 223468 603256 2.70 0 59
contig_106_pilon 513441 6917630 13.47 0 447
contig_2_pilon 197061 1423255 7.22 0 102 197061 2098426 10.65 0 198
chrom length bases mean min max length bases mean min max
contig_37_pilon 548146 4424483 8.07 0 399 548146 6684861 12.20 0 436
contig_41_pilon 7529 163710 21.74 0 59
contig_6_pilon 162902 1372153 8.42 0 80
contig_42_pilon 14017 306188 21.84 0 162
contig_29_pilon 263348 1251842 4.75 0 305
contig_19_pilon 286502 1781926 6.22 0 243 286502 1892283 6.60 0 233
contig_124_pilon 187518 374354 2.00 0 371
contig_34_pilon 51310 197150 3.84 0 29
contig_32_pilon 291449 1819758 6.24 0 85 291449 2051790 7.04 0 172
EDIT:
Thanks to input from several users, I managed to answer my points 1 and 3 like this:
awk -F"\t" -v OFS="\t" 'F!=FILENAME { FNUM++; F=FILENAME }
{ B[FNUM]=F; COL[$1]; C=$1; $1=""; A[C, FNUM]=$4}
END {
printf("%s\t", "contig")
for (N=1; N<=FNUM; N++)
{ printf("%.5s\t", B[N])}
printf("\n")
for(X in COL)
{
printf("%s\t", X);
for(N=1; N<=FNUM; N++)
{ printf("%s\t", A[X, N]);
}
printf("\n");
}
}' file1.txt file2.txt > output.txt
with output
contig file1 file2
contig_149_pilon 13.54
contig_79_pilon 50.89
contig_1_pilon 2.70
contig_106_pilon 13.47
contig_2_pilon 7.22 10.65
chrom mean mean
contig_37_pilon 8.07 12.20
contig_41_pilon 21.74
contig_6_pilon 8.42
contig_42_pilon 21.84
contig_29_pilon 4.75
contig_19_pilon 6.22 6.60
contig_124_pilon 2.00
contig_34_pilon 3.84
contig_32_pilon 6.24 7.04
Awk processes files in records, where the records are separated by the record separator RS. Each record is split in fields where the field separator is defined by the variable FS that the -F flag can define.
In the case of the program presented in the OP, the record separator is the default value which is the <newline>-character and the field separator is set to be the <tab>-character.
Awk programs are generally written as a sequence of pattern-action pairs of the form pattern { action }. These pairs are processed in order, and an action is performed whenever its pattern evaluates to a non-zero number or a non-empty string.
In the current program there are three such pattern-action pairs:
F!=FILENAME { FNUM++; F=FILENAME }: This states that if the value of F differs from the FILENAME currently being processed, then increase FNUM by one and update F to the current FILENAME.
In the end, this is the same as just checking if we are processing a new file or not. The equivalent version of this would be:
(FNR==1) { FNUM++ }
which reads: If we are processing the first record of the current file (FNR), then increase the file count FNUM.
{ COL[$1]++; C=$1; $1=""; A[C, FNUM]=$0 }: As there is no pattern, it defaults to true, so the action runs for every record/line. It increments, in the associative array COL (key-value pairs), the number of times the value in the first column has been seen. It memorizes the first field in C, blanks out the first field, and stores the rest of the record in array A. So if a record of the second file reads "foo A B C D" and foo has already been seen 3 times, then COL["foo"] will equal 4 and A["foo",2] will read " A B C D".
END{ ... }: This is a special pattern-action pair. END indicates that this action should only be executed at the end, after all files have been processed. What the END block does is straightforward: for every first-column value seen, it prints that value followed by the stored record from each file, including empty entries for files where the value did not appear.
In the end, this entire script can be simplified to the following:
awk 'BEGIN{ FS="\t" }
{ file_list[FILENAME]
key_list[$1]
record_list[FILENAME,$1]=$0 }
END { for (key in key_list)
for (fname in file_list)
print ( record_list[fname,key] ? record_list[fname,key] : key )
}' file1 file2 file3 ...
Assuming your '*mosdepth.summary.txt' files look like the following:
$ ls *mos*txt
1mosdepth.summary.txt 2mosdepth.summary.txt 3mosdepth.summary.txt
And contents are:
$ cat 1mosdepth.summary.txt
chrom length bases mean min max
contig_1_pilon 223468 1181176 5.29 0 860
contig_2_pilon 197061 2556215 12.97 0 217
contig_6_pilon 162902 2132156 13.09 0 80
$ cat 2mosdepth.summary.txt
chrom length bases mean min max
contig_19_pilon 286502 2067244 7.22 0 345
contig_29_pilon 263348 2222566 8.44 0 765
contig_32_pilon 291449 2671881 9.17 0 128
contig_34_pilon 51310 525393 10.24 0 47
$ cat 3mosdepth.summary.txt
chrom length bases mean min max
contig_37_pilon 548146 6652322 12.14 0 558
contig_41_pilon 7529 144989 19.26 0 71
The following awk command might be appropriate:
$ awk -v target_cols="1 2 3 4 5 6" 'BEGIN{split(target_cols, cols," ")} \
NR==1{printf "%s ", "file#"; for (i=1;i<=length(cols);i++) {printf "%s ", $cols[i]} print ""} \
FNR==1{fnbr++} \
FNR>=2{printf "%s ", fnbr; for (i=1;i<=length(cols);i++) {printf "%s ", $cols[i]} print ""}' *mos*txt | column -t
Output:
file# chrom length bases mean min max
1 contig_1_pilon 223468 1181176 5.29 0 860
1 contig_2_pilon 197061 2556215 12.97 0 217
1 contig_6_pilon 162902 2132156 13.09 0 80
2 contig_19_pilon 286502 2067244 7.22 0 345
2 contig_29_pilon 263348 2222566 8.44 0 765
2 contig_32_pilon 291449 2671881 9.17 0 128
2 contig_34_pilon 51310 525393 10.24 0 47
3 contig_37_pilon 548146 6652322 12.14 0 558
3 contig_41_pilon 7529 144989 19.26 0 71
Alternatively, the following will output the filename rather than file#:
$ awk -v target_cols="1 2 3 4 5 6" 'BEGIN{split(target_cols, cols," ")} \
NR==1{printf "%s ", "fname"; for (i=1;i<=length(cols);i++) {printf "%s ", $cols[i]} print ""} \
FNR==1{fnbr=FILENAME} \
FNR>=2{printf "%s ", fnbr; fnbr="-"; for (i=1;i<=length(cols);i++) {printf "%s ", $cols[i]} print ""}' *mos*txt | column -t
Output:
fname chrom length bases mean min max
1mosdepth.summary.txt contig_1_pilon 223468 1181176 5.29 0 860
- contig_2_pilon 197061 2556215 12.97 0 217
- contig_6_pilon 162902 2132156 13.09 0 80
2mosdepth.summary.txt contig_19_pilon 286502 2067244 7.22 0 345
- contig_29_pilon 263348 2222566 8.44 0 765
- contig_32_pilon 291449 2671881 9.17 0 128
- contig_34_pilon 51310 525393 10.24 0 47
3mosdepth.summary.txt contig_37_pilon 548146 6652322 12.14 0 558
- contig_41_pilon 7529 144989 19.26 0 71
With either command, the target_cols="1 2 3 4 5 6" specifies the targeted columns to extract.
For example, target_cols="1 2 3" will produce:
fname chrom length bases
1mosdepth.summary.txt contig_1_pilon 223468 1181176
- contig_2_pilon 197061 2556215
- contig_6_pilon 162902 2132156
2mosdepth.summary.txt contig_19_pilon 286502 2067244
- contig_29_pilon 263348 2222566
- contig_32_pilon 291449 2671881
- contig_34_pilon 51310 525393
3mosdepth.summary.txt contig_37_pilon 548146 6652322
- contig_41_pilon 7529 144989
target_cols="4 5 6" will produce:
fname mean min max
1mosdepth.summary.txt 5.29 0 860
- 12.97 0 217
- 13.09 0 80
2mosdepth.summary.txt 7.22 0 345
- 8.44 0 765
- 9.17 0 128
- 10.24 0 47
3mosdepth.summary.txt 12.14 0 558
- 19.26 0 71

MATLAB: Remove sub-arrays from a multidimensional array into an array of ones

I would like to construct a function
[B, ind] = extract_ones(A)
which removes some sub-arrays from a binary array A in arbitrary dimensions, such that the remaining array B is the largest possible array with only 1's, and I also would like to record in ind where each of the 1's in B comes from.
Example 1
Assume A is a 2-D array as shown
A =
1 1 0 0 0 1
1 1 1 0 1 1
0 0 0 1 0 1
1 1 0 1 0 1
1 1 0 1 0 1
1 1 1 1 1 1
After removing A(3,:) and A(:,3:5), we have the output B
B =
1 1 1
1 1 1
1 1 1
1 1 1
1 1 1
which is the largest all-ones array obtainable by removing rows and columns of A.
As the fifteen 1's of B correspond to
A(1,1) A(1,2) A(1,6)
A(2,1) A(2,2) A(2,6)
A(4,1) A(4,2) A(4,6)
A(5,1) A(5,2) A(5,6)
A(6,1) A(6,2) A(6,6)
respectively, or equivalently
A(1) A(7) A(31)
A(2) A(8) A(32)
A(4) A(10) A(34)
A(5) A(11) A(35)
A(6) A(12) A(36)
so, the output ind looks like (of course ind's shape does not matter):
ind = [1 2 4 5 6 7 8 10 11 12 31 32 34 35 36]
Example 2
If the input A is constructed by
A = ones(6,3,4,3);
A(2,2,2,2) = 0;
A(4,1,3,3) = 0;
A(1,1,4,2) = 0;
A(1,1,4,1) = 0;
Then, by deleting the minimal cuboids containing A(2,2,2,2), A(4,1,3,3), A(1,1,4,2) and A(1,1,4,1), i.e. after deleting these entries
A(2,:,:,:)
A(:,1,:,:)
Then the remaining array B will be composed of 1's only, and the ones in B correspond to
A([1,3:6],2:3,1:4,1:3)
So the output ind lists these subscripts transformed into linear indices, i.e.
ind = [7 9 10 11 12 13 15 16 17 18 25 27 28 29 30 31 33 34 35 36 43 45 46 47 48 49 51 52 53 54 61 63 64 65 66 67 69 70 71 72 79 81 82 83 84 85 87 88 89 90 97 99 100 101 102 103 105 106 107 108 115 117 118 119 120 121 123 124 125 126 133 135 136 137 138 139 141 142 143 144 151 153 154 155 156 157 159 160 161 162 169 171 172 173 174 175 177 178 179 180 187 189 190 191 192 193 195 196 197 198 205 207 208 209 210 211 213 214 215 216]
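For reference, such an ind can be obtained from the kept subscript ranges with an index bookkeeping array (the same trick I use in my attempts below):
idx = reshape(1:numel(A), size(A));                 % linear index of every element of A
ind = reshape(idx([1,3:6], 2:3, 1:4, 1:3), 1, []);  % linear indices of the surviving ones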
The actual array I need to process this way is 8-D, and it has to be processed more than once, so can anyone give me advice on how to write a program that does this task fast?
My work so far [Added at 2 am (GMT-4), 2nd Aug 2017]
My idea was to delete the sub-arrays with the largest proportion of zeros, one at a time. Here is my work so far:
Inds = reshape(1:numel(A),size(A)); % Keep track of which 1's survive.
cont = true;
while cont
    sz = size(A);
    zero_percentage = 0;
    Test_location = [];
    % These nested for loops determine which sub-array of A has the
    % maximum proportion of zeros.
    for J = 1 : ndims(A)
        for K = 1 : sz(J)
            % Location is in the form of (_,_,_,...,_)
            % where the J-th blank is K and the other blanks are colons.
            Location = strcat('(',repmat(':,',1,(J-1)),int2str(K),repmat(',:',1,(ndims(A)-J)),')');
            Test_array = eval(strcat('A',Location,';'));
            N = numel(Test_array);
            while numel(Test_array) ~= 1
                Test_array = sum(Test_array);
            end
            test_zero_percentage = 1 - (Test_array/N);
            if test_zero_percentage > zero_percentage
                zero_percentage = test_zero_percentage;
                Test_location = Location;
            end
        end
    end
    % Delete the sub-array with the maximum proportion of zeros
    eval(strcat('A',Test_location,'= [];'))
    eval(strcat('Inds',Test_location,'= [];'))
    % Determine whether there are still zeros in A. If there are, continue the while loop.
    cont = A;
    while numel(cont) ~= 1
        cont = prod(cont);
    end
    cont = ~logical(cont);
end
But I encountered two problems:
1) It may not be efficient to check all sub-arrays in all dimensions one by one.
2) The result does not contain the largest possible rectangle of ones. For example, I tested my work using a 2-dimensional binary array A
A =
0 0 0 1 1 0
0 1 1 0 1 1
1 0 1 1 1 1
1 0 0 1 1 1
0 1 1 0 1 1
0 1 0 0 1 1
1 0 0 0 1 1
1 0 0 0 0 0
It should return this result:
B =
1 1
1 1
1 1
1 1
1 1
1 1
Inds =
34 42
35 43
36 44
37 45
38 46
39 47
But instead, the code returned this:
B =
1 1 1
1 1 1
1 1 1
Inds =
10 34 42
13 37 45
14 38 46
My work so far 2 [Added at 12 noon (GMT-4), 2nd Aug 2017]
Here is my current amendment. It may not provide the best result, only a fairly good approximation to the problem, but it does not give an empty Inds. I am still hoping that there is a better solution.
function [B, Inds] = Finding_ones(A)
    Inds = reshape(1:numel(A),size(A)); % Keep track of which 1's survive.
    sz0 = size(A);
    cont = true;
    while cont
        sz = size(A);
        zero_percentage = 0;
        Test_location = [];
        % These nested for loops determine which sub-array of A has the
        % maximum proportion of zeros.
        for J = 1 : ndims(A)
            for K = 1 : sz(J)
                % Location is in the form of (_,_,_,...,_)
                % where the J-th blank is K and the other blanks are colons.
                Location = strcat('(',repmat(':,',1,(J-1)),int2str(K),repmat(',:',1,(ndims(A)-J)),')');
                Test_array = eval(strcat('A',Location,';'));
                N = numel(Test_array);
                Test_array = sum(Test_array(:));
                test_zero_percentage = 1 - (Test_array/N);
                if test_zero_percentage > zero_percentage
                    eval(strcat('Testfornumel = numel(A',Location,');'))
                    if Testfornumel < numel(A) % Prevent A from becoming empty
                        zero_percentage = test_zero_percentage;
                        Test_location = Location;
                    end
                end
            end
        end
        % Delete the sub-array with the maximum proportion of zeros
        eval(strcat('A',Test_location,'= [];'))
        eval(strcat('Inds',Test_location,'= [];'))
        % Determine whether there are still zeros in A. If there are, continue the while loop.
        cont = A;
        while numel(cont) ~= 1
            cont = prod(cont);
        end
        cont = ~logical(cont);
    end
    B = A;
    % command = 'i1, i2, ... ,in'
    % where n is the number of dimensions of A.
    command = 'i1';
    for J = 2 : length(sz0)
        command = strcat(command,',i',int2str(J));
    end
    Inds = reshape(Inds,numel(Inds),1); %#ok<NASGU>
    eval(strcat('[',command,'] = ind2sub(sz0,Inds);'))
    % Reform Inds into a 2-D matrix in which each column indicates the location of
    % a 1 originating from A.
    Inds = squeeze(eval(strcat('[',command,']')));
    Inds = reshape(Inds',length(sz0),numel(Inds)/length(sz0));
end
It seems a difficult problem to solve, since the order of deletion can change the final result a lot. If, in your first example, you start by deleting all the columns that contain a 0, you don't end up with the desired result.
The code below removes the row or column with the most zeros and keeps going until only ones remain. It keeps track of the rows and columns that are deleted in order to find the indices of the remaining ones.
function [B,ind] = extract_ones( A )
    if ~islogical(A),A=(A==1);end
    if ~any(A(:)),B=[];ind=[];return,end
    B=A;cdel=[];rdel=[];
    while ~all(B(:))
        [I,J] = ind2sub(size(B),find(B==0));
        ih=histcounts(I,[0.5:1:size(B,1)+0.5]); %zeros in rows
        jh=histcounts(J,[0.5:1:size(B,2)+0.5]); %zeros in columns
        if max(ih)>max(jh)
            idxr=find(ih==max(ih),1,'first');
            B(idxr,:)=[];
            %store deletion
            rdel(end+1)=idxr+sum(rdel<=idxr);
        elseif max(ih)==max(jh)
            idxr=find(ih==max(ih),1,'first');
            idxc=find(jh==max(jh),1,'first');
            B(idxr,:)=[];
            B(:,idxc)=[];
            %store deletions
            rdel(end+1)=idxr+sum(rdel<=idxr);
            cdel(end+1)=idxc+sum(cdel<=idxc);
        else
            idxc=find(jh==max(jh),1,'first');
            B(:,idxc)=[];
            %store deletions
            cdel(end+1)=idxc+sum(cdel<=idxc);
        end
    end
    A(rdel,:)=0;
    A(:,cdel)=0;
    ind=find(A);
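A usage sketch (not part of the original answer), assuming A is the binary test matrix from the question; ind returns linear indices into the original A:
[B, ind] = extract_ones(A);        % greedy row/column removal
[r, c]   = ind2sub(size(A), ind);  % convert back to row/column subscripts if needed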
Second try: start with a seed point and try to grow the block in all dimensions. The result is the start and end point of the grown block in the matrix.
function [ res ] = seed_grow( A )
    if ~islogical(A),A=(A==1);end
    if ~any(A(:)),res={};return,end
    go = true;
    dims=size(A);
    ind = cell([1 length(dims)]); %cell to store find results
    seeds=A;maxmat=0;
    while go %main loop to remove all possible seeds
        [ind{:}]=find(seeds,1,'first');
        S = [ind{:}]; %the seed
        St = [ind{:}]; %the end of the seed
        go2=true;
        val_dims=1:length(dims);
        while go2 %loop to grow each dimension
            D=1;
            while D<=length(val_dims) %add one to each dimension
                St(val_dims(D))=St(val_dims(D))+1;
                I={};
                for ct = 1:length(S),I{ct}=S(ct):St(ct);end %generate indices
                if St(val_dims(D))>dims(val_dims(D))
                    res=false; %outside matrix
                else
                    res=A(I{:});
                end
                if ~all(res(:)) %invalid addition to dimension
                    St(val_dims(D))=St(val_dims(D))-1; %undo
                    val_dims(D)=[]; D=D-1; %do not try again
                    if isempty(val_dims),go2=false;end %end of growth
                end
                D=D+1;
            end
        end
        %evaluate the result
        mat = prod((St+1)-S); %size of matrix
        if mat>maxmat
            res={S,St};
            maxmat=mat;
        end
        %tried to expand, now remove seed option
        for ct = 1:length(S),I{ct}=S(ct):St(ct);end %generate indices
        seeds(I{:})=0;
        if ~any(seeds),go=0;end
    end
end
I tested it using your matrix:
A = [0 0 0 1 1 0
0 1 1 0 1 1
1 0 1 1 1 1
1 0 0 1 1 1
0 1 1 0 1 1
0 1 0 0 1 1
1 0 0 0 1 1
1 0 0 0 0 0];
[ res ] = seed_grow( A );
for ct = 1:length(res),I{ct}=res{1}(ct):res{2}(ct);end %generate indices
B=A(I{:});
idx = reshape(1:numel(A),size(A));
idx = idx(I{:});
And got the desired result:
B =
1 1
1 1
1 1
1 1
1 1
1 1
idx =
34 42
35 43
36 44
37 45
38 46
39 47

Matlab: Creating a binned RGB histogram [duplicate]

This question already has an answer here:
Content-Based Image Retrieval and Precision-Recall graphs using Color Histograms in MATLAB
(1 answer)
Closed 7 years ago.
I want to implement the following Matlab function:
function hist = binnedRgbHist(im, numChannelBins)
Given an image im and a number numChannelBins between 1 and 256, it should create a histogram of size (numChannelBins)^3.
For example, if numChannelBins is 2, it should produce the following 8-sized histogram:
Number of pixels with R < 128, G < 128, B < 128
Number of pixels with R < 128, G < 128, B >= 128
Number of pixels with R < 128, G >= 128, B < 128
Number of pixels with R < 128, G >= 128, B >= 128
Number of pixels with R >= 128, G < 128, B < 128
Number of pixels with R >= 128, G < 128, B >= 128
Number of pixels with R >= 128, G >= 128, B < 128
Number of pixels with R > 128, G >= 128, B >= 128
It is like creating a cube where each axis represents one of R, G and B, and each axis is divided into 2 bins, so there are 8 bins in the cube altogether.
My questions:
Is there a built-in function for it?
If not, what is the better implementation in terms of runtime (possibly using the GPU)? Is it better to iterate over the pixels once and build the histogram manually, or to iterate over the bins and count, for each bin, the number of pixels that satisfy its conditions?
accumarray is well suited for this. Let
im: input image;
N: number of bins per color component.
Then
result = accumarray(reshape(permute(ceil(im/255*N), [3 1 2]), 3, []).', 1, [N N N]);
How it works
ceil(im/255*N) quantizes each color value to 1, 2, ..., N.
reshape(permute(..., [3 1 2]), 3, []).' transforms the quantized image into a three-column matrix where each row is a pixel and each column is a (quantized) color component.
accumarray(..., 1, [N N N]) treats each row of that matrix as a 3D index and counts how many times each index appears, filling indices that don't appear with a 0.
Example 1
Data:
>> N = 2;
>> im = randi(256,4,5,3)
im(:,:,1) =
113 152 157 65 229
138 71 215 39 41
13 108 230 160 153
142 128 125 220 214
im(:,:,2) =
208 215 182 27 230
205 161 8 95 180
225 53 73 129 31
103 97 160 83 255
im(:,:,3) =
242 29 185 89 55
202 225 156 174 96
160 197 35 87 113
244 176 146 85 120
Result:
result(:,:,1) =
1 1
3 4
result(:,:,2) =
2 4
3 2
It can be checked for example that there is only 1 pixel with all R,G,B less than 128.
Example 2
Data:
>> im = repmat(150,20,30,3);
>> N = 4;
Result:
result(:,:,1) =
0 0 0 0
0 0 0 0
0 0 0 0
0 0 0 0
result(:,:,2) =
0 0 0 0
0 0 0 0
0 0 0 0
0 0 0 0
result(:,:,3) =
0 0 0 0
0 0 0 0
0 0 600 0
0 0 0 0
result(:,:,4) =
0 0 0 0
0 0 0 0
0 0 0 0
0 0 0 0
In this case all pixels belong to the same 3D bin, as seen above.
I see @Luis Mendo gave a great one-liner solution as I was writing this. In case it provides some deeper intuition, my solution makes use of histcounts and accumarray:
im = randi([1 255],[10,5,3]); %// A random 10-by-5 "image"
numChannelBins = 2;
[~,~,binR]=histcounts(im(:,:,1),[1 ceil((1:numChannelBins)*(255/numChannelBins))]);
[~,~,binG]=histcounts(im(:,:,2),[1 ceil((1:numChannelBins)*(255/numChannelBins))]);
[~,~,binB]=histcounts(im(:,:,3),[1 ceil((1:numChannelBins)*(255/numChannelBins))]);
hist=accumarray([binR(:) binG(:) binB(:)],1,[numChannelBins,numChannelBins,numChannelBins])
Explanation:
the three calls to histcounts bin the red, green, blue pixels separately -- the third output [~,~,binX] of histcounts gives the bin index for each pixel
accumarray accumulates all the unique index triplets
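For reference, a minimal sketch (not from either answer) that wraps the accumarray approach into the function signature asked for in the question, assuming im holds RGB values in the range 0-255:
function hist = binnedRgbHist(im, numChannelBins)
    % Quantize each channel to 1..numChannelBins, then count index triplets.
    % (Note: the output name 'hist' shadows the built-in function of the same name.)
    N = numChannelBins;
    q = ceil(double(im)/255*N);              % quantized channels, 1..N
    q = max(q, 1);                           % guard so zero-valued pixels fall into bin 1
    subs = reshape(permute(q, [3 1 2]), 3, []).';  % one row per pixel: [R G B] bin indices
    hist = accumarray(subs, 1, [N N N]);     % numChannelBins^3 counts
end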

how to generate an image in the format of the mnist database?

I need to make a handwritten image to be tested with a neural network in Matlab. When I look at the data contained in the training images from MNIST, I see that it is an array of grayscale values like:
Columns 343 through 351
0 -0.0240 0.4002 0.6555 0.0235 -0.0062 0 0 0
Columns 352 through 360
0 0 0 0 0 0 0 0 0
Columns 361 through 369
0 0 0 -0.0079 0.1266 0.3272 -0.0233 0.0005
corresponding to a 20x20 image, unrolled into a 1*400 dimensional array.
I have downloaded an image in jpeg format and did the following:
im=imread('image.jpg');
gi=rgb2gray(im);
gi=gi(:);
gi=gi';
That generates an array gi shown as <1*400 uint8>; the uint8 part does not appear in the MNIST samples when I load them in Matlab. When I inspect my array, it contains the following values:
Columns 289 through 306
58 105 128 133 142 131 76 21 1 0 3 0 2 4 17 12 7 0
Columns 307 through 324
1 15 42 75 97 105 98 73 31 4 1 0 0 0 0 2 4 3
Columns 325 through 342
0 0 1 4 21 37 55 59 46 26 9 0 0 0 0 0 0 0
Columns 343 through 360
1 1 0 0 0 1 7 14 21 21 14 5 0 0 0 0 0 0
Columns 361 through 378
0 0 0 0 0 0 0 0 0 1 2 1 0 0 0 2 0 0
When I visualize them everything looks fine, but when I run my program the following message appears:
??? Error using ==> mtimes
MTIMES is not fully supported for integer classes. At least one input must be scalar.
Error in ==> predict at 15
h1 = sigmoid([ones(m, 1) X] * Theta1');
Error in ==> ex4 at 241
pred = predict(Theta1, Theta2, gi);
This does not occur when I test my program with a random sample of the MNIST data; any help?
You could try something like this:
imfile = 'image.jpg';
im = double(rgb2gray(imread(imfile))); % double and convert to grayscale
im = imresize(im,[20,20]); % change to 20 by 20 dimension
im = im(:); % unroll matrix to vector
im = im./max(im); % normalise to [0,1]
Note that the MNIST dataset is intended to require minimal preprocessing, and the images were originally black and white (bilevel), whereas you are using a colour image. MNIST also applies normalisation and other preprocessing to produce its nice 28 by 28 images, so my brief snippet above is unlikely to be anywhere near as good as the MNIST data; it is just intended to fix your error.
Your specific error is most likely because you don't convert the image with double().
You may also get further errors because your code needs the right dimensions, which can be achieved using imresize.
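One further detail, as an assumption on my part (not verified against your full code): your original gi was a 1-by-400 row vector, while im(:) above gives a 400-by-1 column, so predict probably expects one example per row and a transpose may be needed:
im = im';                            % 1-by-400 row vector: one example per row
pred = predict(Theta1, Theta2, im);  % Theta1/Theta2 as in your ex4 script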
More information on MNIST dataset here:
http://yann.lecun.com/exdb/mnist/

Perl Nested Loop: Arrays - Calculating Minimum Distance

temp.bgf
ATOM 218 CB ASN 1 34 -7.84400 -9.19900 -5.03100 C_3 4 0 -0.18000 0 0
ATOM 221 CG ASN 1 34 -7.37700 -7.83400 -4.55200 C_R 3 0 0.55000 0 0
ATOM 226 C ASN 1 34 -9.18200 -10.62100 -6.58300 C_R 3 0 0.51000 0 0
ATOM 393 CB THR 2 69 -3.33000 -7.97700 -7.72000 C_3 4 0 0.14000 0 0
ATOM 397 CG2 THR 2 69 -4.75300 -8.54400 -7.67200 C_3 4 0 -0.27000 0 0
ATOM 401 C THR 2 69 -2.58000 -9.55700 -5.85500 C_R 3 0 0.51000 0 0
ATOM 417 CB THR 2 71 1.99100 -9.86800 -2.77000 C_3 4 0 0.14000 0 0
ATOM 421 CG2 THR 2 71 2.86300 -10.15400 -1.55700 C_3 4 0 -0.27000 0 0
ATOM 425 C THR 2 71 -0.19100 -10.14200 -1.62900 C_R 3 0 0.51000 0 0
ATOM 492 CB CYS 2 77 -5.17100 -14.77100 4.04000 C_3 4 0 -0.11000 0 0
ATOM 495 SG CYS 2 77 -6.29600 -14.88500 2.59500 S_3 2 2 -0.23000 0 0
ATOM 497 C CYS 2 77 -4.65100 -13.75800 6.12000 C_R 3 0 0.51000 0 0
ATOM 2071 CB SER 7 316 -3.87300 -2.15900 1.02300 C_3 4 0 0.05000 0 0
ATOM 2076 C SER 7 316 -4.79700 -1.16500 -1.10800 C_R 3 0 0.51000 0 0
target.bgf
ATOM 575 CB ASP 2 72 -2.80100 -7.45000 -2.09400 C_3 4 0 -0.28000 0 0
ATOM 578 CG ASP 2 72 -3.74900 -6.45900 -1.31600 C_R 3 0 0.62000 0 0
ATOM 581 C ASP 2 72 -3.19300 -9.62400 -0.87900 C_R 3 0 0.51000 0 0
I have two files of data. The first file contains the residues I want to calculate distances to. The second file contains the coordinates of the target residue.
I want to calculate the minimum distance between the two (i.e. between ASP and the residues in temp.bgf). I haven't been able to come up with a good way to store the x, y, z values and compare the distances in temp.bgf.
There have been questions as to how the calculation should be done, so here is the idea I have:
@asp_atoms
@asn_atoms
$asnmin, $aspmin
foreach $ap (@asp_atoms)
{
    foreach $an (@asn_atoms)
    {
        $dist = dist($v..$g...);
        if($dist < $min)
        {
            $min = $dist;
        }
    }
}
I hope that clarifies how I intend to implement the code. However, the problem I am having is how to store the values in the arrays and traverse the files.
Also, to clarify exactly which numbers will be used for the distance, here is an example of what I want to do.
For the ASP CB atom with coordinates -2.80100 -7.45000 -2.09400,
I want to calculate the Euclidean distance, sqrt((x1-x2)^2 + (y1-y2)^2 + (z1-z2)^2), to each of the ASN CB, ASN CG and ASN C atoms; the minimum of these is the value printed out. Unfortunately, I don't have an exact value for what that minimum would be, but I only have to print out values less than 5 units of distance. Then the distance from the ASP CG atom to all the ASN atoms would be calculated to find its minimum, and so on. So I am trying to find the minimum distance here.
You can solve this by splitting each row of your file on whitespace, storing the results in arrays of arrays, and then slicing out only the fields you need inside the loops (in this case x, y, z). This is not a complete answer to your problem, but it should give you an idea of how it can be accomplished.
use strict;
use warnings;

open (my $temp,   "<", "temp.bgf")   or die "Cannot open temp.bgf: $!";
open (my $target, "<", "target.bgf") or die "Cannot open target.bgf: $!";

my @temps   = create_ar($temp);
my @targets = create_ar($target);

sub create_ar {
    my $filehan = shift;
    my @array;
    foreach (<$filehan>) {
        push @array, [split(/\s+/, $_)];   # one array ref per line, split on whitespace
    }
    return @array;
}

foreach my $ap (@targets) {
    my ($target_X, $target_Y, $target_Z) = @{$ap}[6,7,8];   # x, y, z columns
    foreach my $an (@temps) {
        my ($temp_X, $temp_Y, $temp_Z) = @{$an}[6,7,8];
        ...
    }
}
