I'm quite a new MATLAB programmer, so this might be an easy one.. :)
I'm trying to write a script that can read any number of XYZ files, in any order, into an array, and arrange the values in that array according to the X and Y coordinates given in each file.
My approach is to use load to get each file into an array, then read through that array and, as explained, use the X and Y coordinates as the location in a new array.
I've tried presetting the array size, and I'm also subtracting a constant from both X and Y to keep the size of the array (fullArray) down.
%# Script for extracting XYZ data from DSM/DTM .xyz files
%# Define folders and filter
DSMfolder='/share/CFDwork/site/OFSites/MABH/DSM/*.xyz';
DTMfolder='/share/CFDwork/site/OFSites/MABH/DTM/*.xyz';
%# Define minimum values to reduce the array size. Please leave some slack
%# for the reduction algorithm.
borderX=100000;
borderY=210000;
%# Expected array size
expSizeX=20000;
expSizeY=20000;
%# Program starts. Please do not edit below this line!
files=ls(DSMfolder);
clear fullArray
fullArray=zeros(expSizeX,expSizeY);
minX=999999999;
minY=999999999;
maxX=0;
maxY=0;
disp('Reading DSM files');
[thisFile,remaining]=strtok(files);
while (~isempty(thisFile))
    disp(['Reading: ' thisFile]);
    clear fromFile;
    fromFile=load(thisFile);
    for k=1:size(fromFile,1)
        tic
        fullArray(fromFile(k,1)-borderX,fromFile(k,2)-borderY)=fromFile(k,3);
        disp([k size(fromFile,1)]);
        if (fromFile(k,1)<minX)
            minX=fromFile(k,1);
        end
        if (fromFile(k,2)<minY)
            minY=fromFile(k,2);
        end
        if (fromFile(k,1)>maxX)
            maxX=fromFile(k,1);
        end
        if (fromFile(k,2)>maxY)
            maxY=fromFile(k,2);
        end
        toc
    end
    [thisFile,remaining]=strtok(remaining);
end
As can be seen, I've added a tic/toc, and the time was 3.36 seconds for a single assignment!
Any suggestions on why this is so slow, and how to improve the speed? I need to order 2 x 6,000,000 lines, and I can't be bothered to wait 466 days.. :D
Best regards
Mark
Have you considered using a sparse matrix?
A sparse matrix in MATLAB is defined by a list of values and their locations in the matrix, which incidentally matches your input files perfectly.
While this representation is generally meant for matrices that are truly sparse (i.e. most of their values are zeros), in your case it should be much faster to load the matrix using the sparse function even if it is not truly sparse.
Since your data is organised in such a way (a location for every data point), my guess is it is sparse anyway.
The function that creates a sparse matrix takes the locations as column vectors, so instead of a for loop your code will look something like this (this segment replaces the whole for loop):
minX = min(fromFile(:,1));
maxX = max(fromFile(:,1));
minY = min(fromFile(:,2));
maxY = max(fromFile(:,2));
S = sparse(fromFile(:,1) - borderX, fromFile(:,2) - borderY, fromFile(:,3));
Note that the other change I've made is calculating the minimum/maximum values directly from the matrix. This is much faster than going over a for loop, as operating on whole vectors and matrices unleashes the true power of MATLAB :)
You can perform all sorts of operations on the sparse matrix, but if you want to convert it to a regular matrix you can use MATLAB's full function.
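For illustration, here is a minimal sketch with made-up coordinates (the three-column xyz variable and the small offsets are hypothetical). Note that sparse requires the row/column subscripts to be positive integers, so the border offsets must leave every index at 1 or above:
xyz = [101 211 5.0;
       102 213 7.5;
       105 212 6.2];       % [X Y Z] triples, as loaded from one file
bX = 100; bY = 210;        % offsets, analogous to borderX/borderY
S = sparse(xyz(:,1)-bX, xyz(:,2)-bY, xyz(:,3));
A = full(S);               % regular (dense) matrix, zero where there is no data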
More information here and there.
I am trying to perform a specific downsampling process. It is described by the following pseudocode.
//Let V be an input image with dimension of M by N (row by column)
//Let U be the destination image of size floor((M+1)/2) by floor((N+1)/2)
//The floor function is to emphasize the rounding for the even dimensions
//U and V are part of a wrapper class of Pixel_FFFF vImageBuffer
for i in 0 ..< U.size.rows {
    for j in 0 ..< U.size.columns {
        U[i,j] = V[(i * 2), (j * 2)]
    }
}
The process basically takes the pixel values at every other location along both dimensions. The resulting image is approximately half the size of the original.
On a one-time call, the process is relatively fast running by itself. However, it becomes a bottleneck when the code is called numerous times inside a bigger algorithm. Therefore, I am trying to optimize it. Since I use Accelerate in my app, I would like to be able to adapt this process in a similar spirit.
Attempts
First, this process can easily be done by a 2D convolution using the 1x1 kernel [1] with a stride of [2,2]. Hence, I considered the function vImageConvolve_ARGBFFFF. However, I couldn't find a way to specify the stride. This function would otherwise be the best solution, since it takes care of the Pixel_FFFF image structure.
Second, I noticed that this is merely transferring data from one array to another. So I thought the vDSP_vgathr function would be a good solution. However, I hit a wall: the vector obtained by flattening a vImageBuffer has the interleaved structure A,R,G,B,A,R,G,B,..., where each term is 4 bytes. vDSP_vgathr transfers every 4-byte element to the destination array using a specified indexing vector. I could use a linear indexing formula to build such a vector, but considering both even and odd dimensions, generating the indexing vector would be as inefficient as the original solution: it would require loops.
Also, neither of the vDSP 2D convolution functions fits the solution.
Are there any other functions in Accelerate that I might have overlooked? I saw that there's a stride option in the vDSP 1D convolution functions. Does someone perhaps know an efficient way to translate a strided 2D convolution into 1D convolutions?
Basically, no for/while loops or if statements are allowed, so I assume the colon operator is supposed to be used.
I'm new to MATLAB and have used for loops in one way or another to accomplish virtually everything, and I can't find any online resources to help, so a quick answer is greatly appreciated.
Essentially, the goal is to create and return a new matrix based on an input matrix. The new matrix contains only the even-indexed elements of the original, so a 4x4 matrix would return a 2x2, and a 5x5 would also return a 2x2, because anything in the 5th row or column can't have both an even row and an even column.
My code:
function [A] = myFunction(M)
    [x, y] = size(M);
    for a = 2:2:x
        for b = 2:2:y
            A(a/2, b/2) = M(a,b);
        end
    end
end
Which works but I am trying to understand how to do it without the for loops and using the colon operator so I can do that in other applications as well where it makes sense.
Very simple:
A = M(2:2:end, 2:2:end);
Read about matrix indexing for more information and details.
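For instance, a quick check in the command window (using magic(5) as an arbitrary test matrix):
M = magic(5);              % arbitrary 5x5 test matrix
A = M(2:2:end, 2:2:end);   % keeps rows 2 and 4, columns 2 and 4
size(A)                    % returns [2 2], as expected for a 5x5 input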
I want to have a time series of 2x2 complex matrices, Ot, and I then want one-line commands to multiply an array of complex vectors, Vt, by the array Ot, where the position in the array is understood as the time instant. I want Vtprime(i) = Ot(i)*Vt(i). Can anyone suggest a simple way to implement this?
Suppose I have a matrix, M(t), where the elements m(j,k) are functions of t and t is an element of some series (t = 0:0.1:3). Can I create an array of matrices very easily?
I understand how to have an array in MATLAB, and even a two-dimensional array, where each "i" index holds two complex numbers (j=0,1). That would be a way to have a "time series of complex 2-D vectors". A way to have a time series of complex matrices would be a three-dimensional array: (i,j,k) denotes the "i-th" matrix, and j=0,1 and k=0,1 give the elements of that matrix.
If I go ahead and treat MATLAB like a programming language with no special packages, then I end up having to write the matrix multiplications in terms of loops, and the same goes for all the other matrix operations. I would prefer to use commands that make all this easy if I can.
This could be solved with MATLAB array operations, something like
vtprime(:) = Ot(:)*Vt(:)
if I understand your problem correctly.
Since Ot and Vt are both changing with time index, I think the best way to do this is in a loop. (If only one of Ot or Vt was changing with time, you could set it up in one big matrix multiplication.)
Here's how I would set it up: Ot is a complex 2x2xI 3D matrix, so that
Ot(:,:,i)
references the matrix at time instant i.
Vt is a complex 2xI matrix, so that
Vt(:,i)
references the vector at time instant i.
To do the multiplication:
Vtprime = zeros(2, I);   % preallocate the result
for i = 1:I
    Vtprime(:,i) = Ot(:,:,i) * Vt(:,i);
end
The resulting Vtprime is a 2xI matrix set up so that Vtprime(:,i) is the output at time instant i.
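If you want to avoid the loop and have a recent MATLAB release (R2020b or later), pagemtimes can do the per-instant products in one call. A minimal sketch, assuming the same 2x2xI Ot and 2xI Vt as above:
Vt3 = reshape(Vt, 2, 1, []);             % view Vt as I pages of 2x1 vectors
Vtprime = squeeze(pagemtimes(Ot, Vt3));  % 2x1xI result, squeezed back to 2xI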
Currently I have the following portion of code:
for i = 2:N-1
    res(i) = k(i)/m(i)*x(i-1) -(c(i)+c(i+1))/m(i)*x(N+i) +e(i+1)/m(i)*x(i+1);
end
where the variables k, m, c and e are vectors of size N and x is a vector of size 2*N. Is there any way to do this a lot faster using something like arrayfun? I couldn't figure this out :( I especially want to make it faster by running on the GPU later, and thus arrayfun would also be helpful, since MATLAB doesn't support parallelizing for loops and I don't want to buy the Jacket package...
Thanks a lot!
You don't have to use arrayfun. It works if you use some smart indexing:
clear all
N=20;
k=rand(N,1);
m=rand(N,1);
c=rand(N,1);
e=rand(N,1);
x=rand(2*N,1);
% for-based implementation
%Watch out, you are not filling the first element of forres!
forres=zeros(N-1,1); %Initialize array first to gain some speed.
for i = 2:N-1
    forres(i) = k(i)/m(i)*x(i-1) -(c(i)+c(i+1))/m(i)*x(N+i) +e(i+1)/m(i)*x(i+1);
end
%vectorized implementation
parres=k(2:N-1)./m(2:N-1).*x(1:N-2) -(c(2:N-1)+c(3:N))./m(2:N-1).*x(N+2:2*N-1) +e(3:N)./m(2:N-1).*x(3:N);
%compare results; strip the first element from forres
difference=forres(2:end)-parres %#ok<NOPTS>
Firstly, MATLAB does support parallel for loops via PARFOR. However, that doesn't have much chance of speeding up this sort of computation since the amount of computation is small compared to the amount of data you're reading and writing.
To restructure things for GPUArray "arrayfun", you need to make all the array references in the loop body refer to the loop iterate, and have the loop run across the full range. You should be able to do this by offsetting some of the arrays and padding with dummy values. For example, you could pad your arrays with NaN and replace x(i-1) with a new, shifted variable x_1 = [NaN x(1:N-1)], so that x_1(i) holds x(i-1).
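As an illustration of that restructuring, here is a rough sketch of the gpuArray/arrayfun route (it assumes the Parallel Computing Toolbox; the shifted temporaries and their names are my own, and I keep the interior range 2:N-1 instead of padding):
kg = gpuArray(k); mg = gpuArray(m); cg = gpuArray(c);
eg = gpuArray(e); xg = gpuArray(x);
i = 2:N-1;                                    % interior indices only
resg = arrayfun(@(k,m,c,cn,e,xm,xc,xp) k/m*xm - (c+cn)/m*xc + e/m*xp, ...
    kg(i), mg(i), cg(i), cg(i+1), eg(i+1), xg(i-1), xg(N+i), xg(i+1));
res = [0; gather(resg)];                      % first element unused, as in the loop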
I am looking for a way to store a large, variable number of matrices in an array in MATLAB.
Are there any ways to achieve this?
Example:
for i = 1:unknown
    myArray(i) = zeros(500,800);
end
where unknown is the varying length of the array. I can revise with additional info if needed.
Update:
Performance is the main reason I am trying to accomplish this. Previously it would grab the data as a single matrix, show it in real time and then proceed to process the next set of data.
I attempted it using multidimensional arrays as suggested below by Rocco; however, my data is so large that I ran out of memory. I might have to look into another alternative for my case. I will update as I attempt other suggestions.
Update 2:
Thank you all for the suggestions; however, I should have specified beforehand that precision AND speed are both integral factors here. I may have to go back to my original method from before trying 3-D arrays and re-evaluate the method for importing the data.
Use cell arrays. This has an advantage over 3D arrays in that it does not require a contiguous memory space to store all the matrices. In fact, each matrix can be stored in a different space in memory, which will save you from Out-of-Memory errors if your free memory is fragmented. Here is a sample function to create your matrices in a cell array:
function result = createArrays(nArrays, arraySize)
    result = cell(1, nArrays);
    for i = 1 : nArrays
        result{i} = zeros(arraySize);
    end
end
To use it:
myArray = createArrays(requiredNumberOfArrays, [500 800]);
And to access your elements:
myArray{1}(2,3) = 10;
If you can't know the number of matrices in advance, you could simply use MATLAB's dynamic indexing to make the array as large as you need. The performance overhead will be proportional to the size of the cell array, and is not affected by the size of the matrices themselves. For example:
myArray{1} = zeros(500, 800);
if twoRequired, myArray{2} = zeros(500, 800); end
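If the matrices arrive one at a time, the cell array can also be grown on the fly. A minimal sketch (nMatricesFound is a hypothetical count known only at run time):
myArray = {};                          % start with an empty cell array
for i = 1:nMatricesFound               % hypothetical, e.g. number of files read
    myArray{end+1} = zeros(500, 800);  % append the next matrix
end
Growing the cell array this way only reallocates the small list of cells, not the 500x800 matrices themselves, so the overhead stays modest.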
If all of the matrices are going to be the same size (i.e. 500x800), then you can just make a 3D array:
nUnknown = 10;   % the number of matrices (10 is just an example value)
myArray = zeros(500,800,nUnknown);
To access one array, you would use the following syntax:
subMatrix = myArray(:,:,3); % Gets the third matrix
You can add more matrices to myArray in a couple of ways:
myArray = cat(3,myArray,zeros(500,800));
% OR
myArray(:,:,nUnknown+1) = zeros(500,800);
If each matrix is not going to be the same size, you would need to use cell arrays like Hosam suggested.
EDIT: I missed the part about running out of memory. I'm guessing your nUnknown is fairly large. You may have to switch the data type of the matrices (single or even a uintXX type if you are using integers). You can do this in the call to zeros:
myArray = zeros(500,800,nUnknown,'single');
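To see what switching the type buys, a quick check in the command window (the array sizes here are just for illustration):
d = zeros(500, 800, 10);            % double: 8 bytes per element, ~32 MB
s = zeros(500, 800, 10, 'single');  % single: 4 bytes per element, ~16 MB
whos d s                            % compare the Bytes column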
myArrayOfMatrices = zeros(unknown,500,800);
If you're running out of memory, throw more RAM into your system, and make sure you're running a 64-bit OS. Also try reducing your precision (do you really need doubles, or can you get by with singles?):
myArrayOfMatrices = zeros(unknown,500,800,'single');
To append to that array try:
myArrayOfMatrices(unknown+1,:,:) = zeros(500,800);
I was doing some volume rendering in Octave (a MATLAB clone) and building my 3D arrays (i.e. an array of 2D slices) using
buffer=zeros(1,512*512*512,"uint16");
vol=reshape(buffer,512,512,512);
Memory consumption seemed to be efficient. (can't say the same for the subsequent speed of computations :^)
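For what it's worth, the same allocation can be done in one step by passing the class name to zeros, which avoids the intermediate buffer. A minimal equivalent sketch:
vol = zeros(512, 512, 512, 'uint16');  % ~256 MB of uint16, versus ~1 GB as doubles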
If you know what unknown is, you can do something like
myArray = zeros(2,2);
for i = 1:unknown
    myArray(:,i) = zeros(x,y);
end
However, it has been a while since I last used MATLAB,
so this page might shed some light on the matter:
http://www.mathworks.com/access/helpdesk/help/techdoc/index.html?/access/helpdesk/help/techdoc/matlab_prog/f1-86528.html
Just do it like this:
x=zeros(100,200);
for i=1:100
    for j=1:200
        x(i,j)=input('enter the number');
    end
end