Can you preallocate an array of random size?

The essential part of the code in question can be distilled into:
list=rand(1,x); % where x is some arbitrarily large integer
hitlist=[];
for n=1:1:x
if rand(1) < list(n)
hitlist=[hitlist n];
end
end
list(hitlist)=[];
This program runs quite slowly, and I suspect the growing hitlist is why; however, I don't know how to fix it. The length of hitlist will necessarily vary in a random way, so I can't simply preallocate a zeros array of the proper size. I considered making hitlist a zeros array the length of my list, but then I would have to remove all the superfluous zeros, and I don't know how to do that without running into the same problem.
How can I preallocate an array of random size?

I'm not sure about preallocating a 'random size', but you can preallocate in large chunks, e.g. 1e3 elements at a time, or whatever suits your use case:
list=rand(1,x); % where x is some arbitrarily large integer
a = 1e3; % Increment of preallocation
hitlist=zeros(1,a);
k=1; % counter
for n=1:1:x
if rand(1) < list(n)
hitlist(k) = n;
k=k+1;
if mod(k-1,a)==0 % a whole chunk has been filled
hitlist = [hitlist zeros(1,a)]; % extend by another chunk
end
end
end
hitlist = hitlist(1:k-1); % trim excess
% hitlist(k:end) = []; % alternative way to trim
list(hitlist)=[];
This won't be the fastest possible, but it is a whole lot faster than growing the array each iteration. Make sure to choose a suitable a; you can even base it on the available amount of RAM using the memory function and trim the excess afterwards; that way you don't need the in-loop extension at all.
As an aside: MATLAB stores arrays column-major, so running through matrices that way is faster, i.e. first down the first column, then the second, and so on. For a 1D array this doesn't matter, but for matrices it does. Hence I prefer to use list = rand(x,1), i.e. as a column vector.
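A quick way to see the column-major effect is a micro-benchmark like the following (my own illustration, not part of the original answer; timings vary by machine):
M = rand(2000); % square matrix for the test
tic; s = 0; for j = 1:size(M,2), s = s + sum(M(:,j)); end; toc % down columns: contiguous memory
tic; s = 0; for i = 1:size(M,1), s = s + sum(M(i,:)); end; toc % along rows: strided access
On typical machines the column-wise loop is noticeably faster because it walks memory in order.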
For this specific case, don't use this looped approach anyway, but use logical indexing:
list = rand(x,1);
list = list(list<rand(size(list)));

Related

Looping through a set of sequences satisfying a certain property, without storing them

Below is a recursive MATLAB function which takes as input a vector (l_1,l_2,...,l_r) of non-negative integers and an integer m, and generates all sequences (m_1,m_2,...,m_r) satisfying:
0 <= m_i <= l_i for all 1 <= i <= r, and m_1 + m_2 + ... + m_r = m.
The value of r is obtained inside the function from the size of the lims array:
function arr=sumseq(m,lims)
% Returns one sequence per row: every (m_1,...,m_r) with
% 0 <= m_i <= lims(i) and m_1 + ... + m_r = m.
r=size(lims,2);
if r==0 || m<0
arr=[];
elseif r==1 && lims(1)>=m
arr=m; % the single remaining slot takes the whole remaining sum
else
arr=[];
for i=0:lims(1)
v=sumseq(m-i,lims(2:end)); % all tails summing to m-i
arr=[arr;[i*ones(size(v,1),1) v]]; % prepend i to every tail
end
end
end
What I have done here is store the whole array of sequences and make it my output. Instead, I want to print them one by one and not store them in an array. This seems simple enough, as there is not much choice in which line(s) I need to change (I believe it is the contents of the else block inside the for loop), but I get stuck every time I try to achieve it.
(Also, MATLAB warned me that if I keep re-initializing the array with a larger array, as in the statement:
arr=[arr;[i*ones(size(v,1),1) v]];
it reallocates a fresh array for all the contents of arr and spends a 'lot' of time doing so.)
In short: recursion or not, I want to avoid the trouble of storing the sequences, and I need an algorithm that is at least as efficient as what I have here.
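One way to do what the question asks (a sketch of my own, not from the post: the name printseq and the prefix argument are assumptions) is to carry the partial sequence down the recursion and print it at the base case, so nothing is ever accumulated:
function printseq(m,lims,prefix)
% Hypothetical sketch: print each valid sequence instead of storing it.
% prefix holds the choices m_1..m_k made so far on this recursion path.
if nargin<3, prefix=[]; end
r=numel(lims);
if m<0 || r==0
return % dead end: overshot the sum or ran out of slots
elseif r==1
if lims(1)>=m
fprintf('%d ',[prefix m]); fprintf('\n'); % complete sequence: print it
end
else
for i=0:min(lims(1),m)
printseq(m-i,lims(2:end),[prefix i]); % fix the next m_i=i, recurse on the rest
end
end
end
Called as printseq(m,lims), this visits exactly the same sequences as sumseq but only ever keeps one partial sequence in memory, which also avoids the array-growing warning.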

Dynamically indexing an array in C

Is it possible to create arrays based on their index, as in
int x = 4;
int y = 5;
int someNr = 123;
int foo[x][y] = someNr;
dynamically/at runtime, without first creating foo[0...3][0...4]?
If not, is there a data structure that allows me to do something similar to this in C?
No.
As written, your code makes no sense. You need foo to be declared somewhere, and then you can index into it with foo[x][y] = someNr;. But you can't just make foo spring into existence, which is what it looks like you are trying to do.
Either create foo with the correct sizes (only you can say what they are), for example int foo[16][16];, or use a different data structure.
In C++ you could use a map<pair<int, int>, int>.
Variable Length Arrays
Even if x and y were replaced by constants, you could not initialize the array using the notation shown. You'd need to use:
int fixed[3][4] = { someNr };
or similar (extra braces, perhaps; more values perhaps). You can, however, declare/define variable length arrays (VLA), but you cannot initialize them at all. So, you could write:
int x = 4;
int y = 5;
int someNr = 123;
int foo[x][y];
for (int i = 0; i < x; i++)
{
for (int j = 0; j < y; j++)
foo[i][j] = someNr + i * (x + 1) + j;
}
Obviously, you can't use x and y as indexes without writing (or reading) outside the bounds of the array. The onus is on you to ensure that there is enough space on the stack for the values chosen as the limits on the arrays (it won't be a problem at 3x4; it might be at 300x400 though, and will be at 3000x4000). You can also use dynamic allocation of VLAs to handle bigger matrices.
VLA support is mandatory in C99, optional in C11 and C18, and non-existent in strict C90.
Sparse arrays
If what you want is 'sparse array support', there is no built-in facility in C that will assist you. You have to devise (or find) code that will handle it for you. It can certainly be done; Fortran programmers used to have to do it quite often in the bad old days when megabytes of memory were a luxury and MIPS meant millions of instructions per second, and people were happy when their computer could do double-digit MIPS (and the Fortran 90 standard was still years in the future).
You'll need to devise a structure and a set of functions to handle the sparse array. You will probably need to decide whether you have values in every row, or whether you only record the data in some rows. You'll need a function to assign a value to a cell, and another to retrieve the value from a cell. You'll need to think what the value is when there is no explicit entry. (The thinking probably isn't hard. The default value is usually zero, but an infinity or a NaN (not a number) might be appropriate, depending on context.) You'd also need a function to allocate the base structure (would you specify the maximum sizes?) and another to release it.
The most efficient way to create a dynamic index for an array is to create an empty array of the same data type as the array being indexed.
Let's imagine we are using integers, for the sake of simplicity; you can then extend the concept to any other data type.
The ideal index depth will depend on the length of the data to index, and the index length will be somewhere close to the length of the data.
Let's say you have 1 million 64 bit integers in the array to index.
First of all you should sort the data and eliminate duplicates. That's easy to achieve using qsort() (the C standard library's quicksort function) and a remove-duplicates function such as:
/* Copy the unique entries of the sorted input into ord_arr; returns their count. */
uint64_t remove_dupes(char **unord_arr, char **ord_arr, uint64_t arr_size)
{
uint64_t i, j=0;
for (i=1;i<arr_size;i++)
{
if ( strcmp(unord_arr[i], unord_arr[i-1]) != 0 ){
strcpy(ord_arr[j],unord_arr[i-1]);
j++;
}
if ( i == arr_size-1 ){
strcpy(ord_arr[j],unord_arr[i]);
j++;
}
}
return j;
}
Adapt the code above to your needs; you should free() the unordered array once the function has finished copying it into the ordered array. The function above is very fast; it will return zero entries when the array to order contains a single element, but that's probably something you can live with.
Once the data is ordered and unique, create an index with a length close to that of the data. It does not need to be an exact length, although sticking to powers of 10 will make everything easier in the case of integers.
uint64_t* idx = calloc(pow(10, indexdepth), sizeof(uint64_t));
This will create an empty index array.
Then populate the index. Traverse the array to be indexed just once, and every time you detect a change in the leading significant digits (as many digits as the index depth), record the position at which that new prefix was first detected.
If you choose an indexdepth of 2 you will have 10² = 100 possible values in your index, typically going from 0 to 99.
When you detect that some number starts with 10 (e.g. 103456), you add an entry to the index. Let's say 103456 was detected at position 733; your index entry would be:
index[10] = 733;
The next entry, beginning with 11, goes in the next index slot. Let's say the first number beginning with 11 is found at position 2023:
index[11] = 2023;
And so on.
When you later need to find some number in your original array of 1 million entries, you don't have to iterate the whole array; you just check where in your index the first number starting with the same two significant digits is recorded. Entry index[10] tells you where the first number starting with 10 is stored; you can then iterate forward from there until you find your match.
In my example I used a small index, so the average number of iterations you will need to perform is 1000000/100 = 10000.
If you enlarge your index to somewhere close to the length of the data, the number of iterations will tend to 1, making any search blazing fast.
What I like to do is create a simple rule that tells me the ideal index depth once I know the type and length of the data to index.
Please note that in the example I posed, 64-bit numbers are indexed by their first indexdepth significant figures, so 10 and 100001 will be stored in the same index segment. That's not a problem on its own; nonetheless, each master has his small book of secrets, and treating the numbers as fixed-length hexadecimal strings can help keep a strict numerical order.
You don't have to change the base, though: you could consider 10 to be 0000010 to keep it in the 00 index segment and keep base-10 numbers ordered. Using different numerical bases is in any case trivial in C, which is of great help for this task.
As you make your index depth larger, the number of entries per index segment is reduced.
Please note that programming, especially at a lower level like C, consists in great part of understanding the trade-off between CPU cycles and memory use.
Creating the proposed index reduces the number of CPU cycles required to locate a value, at the cost of using more memory as the index grows. This is nonetheless the way to go nowadays, as massive amounts of memory are cheap.
As SSD speeds approach those of RAM, storing indexes in files is worth taking into account. Nevertheless, modern OSs tend to load into RAM as much as they can, so using files would end up performing similarly anyway.

How do you calculate big O of an algorithm

I have a problem where I have to find the missing numbers within an array and add them to a set.
The question goes like so:
Array of size (n-m) with numbers from 1..n, with m of them missing.
Find all of the missing numbers in O(log n). The array is sorted.
Example:
n = 8
arr = [1,2,4,5,6,8]
m=2
Result has to be a set {3, 7}.
This is my solution so far, and I want to know how I can calculate the big O of it. Also, most solutions I have seen use the divide-and-conquer approach. How do I calculate the big O of my algorithm below?
P.S. If I don't meet the requirement, is there any way I can do this without doing it recursively? I am really not a fan of recursion; I simply can't get my head around it! :(
var arr = [1,2,4,5,6,8];
var mySet = [];
findMissingNumbers(arr);
function findMissingNumbers(arr){
var temp = 0;
for (number in arr){ //O(n)
temp = parseInt(number)+1;
if(arr[temp] - arr[number] > 1){
addToSet(arr[number], arr[temp]);
}
}
}
function addToSet(min, max){
while (min != max-1){
mySet.push(++min);
}
}
There are two things you want to look at. One you have pointed out: how many times do you iterate the loop for (number in arr)? If your array contains n-m elements, then this loop is iterated n-m times. Then look at each operation you do inside the loop and try to figure out a worst-case (or typical) cost for each. The temp = ... line is a constant cost (say 1 unit per iteration), the conditional is a constant cost (say 1 unit per iteration), and then there is addToSet. addToSet is harder to analyze because it isn't called every time, and its cost may vary between calls. So perhaps what you want to think is that for each of the m missing elements, addToSet performs 1 operation: a total of m operations (you don't know when they will occur, but all m must occur at some point). Then add up all of your costs.
That gives n-m loop iterations with 2 operations in each, for a total of 2(n-m); then add in the m operations done by addToSet, for a total of something like 2n-m ~ 2n (assuming m is small compared to n). This could be stated as O(n-m) or as O(n) (if it is O(n-m) it is also O(n), since n >= n-m). Hope this helps.
In your code you have a time complexity of O(n), because you check all n indices of your array. A faster way to do it is something like this:
Go to the middle of your array.
Check whether that number is at the right place (if it is, everything before it is too, because the array is sorted).
If it is the expected number: recurse into the second half.
If not: add the missing number to the set and recurse into the first half.
Stop when the number you are looking at is at index size-1 (a code sketch of this idea follows after this answer).
Note that there are possible optimizations; for example, you can directly check whether the array already has the correct size and return an empty set. It depends on your problem.
My algorithm is also O(n) in the worst case, because big O always considers the worst set of data: in my case, that would be one value missing at the end of the array. So technically it should be O(n-1), but constants are negligible in front of n (assumed to be very large). That's why you have to keep the average complexity in mind too.
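For concreteness, here is a minimal sketch of the divide-and-conquer idea referenced above, written in MATLAB like most of this page rather than JavaScript; the function name missingBetween and the sentinel padding are my own assumptions. The key fact is that a stretch of the sorted array contains no gaps exactly when arr(hi) - arr(lo) == hi - lo, so whole halves can be discarded without scanning them:
function s = missingBetween(arr, lo, hi)
% Hypothetical sketch: all values missing between positions lo and hi of
% the sorted vector arr. Stretches with arr(hi)-arr(lo) == hi-lo have no gaps.
if hi <= lo || arr(hi) - arr(lo) == hi - lo
s = []; % nothing missing in this stretch
elseif hi - lo == 1
s = arr(lo)+1 : arr(hi)-1; % the gap between two adjacent elements
else
mid = floor((lo + hi)/2); % split and recurse into both halves
s = [missingBetween(arr, lo, mid), missingBetween(arr, mid, hi)];
end
end
Padding with the sentinels 0 and n+1 catches missing values at the ends: missingBetween([0 1 2 4 5 6 8 9], 1, 8) returns [3 7] for the example above. Stretches without gaps are discarded in one comparison, which is where the logarithmic behaviour comes from when few values are missing.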
For what it's worth, here is a more succinct implementation of the algorithm (JavaScript):
var N = 10;
var arr = [2,9];
var mySet = [];
var index = 0;
for(var i=1;i<=N;i++){
if(i!=arr[index]){
mySet.push(i);
}else{
index++;
}
}
Here the big O is trivial, as there is only a single loop which runs exactly N times with constant-cost operations in each iteration.
Big O is the complexity of the algorithm: a function describing the number of steps it takes your program to come up with a solution.
This gives a pretty good explanation of how it works:
Big O, how do you calculate/approximate it?

How to make a MATLAB loop over a 2D array faster

I have the following loop running on the following variables:
A is a 2D array of size m-by-n.
mask is a 1D logical array of size 1-by-n.
result is a 1D array of size 1-by-n.
B is a column vector of size m-by-1.
C is an m-by-m matrix, with the same m as above.
Edit: expanded foo(x) into the function.
Here is the code:
temp = (B.'*C*B);
for k = 1:n
x = A(:,k);
if(mask(k) == 1)
result(k) = (B.'*C*x)^2 / (temp*(x.'*C*x)); %returns scalar
end
end
Note that I am already successfully using the above code as a parfor loop instead of for. I was hoping you could suggest some way to use meshgrid or the like to get a better performance improvement. I don't think I have RAM problems, so a solution can also be expensive memory-wise.
Many thanks.
try this:
result=(B.'*C*A).^2./diag(temp*(A.'*C*A))'.*mask;
This vectorization via matrix multiplication will also ensure that result is a 1-by-n vector. In the code you provided, the last elements of mask could be zeros, in which case your loop would leave result truncated to a shorter length, whereas this answer keeps those elements as zeros.
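If forming the full n-by-n product A.'*C*A uses too much memory, here is a sketch of a variant that computes only the diagonal it needs (the rewrite is my own, using the same variables as above):
xCx = sum(A.*(C*A), 1); % 1-by-n row: x.'*C*x for each column x of A
result = ((B.'*C*A).^2 ./ (temp*xCx)) .* mask;
sum(A.*(C*A),1) equals diag(A.'*C*A).' but never builds the n-by-n matrix, so it scales with m*n rather than n^2.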
If your foo accepts matrix input, you could do:
result = zeros(1,n); % preallocate result with zeros
mask = logical(mask); % make mask logical type
result(mask) = foo(A(:,mask)); % compute foo for all selected columns at once

How can I efficiently convert a large decimal array into a binary array in MATLAB?

Here's the code I am using now, where decimal1 is an array of decimal values, and B is the number of bits in binary for each value:
for (i = 0:1:length(decimal1)-1)
out = dec2binvec(decimal1(i+1),B);
for (j = 0:B-1)
bit_stream(B*i+j+1) = out(B-j);
end
end
The code works, but it takes a long time if the length of the decimal array is large. Is there a more efficient way to do this?
nelem = numel(decimal1); % number of values to convert
bitstream = zeros(nelem * B,1); % preallocate the full bit stream
for i = 1:nelem
bitstream((i-1)*B+1:i*B) = fliplr(dec2binvec(decimal1(i),B)); % MSB-first bits of each value
end
I think that should be correct and a lot faster (hope so :) ).
edit:
I think your main problem is that you probably don't preallocate the bit_stream matrix.
I tested both codes for speed, and yours is faster than mine (though not by much) if we both preallocate bitstream, even though I (kinda) vectorized my code.
If we DON'T preallocate bitstream, my code is a lot faster. That happens because your code reallocates the matrix more often than mine.
So, if you know B upfront, use your code; otherwise use mine (of course both have to be modified a little to determine the length at runtime, which is no problem since dec2binvec can be called without the B parameter).
The function DEC2BINVEC from the Data Acquisition Toolbox is very similar to the built-in function DEC2BIN, so some of the alternatives discussed in this question may be of use to you. Here's one option to try, using the function BITGET:
decimal1 = ...; %# Your array of decimal values
B = ...; %# The number of bits to get for each value
nValues = numel(decimal1); %# Number of values in decimal1
bit_stream = zeros(1,nValues*B); %# Initialize bit stream
for iBit = 1:B %# Loop over the bits
bit_stream(iBit:B:end) = bitget(decimal1,B-iBit+1); %# Get the bit values
end
This should give the same results as your sample code, but should be significantly faster.
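If you want to avoid even the loop over the B bits, here is a fully vectorized sketch (my own variant of the BITGET approach above; the repmat calls sidestep any reliance on implicit expansion):
nValues = numel(decimal1); % number of values to convert
bits = bitget(repmat(decimal1(:),1,B), repmat(B:-1:1,nValues,1)); % nValues-by-B matrix, MSB first
bit_stream = reshape(bits.', 1, []); % row by row into one 1-by-(nValues*B) stream
Whether this beats the loop over B bits will depend on B and on memory pressure, since it materializes the full nValues-by-B matrix at once.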
