I have a recursive function that goes through the elements of a matrix by a specific rule. Every time I visit an element I calculate a result based on the element I am at in the matrix at that moment, and when all the calls of the function end and I am back at the first element I started from, I have the result I need.
Now the problem is that I have to replace every element of the matrix that I went through with this result. From what I am thinking, I have 3 options:
1. Use the same algorithm to call the function again from the starting element, basically going through the same path and replacing the elements with the result (this would be the most inefficient option, as the algorithm and the size of the matrix are pretty big).
2. Memorize the addresses of the elements I arrive at in the matrix on every call; then I would have a vector of addresses, and after the function finishes I just go through that vector and replace the values with the one I calculated.
3. This is quite a long shot for me to implement, but I was thinking that I could synchronize all the variables at the positions I have arrived at in the matrix with one external variable. When the function finishes and I have my result, I just change the value of this external variable to the result, and all the elements in the matrix that I have linked to this variable would automatically change.
My question is: how could I implement something like option 3? Is there any way to synchronize a variable (or multiple ones) so that it always has the same value as another one that is constantly updating?
From a definition of a Matrix like:
struct Matrix {
int N, M;
int *Mat;
};
Type Values[];
Where the value at (i,j) is Values[G->Mat[G->M * i + j]]; that is, Mat just holds indexes into Values[], which can be dynamically extended. At the beginning of a path traversal, you could allocate a new Values[t] and change each visited node (i,j) to point to t: G->Mat[G->M * i + j] = t. When you assign Values[t], all of the nodes linked to it will be automatically updated.
While you are making your traversal, you would use their value, then change their index as you moved to the next node. Serendipitously, if you encountered a node already with an index of t, you would know that you are caught in a cycle.
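To make the idea concrete, here is a minimal, self-contained sketch in C; the driver code, sizes, and initial values are purely illustrative and not taken from your description:

#include <stdio.h>
#include <stdlib.h>

struct Matrix {
    int N, M;
    int *Mat;                       /* N*M indexes into Values[] */
};

int main(void) {
    int N = 2, M = 3;
    struct Matrix Gs = { N, M, malloc(sizeof(int) * N * M) };
    struct Matrix *G = &Gs;
    int *Values = malloc(sizeof(int) * (N * M + 1));

    /* initially every cell points at its own slot */
    for (int k = 0; k < N * M; k++) { G->Mat[k] = k; Values[k] = 0; }

    /* start a traversal: allocate a fresh slot t and link the visited cells to it */
    int t = N * M;
    G->Mat[G->M * 0 + 1] = t;       /* visited cell (0,1) */
    G->Mat[G->M * 1 + 2] = t;       /* visited cell (1,2) */

    Values[t] = 42;                 /* one assignment updates every linked cell */
    printf("%d %d\n", Values[G->Mat[G->M * 0 + 1]],
                      Values[G->Mat[G->M * 1 + 2]]);    /* prints: 42 42 */

    free(G->Mat);
    free(Values);
    return 0;
}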
Although I might have misread your requirement terribly.
Recently I was reading a Programming book and found this question:
I have an array :
array = [2,3,6,7,8,9,33,22];
Now, suppose I have deleted the element at the 4th position, i.e. 8.
Now I have to rearrange the array as:
Newarray = [2,3,6,7,9,33,22];
How can I do this? I also have to minimize the complexity.
Edit: I have no option to make another copy of it; I have to modify it in place.
You can "remove" a value from an array by simply copying the following elements over it; that's easy to do with memmove:
int array[8] = {2,3,6,7,8,9,33,22};
memmove(&array[4], &array[5], sizeof(int) * 3);
The resulting array will be {2,3,6,7,9,33,22,22}.
And from that you can see the big problem with attempting to "remove" an element from a compile-time array: You can't!
You can overwrite the element, but the size of the array is fixed at time of compilation and can't actually be changed at run-time.
One common solution is to keep track of the actual number of valid elements in the array manually, and make sure you update that size as you add or remove elements. Either that or set unused elements to a value that's not going to be used otherwise (for example if your array can only contain positive numbers, then you could set unused elements to -1 and check for that).
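For example, a small sketch of keeping track of the count yourself (the variable names are just for illustration):

#include <stdio.h>
#include <string.h>

int main(void) {
    int array[8] = {2, 3, 6, 7, 8, 9, 33, 22};
    size_t count = 8;                  /* number of valid elements */
    size_t idx = 4;                    /* remove the element at index 4 (the 8) */

    memmove(&array[idx], &array[idx + 1], sizeof(int) * (count - idx - 1));
    count--;                           /* still 8 slots, but only 7 are valid */

    for (size_t i = 0; i < count; i++)
        printf("%d ", array[i]);       /* prints: 2 3 6 7 9 33 22 */
    printf("\n");
    return 0;
}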
If you don't want to use a library function (why not?) then loop and set e.g.
array[4] = array[5];
array[5] = array[6];
and so on.
Do this: just use these two functions (memcpy and memmove) and it will work fine:
int index = 4;                    // the position you want to delete from the array
int newarray[8];                  // same size as the original array
memcpy(newarray, array, sizeof(array));
memmove(&newarray[index], &newarray[index + 1], sizeof(int) * (8 - index - 1));
Now newarray contains an exact replica of your array without the element that you wished to remove.
You can simply shift each element after delIdx (the deletion index) one step toward the front.
for(int i=delIdx; i<(arr_size-1);i++)
{
arr[i]= arr[i+1];
}
If required, you can either set the last element to a non-attainable value or decrease the logical size of the array.
I'm trying to create a cell array of size N,
where every cell is a randomized Matrix of size M,
I've tried using deal or simple assignments, but the end result is always N identical Matrices of size M
for example:
N=20;
M=10;
CellArray=cell(1,N);
CellArray(1:20)={rand(M)};
This yields identical matrices in each cell. I've tried writing the assignment like so:
CellArray{1:20}={rand(M)};
but this yields the following error:
The right hand side of this assignment has too few values to satisfy the left hand side.
The end result should be a set of transition probability matrices to be used for a model I'm constructing.
There's a currently working version of the model, but it uses loops to create the matrices and works rather slowly.
I'd be thankful for any help.
If you don't want to use loops because you are interested in a low execution time, get rid of the cells.
RandomArray=rand(M,M,N)
You can access each slice, which is your intended MxM matrix, using RandomArray(:,:,index)
Use cellfun:
N = 20;
M = 10;
CellArray = cellfun(@(x) rand(M), cell(1,N), 'uni',0)
For every cell it makes a fresh call to rand(M). Before, you were assigning the same rand(M), computed just once, to every cell.
You are given an unsorted array of n integers, and you would like to find if there are any duplicates in the array (i.e. any integer appearing more than once).
Describe an algorithm (implemented with two nested loops) to do this.
The question that I am stuck at is:
How can you limit the input data to achieve a better Big O complexity? Describe an algorithm for handling this limited data to find if there are any duplicates. What is the Big O complexity?
Your help will be greatly appreciated. This is not related to my coursework or assignments; it's from a previous year's exam paper and I am doing some self-study, but I seem to be stuck on this question. The only possible solution that I could come up with is:
If we limit the data and use nested loops to perform operations to find whether there are duplicates, the complexity would be O(n), simply because the amount of time the operations take is proportional to the data size.
If my answer makes no sense, then please ignore it and, if you could, suggest possible solutions or workings for this answer.
If someone could help me solve this, I would be grateful, as I have attempted countless possible solutions, none of which seem to be correct.
Edited part, again. Another possible solution (if effective!):
We could implement a loop to sort the array (from the lowest integer to the highest), so that the duplicates end up right next to each other, making them easier and faster to identify.
The big O complexity would still be O(n^2).
Since this part is linear, it would simply use the first loop and iterate n-1 times, getting the index in the array (in the first iteration it could be, for instance, 1) and storing it in a variable named 'current'.
The loop updates the current variable by +1 on each iteration. Within that loop, we write another loop to compare the current number to the next number; if it equals the next number, we print it with a printf statement, otherwise we move back to the outer loop to update the current variable by +1 (the next value in the array) and update the next variable to hold the value of the number after the one in current.
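Something like this rough C sketch is what I have in mind (untested, the function name is made up by me):

#include <stdbool.h>
#include <stddef.h>

bool hasDuplicateSortThenScan(int *a, size_t n) {
    /* O(n^2) sort so that equal values end up next to each other */
    for (size_t i = 0; i + 1 < n; i++)
        for (size_t j = i + 1; j < n; j++)
            if (a[j] < a[i]) { int t = a[i]; a[i] = a[j]; a[j] = t; }

    /* O(n) scan: any duplicate is now adjacent to its twin */
    for (size_t i = 0; i + 1 < n; i++)
        if (a[i] == a[i + 1]) return true;
    return false;
}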
You can do it in linear time (O(n)) for any input if you use hash tables (which have constant look-up time).
However, this is not what you are being asked about.
By limiting the possible values in the array, you can achieve linear performance.
E.g., if your integers have range 1..L, you can allocate a bit array of length L, initialize it to 0, and iterate over your input array, checking and flipping the appropriate bit for each input.
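A minimal sketch of that idea, using one bool per value rather than a packed bit array for brevity (the function name and signature are made up for illustration):

#include <stdbool.h>
#include <stdlib.h>

/* values in a[] are assumed to lie in the range 1..L */
bool hasDuplicateLimitedRange(const int *a, size_t n, int L) {
    bool *seen = calloc((size_t)L + 1, sizeof *seen);   /* all false initially */
    bool dup = false;
    for (size_t i = 0; i < n && !dup; i++) {
        if (seen[a[i]]) dup = true;       /* this value was already flagged: duplicate */
        else seen[a[i]] = true;
    }
    free(seen);
    return dup;
}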
A variant of Bucket Sort will do. This will give you a complexity of O(n), where n is the number of input elements.
But there is one restriction: the max value. You should know the max value your integer array can take. Let's call it m.
The idea is to create a bool array of size m+1 (all initialized to false). Then iterate over your array. As you find an element, set bucket[element] to true. If it is already true, then you've encountered a duplicate.
Java code:
// alternatively, you can iterate over the array to find the maxVal which again is O(n).
public boolean findDup(int [] arr, int maxVal)
{
// Java initializes all boolean array elements to false by default.
boolean bucket[] = new boolean[maxVal + 1]; // indexes 0..maxVal are valid
for (int elem : arr)
{
if (bucket[elem])
{
return true; // a duplicate found
}
bucket[elem] = true;
}
return false;
}
But the constraint here is the space. You need O(maxVal) space.
Nested loops get you O(N*M) or O(N*log(M)); for O(N) you cannot use nested loops!
I would do it with a histogram instead:
DWORD in[N]={ ... }; // input data ... values are from [0, M)
DWORD his[M]={ ... }; // histogram of in[]
int i,j;
// compute histogram O(N)
for (i=0;i<M;i++) his[i]=0; // this can be done also by memset ...
for (i=0;i<N;i++) his[in[i]]++; // if the range of values is not from 0 then shift it ...
// remove duplicates O(N)
for (i=0,j=0;i<N;i++)
{
his[in[i]]--; // count down duplicates
in[j]=in[i]; // copy item
if (his[in[i]]<=0) j++; // advance only if no more occurrences remain (keeps the last occurrence of each value)
}
// now j holds the new in[] array size
[Notes]
If the value range is too big, with sparse areas, then you need to convert his[] to a dynamic list with two values per item: one is the value from in[], the second is its occurrence count. But then you need a nested loop -> O(N*M), or with binary search -> O(N*log(M)).
I am familiar with iterative methods on paper, but MATLAB coding is relatively new to me and I cannot seem to find a way to code this.
In code language...
This is essentially what I have:
A = { [1;1] [2;1] [3;1] ... [33;1]
[1;2] [2;2] [3;2] ... [33;2]
... ... ... ... ....
[1;29] [2;29] [3;29] ... [33;29] }
... a 29x33 cell array of 2x1 column vectors, which I got from:
[X,Y] = meshgrid([1:33],[1:29])
A = squeeze(num2cell(permute(cat(3,X,Y),[3,1,2]),1))
[ Thanks to members of stackOverflow who helped me do this ]
I have a function that calls each of these column vectors and returns a single value. I want to institute a 2-D 5-point stencil method that evaluates a column vector and its 4 neighbors and finds the maximum value attained through the function out of those 5 column vectors.
i.e. if I was starting from the middle, the points evaluated would be:
1. A{15,17}(1), A{15,17}(2)
2. A{14,17}(1), A{14,17}(2)
3. A{15,16}(1), A{15,16}(2)
4. A{16,17}(1), A{16,17}(2)
5. A{15,18}(1), A{15,18}(2)
Out of these 5 points, the method would choose the one with the largest returned value from the function, move to that point, and rerun the method. This would continue on until a global maximum is reached. It's basically an iterative optimization method (albeit a primitive one). Note: I don't have access to the optimization toolbox.
Thanks a lot guys.
EDIT: sorry I didn't read the iterative part of your Q properly. Maybe someone else wants to use this as a template for a real answer, I'm too busy to do so now.
One solution using for loops (there might be a more elegant one):
overallmax=0;
for v=2:size(A,1)-1
for w=2:size(A,2)-1
% temp is the vertical part of the "plus" stencil (rows v-1..v+1, column w)
temp=A((v-1):(v+1),w);
tmpmax=max(cat(1,temp{:}));
temp2=A(v,(w-1):(w+1));
% temp2 is the horizontal part of the "plus" stencil (row v, columns w-1..w+1)
tmpmax2=max(cat(1,temp2{:}));
mxmx=max(tmpmax,tmpmax2);
if mxmx>overallmax
overallmax=mxmx;
end
end
end
But if you're just looking for max value, this is equivalent to:
maxoverall=max(cat(1,A{:}));
Hey guys, I have this question to ask. In C programming, if we want to store several values in an array, we implement that using loops, like this:
j = 0; // initialize
for (idx = 1; idx < SOME_CONSTANT; idx++)
{
    slope[j] = (y2 - y1) / (x2 - x1);
    j++;
}
My question is: in Matlab, do we have any simpler way to get the same array 'slope' without manually incrementing j? Something like:
for idx=1:constant
slope[]=(y2-y1)/(x2-x1);
Thank you!
Such operations can usually be done without looping.
For example, if the slope is the same for all entries, you can write
slope = ones(numRows,numCols) * (y2-y1)/(x2-x1);
where numRows and numCols are the size of the array slope.
If you have a list of y-values and x-values, and you want the slope at every point, you can call
slope = (y(2:end)-y(1:end-1))./(x(2:end)-x(1:end-1));
and get everything in one go. Note that y(2:end) are all elements from the second to the last, and y(1:end-1) are all elements from the first to the second to last. Thus, the first element of the slope is calculated from the difference between the second and the first element of y. Also, note the ./ instead of /. The dot makes it an element-wise operation, meaning that I divide the first element of the array in the numerator by the first element of the array in the denominator, etc.