Find the maximum size of a rectangular contiguous submatrix whose elements are all unique (i.e., no element is repeated within the submatrix).
How can I solve this?
Initialize a maximum value to 0. Iterate over the candidate rectangles of the matrix and, if one contains no repeated elements, compare its size to the maximum. If it is bigger, store the new maximum value and use it for further iterations. Whenever you find a new maximum, store whatever else you need about that rectangle. So the algorithm looks like this:
maximum <- 0
for all rectangles as rect
    if (rect has no repeated elements) then
        if (size of rect > maximum) then
            maximum <- size of rect
            store whatever you need to store
        end if
    end if
end for
Note that if you have no further information, a binary search is pointless, since you will have to check the size of each rectangle anyway. If you have additional knowledge about your rectangles, the algorithm might be optimized.
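For illustration, the check-every-rectangle approach above can be sketched as a deliberately naive brute-force search. The function name `max_unique_rect` is my own, and the cost is only acceptable for small matrices:

```python
def max_unique_rect(matrix):
    """Return the area of the largest contiguous submatrix that
    contains no repeated value (brute force, for illustration)."""
    rows, cols = len(matrix), len(matrix[0])
    best = 0
    for top in range(rows):
        for left in range(cols):
            for bottom in range(top, rows):
                for right in range(left, cols):
                    cells = [matrix[r][c]
                             for r in range(top, bottom + 1)
                             for c in range(left, right + 1)]
                    if len(set(cells)) == len(cells):  # all unique?
                        best = max(best, len(cells))
    return best
```

For example, `max_unique_rect([[1, 2], [3, 1]])` is 2, because the full 2x2 matrix repeats the value 1, so only 1x2 and 2x1 rectangles qualify.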
A first idea (recursion): identify pairs of equal values in the whole matrix; each pair imposes a constraint to respect. If a value v appears at both positions (x0, y0) and (x1, y1), then no valid rectangle can contain both positions, so from these constraints you can construct the possible rectangles and recurse on them?
Another one (dynamic programming): start with elementary 1x1 submatrices and try to merge them while respecting the constraint?
I'm designing a game in Scratch. The game is supposed to have a spaceship travel through space, avoiding asteroids. These asteroids start at a fixed X position on the right side of the screen and move left, horizontally, until they hit a fixed X position, at which point they disappear. The asteroids spawn in groups of 2-6 (a randomly generated number), and each set is about 1 second apart.
Assuming the game throws out up to 6 asteroids at once, I want to make sure each asteroid is distant from the next. I tried using two variables and comparing the distance, but this did not work. I can put the group of asteroids' Y spawning positions into a list. So say, for instance, in my list I have:
0, 100, 5, 30, -20
As you can see, there are two items in that list that are close together. What I'm trying to do is prevent this, so the third item would be something else, like -50, for instance; and if a sixth item is generated, ensure it's also distant.
Can someone pseudocode how to achieve something like this? It doesn't matter what programming language it's in, I can probably translate the general idea into Scratch.
There is a way to do this without a trial-and-error loop.
Instead of picking random positions, pick random distances, then scale them to fit the screen.
Roughly, the approach is as follows.
The lists below represent the distances between neighboring asteroids (ordered by Y coordinate), as well as distances between the outermost asteroids and the edges of the screen.
For example, if a group contains 6 asteroids, then you need lists of 7 elements each.
1. Create a list L1 of minimal distances. Obviously, these are all fixed values.
2. Create a list L2 of random numbers. Take them from some arbitrary, fixed range with a positive lower bound, e.g. [1..100].
3. Calculate the total 'slack' = height of screen minus sum(L1).
4. Calculate a multiplication factor = slack divided by sum(L2).
5. Multiply every element of L2 by the multiplication factor.
6. Add every value from L1 to the value in L2 at the same index.
L2 now contains a list of distances that:
- obey the minimal distances specified in L1
- together sum to the height of the screen
The final step is to position every asteroid relative to its neighbor, based on the distances in L2.
Note: if step 3 gives a negative number, then obviously there is not enough room on screen for all asteroids. What's worse, a naive 'trial-and-error' algorithm would then result in an infinite loop. The solution is of course to fix your parameters; you cannot fit 6 asteroids in 360 pixels with a minimal distance of 100.
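The steps above can be sketched as follows (the function and variable names are my own; positions are measured from the bottom edge of the screen):

```python
import random

def asteroid_positions(count, min_dist, screen_height):
    """Pick `count` Y positions so that every pair of neighbors,
    and the gaps to both screen edges, are at least `min_dist` apart.
    Sketch of the distance-scaling approach described above."""
    gaps = count + 1                       # neighbor gaps + two edge gaps
    L1 = [min_dist] * gaps                 # step 1: minimal distances
    L2 = [random.uniform(1, 100) for _ in range(gaps)]  # step 2
    slack = screen_height - sum(L1)        # step 3
    if slack < 0:
        raise ValueError("screen too small for these constraints")
    factor = slack / sum(L2)               # step 4
    dists = [a + b * factor for a, b in zip(L1, L2)]    # steps 5-6
    positions, y = [], 0.0
    for d in dists[:-1]:                   # place each asteroid relative
        y += d                             # to its neighbor
        positions.append(y)
    return positions
```

Because the distances sum to exactly the screen height, no trial-and-error loop is needed and the call always succeeds when the parameters fit.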
To do this, you need to go through each previous entry in the array, compare that value to the new value, and if any element is too close, change the value. This process repeats until a suitable number is found.
A variable tooClose records whether the candidate is within some minimum distance of an existing entry; when it is, the value gets reset. At the beginning of the loop, tooClose is set to yes so that at least one random number is generated. Then, at the start of each attempt, the value is randomized and tooClose is set to no. Next, I loop through all the previous entries with an index i, comparing each element and setting tooClose to yes if it is too close. The comparison between two numbers is done with a subtraction followed by an absolute value, which ensures the result is positive, giving the difference between the two numbers as a positive value.
Here is a screenshot of the code:
And here is the project:
https://scratch.mit.edu/projects/408196031/
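In text form, the trial-and-error loop described above looks roughly like this (Python sketch; the names and the -170..170 Y range are my assumptions about a Scratch stage):

```python
import random

def pick_spawn_y(existing, min_dist, low=-170, high=170):
    """Keep generating a random Y until it is at least `min_dist`
    away from every previously chosen position.
    Warning: loops forever if no valid position exists."""
    too_close = True
    while too_close:
        candidate = random.randint(low, high)
        too_close = False
        for y in existing:
            if abs(candidate - y) < min_dist:  # subtraction + abs()
                too_close = True
                break
    return candidate

# build a group of spawn positions one at a time
positions = []
for _ in range(6):
    positions.append(pick_spawn_y(positions, min_dist=40))
```

Each new position is validated against all positions picked so far, which mirrors the inner comparison loop in the Scratch blocks.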
This is just like any other median-selection problem for an unsorted array, but with an extra restriction: we are required to use a provided subroutine/helper function, Quart(A, p, r), that finds the element at the one-quarter position of the sorted order (the first quartile) of a given subarray in linear time. How can we use this helper function to find the median of an array?
Further restrictions:
1. Your solution must be performed in place (no new array can be created). In particular, one alternative solution would be to extend the array to size m so that all the entries in A[n+1, ..., m] = 1 and m > 2n; after this, you would be able to solve the median problem on the original array with just one call to the quartile routine on the extended array. Under this restriction, that is not possible.
2. While running the algorithm you may temporarily change elements in the array (e.g., a SWAP changes elements). But after your algorithm concludes, all elements in the array must be the same as they were in the beginning (though, just as in the randomized selection algorithm taught in class, they may be in a different order than they were originally).
Since you are not allowed to create new arrays, you are limited to keeping a small (constant) number of extra values.
1. Do a pass through the array and find the min and max.
2. Call Quart to find the quartile value.
3. Iterate through the array and add (max - min) + 1 to all values below the quartile. This moves the bottom quarter of the values to the top.
4. Call Quart again to find the quartile of the new values (which will be the median of the original values).
5. Iterate through the array and subtract (max - min) + 1 from all values greater than the original max, returning the array to its original state.
You might need some additional rules to handle special cases e.g. if there are multiple values equal to the quartile.
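A sketch of those steps in Python. Assumptions I'm making beyond the answer: n is divisible by 4, all values are distinct, Quart returns the (n/4)-th smallest element, and step 3's "below the quartile" is taken as "less than or equal to the quartile" so that exactly a quarter of the values move. The `quart` here is a sorted-copy stand-in for the provided linear-time routine:

```python
def quart(A, p, r):
    """Stand-in for the provided Quart(A, p, r): returns the
    (len/4)-th smallest element of A[p..r]. The real helper is
    assumed to run in linear time; this O(n log n) copy is only
    so the sketch is runnable."""
    sub = sorted(A[p:r + 1])
    return sub[len(sub) // 4 - 1]

def median_via_quart(A):
    """Return the lower median of A using only Quart and O(1)
    extra space; A is temporarily modified, then restored."""
    n = len(A)
    lo, hi = min(A), max(A)          # step 1: one pass for min/max
    shift = (hi - lo) + 1
    q = quart(A, 0, n - 1)           # step 2: first quartile
    for i in range(n):               # step 3: move bottom quarter up
        if A[i] <= q:
            A[i] += shift
    med = quart(A, 0, n - 1)         # step 4: rank n/4 of the shifted
    for i in range(n):               # array = rank n/2 of the original
        if A[i] > hi:                # step 5: undo the shift
            A[i] -= shift
    return med
```

The key observation: after shifting the smallest n/4 values above the old maximum, the element of rank n/4 in the modified array is the element of rank n/2 in the original, i.e. the (lower) median.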
I have a two-dimensional array of doubles that implicitly define values on a two-dimensional bounded integer lattice. Separately, I have n 2D seed points (possibly with non-integer coordinates). I'd like to identify each grid point with its closest seed point, and then sum up the values of the grid points identified with each seed point.
What's the most efficient way to do this with JTS/GeoTools? I've gotten as far as building a Voronoi diagram with VoronoiDiagramBuilder, but I'm not sure how to efficiently assign all the grid points based on it.
The best way to do this depends on the size of n and the number of polygons in your Voronoi diagram. Basically, however, you need to iterate over one of the sets and find the element in the other set that intersects with it.
So assuming that n is less than the number of polygons, I'd do something like:
// features is the collection of Voronoi polygons
// points is the collection of n seed points
Expression propertyName = filterFactory.property(
        features.getSchema().getGeometryDescriptor().getName());
for (Point p : points) {
    Filter filter = filterFactory.contains(propertyName,
            filterFactory.literal(p));
    SimpleFeatureCollection sub = features.subCollection(filter);
    // sub now contains the polygon that holds p;
    // do some processing or save its ID
}
If n is larger than the number of polygons, reverse the loops and use within instead of contains to find all the points in each polygon.
I have an array that changes rapidly and has variable length, roughly 100 elements at minimum and about 5k at maximum. I'm going to use these values to colour a data column that I produce by drawing lines one by one, something like a scan graph.
Another thing is that I have a fixed column length but a variable data array length, so every member of the array should fit into the graph. If the data is shorter than the column, I should expand the array, which is the easier case and I have done that. But if the data is longer, I have to do something like decimation.
The problem is that I should keep the characteristics of the array during decimation. When I tried taking the arithmetic mean of every group of N members, the graph got smoother, which I don't want.
What should I do to fit this array into the graph without changing its characteristics?
This is what the graph looks like: http://imgur.com/KFAzaAQ
I guess you mean that your relevant information is the location of the spikes and their heights. So you want to rescale the graph while preserving the spikes and their heights.
One way to achieve that is to remove the data points having minimal value difference with their neighbours. You could compute for each data point a score proportional to the difference between its value and its neighbours' values, and decimate the data points with the smallest scores. A candidate score function is the sum of squared differences.
You then create a score vector as big as your data vector and for each data value you compute its score like this
score[i] = square(data[i]-data[i-1]) + square(data[i+1]-data[i]);
Another candidate is
score[i] = abs(data[i]-data[i-1]) + abs(data[i+1]-data[i]);
You also want the decimation to be applied as uniformly as possible over your data so that the graph doesn't become distorted. One way to achieve this is to split your data into buckets and decimate the required number of data points in each bucket.
If you have to remove many data points (N > 2), it might be preferable to do it in multiple passes, where each pass recomputes the scores. That way you don't distort the content of each bucket too much.
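A sketch of this bucketed, multi-pass decimation. The function name, the bucket scheme, and the use of the squared-difference score are my choices; endpoints are always kept:

```python
def decimate(data, target_len):
    """Repeatedly drop the flattest (lowest-score) interior point in
    each bucket until the series has target_len points. Assumes
    2 <= target_len <= len(data)."""
    data = list(data)
    while len(data) > target_len:
        n = len(data)
        remove = n - target_len

        def score(i):  # sum of squared differences with the neighbours
            return (data[i] - data[i - 1]) ** 2 + (data[i + 1] - data[i]) ** 2

        interior = list(range(1, n - 1))
        bucket = (len(interior) + remove - 1) // remove   # ceil division
        doomed = []
        for start in range(0, len(interior), bucket):
            chunk = interior[start:start + bucket]
            doomed.append(min(chunk, key=score))  # flattest point in bucket
            if len(doomed) == remove:
                break
        for i in sorted(doomed, reverse=True):    # delete from the right
            del data[i]                           # so indices stay valid
    return data
```

On a series with a single spike, e.g. `[0, 0, 0, 10, 0, 0, 0, 0, 0, 0]` reduced to 5 points, the spike survives because its score dominates every bucket it falls into, while flat runs are thinned out.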
I am tracking particles into a 3D lattice. Each lattice element is labeled with an index corresponding to an unrolled 3D array
S = x + WIDTH * (y + DEPTH * z)
I am interested in the transition from cell S1 to cell S2. The resulting transition matrix M(S1, S2) is sparsely populated, because particles can only reach nearby cells. Unfortunately, with the indexing of an unrolled 3D array, cells that are geometrically near can have very different indices. For instance, cells that sit on top of each other (say at z and z+1) have their indices shifted by WIDTH*DEPTH. Therefore, when I accumulate into the 2D matrix M(S1, S2), S1 and S2 will be very different even though the cells are adjacent. This is a significant problem, because I can't use the usual sparse matrix storage.
At the beginning I tried storing the matrix in coordinate format:
I, J, VALUE
Unfortunately, I need to loop over the entire index set to find the proper (S1, S2) and store the accumulated M(S1, S2).
Usually sparse matrices have some underlying structure, and therefore the indexing is quite straightforward. In this case, however, I have trouble figuring out how to index my cells.
I would appreciate your help
Thank you in advance,
There are several approaches. Which is best depends on operations that need to be performed on the matrix.
A good general purpose one is to use a hash table where the key is the index tuple, in your case (i,j).
If neighboring (in the Euclidean sense) matrix elements must be discoverable, then an alternate strategy is a balanced tree with a Morton Order key. The Morton order value of a key (i,j) is just the integers i and j with their bits interleaved. You should quickly see that index tuples close to each other in the index 2-space are also close in linear Morton order.
Of course if you are building the matrix all at once, after which it's immutable, then you can build the key-value pairs in an array rather than a hash table or balanced tree, sort them (lexicographically for (i,j) pairs and linearly for Morton keys) and then do reads with simple binary search.