//example: no intersection
60
40,41
65,75,85
33,43,53,63
//here there is an intersection (marked with brackets)
[68]
08,18
28,38,48
65,66,67,[68]
These are ship coordinates: four 2D arrays of different lengths, and I need to check whether there is an intersection between their elements.
So technically I am making a Battleship game in C# for the console, and I must prevent the ships from intersecting.
That means just checking whether the same number appears twice is not good, because the match could be in a column or in a row. I have no idea how to check whether there is an intersection in the row coordinates, and the same for the column coordinates.
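A minimal sketch of one way to check this, written in Java for illustration (the same idea carries straight over to a C# HashSet<int>), and assuming each two-digit coordinate encodes a cell as row*10 + column as in the examples above: put every occupied cell into a set and flag a collision as soon as a cell appears twice.

import java.util.HashSet;
import java.util.Set;

public class ShipOverlapCheck {
    // Returns true if any cell is used by more than one ship (or twice in one ship).
    // Assumes each coordinate encodes a cell as rowDigit*10 + columnDigit,
    // e.g. 68 = row 6, column 8, as in the examples above.
    static boolean hasIntersection(int[][] ships) {
        Set<Integer> occupied = new HashSet<>();
        for (int[] ship : ships) {
            for (int cell : ship) {
                // add() returns false if the cell was already taken
                if (!occupied.add(cell)) {
                    return true;
                }
            }
        }
        return false;
    }

    public static void main(String[] args) {
        int[][] noOverlap = { {60}, {40, 41}, {65, 75, 85}, {33, 43, 53, 63} };
        int[][] overlap   = { {68}, {8, 18}, {28, 38, 48}, {65, 66, 67, 68} };
        System.out.println(hasIntersection(noOverlap)); // false
        System.out.println(hasIntersection(overlap));   // true
    }
}

If you later need the row and column separately (for example to also forbid ships touching), you can split a cell code as row = cell / 10 and col = cell % 10.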
Suppose we have 10 arrays like the sample below.
Every element of those arrays has two parts: the section number, and the time in seconds the cursor was in that section (e.g. in the 3rd and 10th seconds the cursor was in that section).
When they give us a new array, we need to compare it with our model and then show a similarity percentage in order to score the actions.
I really have no idea whether I should use any clustering or classification methods, and if so, how they would work for arrays (at university we only ever learned about single elements of an array or some vectors).
I found something in the data mining book by Jiawei Han, Micheline Kamber and Jian Pei. What do you think about cosine similarity?
But then we need to convert that array into another one containing the section numbers and the frequency of visits.
converted array
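Assuming each session has already been converted into a fixed-length frequency vector like that (index = section number, value = how many times the cursor visited it), cosine similarity is just a loop over the two vectors; here is a minimal sketch in Java, with hypothetical model and session vectors:

public class CursorSimilarity {
    // Cosine similarity between two frequency vectors indexed by section number.
    // Returns a value in [0, 1] for non-negative vectors; multiply by 100 for a percentage.
    static double cosine(double[] a, double[] b) {
        double dot = 0, normA = 0, normB = 0;
        for (int i = 0; i < a.length; i++) {
            dot   += a[i] * b[i];
            normA += a[i] * a[i];
            normB += b[i] * b[i];
        }
        return dot / (Math.sqrt(normA) * Math.sqrt(normB));
    }

    public static void main(String[] args) {
        // hypothetical "model" and "new" sessions: visit counts for sections 1..5
        double[] model   = {2, 0, 3, 1, 4};
        double[] session = {1, 0, 2, 2, 3};
        System.out.printf("similarity: %.1f%%%n", 100 * cosine(model, session));
    }
}

Cosine similarity compares the distribution of visits across sections rather than the absolute counts, which is usually what you want when sessions have different lengths.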
I am looking for ideas on how to write an algorithm that populates a 2D array with a predefined set of blocks/arrays.
Let's say you have a 5x5 2D array (so 25 fields) and 6 elements you want to fit into it randomly (straight blocks of 5, 5, 4, 4, 4 and 3 pieces respectively). What is important: I want to place them randomly, either horizontally or vertically, but no cell can be left empty.
The end effect should look like this (given the blocks of X elements on the left side):
This is somewhat related to packing algorithms, although here the idea is that I specify the container dimensions (the 2D array), let's say 8x4, and randomize X stripes of length 3-5 that will be fitted into the array. The sum of all the stripe lengths equals the array size (so no cell is left empty).
On top of that, I think I could add a fail-safe feature: after placing most of the stripes, if the last one can't fit, the algorithm could find any empty cells and fill them by extending a neighbouring stripe, as long as no stripe grows longer than 5 elements.
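One way to get a valid layout without needing the fail-safe at all is plain backtracking: always fill the first empty cell (in row-major order) with one of the remaining stripes, try both orientations, and undo the placement if the rest cannot be completed. A minimal sketch in Java, assuming the stripe lengths sum exactly to the grid size; class and method names are just for illustration:

import java.util.Arrays;

public class StripePacking {

    // Places stripes of the given lengths into a rows x cols grid, each one
    // horizontal or vertical, so that no cell stays empty. Returns the grid
    // filled with stripe ids (1-based), or null if no layout exists.
    static int[][] pack(int rows, int cols, int[] lengths) {
        int[][] grid = new int[rows][cols];              // 0 = empty
        return place(grid, lengths, new boolean[lengths.length]) ? grid : null;
    }

    static boolean place(int[][] grid, int[] lengths, boolean[] used) {
        // Find the first empty cell in row-major order; any stripe covering it
        // has to start there, so no solutions are lost by anchoring here.
        int r = -1, c = -1;
        outer:
        for (int i = 0; i < grid.length; i++)
            for (int j = 0; j < grid[0].length; j++)
                if (grid[i][j] == 0) { r = i; c = j; break outer; }
        if (r < 0) return true;                          // grid is full: done
        for (int s = 0; s < lengths.length; s++) {
            if (used[s]) continue;
            for (boolean horiz : new boolean[] { true, false }) {
                if (!fits(grid, r, c, lengths[s], horiz)) continue;
                used[s] = true;
                mark(grid, r, c, lengths[s], horiz, s + 1);
                if (place(grid, lengths, used)) return true;
                mark(grid, r, c, lengths[s], horiz, 0);  // undo and backtrack
                used[s] = false;
            }
        }
        return false;
    }

    static boolean fits(int[][] g, int r, int c, int len, boolean horiz) {
        for (int k = 0; k < len; k++) {
            int i = horiz ? r : r + k, j = horiz ? c + k : c;
            if (i >= g.length || j >= g[0].length || g[i][j] != 0) return false;
        }
        return true;
    }

    static void mark(int[][] g, int r, int c, int len, boolean horiz, int id) {
        for (int k = 0; k < len; k++)
            g[horiz ? r : r + k][horiz ? c + k : c] = id;
    }

    public static void main(String[] args) {
        int[][] grid = pack(5, 5, new int[] { 5, 5, 4, 4, 4, 3 });
        if (grid != null)
            for (int[] row : grid) System.out.println(Arrays.toString(row));
    }
}

To get a different layout on each run, shuffle the lengths array and randomize which orientation is tried first; the backtracking still guarantees that no cell is left empty.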
I have a two-dimensional array of doubles that implicitly defines values on a bounded two-dimensional integer lattice. Separately, I have n 2D seed points (possibly with non-integer coordinates). I'd like to identify each grid point with its closest seed point, and then sum up the values of the grid points identified with each seed point.
What's the most efficient way to do this with JTS/GeoTools? I've gotten as far as building a Voronoi diagram with VoronoiDiagramBuilder, but I'm not sure how to efficiently assign all the grid points based on it.
The best way to do this depends on the size of n and the number of polygons in your Voronoi diagram. Basically, though, you need to iterate over one of the sets and find the element in the other set that intersects with it.
So assuming that n is less than the number of polygons, I'd do something like:
// features is the collection of Voronoi polygons
// points is the N seed points
Expression propertyName = filterFactory.property(features.getSchema()
        .getGeometryDescriptor()
        .getName());

for (Point p : points) {
    Filter filter = filterFactory.contains(propertyName,
            filterFactory.literal(p));
    SimpleFeatureCollection sub = features.subCollection(filter);
    // sub now contains your polygon
    // do some processing or save ID
}
If n is larger than the number of polygons, reverse the loops and use within instead of contains to find all the points in each polygon.
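A rough sketch of that reversed loop, written here with plain JTS geometry predicates instead of a GeoTools Filter (the point-in-polygon test poly.contains(p) is the same relation as a within filter over a point collection); it assumes, as above, that features holds the Voronoi polygons and points the JTS Points:

SimpleFeatureIterator it = features.features();
try {
    while (it.hasNext()) {
        SimpleFeature cell = it.next();
        Geometry poly = (Geometry) cell.getDefaultGeometry();
        for (Point p : points) {
            if (poly.contains(p)) {
                // p falls in this Voronoi cell
                // do some processing or save the cell's ID
            }
        }
    }
} finally {
    it.close();
}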
I have a 60x60x35 array and would like to use the Wilcoxon signed rank test to check whether the median of each element across the third array dimension (i.e. across 35 values) is different from zero. Thus, I would like my results in two 60x60 arrays: one with values of 0 and 1 depending on the test decision, and a separate array with the corresponding p-values.
The problem I am facing is specifying the command in a way that the desired output has the appropriate dimensions and is calculated across the appropriate dimension of the array.
Thanks for your help and all the best!
So one way to solve your problem is using a nested for-loop. Let's say your data is stored in data:
data = rand(60,60,35);
size_data = size(data);
p = zeros(size_data(1), size_data(2));
p(:,:) = NaN;
h = zeros(size_data(1), size_data(2));
h(:,:) = NaN;
for k = 1:size_data(1)
    for l = 1:size_data(2)
        tmp_data = data(k,l,:);
        tmp_data = reshape(tmp_data, 1, numel(tmp_data));
        [p(k,l), h(k,l)] = signrank(tmp_data);
    end
end
What I am doing is preallocating the memory for p and h as 60x60 matrices. Then I set them to NaN, so you can easily see if something went wrong (0 would be an acceptable result). Then I loop over all elements and store the actual data slice in a temporary variable. signrank needs the data as a vector, so I reshape the 1x1x35 slice to a 1x35 row vector.
I guess you could skip those loops by using bsxfun.
I was trying to think of an algorithm which chooses 6 random cells from an array with 50 cells, such that the probability of each cell being picked is equal.
I need to find a solution that uses the function Random(start, end) no more than 6 times.
I can't use any extra data structure, and it is important that the probability for each cell to be picked is equal and independent.
Call Random(0,49). Read the resulting cell, then shift everything after it in the array down one place so that you have a 49-cell array with the picked value missing.
Then call Random(0,48), and keep going the same way until you have picked 6 cells (6 calls to Random in total).
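A minimal sketch of that approach in Java (java.util.Random.nextInt stands in for the Random(start, end) from the question; names are just for illustration):

import java.util.Random;

public class PickSix {
    // After each pick, shift everything after the picked index one place left
    // and shrink the range for the next call, so every remaining cell stays
    // equally likely. Uses exactly 6 calls to the random generator.
    // Note: this rearranges the input array in place.
    static int[] pickSix(int[] cells) {
        Random rng = new Random();
        int[] picked = new int[6];
        int size = cells.length;                  // 50 to start
        for (int i = 0; i < 6; i++) {
            int idx = rng.nextInt(size);          // like Random(0, size - 1)
            picked[i] = cells[idx];
            // shift the tail down one place, removing the picked cell
            System.arraycopy(cells, idx + 1, cells, idx, size - idx - 1);
            size--;
        }
        return picked;
    }
}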
Put the cells in a list, shuffle it, and take six of them.