I'm given a 2d binary array. Some of the dots are on, some are off (1 for on, 0 for off).
I know that the "on" dots were created earlier by placing circles on the 2D array.
The circles all have the same radius, and each time a circle was placed, the dots inside it changed from 0 to 1.
All the circles lie within the edges of the array, and a dot touching the edge of a circle counts as lit.
An illustration can be seen below. The circles are placed randomly and may touch.
Notice that the dots inside the circles are 1 and all others are 0.
Can you find how many circles there were just by looking at the resulting 2D array, without seeing the circles themselves? Is this problem solvable?
My attempt at solving this problem was:
First, I assumed that my circles can contain dots as in the figure (a radius big enough to contain 4 to 7 dots).
Then I tried to categorize the possible orientations the circles can have, but there are simply too many.
I would like to find these two circles. Note that they cannot overlap, but they can lie right next to each other.
If your circles don't overlap, you can use a connected component labeling (CCL) algorithm and get the number of circles:
NCircles = (NComponents - 1) / 2
(if the inner empty regions of the circles and the outer empty space form separate components)
Edit: with these dots it is worth selecting only connected components whose size falls in some range, to exclude stray dots and other false regions.
A simple kind of CCL suitable for this picture:
scan image until black pixel is met
do flood fill while possible, keep bounding box of scanned black pixels
if box corresponds to circle size, count it
scan further from any unmarked pixel
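A minimal C++ sketch of that scan-and-fill loop, assuming the array is a vector<vector<int>> with 1 for lit dots and that you can estimate expectedDiameter (the rough width of one circle's bounding box, in dots) from the known radius; the function name and the ±1 tolerance are just illustrative choices:

#include <algorithm>
#include <cstdlib>
#include <stack>
#include <utility>
#include <vector>

// grid: 1 = lit dot, 0 = empty; visited dots are marked with 2.
int countCircles(std::vector<std::vector<int>> grid, int expectedDiameter) {
    int rows = grid.size(), cols = grid[0].size(), circles = 0;
    for (int r = 0; r < rows; ++r)
        for (int c = 0; c < cols; ++c) {
            if (grid[r][c] != 1) continue;              // scan until a lit dot is met
            int minR = r, maxR = r, minC = c, maxC = c; // bounding box of this component
            std::stack<std::pair<int,int>> st;          // iterative flood fill
            st.push({r, c});
            grid[r][c] = 2;
            while (!st.empty()) {
                auto [y, x] = st.top(); st.pop();
                minR = std::min(minR, y); maxR = std::max(maxR, y);
                minC = std::min(minC, x); maxC = std::max(maxC, x);
                const int dy[] = {1, -1, 0, 0}, dx[] = {0, 0, 1, -1};
                for (int k = 0; k < 4; ++k) {
                    int ny = y + dy[k], nx = x + dx[k];
                    if (ny >= 0 && ny < rows && nx >= 0 && nx < cols && grid[ny][nx] == 1) {
                        grid[ny][nx] = 2;
                        st.push({ny, nx});
                    }
                }
            }
            // count the component only if its bounding box roughly matches one circle
            if (std::abs((maxR - minR + 1) - expectedDiameter) <= 1 &&
                std::abs((maxC - minC + 1) - expectedDiameter) <= 1)
                ++circles;
        }
    return circles;
}

Touching circles merge into one component and fail the box test, so this only counts clean, isolated circles, which matches the size-filtering idea in the edit above.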
One more possible approach: you can try the Hough transform for circles of a predefined radius.
For example, the OpenCV library contains a labeling function that works with images and arrays (and a Hough transform too).
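If you rasterize the dot array into an 8-bit grayscale image first, a rough sketch of the Hough route with OpenCV could look like this; the dp, threshold, and radius-tolerance values are assumptions you would have to tune, and the gradient-based method may need the dots blurred or dilated into solid discs to give usable edges:

#include <opencv2/opencv.hpp>
#include <vector>

int countCirclesHough(const cv::Mat& image, int radius) {
    std::vector<cv::Vec3f> circles;
    cv::HoughCircles(image, circles, cv::HOUGH_GRADIENT,
                     1,                     // dp: accumulator has the image resolution
                     radius * 1.5,          // minimum distance between detected centers
                     100, 10,               // Canny / accumulator thresholds (tune)
                     radius - 2, radius + 2);
    return static_cast<int>(circles.size());
}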
Why not just randomly generate circles and count them?
When you insert a new circle, just check that it does not overlap any circle you have already placed.
Stop inserting new circles after you have tried a certain number of times in a row and failed to insert one. You will probably need to play with this threshold a bit.
You can repeat this a few times and average the results.
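A rough sketch of that idea, assuming circles of a known radius that may touch but not overlap; coversOnlyLitDots and the failure threshold are illustrative, and in practice you would call estimateCircleCount several times and average the counts:

#include <cmath>
#include <random>
#include <vector>

struct Circle { double cx, cy, r; };

// True if every dot inside the candidate circle is lit (1) and in bounds.
bool coversOnlyLitDots(const std::vector<std::vector<int>>& grid, const Circle& c) {
    for (int y = (int)std::floor(c.cy - c.r); y <= (int)std::ceil(c.cy + c.r); ++y)
        for (int x = (int)std::floor(c.cx - c.r); x <= (int)std::ceil(c.cx + c.r); ++x) {
            if (std::hypot(x - c.cx, y - c.cy) > c.r) continue;   // dot is outside the circle
            if (y < 0 || y >= (int)grid.size() || x < 0 || x >= (int)grid[0].size())
                return false;                                     // circle sticks out of the array
            if (grid[y][x] == 0)
                return false;                                     // unlit dot inside the circle
        }
    return true;
}

int estimateCircleCount(const std::vector<std::vector<int>>& grid,
                        double radius, int maxFailures) {
    std::mt19937 rng(std::random_device{}());
    std::uniform_real_distribution<double> randX(0.0, grid[0].size() - 1.0);
    std::uniform_real_distribution<double> randY(0.0, grid.size() - 1.0);

    std::vector<Circle> placed;
    int failures = 0;
    while (failures < maxFailures) {
        Circle cand{randX(rng), randY(rng), radius};
        bool overlaps = false;
        for (const Circle& p : placed)
            if (std::hypot(cand.cx - p.cx, cand.cy - p.cy) < 2.0 * radius) {
                overlaps = true;                                  // touching (==) is allowed
                break;
            }
        if (!overlaps && coversOnlyLitDots(grid, cand)) {
            placed.push_back(cand);
            failures = 0;                                         // reset after a success
        } else {
            ++failures;
        }
    }
    return (int)placed.size();
}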
I'm making a program in C (simple snake game).
I'm using windows.h and came across an inconvenience.
I'm using COORD's and SetConsoleCursorPosition to move about the cursor.
However, moving one y coordinate is almost the same as moving two x coordinates in terms of how many pixels each represents.
For example, this square window has a width of 80 and height of 40 in terms of the cursor position coordinates.
Also, you can clearly see the contraction (and therefore the reduction in the snake's apparent speed) when it moves sideways in the images below.
Is there any efficient solution so that one move in the x direction covers the same number of pixels as one move in the y direction?
Many thanks.
The SetCurrentConsoleFontEx function lets you specify the console font size in the lpConsoleCurrentFontEx argument's dwFontSize member. There you can set the font width and height to be the same.
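A minimal sketch of that call (error handling omitted; the 8x8 size and the idea of starting from the current font are just example choices, and you may also need to pick a specific face for an exactly square cell):

#include <windows.h>

int main(void) {
    HANDLE hOut = GetStdHandle(STD_OUTPUT_HANDLE);

    CONSOLE_FONT_INFOEX font = {0};
    font.cbSize = sizeof(font);
    GetCurrentConsoleFontEx(hOut, FALSE, &font);   // start from the current font
    font.dwFontSize.X = 8;                         // equal width and height per cell
    font.dwFontSize.Y = 8;
    SetCurrentConsoleFontEx(hOut, FALSE, &font);
    return 0;
}

Alternatively, since one row is worth roughly two columns, you could keep the default font and simply move the snake two columns per horizontal step so that each move covers about the same number of pixels in both directions.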
I am a beginner hobbyist programmer in my first year of college. Recently I've been obsessed with the puzzle game "The Witness", for its minimalist yet surprisingly difficult puzzles. As a passion project I'm attempting to recreate just the Puzzle element of the game for others to enjoy.
THE GAME
This is How the Game Looks So Far
Essentially, you have a white path that is controlled by the user, and you must navigate that path through the grid, splitting the grid into regions consisting of black and white tiles. Each region must have only white or only black tiles.
I've posted a picture of how the project looks so far, with a solved puzzle.
THE PROBLEM
I cannot for the life of me figure out a function to split the grid into regions as shown in the image. The Path is a 1D array of the x and y coordinates of each point in the path. When it's done, it should be at the top right corner of the grid at (cols,rows). This is assuming the lower left corner is (0,0).
Path = [[x1,y1],[x2,y2],...,[cols,rows]]
Each puzzle has n rows and n columns, so I'd like a function getRegions(path, cols, rows) that takes in the path and the rows and columns, and outputs an array like this:
arrayWithRegions=
[[2,3,3,2],
[2,2,2,2],
[1,1,1,2],
[1,1,1,2]]
where each square is marked as being in a distinct region based on the boundaries set by the path and the outer border. The example is how the array would look for the puzzle in the image provided. (disregard the black and white blocks, they don't matter right now)
I'd appreciate any sort of help or even a nudge in the right direction. Thanks!
You can just run a flood fill algorithm using the path line and the field edges as borders.
Choose any unmarked cell (for example, the bottom-left one), start a flood fill with region mark 1, and traverse all reachable cells. Then find another unmarked cell, start a fill with region mark 2, and so on.
The simplest recursive implementation of the flood fill algorithm plus a sequential search for unmarked cells should work nicely for a field of your size.
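A possible C++ sketch of that fill, assuming the path is a list of grid-vertex coordinates with (0, 0) at the lower-left and that two neighbouring cells are connected unless the path runs along the edge between them; getRegions, pathBlocks and the regions[y][x] indexing are illustrative choices, not anything from your code:

#include <utility>
#include <vector>

using Point = std::pair<int,int>;   // (x, y) grid vertex

// True if the path uses the edge between vertices a and b (in either direction).
bool pathBlocks(const std::vector<Point>& path, Point a, Point b) {
    for (size_t i = 0; i + 1 < path.size(); ++i)
        if ((path[i] == a && path[i + 1] == b) || (path[i] == b && path[i + 1] == a))
            return true;
    return false;
}

// Label every cell with a region number; cell (x, y) has its lower-left corner
// at vertex (x, y), and regions[y][x] receives its label.
std::vector<std::vector<int>> getRegions(const std::vector<Point>& path,
                                         int cols, int rows) {
    std::vector<std::vector<int>> regions(rows, std::vector<int>(cols, 0));
    int label = 0;
    for (int sy = 0; sy < rows; ++sy)
        for (int sx = 0; sx < cols; ++sx) {
            if (regions[sy][sx] != 0) continue;          // already belongs to a region
            ++label;
            std::vector<Point> stack{{sx, sy}};          // iterative flood fill
            regions[sy][sx] = label;
            while (!stack.empty()) {
                auto [x, y] = stack.back(); stack.pop_back();
                // right neighbour, blocked by the vertical edge at x + 1
                if (x + 1 < cols && regions[y][x + 1] == 0 &&
                    !pathBlocks(path, {x + 1, y}, {x + 1, y + 1}))
                    { regions[y][x + 1] = label; stack.push_back({x + 1, y}); }
                // left neighbour, vertical edge at x
                if (x > 0 && regions[y][x - 1] == 0 &&
                    !pathBlocks(path, {x, y}, {x, y + 1}))
                    { regions[y][x - 1] = label; stack.push_back({x - 1, y}); }
                // upper neighbour, horizontal edge at y + 1
                if (y + 1 < rows && regions[y + 1][x] == 0 &&
                    !pathBlocks(path, {x, y + 1}, {x + 1, y + 1}))
                    { regions[y + 1][x] = label; stack.push_back({x, y + 1}); }
                // lower neighbour, horizontal edge at y
                if (y > 0 && regions[y - 1][x] == 0 &&
                    !pathBlocks(path, {x, y}, {x + 1, y}))
                    { regions[y - 1][x] = label; stack.push_back({x, y - 1}); }
            }
        }
    return regions;
}

Row 0 here is the bottom row, so if you print the result top-down (as in the example array above) you would reverse the row order.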
Hi, I'm currently in the middle of a project where a new block is added onto a chain of blocks on the grid every timestep. How would I be able to detect that a circle has been made in the grid, given that all I have are the coordinates (x,y) and the color of each cell? By "circle" I mean an area that is sealed off, as shown in the picture.
Thanks in advance! By the way, I'm not asking how to click on a cell and apply the flood-fill algorithm.
Running the algorithm should produce this:
You need to split all of your white (unfilled) squares into sets of squares adjacent to each other. Start with any white square, add all of its unfilled adjacent squares to the set, and keep doing it until you've included all of the squares.
Once you have those sets, you will have a "circle" (as you named it) if there are non-empty sets that do not contain any border squares. Then to fill these sets you just change the color of each member to blue.
If you have the sets from the previous step, when you add another brick you just need to consider the set that included the affected square to see if it has been split into two sets and whether either of these new sets may be a "circle".
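A sketch of those two steps (building the sets and testing for a border square), assuming a simple boolean occupancy grid; names like enclosedCells are hypothetical:

#include <utility>
#include <vector>

// filled[r][c] is true where a block of the chain sits.  Returns every empty cell
// that belongs to a set with no border square, i.e. the cells to recolour blue.
std::vector<std::pair<int,int>> enclosedCells(const std::vector<std::vector<bool>>& filled) {
    int rows = filled.size(), cols = filled[0].size();
    std::vector<std::vector<bool>> seen(rows, std::vector<bool>(cols, false));
    std::vector<std::pair<int,int>> result;

    for (int r = 0; r < rows; ++r)
        for (int c = 0; c < cols; ++c) {
            if (filled[r][c] || seen[r][c]) continue;
            // collect one set of adjacent empty squares
            std::vector<std::pair<int,int>> component, stack{{r, c}};
            seen[r][c] = true;
            bool touchesBorder = false;
            while (!stack.empty()) {
                auto [y, x] = stack.back(); stack.pop_back();
                component.push_back({y, x});
                if (y == 0 || y == rows - 1 || x == 0 || x == cols - 1)
                    touchesBorder = true;                 // the set reaches the grid edge
                const int dy[] = {1, -1, 0, 0}, dx[] = {0, 0, 1, -1};
                for (int k = 0; k < 4; ++k) {
                    int ny = y + dy[k], nx = x + dx[k];
                    if (ny >= 0 && ny < rows && nx >= 0 && nx < cols &&
                        !filled[ny][nx] && !seen[ny][nx]) {
                        seen[ny][nx] = true;
                        stack.push_back({ny, nx});
                    }
                }
            }
            if (!touchesBorder)                           // a sealed-off "circle"
                result.insert(result.end(), component.begin(), component.end());
        }
    return result;
}

For the incremental update described above, you would only need to rerun this on the set that contained the square the new block landed on.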
What I am doing is a pick program. There are many triangles and I want to select the front, visible ones with a rectangular region. The main method is described below.
There are a lot of triangles, and each triangle has its own color.
Draw all the triangles to a frame buffer.
Read the color of each pixel in the frame buffer; based on the color, we know which triangles were selected.
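A sketch of the usual ID-to-color round trip for this kind of picking, assuming flat (unlit, unblended, non-multisampled) rendering and OpenGL's glReadPixels for the read-back; the 24-bit packing is just one convenient encoding:

#include <GL/gl.h>

// Encode a triangle index into an RGB color (supports up to 2^24 - 1 triangles).
void indexToColor(unsigned index, unsigned char rgb[3]) {
    rgb[0] = (index >> 16) & 0xFF;
    rgb[1] = (index >> 8)  & 0xFF;
    rgb[2] =  index        & 0xFF;
}

// Read back one pixel and decode the index drawn there
// (assumes the background was cleared to 0 and triangle indices start at 1).
unsigned pickAt(int x, int y) {
    unsigned char rgb[3];
    glReadPixels(x, y, 1, 1, GL_RGB, GL_UNSIGNED_BYTE, rgb);
    return ((unsigned)rgb[0] << 16) | ((unsigned)rgb[1] << 8) | rgb[2];
}

For a rectangular selection you would read the whole rectangle in one glReadPixels call and collect the distinct colors found in it.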
The problem is that some tiny triangles cannot be displayed in the final frame buffer, like the green triangle in the picture. I think the triangle is too tiny and gets ignored by the graphics card.
My question is: how can I display the tiny triangles in the final frame buffer, or how can I know which triangles were ignored by the graphics card?
Triangles are not skipped based on their size, but if no pixel center falls inside a triangle or lies on its top or left edge (this is referred to as coverage testing), that triangle does not generate any fragments during rasterization.
That does mean that certain really small triangles are never rasterized, but it is not entirely because of their size, just that their position is such that they do not satisfy pixel coverage.
Take a moment to examine the following diagram from the DirectX API documentation. Because of the size and position of the triangle I have circled in red, this triangle does not satisfy coverage for any pixels (I have illustrated the left edge of the triangle in green) and thus never shows up on screen despite having a tangible surface area.
If the triangle highlighted were moved about a half-pixel in any direction it would cover at least one pixel. You still would not know it was a triangle, because it would show up as a single pixel, but it would at least be pickable.
Solving this problem will require you to ditch color picking altogether. Multisample rasterization can fix the coverage issue for small triangles, but it will compute pixel colors as the average of all samples and that will break color picking.
Your only viable solution is to do point inside triangle testing instead of relying on rasterization. In fact, the typical alternative to color picking is to cast a ray from your eye position through the far clipping plane and test for intersection against all objects in the scene.
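If you go the ray-casting route, the standard Möller-Trumbore ray/triangle intersection test is the usual building block; a self-contained sketch (Vec3 and the helpers are stand-ins, not from any particular library):

#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3  sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3  cross(Vec3 a, Vec3 b) { return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x}; }
static float dot(Vec3 a, Vec3 b)   { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Returns true if the ray orig + t*dir (t > 0) hits the triangle v0 v1 v2.
bool rayHitsTriangle(Vec3 orig, Vec3 dir, Vec3 v0, Vec3 v1, Vec3 v2) {
    const float eps = 1e-7f;
    Vec3 e1 = sub(v1, v0), e2 = sub(v2, v0);
    Vec3 p  = cross(dir, e2);
    float det = dot(e1, p);
    if (std::fabs(det) < eps) return false;        // ray is parallel to the triangle plane
    float inv = 1.0f / det;
    Vec3 t0 = sub(orig, v0);
    float u = dot(t0, p) * inv;                    // first barycentric coordinate
    if (u < 0.0f || u > 1.0f) return false;
    Vec3 q = cross(t0, e1);
    float v = dot(dir, q) * inv;                   // second barycentric coordinate
    if (v < 0.0f || u + v > 1.0f) return false;
    return dot(e2, q) * inv > eps;                 // hit must lie in front of the origin
}

For a rectangular selection you would build a small frustum from the rectangle and test each triangle against it, rather than casting a single ray.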
The usability aspect of what you seem to be doing seems somewhat questionable to me. I doubt that most users would expect a triangle to be pickable if it's so small that they can't even see it. The most obvious solution is that you let the user zoom in if they really need to selectively pick such small details.
On the part that can actually be answered on a technical level: To find out if triangles produced any visible pixels/fragments/samples, you can use queries. If you want to count the pixels for n "objects" (which can be triangles), you would first generate the necessary query object names:
GLuint queryIds[n]; // probably dynamically allocated in real code
glGenQueries(n, queryIds);
Then bracket the rendering of each object with glBeginQuery()/glEndQuery():
for (int i = 0; i < n; ++i) {
    glBeginQuery(GL_SAMPLES_PASSED, queryIds[i]);
    // draw object i
    glEndQuery(GL_SAMPLES_PASSED);
}
Then at the end, you can get all the results:
for (int i = 0; i < n; ++i) {
    GLint pixelCount = 0;
    glGetQueryObjectiv(queryIds[i], GL_QUERY_RESULT, &pixelCount);
    if (pixelCount > 0) {
        // object i produced visible pixels
    }
}
A couple more points to be aware of:
If you only want to know if any pixels were rendered, but don't care how many, you can use GL_ANY_SAMPLES_PASSED instead of GL_SAMPLES_PASSED.
The query counts samples that pass the depth test, as the rendering happens. So there is an order dependency. A triangle could have visible samples when it is rendered, but they could later be hidden by another triangle that is drawn in front of it. If you only want to count the pixels that are actually visible at the end of the rendering, you'll need a two-pass approach.
I'm in the process of writing a C program using OpenCV to detect some rectangles made with tape, which are hollow on the inside. Problem is, each physical rectangle gives two digital rectangles: one for the inner perimeter, one for the outer perimeter. The outer rectangle in all cases completely encloses the inner rectangle.
I need some way to remove the inner rectangles, and in a reasonably efficient manner, as this is being run on a video feed and must not drop framerate considerably (approx. 15fps, on a BeagleBoard xM, which is not terribly powerful).
There are always four physical rectangles, and somewhere between four to eight digital rectangles depending on the cleanliness of the processing operations. The outer rectangle is detected reliably; the inner rectangle is not. The image is thresholded, eroded, and dilated such that the image is clean and detection is reliable in general.
I feel that this problem is separate from OpenCV and is really just working with rectangles and could probably be solved by me with some time, but the project is on a crunch deadline, so I'm also throwing this question in. Thanks in advance, guys.
There is a function called groupRectangles in OpenCV.
The function can merge and remove clusters of similar rectangles...
Happy coding.
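If you try that route, a sketch of the call (the grouping threshold and eps are values you would have to tune, and note the caveat in the comment):

#include <opencv2/opencv.hpp>
#include <vector>

void mergeSimilarRects(std::vector<cv::Rect>& rects) {
    // Clusters similar rectangles and replaces each cluster with an average rectangle.
    // Caveat: with groupThreshold = 1, a rectangle with no similar partner is dropped,
    // so a reliably detected outer rectangle without an inner twin could disappear.
    cv::groupRectangles(rects, 1, 0.2);
}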
Since you only have at most 8 digital rectangles, I think it would be fine to use the natural, brute force algorithm to figure out which rectangles are inside other rectangles. It's OK to use O(N^2) algorithms when N is small, and 8 is small.
Here is the pseudo code:
for each rectangle i {
for each rectangle j {
if i != j and rectangle i is inside rectangle j {
disregard rectangle i
}
}
}
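With OpenCV's cv::Rect that containment test is a one-liner (intersect the two rectangles and compare with the smaller one); a sketch of the same brute-force loop:

#include <opencv2/opencv.hpp>
#include <vector>

// Keep only rectangles that are not fully contained in another rectangle.
// (Two identical rectangles would discard each other; break such ties by index if needed.)
std::vector<cv::Rect> dropInnerRects(const std::vector<cv::Rect>& rects) {
    std::vector<cv::Rect> outer;
    for (size_t i = 0; i < rects.size(); ++i) {
        bool inside = false;
        for (size_t j = 0; j < rects.size(); ++j)
            if (i != j && (rects[i] & rects[j]) == rects[i]) {  // i lies entirely within j
                inside = true;
                break;
            }
        if (!inside)
            outer.push_back(rects[i]);
    }
    return outer;
}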
Solved - the speedy solution is to take the distance to one of the corners from the center point of the rectangle, and compare that distance between rectangles whose centers are very close together. The one with the shorter distance must be the inner rectangle.
Code-wise you'd want to calculate the center, then find one corner, say the top-left point, which is just the point with both minimum x and minimum y. Calculate the distance between them and store it somehow. For each rectangle, iterate over the other ones and check if their centers are very close (a constant of ~30px works fine for this in my case). Compare the distances calculated earlier; the rectangle with the shorter distance should be deleted from the list of rectangles.
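A sketch of that comparison with cv::Rect; the 30 px center threshold is the constant mentioned above, and dropInnerByDiagonal is just an illustrative name:

#include <cmath>
#include <opencv2/opencv.hpp>
#include <vector>

// Of each pair of rectangles whose centers nearly coincide, drop the one with the
// shorter center-to-corner distance (the inner rectangle).
std::vector<cv::Rect> dropInnerByDiagonal(const std::vector<cv::Rect>& rects,
                                          double centerThreshold = 30.0) {
    std::vector<bool> drop(rects.size(), false);
    auto halfDiag = [](const cv::Rect& r) {
        return std::hypot(r.width / 2.0, r.height / 2.0);     // center-to-corner distance
    };
    for (size_t i = 0; i < rects.size(); ++i)
        for (size_t j = i + 1; j < rects.size(); ++j) {
            double dx = (rects[i].x + rects[i].width  / 2.0) - (rects[j].x + rects[j].width  / 2.0);
            double dy = (rects[i].y + rects[i].height / 2.0) - (rects[j].y + rects[j].height / 2.0);
            if (std::hypot(dx, dy) < centerThreshold) {
                if (halfDiag(rects[i]) < halfDiag(rects[j])) drop[i] = true;
                else                                         drop[j] = true;
            }
        }
    std::vector<cv::Rect> kept;
    for (size_t i = 0; i < rects.size(); ++i)
        if (!drop[i]) kept.push_back(rects[i]);
    return kept;
}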