Most efficient way to determine circle coordinates? - c

I am making a function for drawing a circle in 2d space.
For this, I have identified 2 approaches:
1. go through all the possible pixels and run them through a formula that returns a value showing whether the pixel coordinates are inside the circle, outside it (bonus: or intersecting it)
2. get all the circle pixels (basically draw the circle)
I tried to look at some math sources, but I have met with some problems:
in the second approach, the step at which I increment the angle matters: if the step is too small, or the radius is too small, pixels get duplicated unnecessarily. On the other hand, if the angle step is too large, or the radius is too large, there will be gaps.
The formula I was using is:
struct vec2 { int x; int y; };

/* angle is in radians; cos()/sin() come from <math.h> */
void get_circle(int x, int y, int r, double angle, struct vec2 *coordinates) {
    coordinates->x = x + r * cos(angle);
    coordinates->y = y + r * sin(angle);
}
This is obviously a bit much to run a lot of times.
I also want to do some kind of primitive anti-aliasing, so if I can get a value telling me that a pixel is only half covered by the circle line, it would be drawn as a half-pixel.
My final goal is to draw a nice circle with a line that can be thick. The thickness can be achieved with the area approach, where I fill all pixels inside the outer circle and then remove the pixels inside the inner circle; or it could be several iterations of the circle. I didn't write the array part of the computation, but yes, I would like each pixel identified. If we treat a pixel as a rectangle, then I would like no pixel to be drawn if the theoretical circle covers less than 33% of its surface, a half-pixel for 33-66%, and a full pixel for more than 66%.
Please advise. I need some approach that will be computationally efficient.

First, "most efficient" depends on quite a few things. For most modern OpenGL systems you can usually get away with just computing points around the circumference using sine and cosine (and an appropriate aspect scale) with the native floating-point type, then plotting the points using any decent polyline algorithm.
Once you have things working, profile.
If profiling shows your algorithm to be holding things up (and compared to other normal and common computations, it shouldn't be), only then should you spend time and effort on trickier (read: more complicated) stuff, like the Midpoint Circle Algorithm to generate points to send to your polyline.
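For reference, a minimal sketch of the Midpoint Circle Algorithm might look like this; the plot(x, y) callback is a hypothetical stand-in for however you actually set a pixel or collect a point.

/* Walk one octant with pure integer arithmetic and mirror each point
   into the other seven octants. */
static void midpoint_circle(int cx, int cy, int r, void (*plot)(int, int))
{
    int x = r, y = 0;
    int err = 1 - r;                 /* midpoint decision variable */
    while (y <= x) {
        plot(cx + x, cy + y); plot(cx - x, cy + y);
        plot(cx + x, cy - y); plot(cx - x, cy - y);
        plot(cx + y, cy + x); plot(cx - y, cy + x);
        plot(cx + y, cy - x); plot(cx - y, cy - x);
        ++y;
        if (err < 0) {
            err += 2 * y + 1;        /* midpoint inside the circle: keep x */
        } else {
            --x;
            err += 2 * (y - x) + 1;  /* midpoint outside: step x inward */
        }
    }
}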
Also, don't forget to memoize the result into a sprite, texture, or pixmap (whatever is appropriate for your hardware/software), if and only if profiling shows a worthwhile improvement.

Related

How to display the tiny triangles or recognize them quickly?

What I am doing is a pick program. There are many triangles and I want to select the front, visible ones with a rectangular region. The main method is described below.
1. There are a lot of triangles, and each triangle has its own color.
2. Draw all the triangles to a frame buffer.
3. Read the color of each pixel in the frame buffer; based on the color, we know which triangles are selected.
The problem is that some tiny triangles cannot be displayed in the final frame buffer, just like the green triangle in the picture. I think the triangle is too tiny and gets ignored by the graphics card.
My question is: how do I display the tiny triangles in the final frame buffer, or how do I know which triangles are ignored by the graphics card?
Triangles are not skipped based on their size, but if no pixel center falls inside the triangle or on its top or left edges (this is referred to as coverage testing), the triangle does not generate any fragments during rasterization.
That does mean that certain really small triangles are never rasterized, but it is not entirely because of their size, just that their position is such that they do not satisfy pixel coverage.
Take a moment to examine the following diagram from the DirectX API documentation. Because of the size and position of the triangle I have circled in red, this triangle does not satisfy coverage for any pixels (I have illustrated the left edge of the triangle in green) and thus never shows up on screen despite having a tangible surface area.
If the triangle highlighted were moved about a half-pixel in any direction it would cover at least one pixel. You still would not know it was a triangle, because it would show up as a single pixel, but it would at least be pickable.
Solving this problem will require you to ditch color picking altogether. Multisample rasterization can fix the coverage issue for small triangles, but it will compute pixel colors as the average of all samples and that will break color picking.
Your only viable solution is to do point inside triangle testing instead of relying on rasterization. In fact, the typical alternative to color picking is to cast a ray from your eye position through the far clipping plane and test for intersection against all objects in the scene.
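For illustration, a minimal sketch of a 2D point-in-triangle test using signed edge functions might look like the following; it assumes the triangle corners have already been projected to screen space, and the type and function names are made up for the example.

typedef struct { float x, y; } Point2;

/* >0 if p is to the left of edge a->b, <0 if to the right, 0 if on it */
static float edge(Point2 a, Point2 b, Point2 p)
{
    return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
}

/* Inside (or on an edge) if all three edge signs agree, regardless of winding. */
static int point_in_triangle(Point2 p, Point2 a, Point2 b, Point2 c)
{
    float e0 = edge(a, b, p);
    float e1 = edge(b, c, p);
    float e2 = edge(c, a, p);
    return (e0 >= 0 && e1 >= 0 && e2 >= 0) ||
           (e0 <= 0 && e1 <= 0 && e2 <= 0);
}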
The usability aspect of what you seem to be doing seems somewhat questionable to me. I doubt that most users would expect a triangle to be pickable if it's so small that they can't even see it. The most obvious solution is that you let the user zoom in if they really need to selectively pick such small details.
On the part that can actually be answered on a technical level: To find out if triangles produced any visible pixels/fragments/samples, you can use queries. If you want to count the pixels for n "objects" (which can be triangles), you would first generate the necessary query object names:
GLuint queryIds[n]; // probably dynamically allocated in real code
glGenQueries(n, queryIds);
Then bracket the rendering of each object with glBeginQuery()/glEndQuery():
for (int i = 0; i < n; ++i) {
    glBeginQuery(GL_SAMPLES_PASSED, queryIds[i]);
    // draw object i
    glEndQuery(GL_SAMPLES_PASSED);
}
Then at the end, you can get all the results:
for (int i = 0; i < n; ++i) {
    GLint pixelCount = 0;
    glGetQueryObjectiv(queryIds[i], GL_QUERY_RESULT, &pixelCount);
    if (pixelCount > 0) {
        // object i produced visible pixels
    }
}
A couple more points to be aware of:
If you only want to know if any pixels were rendered, but don't care how many, you can use GL_ANY_SAMPLES_PASSED instead of GL_SAMPLES_PASSED.
The query counts samples that pass the depth test, as the rendering happens. So there is an order dependency. A triangle could have visible samples when it is rendered, but they could later be hidden by another triangle that is drawn in front of it. If you only want to count the pixels that are actually visible at the end of the rendering, you'll need a two-pass approach (a rough sketch follows).
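A rough sketch of that two-pass idea, assuming you can simply re-issue the geometry: the first pass lays down the final depth buffer, the second pass re-draws with depth writes off so the queries only count samples that survive into the final image.

/* Pass 1: lay down the final depth buffer, no color writes
   (assumes depth testing is already enabled). */
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
glDepthMask(GL_TRUE);
glDepthFunc(GL_LESS);
/* ... draw all objects ... */

/* Pass 2: re-draw the same geometry with depth writes off; only the
   fragments visible in the final image pass the depth test, so the
   per-object queries now count end-of-frame visibility. */
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glDepthMask(GL_FALSE);
glDepthFunc(GL_LEQUAL);
/* ... draw each object inside its glBeginQuery()/glEndQuery() pair ... */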

Maximum-perimeter bounding rectangle for a set of points

I've been struggling for quite a while with this seemingly simple problem. I am given a set of points (which I have further simplified down to a convex hull), and my task is to find a rectangle (not necessarily axis-aligned) that encompasses all of them, has no extra space around it (so that it is tight-fitting around the points), and has the maximum possible perimeter. It was no trouble for me to find the minimal one, but this has proven to be a tougher nut to crack. When searching for the minimal bounding rectangle, I was able to use the fact that one of the rectangle's sides is always aligned with one of the hull's sides, but I don't see any such property here. Am I missing something painfully obvious? The only approach I could come up with so far is to test antipodal pairs of points to see if they can project onto the sides of the rectangle and use some trigonometry to maximize the function, but I just lost myself in the calculations.
Thanks in advance!
First, compute the convex hull of your point set.
Now, think about spinning the polygon around and computing the smallest enclosing axis-aligned rectangle. Notice that the top point, the left point, the right point, and the bottom point will proceed clockwise around the convex hull from one vertex to the next.
You can't try every possible angle explicitly, but you can do a sweep trick. Given an angle, you can compute the top, left, bottom, and right points after spinning the polygon by that angle, as well as which of the top, left, bottom, and right points is the first to change identity as you continue rotating the polygon. So you get a range of angles for which your current choices of top, left, bottom, and right are correct; further, you know what the next correct choice of top, left, bottom, and right is.
For each legitimate choice of top, left, bottom, and right, you wind up having to compute the maximum value of a*sin(theta) + b*cos(theta) for fixed a and b over some range of theta. Recall from trig that a*sin(theta) + b*cos(theta) = sqrt(a^2 + b^2) * cos(theta - arctan(a/b)). You evaluate the function at the boundaries of your interval and wherever the derivative is zero (at arctan(a/b) plus any integer multiple of pi), and you're golden.
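To make that last step concrete, here is a small sketch (the names are mine) that evaluates the maximum of a*sin(theta) + b*cos(theta) over one such interval; it assumes the interval is less than pi wide, as these rotating-calipers style ranges are.

#include <math.h>

/* Maximum of f(theta) = a*sin(theta) + b*cos(theta) on [lo, hi].
   f'(theta) = a*cos(theta) - b*sin(theta) = 0 at theta = atan2(a, b) + k*pi,
   so check the endpoints plus any critical point falling inside the interval. */
static double max_on_interval(double a, double b, double lo, double hi)
{
    const double pi = 3.14159265358979323846;
    double best = fmax(a * sin(lo) + b * cos(lo), a * sin(hi) + b * cos(hi));
    double crit = atan2(a, b);
    for (int k = -2; k <= 2; ++k) {          /* shift the critical angle into range */
        double t = crit + k * pi;
        if (t >= lo && t <= hi)
            best = fmax(best, a * sin(t) + b * cos(t));
    }
    return best;
}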

does glRotate in OpenGL rotate the camera or rotate the world axis or rotate the model object?

I want to know whether glRotate rotates the camera, the world axis, or the object. Explain how they are different with examples.
the camera
There is no camera in OpenGL.
the world axis
There is no world in OpenGL.
or the object.
There are no objects in OpenGL.
Confused?
OpenGL is a drawing system that operates on points, lines, and triangles. There is no concept of a scene or a world in OpenGL. All there is are vertices, each with a set of attributes, and the state of OpenGL, which determines how vertices are turned into pixels.
The very first stage of this process is getting the vertex positions within the viewport. In the fixed-function pipeline (i.e. without shaders), each vertex position is first multiplied with the so-called "modelview" matrix, the intermediate result is used for lighting calculations, and then it is multiplied with the "projection" matrix. After that, clipping and then normalization into viewport coordinates are applied.
Those two matrices I mentioned serve two purposes. The first one, "modelview", is used to apply a transformation to the incoming vertices so that they end up in the desired spot relative to the origin. There is no difference between first moving geometry to some place in the world and then moving the viewpoint within the world, or keeping the viewpoint at the origin and moving the whole world in the opposite direction. All of this can be described by the modelview matrix.
The second one "projection" works together with the normalization process to behave like a kind of "lens", so to speak. With this you set the field of view (and a few other parameters, like shift, which you need for certain applications – don't worry about it).
The interesting thing about matrices is that they're non-commutative, i.e. for two given matrices M, N
M * N != N * M   (for most M, N)
This ultimately means that you can compose a series of transformations A, B, C, D, ... into one single compound transformation matrix T by multiplying the primitive transformations onto each other in the right order.
The OpenGL matrix manipulation functions (they're obsolete, BTW) do just that. You have a matrix selected for manipulation (the matrix mode), for example the modelview matrix M. Then glRotate effectively does this:
M *= R(angle,axis)
i.e. the active matrix gets multiplied by a rotation matrix constructed from the given parameters. Similarly for scale and translate.
Whether this appears to behave like moving a camera or like placing an object depends entirely on how, and in which order, those manipulations are combined.
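A small fixed-function sketch of that point (it assumes an existing GL context and some drawScene() helper of your own): the very same glRotatef call looks like "spinning the object" or like "orbiting the camera" purely depending on where it sits in the multiplication order.

glMatrixMode(GL_MODELVIEW);

/* Case 1: translate, then rotate -> modelview = T * R, so the rotation is
   applied to the geometry first: the object spins around its own origin. */
glLoadIdentity();
glTranslatef(0.0f, 0.0f, -5.0f);
glRotatef(45.0f, 0.0f, 1.0f, 0.0f);
/* drawScene(); */

/* Case 2: rotate, then translate -> modelview = R * T, so the already
   displaced geometry is rotated about the origin: it looks as if the
   viewpoint orbits the object. */
glLoadIdentity();
glRotatef(45.0f, 0.0f, 1.0f, 0.0f);
glTranslatef(0.0f, 0.0f, -5.0f);
/* drawScene(); */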
But for OpenGL there are just numbers/vectors (vertex attributes), which somehow translate into 2-dimensional viewport coordinates that get drawn as points, or filled in between as a line or a triangle.
glRotate works on the current matrix. So it depends on whether that matrix is the camera one or a world transformation one. To learn more about the current matrix, have a look at glMatrixMode().
Finding examples is just a matter of googling; I found this one, which should help you figure out what's happening.

Eliminating rectangles enclosed within other rectangles in OpenCV

I'm in the process of writing a C program using OpenCV to detect some rectangles made with tape, which are hollow on the inside. Problem is, each physical rectangle gives two digital rectangles: one for the inner perimeter, one for the outer perimeter. The outer rectangle in all cases completely encloses the inner rectangle.
I need some way to remove the inner rectangles, and in a reasonably efficient manner, as this is being run on a video feed and must not drop framerate considerably (approx. 15fps, on a BeagleBoard xM, which is not terribly powerful).
There are always four physical rectangles, and somewhere between four to eight digital rectangles depending on the cleanliness of the processing operations. The outer rectangle is detected reliably; the inner rectangle is not. The image is thresholded, eroded, and dilated such that the image is clean and detection is reliable in general.
I feel that this problem is separate from OpenCV and is really just working with rectangles and could probably be solved by me with some time, but the project is on a crunch deadline, so I'm also throwing this question in. Thanks in advance, guys.
There is a function called groupRectangles in OpenCV.
The function can merge and remove multiple similar/overlapping rectangles...
Happy coding.
Since you only have at most 8 digital rectangles, I think it would be fine to use the natural, brute-force algorithm to figure out which rectangles are inside other rectangles. It's OK to do O(N^2) algorithms when N is small, and 8 is small.
Here is the pseudo code:
for each rectangle i {
    for each rectangle j {
        if i != j and rectangle i is inside rectangle j {
            disregard rectangle i
        }
    }
}
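Under the assumption that you keep the detections around as CvRect values (from the OpenCV C headers; the helper names here are made up), the pseudo code above translates to something like:

/* Returns 1 if rectangle a lies entirely inside rectangle b. */
static int rect_inside(CvRect a, CvRect b)
{
    return a.x >= b.x && a.y >= b.y &&
           a.x + a.width  <= b.x + b.width &&
           a.y + a.height <= b.y + b.height;
}

/* keep[i] is set to 0 if rects[i] is enclosed by some other rectangle.
   O(N^2), which is fine for N <= 8. */
static void mark_outer_rects(const CvRect *rects, int n, int *keep)
{
    for (int i = 0; i < n; ++i)
        keep[i] = 1;
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j)
            if (i != j && rect_inside(rects[i], rects[j]))
                keep[i] = 0;
}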
Solved - the speedy solution is to take the distance to one of the corners from the center point of the rectangle, and compare that distance between rectangles whose centers are very close together. The one with the shorter distance must be the inner rectangle.
Code-wise you'd want to calculate the center, then find one corner point, say the one with both minimum x and minimum y. Calculate the distance between them and store it somehow. For each rectangle, iterate over the other ones and check whether their centers are very close (a constant of ~30px works fine for this in my case). Compare the distances calculated earlier; the rectangle with the shorter distance should be deleted from the list of rectangles.
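Sketched in the same hypothetical style as the containment example above, assuming CvRect detections and a ~30px closeness threshold:

#include <math.h>

#define CENTER_CLOSE_PX 30.0   /* "centers are very close" threshold from the text */

/* Whenever two rectangles have nearly coincident centers, drop the one whose
   center-to-corner distance is shorter; it must be the inner rectangle. */
static void drop_inner_by_distance(const CvRect *rects, int n, int *keep)
{
    for (int i = 0; i < n; ++i)
        keep[i] = 1;
    for (int i = 0; i < n; ++i) {
        double cxi = rects[i].x + rects[i].width / 2.0;
        double cyi = rects[i].y + rects[i].height / 2.0;
        double di  = hypot(rects[i].width, rects[i].height) / 2.0;  /* center to corner */
        for (int j = 0; j < n; ++j) {
            if (i == j)
                continue;
            double cxj = rects[j].x + rects[j].width / 2.0;
            double cyj = rects[j].y + rects[j].height / 2.0;
            double dj  = hypot(rects[j].width, rects[j].height) / 2.0;
            if (hypot(cxi - cxj, cyi - cyj) < CENTER_CLOSE_PX && di < dj)
                keep[i] = 0;   /* i is the smaller, i.e. inner, rectangle */
        }
    }
}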

OpenGL: How do I avoid rounding errors when specifying UV co-ordinates

I'm writing a 2D game using OpenGL. When I want to blit part of a texture as a sprite I use glTexCoord2f(u, v) to specify the UV co-ordinates, with u and v calculated like this:
GLfloat u = (GLfloat)xpos_in_texture/(GLfloat)width_of_texture;
GLfloat v = (GLfloat)ypos_in_texture/(GLfloat)height_of_texture;
This works perfectly most of the time, except when I use glScale to zoom the game in or out. Then floating point rounding errors cause some pixels to be drawn one to the right of or one below the intended rectangle within the texture.
What can be done about this? At the moment I'm subtracting an 'epsilon' value from the right and bottom edges of the rectangle, and it seems to work but this seems like a horrible kludge. Are there any better solutions?
Your issue is most likely not coming from rounding errors, but a misunderstanding on how OpenGL maps texels to pixels. If you notice off-by-one errors, it's probably because your UVs, your vertex positions or your projection matrix/viewport pair are not aligned to where they ought to be.
To simplify, I'll just talk about 1D, and assume you use a projection and a viewport that map X,Y coordinates to the equivalent pixel location (i.e. a glOrtho(0, width, 0, height, zmin, zmax) and a glViewport(0, 0, width, height)).
Say you want to draw 5 texels of your 64-wide texture (say, texels 10 to 14 inclusive), showing on the 10 pixels (a scale of 2) of your screen starting at pixel 20.
To get there, draw the triangle with X coordinates 20 and 30, and U (of the UV pair) of 10/64 and 15/64. The rasterization of OpenGL will generate 10 pixels to shade, with X coordinates 20.5, 21.5, ... 29.5. Note that the positions are not full integers. OpenGL rasterizes in the middle of the pixel.
Likewise, it will generate U coordinates of 10.25/64, 10.75/64, 11.25/64, 11.75/64 ... 14.25/64, 14.75/64. Note again that texel coordinates, brought back to texel positions in the texture space, are not full integers. OpenGL samples from the middle of texel locations, so this is fine.
How the sampler uses these UVs to generate texel values depends on the filtering mode, but be it nearest or linear, the pixels should be contained solely inside the texels of interest (0.25 with a size of 0.5 should only use color from 0 to 0.5, which is all inside the first texel).
In general, if you follow these principles, you should never see artifacts:
Use Ortho and Viewport of exactly your frame buffer size
Use positions of X, X+width exactly
Use UVs that correspond to exactly the texels you want (if you want the 10 texels starting from texel 0, use U = 0/64 to U = 10/64).
If you ever have a -1 somewhere in your math, it's likely not correct (for position or UVs).
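Following those rules, a sprite-blit sketch might look like this (the function and parameter names are made up; it assumes the glOrtho/glViewport setup described above):

/* Draw "count" texels starting at texel "first" of a tex_w-texel-wide texture
   onto the screen pixels [x, x + count*scale), without any -1 or epsilon. */
static void draw_span(int x, int y, int first, int count, int scale,
                      int tex_w, int height)
{
    GLfloat u0  = (GLfloat)first / (GLfloat)tex_w;
    GLfloat u1  = (GLfloat)(first + count) / (GLfloat)tex_w;   /* exact right edge */
    GLfloat px0 = (GLfloat)x, px1 = (GLfloat)(x + count * scale);
    GLfloat py0 = (GLfloat)y, py1 = (GLfloat)(y + height);
    glBegin(GL_QUADS);
    glTexCoord2f(u0, 0.0f); glVertex2f(px0, py0);
    glTexCoord2f(u1, 0.0f); glVertex2f(px1, py0);
    glTexCoord2f(u1, 1.0f); glVertex2f(px1, py1);
    glTexCoord2f(u0, 1.0f); glVertex2f(px0, py1);
    glEnd();
}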
To get back to your example, it's unclear how you link the UVs you compute to positions (since you don't show the position computation).
It's also unclear how you got xpos_in_texture (you should explain how you computed it for the corners of your sprite). My guess is that you computed that wrong.
A bit late, but for posterity: I was having the same problem, with pixels from adjacent regions of a texture atlas bleeding into sprites/tiles when scaling or zooming the view. I had my glOrtho, glViewport, etc. dimensions all set correctly, and then I realized the problem was that I was scaling the view before translating the camera. That meant that even though I was snapping to integer pixels pre-zoom, after the zoom the view would align to a fraction of a pixel and introduce the texel problem.
So if your code looks something like this, where camera.zoom is a non-integer (i.e. 0.75):
glScalef(camera.zoom, camera.zoom, 1.0f);
glTranslatef(camera.x, camera.y, 0.0f);
You'll want to make sure the result of the translation after scaling aligns to whole pixels on the screen, so you can do something like:
glScalef(camera.zoom, camera.zoom, 1.0f);
glTranslatef(
    floor(camera.x * camera.zoom) / camera.zoom,
    floor(camera.y * camera.zoom) / camera.zoom,
    0.0f);
Do the division as a double, round the result down yourself to the desired level of precision, then cast it to GLfloat.
Your xpos/ypos must be based on 0 to (width or height) - 1 and then:
GLfloat u = (GLfloat)xpos_in_texture/(GLfloat)(width_of_texture - 1);
GLfloat v = (GLfloat)ypos_in_texture/(GLfloat)(height_of_texture - 1);
