Use polygons as clickable zones instead of rectangles

I'm creating a 2D game using the Allegro library, and I need to create polygonal territories on a map that the player can click on to move his units.
Am I forced to use some sort of point-in-polygon detection algorithm, or is there a simpler solution that reuses the polygon I've already drawn?
So far I've managed to draw a polygon like this:
ALLEGRO_VERTEX v[] =
{
    { .x = 0,   .y = 0,  .z = 0, .color = al_map_rgb_f(1, 0, 0) },
    { .x = 0,   .y = 48, .z = 0, .color = al_map_rgb_f(1, 0, 0) },
    { .x = 32,  .y = 64, .z = 0, .color = al_map_rgb_f(1, 0, 0) },
    { .x = 80,  .y = 32, .z = 0, .color = al_map_rgb_f(1, 0, 0) },
    { .x = 112, .y = 0,  .z = 0, .color = al_map_rgb_f(1, 0, 0) }
};
al_draw_prim(v, NULL, NULL, 0, 5, ALLEGRO_PRIM_TRIANGLE_FAN);
EDIT: OK, I figured out that I can detect whether the mouse is in a polygon using this algorithm, but it still doesn't feel like the right approach: I'd have to call the function separately for every polygon.

You found an algorithm that does point-in-polygon testing and tells you which polygon the user clicked on. Good job; you can use that. You wanted a built-in API call and didn't find one. Since nobody has posted a contrary answer, I presume there isn't one. You should use what you've got.
I will now address why this approach should feel right rather than wrong.
If the library itself had implemented this for you, it would still be constrained by the underlying OS primitives, which are in turn constrained by the algorithmic complexity of the problem: one point-in-polygon test per polygon. So you can keep a list of all the polygons in your application, use one mouse hit handler for the whole screen, and test the polygons in turn. This is exactly what the API would have had to do if such an API existed.
I suspect you coded it and found it too slow. An easy optimization that almost always works is to do an axis-aligned bounding-box test first, which is fast.
BugSquasher suggests an alternative: render the scene a second time to an offscreen buffer, with one flat color per polygon, and look up the color under the mouse. This also works, and it is a good speedup when hit-testing is much more frequent than polygon movement. It does cost memory, though.
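To make the suggestion concrete, here is a minimal sketch of the first approach: a cheap axis-aligned bounding-box pre-test followed by a standard ray-casting (crossing-number) point-in-polygon test. The `Point` struct and function names are illustrative, not part of Allegro.

```c
#include <stdbool.h>

typedef struct { float x, y; } Point;

/* Cheap axis-aligned bounding-box pre-test: rejects most misses early. */
static bool in_bounding_box(const Point *poly, int n, float px, float py)
{
    float minx = poly[0].x, maxx = poly[0].x;
    float miny = poly[0].y, maxy = poly[0].y;
    for (int i = 1; i < n; i++) {
        if (poly[i].x < minx) minx = poly[i].x;
        if (poly[i].x > maxx) maxx = poly[i].x;
        if (poly[i].y < miny) miny = poly[i].y;
        if (poly[i].y > maxy) maxy = poly[i].y;
    }
    return px >= minx && px <= maxx && py >= miny && py <= maxy;
}

/* Ray-casting point-in-polygon test: count how many edges a horizontal
   ray from (px, py) crosses; an odd count means the point is inside. */
bool point_in_polygon(const Point *poly, int n, float px, float py)
{
    if (!in_bounding_box(poly, n, px, py))
        return false;
    bool inside = false;
    for (int i = 0, j = n - 1; i < n; j = i++) {
        /* Does edge (j, i) straddle the ray's y, crossing right of px? */
        if ((poly[i].y > py) != (poly[j].y > py) &&
            px < (poly[j].x - poly[i].x) * (py - poly[i].y) /
                 (poly[j].y - poly[i].y) + poly[i].x)
            inside = !inside;
    }
    return inside;
}
```

At click time you would loop over all territories and take the first index for which this returns true, which is exactly the "one hit handler, test in turn" scheme described above.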

Related

SDL2 flickering unless I don't use CreateRenderer

I have a fairly basic rendering loop which blits a scaled image to the screen as fast as I can process the event loop. I created a minimal example that reproduces the flickering on pastebin here.
If I don't use SDL_CreateRenderer, and instead leave renderer as NULL, it works; I just can't clear the screen first. If I do create the renderer, I get this crazy fast flickering.
// if I comment this out in my init_sdl(), no flickering...
renderer = SDL_CreateRenderer(window, -1, 0);
assert(renderer != NULL);
My draw function is called at the end of the event loop:
void draw()
{
    SDL_SetRenderDrawColor(renderer, 255, 0, 128, 255);
    SDL_RenderClear(renderer);
    SDL_Rect dstrect = {
        .x = 50,
        .y = 50,
        .h = 100,
        .w = 100,
    };
    SDL_BlitScaled(img, NULL, screen, &dstrect);
    SDL_UpdateWindowSurface(window);
    SDL_RenderPresent(renderer);
}
I've seen this potential duplicate question, but their problem was that RenderPresent was called in the wrong place. As you can see, I'm calling SDL_RenderPresent at the end of all drawing operations, which was my takeaway from that answer, yet the flickering still happens.
I'm using msys2 (mingw_x64), gcc, Windows 10, SDL2.
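For what it's worth, the loop above mixes SDL2's two presentation paths: the surface API (SDL_BlitScaled / SDL_UpdateWindowSurface) and the renderer API (SDL_RenderClear / SDL_RenderPresent), which end up fighting over the same window. A hedged sketch of a renderer-only draw, assuming the image is converted once to a texture at load time (the `tex` variable is hypothetical):

```c
#include <SDL2/SDL.h>

/* Set up once in init_sdl(); tex is hypothetical, created e.g. with:
   tex = SDL_CreateTextureFromSurface(renderer, img); */
extern SDL_Renderer *renderer;
extern SDL_Texture *tex;

void draw(void)
{
    SDL_SetRenderDrawColor(renderer, 255, 0, 128, 255);
    SDL_RenderClear(renderer);

    /* SDL_RenderCopy scales the texture into dstrect, replacing
       the SDL_BlitScaled + SDL_UpdateWindowSurface pair. */
    SDL_Rect dstrect = { .x = 50, .y = 50, .w = 100, .h = 100 };
    SDL_RenderCopy(renderer, tex, NULL, &dstrect);

    SDL_RenderPresent(renderer);
}
```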

How do I get the actual position of vertices in OpenGL ES 2.0?

After applying a rotation or translation matrix to the vertex array, the vertex buffer is not updated.
So how can I get the positions of the vertices after applying the matrix?
Here's the onDrawFrame() function:
public void onDrawFrame(GL10 gl) {
    PositionHandle = GLES20.glGetAttribLocation(Program, "vPosition");
    MatrixHandle = GLES20.glGetUniformLocation(Program, "uMVPMatrix");
    ColorHandle = GLES20.glGetUniformLocation(Program, "vColor");

    GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_DEPTH_BUFFER_BIT);

    Matrix.rotateM(RotationMatrix, 0, -90f, 1, 0, 0);
    Matrix.multiplyMM(vPMatrix, 0, projectionMatrix, 0, viewMatrix, 0);
    Matrix.multiplyMM(vPMatrix, 0, vPMatrix, 0, RotationMatrix, 0);

    GLES20.glUniformMatrix4fv(MatrixHandle, 1, false, vPMatrix, 0);
    GLES20.glUseProgram(Program);
    GLES20.glEnableVertexAttribArray(PositionHandle);
    GLES20.glVertexAttribPointer(PositionHandle, 3, GLES20.GL_FLOAT, false, 0, vertexbuffer);
    GLES20.glUniform4fv(ColorHandle, 1, color, 1);
    GLES20.glDrawArrays(GLES20.GL_TRIANGLES, 0, 6);
    GLES20.glDisableVertexAttribArray(PositionHandle);
}
The GPU doesn't normally write transformed results back anywhere the application can read them. It is possible in ES 3.0 with transform feedback, but it's very expensive.
For touch-event "hit" testing, you generally don't want to use the raw geometry anyway. Use some simple proxy geometry instead, which can be transformed in software on the CPU.
Maybe you should try this:
private float[] modelViewMatrix = new float[16];
...
Matrix.rotateM(RotationMatrix, 0, -90f, 1, 0, 0);
Matrix.multiplyMM(modelViewMatrix, 0, viewMatrix, 0, RotationMatrix, 0);
Matrix.multiplyMM(vpMatrix, 0, projectionMatrix, 0, modelViewMatrix, 0);
You can do the vertex transform calculations on the CPU, and then use the GLU.gluProject() function to convert a vertex's object coordinates into screen pixels. This data can then be used when handling touch events.
private var view: IntArray = intArrayOf(0, 0, widthScreen, heightScreen)
...
GLU.gluProject(modelX, modelY, modelZ, mvMatrix, 0,
               projectionMatrix, 0, view, 0,
               coordinatesWindow, 0)
...
// coordinates in pixels of the screen
val x = coordinatesWindow[0]
val y = coordinatesWindow[1]
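If you'd rather not depend on GLU, the mapping gluProject performs is easy to reproduce on the CPU: multiply the vertex by the combined matrix, divide by w, and map normalized device coordinates to the viewport. A language-neutral sketch in C, with column-major matrices as in OpenGL (function names are illustrative, and note that GL window y grows upward while Android touch y grows downward, so you may need to flip y):

```c
/* Multiply a column-major 4x4 matrix by a vec4. */
static void mat4_mul_vec4(const float m[16], const float v[4], float out[4])
{
    for (int r = 0; r < 4; r++)
        out[r] = m[r] * v[0] + m[4 + r] * v[1] +
                 m[8 + r] * v[2] + m[12 + r] * v[3];
}

/* gluProject-style mapping from object coordinates to window pixels.
   mvp: combined model-view-projection matrix, column-major.
   viewport: {x, y, width, height}. Returns 0 on failure (w == 0). */
int project_to_window(const float mvp[16], float x, float y, float z,
                      const int viewport[4], float win[2])
{
    float in[4] = { x, y, z, 1.0f }, clip[4];
    mat4_mul_vec4(mvp, in, clip);
    if (clip[3] == 0.0f)
        return 0;
    /* Perspective divide to normalized device coordinates [-1, 1]. */
    float ndc_x = clip[0] / clip[3];
    float ndc_y = clip[1] / clip[3];
    /* Viewport transform to window pixels. */
    win[0] = viewport[0] + (ndc_x * 0.5f + 0.5f) * viewport[2];
    win[1] = viewport[1] + (ndc_y * 0.5f + 0.5f) * viewport[3];
    return 1;
}
```

Running your proxy geometry through this per touch event is cheap, since it's only a handful of vertices rather than the whole mesh.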

How to zoom in on a point (the math behind it)

I'm working on a fractal graphic. I need to be able to zoom in on a specific point.
Here's what I've got so far. If you keep the mouse in the same position for the whole zoom, it works. But if you zoom part of the way, move the mouse to a new position, and try to zoom some more from there, it starts jumping all over.
scale_change = zoom * ((button == SCROLL_DOWN) ? ZOOM_INC : -ZOOM_INC);
zoom += scale_change;
center->x -= (mouse->x - (IMG_SIZE / 2)) * scale_change;
center->y -= (mouse->y - (IMG_SIZE / 2)) * scale_change;
I assume some part of it is over-simplistic? Is there some variable I'm not accounting for? It does work if you don't move the mouse, though.
The best approach is probably to use a transform matrix, both to scale the image and to find out what point in the scaled image your mouse is over, so you can transform around that point.
I know you're working in C, but I created the example in JS because it lets me demonstrate working code more easily. Click on the image and use Z and X to zoom in and out: https://jsfiddle.net/7ekqg8cb/
Most of the code is implementing matrix multiplication, matrix inversion and matrix point transformation. The important part is:
var scaledMouse = transformPoint(mouse);
matrixMultiplication([1, 0, scaledMouse.x,
0, 1, scaledMouse.y,
0, 0, 1]);
var scale = 1.2;
if (direction) {
matrixMultiplication([scale, 0, 0,
0, scale, 0,
0, 0, 1]);
}
else {
matrixMultiplication([1/scale, 0, 0,
0, 1/scale, 0,
0, 0, 1]);
}
matrixMultiplication([1, 0, -scaledMouse.x,
0, 1, -scaledMouse.y,
0, 0, 1]);
transformPoint uses the inverse of the transform matrix to find where the mouse is relative to the transformed image. The transform matrix is then translated, scaled, and translated back, which scales it around that point.
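Since the question is in C, here is a hedged sketch of the same translate-scale-translate idea, using a row-major 3x3 affine matrix. All names are illustrative; `view` maps image coordinates to screen coordinates.

```c
typedef struct { double m[9]; } Mat3;   /* row-major 3x3 */

static Mat3 mat3_mul(Mat3 a, Mat3 b)
{
    Mat3 r;
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++)
            r.m[3*i + j] = a.m[3*i + 0] * b.m[0 + j]
                         + a.m[3*i + 1] * b.m[3 + j]
                         + a.m[3*i + 2] * b.m[6 + j];
    return r;
}

/* Apply the transform to a point. */
static void mat3_apply(Mat3 t, double x, double y, double *ox, double *oy)
{
    *ox = t.m[0]*x + t.m[1]*y + t.m[2];
    *oy = t.m[3]*x + t.m[4]*y + t.m[5];
}

/* Invert an affine 2D transform (bottom row assumed to be 0 0 1). */
static Mat3 mat3_affine_inverse(Mat3 t)
{
    double a = t.m[0], b = t.m[1], tx = t.m[2];
    double c = t.m[3], d = t.m[4], ty = t.m[5];
    double det = a*d - b*c;
    Mat3 r = {{  d/det, -b/det, (b*ty - d*tx)/det,
                -c/det,  a/det, (c*tx - a*ty)/det,
                 0, 0, 1 }};
    return r;
}

/* Zoom by `factor`, keeping the image point under the mouse fixed. */
void zoom_at(Mat3 *view, double mx, double my, double factor)
{
    double wx, wy;
    /* Where is the mouse in (untransformed) image coordinates? */
    mat3_apply(mat3_affine_inverse(*view), mx, my, &wx, &wy);
    /* Translate to that point, scale, translate back. */
    Mat3 t1 = {{ 1, 0,  wx,   0, 1,  wy,   0, 0, 1 }};
    Mat3 s  = {{ factor, 0, 0,   0, factor, 0,   0, 0, 1 }};
    Mat3 t2 = {{ 1, 0, -wx,   0, 1, -wy,   0, 0, 1 }};
    *view = mat3_mul(mat3_mul(mat3_mul(*view, t1), s), t2);
}
```

Because the pivot point is recomputed through the inverse on every call, moving the mouse between zoom steps behaves correctly instead of jumping.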

Why doesn't this code draw a triangle?

I'm very new to OpenGL, and I just wrote a section of code using SDL 2 that, to my knowledge, should draw a triangle, but it doesn't work. I've already written all the initialization code the SDL 2 documentation says I need, and the functions returned by dynamic loading ARE callable. When I execute this code, instead of a triangle I get a black (but cleared) window. Why does this code not draw the triangle I want, and why is the window cleared to black? I mainly want to know the technical details behind the first question so I can depend on them later.
(*main_context.glViewport)(0, 0, 100, 100);
(*main_context.glBegin)(GL_TRIANGLES);
(*main_context.glColor4d)(255, 255, 255, 255);
(*main_context.glVertex3d)(1, 1, -50);
(*main_context.glVertex3d)(1, 30, 1);
(*main_context.glVertex3d)(30, 1, 1);
(*main_context.glEnd)();
(*main_context.glFinish)();
(*main_context.glFlush)();
SDL_GL_SwapWindow(window);
Update:
I've revised my code to use different coordinates and got the triangle to draw, but I cannot get it to draw when it's farther away. Why is that?
(*main_context.glVertex3d)(2, -1, 1); /* Works. */
(*main_context.glVertex3d)(2, -1, 3); /* Doesn't work. */
Unless you are setting up a projection and/or modelview matrix elsewhere in your code, it's using the default (identity matrix) transform, which is an orthographic projection with (-1, -1) at the bottom left and (1, 1) at the top right. glViewport only changes the portion of the default framebuffer being rendered to, it has no bearing on the projection whatsoever.
With an orthographic projection, the Z coordinate does not affect the screen-space position of a point, except that points outside the Z clipping planes will not be rendered. In this case, that's everything outside of -1 <= z <= 1. Given that one of your points is (1, 1, -50), this seems to be your problem.
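Since the glBegin/glEnd calls suggest the fixed-function pipeline, one way to give yourself room in all three axes is to set an explicit orthographic projection. This is a sketch only; the ranges are chosen arbitrarily to cover the vertices above, and it assumes the GL function pointers are resolved as in the rest of the code:

```c
/* Make x/y coordinates 0..30 visible, and accept z from -100 to 100.
   Note glOrtho's near/far parameters map to -z, hence the signs. */
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0.0, 30.0, 0.0, 30.0, -100.0, 100.0);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
```

With this in place, a vertex like (2, -1, 3) is no longer clipped away by the default -1 <= z <= 1 range.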

Efficient reflections in Clutter/COGL?

I'm working on a program that uses Clutter (1.10) and COGL to render elements to the display.
I've created a set of ClutterTextures that I am rendering video to, and I'd like the video textures to have reflections.
The "standard" way to implement this seems to be a callback every time the texture is painted, with code similar to:
static void texture_paint_cb (ClutterActor *actor)
{
    ClutterGeometry geom;
    CoglHandle cmaterial;
    CoglHandle ctexture;
    gfloat squish = 1.5;

    cogl_push_matrix ();
    clutter_actor_get_allocation_geometry (actor, &geom);
    guint8 opacity = clutter_actor_get_paint_opacity (actor);
    opacity /= 2;

    CoglTextureVertex vertices[] =
    {
        { geom.width, geom.height, 0, 1, 1 },
        { 0, geom.height, 0, 0, 1 },
        { 0, geom.height*squish, 0, 0, 0 },
        { geom.width, geom.height*squish, 0, 1, 0 }
    };
    cogl_color_set_from_4ub (&vertices[0].color, opacity, opacity, opacity, opacity);
    cogl_color_set_from_4ub (&vertices[1].color, opacity, opacity, opacity, opacity);
    cogl_color_set_from_4ub (&vertices[2].color, 0, 0, 0, 0);
    cogl_color_set_from_4ub (&vertices[3].color, 0, 0, 0, 0);

    cmaterial = clutter_texture_get_cogl_material (CLUTTER_TEXTURE (actor));
    ctexture = clutter_texture_get_cogl_texture (CLUTTER_TEXTURE (actor));
    cogl_material_set_layer (cmaterial, 0, ctexture);
    cogl_set_source(cmaterial);
    cogl_set_source_texture(ctexture);
    cogl_polygon (vertices, G_N_ELEMENTS (vertices), TRUE);
    cogl_pop_matrix ();
}
This is then hooked up to the paint signal on the ClutterTexture. There's a similar bit of code here that takes the same approach. (Google cache, since the page has been down today.)
The problem is that the reflection effect causes a performance hit: 5-7 fps are lost when I enable it. Part of the problem is likely the low-powered hardware I'm using (a Raspberry Pi).
I've managed to achieve something similar to what this code does by setting up a clone of the texture and making it somewhat transparent. That causes no performance hit whatsoever; however, unlike the paint-callback method, the reflection has hard edges and doesn't fade out.
I'd like a better-looking reflection effect without the performance hit. Is there some way to get a similar effect that doesn't require so much work per paint? There are a number of other Clutter and COGL methods that manipulate materials, shaders, and so forth, but I have little to no OpenGL expertise, so I don't know whether I could get something along those lines to do what I want, or even how to find examples of something similar to work from.
Is it possible to get a better looking, high performance reflection effect via Clutter/COGL?
