How to zoom in on a point (the math behind it) - c

I'm working on a fractal graphic. I need to be able to zoom in on a specific point.
Here's what I've got so far. If you keep the mouse in the same position for the whole zoom, it works. But if you zoom part of the way then move the mouse to a new position and try to zoom some more from there, it starts jumping all over.
scale_change = zoom * ((button == SCROLL_DOWN) ? ZOOM_INC : -ZOOM_INC);
zoom += scale_change;
center->x -= (mouse->x - (IMG_SIZE / 2)) * scale_change;
center->y -= (mouse->y - (IMG_SIZE / 2)) * scale_change;
I assume some part of it is over-simplistic? There's some variable I'm not accounting for? It does work if you don't move the mouse, though.

The best approach is probably to use a transform matrix to both scale the image and find out what point in the scaled image your mouse is over so you can transform based on that point.
I know you're working in C, but I created an example in JS because it lets me demonstrate working code more easily. Click on the image and use Z and X to zoom in and out. https://jsfiddle.net/7ekqg8cb/
Most of the code is implementing matrix multiplication, matrix inversion and matrix point transformation. The important part is:
var scaledMouse = transformPoint(mouse);

matrixMultiplication([1, 0, scaledMouse.x,
                      0, 1, scaledMouse.y,
                      0, 0, 1]);

var scale = 1.2;
if (direction) {
    matrixMultiplication([scale, 0, 0,
                          0, scale, 0,
                          0, 0, 1]);
}
else {
    matrixMultiplication([1/scale, 0, 0,
                          0, 1/scale, 0,
                          0, 0, 1]);
}

matrixMultiplication([1, 0, -scaledMouse.x,
                      0, 1, -scaledMouse.y,
                      0, 0, 1]);
transformPoint uses the inverse of the transform matrix to find out where the mouse is relative to the transformed image. Then the transform matrix is translated, scaled and translated back, to scale the transform matrix around that point.
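The same translate-scale-translate idea can also be written without matrices. Here is a minimal C sketch, assuming the renderer maps a pixel to the fractal plane as world = center + (pixel - IMG_SIZE / 2) * zoom; that mapping and the helper below are assumptions for illustration, not the asker's actual code.

/* Zoom about the point under the mouse: remember which world point the
 * mouse is over, change the zoom, then move the center so that same
 * world point is still under the mouse afterwards. */
void zoom_about_mouse(double *center_x, double *center_y, double *zoom,
                      double mouse_x, double mouse_y,
                      int img_size, double factor)
{
    /* World-space point currently under the mouse (old zoom). */
    double wx = *center_x + (mouse_x - img_size / 2.0) * (*zoom);
    double wy = *center_y + (mouse_y - img_size / 2.0) * (*zoom);

    /* Apply the zoom step, e.g. factor = 1.0 + ZOOM_INC or 1.0 - ZOOM_INC. */
    *zoom *= factor;

    /* Re-center so the same world point stays under the mouse. */
    *center_x = wx - (mouse_x - img_size / 2.0) * (*zoom);
    *center_y = wy - (mouse_y - img_size / 2.0) * (*zoom);
}

Multiplying the zoom (rather than adding an increment to it) keeps each scroll step proportional, which is usually what you want when diving into a fractal.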

Related

How do I get the actual position of vertices in OpenGL ES 2.0

After applying a rotation or a translation matrix to the vertex array, the vertex buffer is not updated.
So how can I get the position of the vertices after applying the matrix?
Here's the onDrawFrame() function:
public void onDrawFrame(GL10 gl) {
    PositionHandle = GLES20.glGetAttribLocation(Program, "vPosition");
    MatrixHandle = GLES20.glGetUniformLocation(Program, "uMVPMatrix");
    ColorHandle = GLES20.glGetUniformLocation(Program, "vColor");

    GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_DEPTH_BUFFER_BIT);

    Matrix.rotateM(RotationMatrix, 0, -90f, 1, 0, 0);
    Matrix.multiplyMM(vPMatrix, 0, projectionMatrix, 0, viewMatrix, 0);
    Matrix.multiplyMM(vPMatrix, 0, vPMatrix, 0, RotationMatrix, 0);
    GLES20.glUniformMatrix4fv(MatrixHandle, 1, false, vPMatrix, 0);

    GLES20.glUseProgram(Program);
    GLES20.glEnableVertexAttribArray(PositionHandle);
    GLES20.glVertexAttribPointer(PositionHandle, 3, GLES20.GL_FLOAT, false, 0, vertexbuffer);
    GLES20.glUniform4fv(ColorHandle, 1, color, 1);
    GLES20.glDrawArrays(GLES20.GL_TRIANGLES, 0, 6);
    GLES20.glDisableVertexAttribArray(PositionHandle);
}
The GPU doesn't normally write transformed results back anywhere the application can read them. It is possible in ES 3.0 with transform feedback, but it's very expensive.
For touch-event "hit" testing, you generally don't want to use the raw geometry anyway. Use some simple proxy geometry instead, which can be transformed in software on the CPU.
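For that CPU-side transform, all you need is a 4x4 matrix-times-point multiply. Here is a minimal sketch (written in C for brevity; the same arithmetic is a few lines of Java), using the column-major layout that android.opengl.Matrix and OpenGL ES use; the helper name is illustrative, not part of the question's code.

/* Multiply a column-major 4x4 matrix by a point (x, y, z, w). */
void transform_point(const float m[16], const float in[4], float out[4])
{
    for (int row = 0; row < 4; ++row) {
        out[row] = m[row + 0]  * in[0]   /* column 0 */
                 + m[row + 4]  * in[1]   /* column 1 */
                 + m[row + 8]  * in[2]   /* column 2 */
                 + m[row + 12] * in[3];  /* column 3 */
    }
}

Run your proxy points through the model matrix (and, if you want screen-space results, through view and projection followed by a divide by w) and do the hit test against those transformed points.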
Maybe you should try this:
private float[] modelViewMatrix = new float[16];
...
Matrix.rotateM(RotationMatrix, 0, -90f, 1, 0, 0);
Matrix.multiplyMM(modelViewMatrix, 0, viewMatrix, 0, RotationMatrix, 0);
Matrix.multiplyMM(vpMatrix, 0, projectionMatrix, 0, modelViewMatrix, 0);
You can do the vertex transform calculations on the CPU, and then use the GLU.gluProject() function to convert the object's vertex coordinates into screen pixels. That data can then be used when handling touch events.
private var view: IntArray = intArrayOf(0, 0, widthScreen, heightScreen)
...
GLU.gluProject(modelX, modelY, modelZ, mvMatrix, 0,
projectionMatrix, 0, view, 0,
coordinatesWindow, 0)
...
// coordinates in pixels of the screen
val x = coordinatesWindow[0]
val y = coordinatesWindow[1]

Use polygons as clickable zones, instead of rectangles

I'm creating a 2D game using the Allegro library, and I need to create polygonal territories on a map that the player can click on to move his units.
Am I forced to use some sort of point-in-polygon detection algorithm, or is there a simpler solution using the polygon I've already drawn?
So far I managed to draw a polygon such as:
ALLEGRO_VERTEX v[] =
{
    { .x = 0,   .y = 0,  .z = 0, .color = al_map_rgb_f(1, 0, 0) },
    { .x = 0,   .y = 48, .z = 0, .color = al_map_rgb_f(1, 0, 0) },
    { .x = 32,  .y = 64, .z = 0, .color = al_map_rgb_f(1, 0, 0) },
    { .x = 80,  .y = 32, .z = 0, .color = al_map_rgb_f(1, 0, 0) },
    { .x = 112, .y = 0,  .z = 0, .color = al_map_rgb_f(1, 0, 0) }
};
al_draw_prim(v, NULL, NULL, 0, 5, ALLEGRO_PRIM_TRIANGLE_FAN);
EDIT: OK, I figured out that I can detect whether the mouse is in a polygon using this algorithm, but I still feel this is not the right way to do it. I still need to call the function for every polygon, and that doesn't seem right.
You found an algorithm that does point-in-polygon for all your polygons and tells you which polygon the user clicked on. Good job; you can use that. You wanted a built-in API call to do it and didn't get one, and since nobody else has posted a contrary answer, I presume there isn't one. You should use what you've got.
I will now address why this should feel right rather than not feel right.
If the library itself had implemented it for you, it would still be constrained by the underlying OS primitives, which are in turn constrained by the algorithmic complexity of the problem: a point-in-polygon test per polygon. So you might as well keep all the polygons in your application, treat the whole screen as one mouse hit area, and test the polygons in turn. That is exactly what the API would have had to do if such an API existed.
There is a significant chance that once you code it you will find it too slow. An easy optimization that almost always works is to do an axis-aligned bounding-box test first, which is cheap.
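A minimal sketch of that hit test, assuming each territory stores its outline as an array of points; poly_contains() is the classic even-odd ray-casting test, and the bounding-box check rejects most polygons before the exact test runs. The names here are illustrative, not taken from the question's code.

typedef struct { float x, y; } Point;

/* Even-odd rule: count how many polygon edges a horizontal ray from the
 * mouse position crosses; an odd count means the point is inside. */
static int poly_contains(const Point *p, int n, float mx, float my)
{
    int inside = 0;
    for (int i = 0, j = n - 1; i < n; j = i++) {
        if (((p[i].y > my) != (p[j].y > my)) &&
            (mx < (p[j].x - p[i].x) * (my - p[i].y) / (p[j].y - p[i].y) + p[i].x))
            inside = !inside;
    }
    return inside;
}

static int hit_test(const Point *p, int n, float mx, float my,
                    float min_x, float min_y, float max_x, float max_y)
{
    if (mx < min_x || mx > max_x || my < min_y || my > max_y)
        return 0;                       /* outside the bounding box, reject early */
    return poly_contains(p, n, mx, my); /* exact point-in-polygon test */
}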
BugSquasher suggests an alternate solution. Render twice, with the second to an offscreen buffer with one color per polygon, and point-test the color. This also works, and is a good speedup if hit-testing is much more common than polygon moving. It does cost memory though.
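A rough sketch of that color-picking idea with Allegro 5 calls follows; the per-territory vertex arrays and counts are assumed setup, not taken from the question. Each territory is drawn once into an off-screen bitmap with a unique color (here the red channel encodes the territory index), and a click is resolved by reading the pixel under the mouse.

/* Build the off-screen picking map once (or whenever territories change). */
ALLEGRO_BITMAP *pick_map = al_create_bitmap(screen_w, screen_h);
al_set_target_bitmap(pick_map);
al_clear_to_color(al_map_rgb(255, 255, 255));   /* 255 = "no territory" */
for (int i = 0; i < territory_count; i++) {
    /* territory_vertices[i] is assumed to be that territory's fan, with each
     * vertex color set to al_map_rgb(i, 0, 0). */
    al_draw_prim(territory_vertices[i], NULL, NULL, 0,
                 territory_vertex_count[i], ALLEGRO_PRIM_TRIANGLE_FAN);
}
al_set_target_backbuffer(display);              /* back to normal drawing */

/* On a click: */
unsigned char r, g, b;
al_unmap_rgb(al_get_pixel(pick_map, mouse_x, mouse_y), &r, &g, &b);
int clicked_territory = (r == 255) ? -1 : r;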

Cairo image blurred when scaled

I have the following cairo code:
cairo_set_source_rgba(cr, 1, 1, 1, 1);
cairo_rectangle(cr, 0, 0, WINDOW_SIZE, WINDOW_SIZE);
cairo_fill(cr);
cairo_scale(cr, 8, 8);
draw_image(cr, "q.png", 5, 5);
And
void draw_image(cairo_t* cr, char* img_name, int x, int y)
{
    cairo_translate(cr, x, y);
    cairo_surface_t* img = cairo_image_surface_create_from_png(img_name);
    cairo_set_source_surface(cr, img, 0, 0);
    cairo_paint(cr);
    cairo_translate(cr, -x, -y);
}
q.png is a 5x5 image:
But when the program is run, the image is slightly blurred:
I have already tried
cairo_set_antialias(cr, CAIRO_ANTIALIAS_NONE);
but it does not work.
Is there any way to fix this problem?
This is because of how the image is scaled up. Instead of setting a source surface directly, create a pattern out of the surface with cairo_pattern_create_for_surface(), call cairo_pattern_set_filter() on it to set the scaling mode, and then call cairo_set_source() to load the pattern. See the documentation for cairo_filter_t for the scaling modes. CAIRO_FILTER_NEAREST, for example, will give you a normal pixel zoom with no blurring or other transformations.
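A minimal sketch of that pattern-based approach, adapted to the question's draw_image() helper (the surrounding program is assumed, not verbatim):

void draw_image(cairo_t* cr, char* img_name, int x, int y)
{
    cairo_translate(cr, x, y);
    cairo_surface_t* img = cairo_image_surface_create_from_png(img_name);

    /* Wrap the surface in a pattern so the scaling filter can be chosen. */
    cairo_pattern_t* pattern = cairo_pattern_create_for_surface(img);
    cairo_pattern_set_filter(pattern, CAIRO_FILTER_NEAREST); /* plain pixel zoom, no blur */

    cairo_set_source(cr, pattern);
    cairo_paint(cr);

    cairo_pattern_destroy(pattern);
    cairo_surface_destroy(img);
    cairo_translate(cr, -x, -y);
}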

Should I translate first or rotate first?

I'm trying to create a simple scene I can walk around in: I want to be able to pan the view around and to move with the keys. However, in my draw-scene function, when I translate the scene and then rotate, panning doesn't work properly: the entire scene just rotates around me, causing objects to pass through me. When I rotate and then translate the scene, I can pan around properly, but I can only move in a fixed direction, so if I pan 90 degrees to my right, I move left instead of going forward. Is there any way to put these two effects together?
This is the code that I use to draw my view:
glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT);
glLoadIdentity();
glPushMatrix();
glTranslated(xposition, 0, zposition); // This is where I translate my view
glRotated(yrot, 0, 1, 0);              // and then rotate it
glEnableClientState(GL_NORMAL_ARRAY);
glEnableClientState(GL_COLOR_ARRAY);
glEnableClientState(GL_VERTEX_ARRAY);
glBindBuffer(GL_ARRAY_BUFFER_ARB, quadVBO);
glNormalPointer(GL_FLOAT, 0, (void*)sizeof(sideArray));
glColorPointer(3, GL_FLOAT, 0, (void*)sizeof(sideArray)+sizeof(normals));
glVertexPointer(3, GL_FLOAT, 0, 0);
glDrawArrays(GL_QUADS, 0, sizeof(sideArray)/sizeof(GLfloat)/3);
glPopMatrix();
glFlush();
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_COLOR_ARRAY);
glDisableClientState(GL_NORMAL_ARRAY);
glBindBufferARB(GL_ARRAY_BUFFER_ARB, 0);
Here are some pictures that illustrate my problem right now:
Rotate then translate:
Pic 1: http://dl.dropbox.com/u/2747708/Screen%20Shot%202012-04-03%20at%2010.17.39%20PM.PNG
Pic 2: I can imitate the turn of the camera. http://dl.dropbox.com/u/2747708/Screen%20Shot%202012-04-03%20at%2010.17.48%20PM.PNG
Pic 3: However, when I walk forward, I only move in one fixed direction, not the direction I'm looking at. http://dl.dropbox.com/u/2747708/Screen%20Shot%202012-04-03%20at%2010.18.30%20PM.PNG http://dl.dropbox.com/u/2747708/Screen%20Shot%202012-04-03%20at%2010.18.39%20PM.PNG

Translate then rotate:
Pic 1: http://dl.dropbox.com/u/2747708/Screen%20Shot%202012-04-03%20at%2010.19.44%20PM.PNG
Pic 2: I can move around freely, walking straight in any direction I'm looking at. http://dl.dropbox.com/u/2747708/Screen%20Shot%202012-04-03%20at%2010.19.52%20PM.PNG
Pic 3: However, when I rotate the scene, the entire thing rotates, which causes objects to clip through me, and the view no longer "pans" like it does when I rotate then translate. http://dl.dropbox.com/u/2747708/Screen%20Shot%202012-04-03%20at%2010.20.01%20PM.PNG
Your problem isn't with the order of transformations. You should be rotating then translating.
The problem is that you aren't taking rotation into account when moving. The formula for movement is:
movementX = sin(direction);
movementY = cos(direction);
where direction is the number of radians turned clockwise from north, and positive X is east, and positive Y is north.
I'm not sure how to comment on or add to Kendall's answer so I'll just add this as a new answer.
I think you may be using Y for up, so you would use this if direction is in radians:
movementX = sin(direction);
movementZ = cos(direction);
if direction is in degrees you will have to convert it to radians:
radians = degrees*(PI/180);
Just multiply the movement by 1 or -1 based on whether you are moving forwards or backwards:
movementX = sin(direction)*forwardsBackwards;
movementZ = cos(direction)*forwardsBackwards;
If you need strafing as well you can do:
movementX = sin(direction)*forwardsBackwards+sin(direction+1.5707)*sideToSide;
movementZ = cos(direction)*forwardsBackwards+cos(direction+1.5707)*sideToSide;
Where sideToSide is 1 or -1 depending on whether you are strafing left or right. The 1.5707 is 90 degrees in radians (PI/2), which means that no matter what direction you are facing, it takes the 90-degree angle to the right of it. You can also add 90 degrees to the rotation in degrees before converting, if you prefer.
Multiply the entire thing by your desired movement speed:
movementX = (sin...eToSide)*speed;
movementZ = (cos...eToSide)*speed;
This will, however, create a strafe-running effect where you move faster when moving in two directions at once. If you don't want that, add this before calculating the movement:
if (forwardsBackwards != 0 && sideToSide != 0)
{
    forwardsBackwards *= 0.7071;
    sideToSide *= 0.7071;
}
You could also replace 0.7071 with cos(PI/4) if you need it to be more accurate.
Alternatively you could:
float diagonalMod = 1;
if (forwardsBackwards != 0 && sideToSide != 0)
    diagonalMod = 0.7071; // or cos(PI/4)
movementX = (sin...eToSide)*speed*diagonalMod;
movementZ = (cos...eToSide)*speed*diagonalMod;
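Putting those pieces together, here is a minimal C sketch; the function name, the assumption that direction is already in radians, and the Y-up convention are mine, not the answer's:

#include <math.h>

#define PI 3.14159265358979323846

/* Per-frame movement on the ground plane (Y up).
 * direction         - heading in radians
 * forwardsBackwards - 1, 0 or -1 from the movement keys
 * sideToSide        - 1, 0 or -1 for strafing
 * speed             - movement speed in units per frame */
void compute_movement(double direction, int forwardsBackwards, int sideToSide,
                      double speed, double *movementX, double *movementZ)
{
    double diagonalMod = 1.0;
    if (forwardsBackwards != 0 && sideToSide != 0)
        diagonalMod = 0.7071; /* ~cos(PI/4), keeps diagonal speed equal */

    *movementX = (sin(direction) * forwardsBackwards +
                  sin(direction + PI / 2) * sideToSide) * speed * diagonalMod;
    *movementZ = (cos(direction) * forwardsBackwards +
                  cos(direction + PI / 2) * sideToSide) * speed * diagonalMod;
}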
You will want to rotate first and then translate. Doing it the other way around is, if I'm thinking about it right, basically like moving the pivot point the camera rotates around, which is why you clip through objects when rotating.
Also, you should perform the "camera" rotation and translation immediately after glLoadIdentity(), because an OpenGL "camera" works by moving and rotating everything else in the scene. This is how it would be set up:
glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT);
glLoadIdentity();
glRotated(yrot, 0, 1, 0);
glTranslated(xposition, 0, zposition);
glPushMatrix();
And for future reference in case you expand on it:
glRotatef(Pitch, 1, 0, 0); // Up and down look
glRotatef(Yaw, 0, 1, 0); // Left and right look
glRotatef(Roll, 0, 0, 1); // Like a barrel roll in a jet
glTranslatef(X,Y,Z);

Mapping a 3D rectangle to a 2D screen

I've searched SO but I just can't figure this out. The other questions didn't help or I didn't understand them.
The problem is, I have a bunch of points in a 3D image. The points belong to a rectangle, which doesn't look like a rectangle from the 3D camera's view because of perspective. The task is to map the points of that rectangle to the screen. I've seen some approaches that people call "quad to quad transformations", but most of them map one 2D quadrilateral to another. Since I have the X, Y, and Z coordinates of the rectangle in the real world, I'm looking for something simpler. Does anyone know a practical algorithm or method for doing this?
If it helps, my 3d camera is actually a Kinect device with OpenNI and NITE middlewares, and I'm using WPF.
Thanks in advance.
edit:
I also found the 3D projection page on Wikipedia, which uses angles and cosines, but that seems like a difficult approach (finding angles in the 3D image) and I'm not sure it's the right solution.
You might want to check out projection matrices.
That's how any 3D rasterizer "flattens" 3D volumes onto a 2D screen.
See this code to get the projection matrix for a given WPF camera:
private static Matrix3D GetProjectionMatrix(OrthographicCamera camera, double aspectRatio)
{
// This math is identical to what you find documented for
// D3DXMatrixOrthoRH with the exception that in WPF only
// the camera's width is specified. Height is calculated
// from width and the aspect ratio.
double w = camera.Width;
double h = w / aspectRatio;
double zn = camera.NearPlaneDistance;
double zf = camera.FarPlaneDistance;
double m33 = 1 / (zn - zf);
double m43 = zn * m33;
return new Matrix3D(
2 / w, 0, 0, 0,
0, 2 / h, 0, 0,
0, 0, m33, 0,
0, 0, m43, 1);
}
private static Matrix3D GetProjectionMatrix(PerspectiveCamera camera, double aspectRatio)
{
// This math is identical to what you find documented for
// D3DXMatrixPerspectiveFovRH with the exception that in
// WPF the camera's horizontal rather the vertical
// field-of-view is specified.
double hFoV = MathUtils.DegreesToRadians(camera.FieldOfView);
double zn = camera.NearPlaneDistance;
double zf = camera.FarPlaneDistance;
double xScale = 1 / Math.Tan(hFoV / 2);
double yScale = aspectRatio * xScale;
double m33 = (zf == double.PositiveInfinity) ? -1 : (zf / (zn - zf));
double m43 = zn * m33;
return new Matrix3D(
xScale, 0, 0, 0,
0, yScale, 0, 0,
0, 0, m33, -1,
0, 0, m43, 0);
}
/// <summary>
/// Computes the effective projection matrix for the given
/// camera.
/// </summary>
public static Matrix3D GetProjectionMatrix(Camera camera, double aspectRatio)
{
if (camera == null)
{
throw new ArgumentNullException("camera");
}
PerspectiveCamera perspectiveCamera = camera as PerspectiveCamera;
if (perspectiveCamera != null)
{
return GetProjectionMatrix(perspectiveCamera, aspectRatio);
}
OrthographicCamera orthographicCamera = camera as OrthographicCamera;
if (orthographicCamera != null)
{
return GetProjectionMatrix(orthographicCamera, aspectRatio);
}
MatrixCamera matrixCamera = camera as MatrixCamera;
if (matrixCamera != null)
{
return matrixCamera.ProjectionMatrix;
}
throw new ArgumentException(String.Format("Unsupported camera type '{0}'.", camera.GetType().FullName), "camera");
}
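To actually map a world-space point to pixel coordinates with such a matrix, the usual steps are: transform, perspective-divide, then stretch the normalized result over the viewport. Here is a minimal C sketch using column-major, column-vector (OpenGL-style) conventions; WPF's Matrix3D uses row vectors, so the multiplication order is mirrored there, but the idea is identical.

typedef struct { double x, y, z, w; } Vec4;

/* Column-major 4x4 matrix times a column vector. */
Vec4 mat_mul_vec(const double m[16], Vec4 v)
{
    Vec4 r;
    r.x = m[0]*v.x + m[4]*v.y + m[8]*v.z  + m[12]*v.w;
    r.y = m[1]*v.x + m[5]*v.y + m[9]*v.z  + m[13]*v.w;
    r.z = m[2]*v.x + m[6]*v.y + m[10]*v.z + m[14]*v.w;
    r.w = m[3]*v.x + m[7]*v.y + m[11]*v.z + m[15]*v.w;
    return r;
}

/* world -> pixel: clip = proj * view * world, divide by w, map [-1, 1] to the viewport. */
void project_to_screen(const double view[16], const double proj[16],
                       Vec4 world, int width, int height,
                       double *px, double *py)
{
    Vec4 clip = mat_mul_vec(proj, mat_mul_vec(view, world));
    double ndc_x = clip.x / clip.w;
    double ndc_y = clip.y / clip.w;
    *px = (ndc_x + 1.0) * 0.5 * width;
    *py = (1.0 - ndc_y) * 0.5 * height;  /* flip Y: screen Y grows downward */
}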
You could do a basic orthographic projection (I'm thinking in terms of raytracing, so this might not apply to what you're doing):
The code is quite intuitive:
for y in image.height:
    for x in image.width:
        ray = new Ray(x, 0, y, Vector(0, 1, 0)) # origin on the XZ plane, pointing forward along +Y
        intersection = prism.intersection(ray)  # since you aren't shading, you only need to check for an intersection
        image.setPixel(x, y, intersection)      # gives a black-and-white image of the prism mapped to the plane
You just shoot vectors with a direction of (0, 1, 0) directly out into space and record which ones hit.
I found this. It uses straightforward mathematics instead of matrices.
This is called perspective projection to convert from a 3D vertex to a 2D screen vertex. I used this to help me with my 3D program I have made.
HorizontalFactor = ScreenWidth / Tan(PI / 4)
VerticalFactor = ScreenHeight / Tan(PI / 4)
ScreenX = ((X * HorizontalFactor) / Y) + HalfWidth
ScreenY = ((Z * VerticalFactor) / Y) + HalfHeight
Hope this helps. I think it's what you were looking for. Sorry about the formatting (I'm new here).
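For reference, here is a minimal C sketch of the formulas above, keeping their convention that Y is the depth axis the camera looks along; the function name is illustrative.

#include <math.h>

#define PI 3.14159265358979323846

/* Perspective-project a 3D point (X, Y, Z), with Y as depth, to screen pixels. */
void project_point(double X, double Y, double Z,
                   double screenWidth, double screenHeight,
                   double *screenX, double *screenY)
{
    double horizontalFactor = screenWidth  / tan(PI / 4.0);
    double verticalFactor   = screenHeight / tan(PI / 4.0);

    /* Divide by depth (Y) and shift to the middle of the screen. */
    *screenX = (X * horizontalFactor) / Y + screenWidth  / 2.0;
    *screenY = (Z * verticalFactor)   / Y + screenHeight / 2.0;
}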
Mapping points in a 3D world to a 2D screen is part of the job of frameworks like OpenGL and Direct3D. It's called rasterisation, as Heandel said. Perhaps you could use Direct3D?
