I'm writing a C program to render the Mandelbrot set, and I'm currently stuck trying to figure out how to zoom in properly.
I want the zoom to follow the mouse pointer on the screen, so that the fractal zooms in on the cursor position.
I have a window defined by:
# define WIDTH 800
# define HEIGHT 600
My re_max, re_min, im_max, im_min values are defined and initialized as follows:
man->re_max = 2.0;
man->re_min = -2.0;
man->im_max = 2.0;
man->im_min = -2.0;
The interpolation value (more on this later) is defined and initialized as follows:
pos->interp = 1.0;
To map the pixel coordinates to the center of the screen, I'm using the position function:
void position(int x, int y, t_mandel *man)
{
    double *s_x;
    double *s_y;

    s_x = &man->pos->shift_x;
    s_y = &man->pos->shift_y;
    man->c_re = (x / (WIDTH / (man->re_max - man->re_min)) + man->re_min) + *s_x;
    man->c_im = (y / (HEIGHT / (man->im_max - man->re_min)) + man->im_min) + *s_y;
    man->c_im *= 0.8;
}
To zoom in, I first get the mouse pointer's screen coordinates and map them to the visible area (the rectangle defined by re_max, re_min, im_max, im_min) using this function, where x and y are the pointer's on-screen coordinates:
int mouse_move(int x, int y, void *p)
{
    t_fract  *fract;
    t_mandel *man;

    fract = (t_fract *)p;
    man = fract->mandel;
    fract->mouse->Re = x / (WIDTH / (man->re_max - man->re_min)) + man->re_min;
    fract->mouse->Im = y / (HEIGHT / (man->im_max - man->re_min)) + man->im_min;
    return (0);
}
This function is called when a mouse wheel scroll is registered. The actual zooming is achieved by this function:
void zoom_control(int key, t_fract *fract)
{
    double *interp;

    interp = &fract->mandel->pos->interp;
    if (key == 5) // zoom in
    {
        *interp = 1.0 / 1.03;
        apply_zoom(fract->mandel, fract->mouse->Re, fract->mouse->Im, *interp);
    }
    else if (key == 4) // zoom out
    {
        *interp = 1.0 * 1.03;
        apply_zoom(fract->mandel, fract->mouse->Re, fract->mouse->Im, *interp);
    }
}
Which calls this:
void apply_zoom(t_mandel *man, double m_re, double m_im, double interp)
{
    man->re_min = interpolate(m_re, man->re_min, interp);
    man->im_min = interpolate(m_im, man->im_min, interp);
    man->re_max = interpolate(m_re, man->re_max, interp);
    man->im_max = interpolate(m_im, man->im_max, interp);
}
I have a simple interpolate function to redefine the area bounding rectangle:
double interpolate(double start, double end, double interp)
{
    return (start + ((end - start) * interp));
}
So the problem is:
My code renders the fractal like this:
[image: Mandelbrot set]
But when I try to zoom in with the mouse as described, instead of going nicely "in", the image distorts and just sort of collapses onto itself instead of actually diving into the fractal.
I would really appreciate help with this one as I've been stuck on it for a while now.
If you could also explain the actual math behind your solution, I would be overjoyed!
Thank you!
After quite a bit of headache and a lot of paper wasted on recalculating interpolation methods, I realized that the way I had mapped my complex numbers to the screen was incorrect to begin with. Reworking my mapping method solved the problem, so I'll share what I did.
-------------------------------OLD WAY--------------------------------------
I initialized my re_max, re_min, im_max, im_min values, which define the visible area, in the following way:
re_max = 2.0;
re_min = -2.0;
im_max = 2.0;
im_min = -2.0;
Then, I used this method to convert my on-screen coordinates to the complex numbers used to calculate the fractal (note that the mouse-position mapping used for zoom interpolation and the mapping used to calculate the fractal itself use the same method):
Re = x / (WIDTH / (re_max - re_min)) + re_min;
Im = y / (HEIGHT / (im_max - re_min)) + im_min;
However, this way I didn't take the screen's aspect ratio into account, and I neglected the fact (due to a lack of knowledge) that the on-screen y coordinate is inverted (at least in my program): the negative direction is up and the positive direction is down.
This way, when I tried to zoom in with my interpolation, naturally, the image distorted.
------------------------------CORRECT WAY-----------------------------------
When defining the bounding rectangle of the set, the maximum imaginary part (im_max) should be calculated from the screen ratio, to avoid image distortion when the display window isn't a square:
re_max = 2.0;
re_min = -2.0;
im_min = -2.0;
im_max = im_min + (re_max - re_min) * HEIGHT / WIDTH;
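(With WIDTH 800 and HEIGHT 600, this gives im_max = -2.0 + 4.0 * 600 / 800 = 1.0, so the 4-unit real range maps onto a 3-unit imaginary range, matching the window's 4:3 aspect ratio.)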
To map the on-screen coordinates to the complex numbers, I first found the "coordinate-to-number" ratio, which is equal to *rectangle length / screen width*:
re_factor = (re_max - re_min) / (WIDTH - 1);
im_factor = (im_max - im_min) / (HEIGHT - 1);
Then, I've mapped my pixel coordinates to the real and imaginary part of a complex number used in calculations like so:
c_re = re_min + x * re_factor;
c_im = im_max - y * im_factor;
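To see it all in one place, here's a minimal self-contained sketch of the corrected mapping (the struct and function names here are just illustrative, not from my actual program):

#define WIDTH  800
#define HEIGHT 600

typedef struct { double re_min, re_max, im_min, im_max; } t_bounds;

/* Map pixel (x, y) to the complex number c = re + i*im inside the
 * bounding rectangle. Screen y grows downward, so im is flipped. */
void pixel_to_complex(int x, int y, const t_bounds *b, double *re, double *im)
{
    double re_factor = (b->re_max - b->re_min) / (WIDTH - 1);
    double im_factor = (b->im_max - b->im_min) / (HEIGHT - 1);

    *re = b->re_min + x * re_factor;
    *im = b->im_max - y * im_factor;
}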
After implementing those changes, I was finally able to smoothly zoom into the mouse position without any distortion or image "jumps".
If I understand you correctly, you want to make the point where the mouse is located the new center of the image, and change the scale of the image by a factor of 1.03. I would try something like this:
Your position() and mouse_move() functions remain the same.
In zoom_control(), just change how you set the new interpolation value: it should not be a fixed constant, but should be based on its current value. Also, pass the new scaling factor to apply_zoom():
void zoom_control(int key, t_fract *fract)
{
    double *interp;
    double zoom_factor = 1.03;

    interp = &fract->mandel->pos->interp;
    if (key == 5) // zoom in
    {
        *interp /= zoom_factor;
        apply_zoom(fract->mandel, fract->mouse->Re, fract->mouse->Im, 1.0 / zoom_factor);
    }
    else if (key == 4) // zoom out
    {
        *interp *= zoom_factor;
        apply_zoom(fract->mandel, fract->mouse->Re, fract->mouse->Im, zoom_factor);
    }
}
Modify the apply_zoom() function:
void apply_zoom(t_mandel *man, double m_re, double m_im, double zoom_factor)
{
    // Calculate the new ranges along the real and imaginary axes.
    // They are equal to the current ranges multiplied by the zoom_factor.
    double re_range = (man->re_max - man->re_min) * zoom_factor;
    double im_range = (man->im_max - man->im_min) * zoom_factor;

    // Set the new min/max values for the real and imaginary axes,
    // with the center at the mouse coordinates m_re and m_im.
    man->re_min = m_re - re_range / 2;
    man->re_max = m_re + re_range / 2;
    man->im_min = m_im - im_range / 2;
    man->im_max = m_im + im_range / 2;
}
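For example, starting from the initial 4.0-wide real range, one zoom-in step gives re_range = 4.0 / 1.03 ≈ 3.883, centered on the cursor; after n scroll steps the visible width is 4.0 * 1.03^-n, so the zoom is exponential in the number of scroll events.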
I want to get an earth texture on a sphere. My sphere is an icosphere built from many triangles (100+), and I found it confusing to set the UV coordinates for the whole sphere. I tried to use glTexGen, and the effect is quite close, but my texture is repeated 8 times (see image). I cannot find a way to make it wrap the whole object just once. Here is the code where the sphere and texture are created:
glEnable(GL_TEXTURE_2D);
glTexGeni(GL_T, GL_TEXTURE_GEN_MODE, GL_OBJECT_LINEAR);
glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_OBJECT_LINEAR);
glBegin(GL_TRIANGLES);
for (int i = 0; i < new_sphere->NumOfTrians; i++)
{
    Triangle *draw_Trian = new_sphere->Trians + i;
    glVertex3f(draw_Trian->pnts[0].coords[0], draw_Trian->pnts[0].coords[1], draw_Trian->pnts[0].coords[2]);
    glVertex3f(draw_Trian->pnts[1].coords[0], draw_Trian->pnts[1].coords[1], draw_Trian->pnts[1].coords[2]);
    glVertex3f(draw_Trian->pnts[2].coords[0], draw_Trian->pnts[2].coords[1], draw_Trian->pnts[2].coords[2]);
}
glDisable(GL_TEXTURE_2D);
free(new_sphere->Trians);
free(new_sphere);
glEnd();
You need to define how your texture is supposed to map to your triangles. This depends on the texture you're using. There are a multitude of ways to map the surface of a sphere with a texture (since no one mapping is free of singularities). It looks like you have a cylindrical projection texture there. So we will emit cylindrical UV coordinates.
I've tried to give you some code here, but it's assuming that:
- your mesh is a unit sphere (i.e., centered at 0 with radius 1),
- pnts.coords is an array of floats,
- you want to use the second coordinate (coord[1]) as the 'up' direction (the height in a cylindrical mapping).
Your code would look something like this. I've defined a new function for emitting cylindrical UVs, so you can put that wherever you like.
/* Map [(-1, -1, -1), (1, 1, 1)] into [(0, 0), (1, 1)] cylindrically */
inline void uvCylinder(float *coord) {
    float angle  = 0.5f * atan2(coord[2], coord[0]) / 3.14159f + 0.5f;
    float height = 0.5f * coord[1] + 0.5f;
    glTexCoord2f(angle, height);
}
glEnable(GL_TEXTURE_2D);
glBegin(GL_TRIANGLES);
for (int i = 0; i < new_sphere->NumOfTrians; i++) {
    Triangle *t = new_sphere->Trians + i;
    uvCylinder(t->pnts[0].coords);
    glVertex3f(t->pnts[0].coords[0], t->pnts[0].coords[1], t->pnts[0].coords[2]);
    uvCylinder(t->pnts[1].coords);
    glVertex3f(t->pnts[1].coords[0], t->pnts[1].coords[1], t->pnts[1].coords[2]);
    uvCylinder(t->pnts[2].coords);
    glVertex3f(t->pnts[2].coords[0], t->pnts[2].coords[1], t->pnts[2].coords[2]);
}
glEnd();
glDisable(GL_TEXTURE_2D);
free(new_sphere->Trians);
free(new_sphere);
Note on Projections
The reason it's confusing to build UV coordinates for the whole sphere is that there isn't one 'correct' way to do it. Mathematically speaking, there's no such thing as a perfect 2D mapping of a sphere, which is why we have so many different types of projections. When you have a 2D image that's a texture for a spherical object, you need to know what type of projection that image was built for, so that you can emit the correct UV coordinates for that texture.
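For reference, the cylindrical relationship that uvCylinder() above implements, for a point (x, y, z) on the unit sphere, is:

u = (atan2(z, x) + pi) / (2 * pi)
v = (y + 1) / 2

so u runs once around the equator (the texture's horizontal axis) and v runs from the bottom pole to the top pole (its vertical axis).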
I'm a little stuck. I'm trying to write a basic polar-to-rectangular conversion that matches Photoshop's, but I'm not getting the same results.
Converting from rectangular to polar matches Photoshop's, but going from polar back to rectangular does not.
You can see in this image the differences between Photoshop's and mine:
float a, b, ang, dist;
int px, py;
const PI = 3.141592653589793;

// Convert from cartesian to polar
for (y = y_start; y < y_end; ++y)
{
    for (x = x_start; x < x_end; ++x)
    {
        a = (float)(x - X/2);
        b = (float)(y - Y/2);
        dist = (sqr(a*a + b*b) * 2.0);
        ang = atan2(b, -a) * (58);
        ang = fmod(ang + 450.0, 360.0);
        px = (int)(ang * X / 360.0);
        py = (int)(dist);
        pset(x, y, 0, src(px, py, 0));
        pset(x, y, 1, src(px, py, 1));
        pset(x, y, 2, src(px, py, 2));
    }
}

// Convert back to cartesian
for (y = y_start; y < y_end; ++y)
{
    for (x = x_start; x < x_end; ++x)
    {
        ang = ((float)x / X) * PI * 2.0;
        dist = (float)y * 0.5;
        px = (int)(cos(ang) * dist) + X/2;
        py = (int)(sin(ang) * dist) + Y/2;
        pset(x, y, 0, pget(px, py, 0));
        pset(x, y, 1, pget(px, py, 1));
        pset(x, y, 2, pget(px, py, 2));
    }
}
This is my code. I'm sure I've messed something up in the polar-to-cartesian conversion. The language is based on C. What am I doing wrong? Any suggestions?
There are two issues with your polar-to-cartesian transformation:
1. The axes of the coordinate system you use to define angles point right (x) and down (y), while your cartesian-to-polar transformation used a coordinate system with an upward-pointing x axis and a left-pointing y axis. The code to convert the angle back to cartesian should be (I've added a bit of rounding):
px = round(-sin(ang)*dist + X/2.)
py = round(-cos(ang)*dist + Y/2.)
With that code you move from red to green to blue instead of from gray to blue to green in the final picture when increasing the x coordinate there.
2. Assuming that pget and pset operate on the same bitmap, you're overwriting your source image. The loop construct takes you outward along concentric circles around the center of the source image while filling the target line by line, top to bottom. At some point the circles and the lines start to overlap, and you begin reading the data you modified earlier (this happens at the apex of the parabola-like shape). It gets even more convoluted because at some point you start reading the transform of that modified data, so that it is effectively transformed twice (I guess that causes the irregular triangular region on the right).
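Putting both fixes together, a sketch of the corrected polar-to-cartesian loop could look like this (it assumes src() can still read the untouched source image in this pass, as it does in your cartesian-to-polar pass, so nothing is overwritten mid-transform):

// Corrected polar-to-cartesian pass: sample the untouched source via
// src() instead of pget(), and use the same axis convention as the
// cartesian-to-polar pass (x up, y left, hence -sin and -cos).
for (y = y_start; y < y_end; ++y)
{
    for (x = x_start; x < x_end; ++x)
    {
        ang = ((float)x / X) * PI * 2.0;
        dist = (float)y * 0.5;
        px = (int)round(-sin(ang) * dist + X/2.0);
        py = (int)round(-cos(ang) * dist + Y/2.0);
        pset(x, y, 0, src(px, py, 0));
        pset(x, y, 1, src(px, py, 1));
        pset(x, y, 2, src(px, py, 2));
    }
}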
I am trying to model the Lorenz attractor in 3D space using OpenGL. I have written the following code in my display function:
void display()
{
    // Clear the image
    glClear(GL_COLOR_BUFFER_BIT);
    // Reset previous transforms
    glLoadIdentity();
    // Set view angle
    glRotated(ph, 1, 0, 0);
    glRotated(th, 0, 1, 0);
    glColor3f(1, 1, 0);
    glPointSize(1);
    float x = 0.1, y = 0.1, z = 0.1;
    glBegin(GL_POINTS);
    int i;
    for (i = 0; i < initialIterations; i++) {
        // compute a new point using the strange attractor equations
        float xnew = sigma * (y - x);
        float ynew = x * (r - z) - y;
        float znew = x * y - b * z;
        // save the new point
        x = x + xnew * dt;
        y = y + ynew * dt;
        z = z + znew * dt;
        glVertex4f(x, y, z, i);
    }
    glEnd();
    // Draw axes in white
    glColor3f(1, 1, 1);
    glBegin(GL_LINES);
    glVertex3d(0, 0, 0);
    glVertex3d(1, 0, 0);
    glVertex3d(0, 0, 0);
    glVertex3d(0, 1, 0);
    glVertex3d(0, 0, 0);
    glVertex3d(0, 0, 1);
    glEnd();
    // Label axes
    glRasterPos3d(1, 0, 0);
    Print("X");
    glRasterPos3d(0, 1, 0);
    Print("Y");
    glRasterPos3d(0, 0, 1);
    Print("Z");
    // Display parameters
    glWindowPos2i(5, 5);
    Print("View Angle=%d,%d %s", th, ph, text[mode]);
    // Flush and swap
    glFlush();
    glutSwapBuffers();
}
However, I can't get the right attractor. I believe my equations for x, y, and z are correct; I'm just not sure how to display them the right way to get the correct attractor. Thanks for any help. Below is what my program currently puts out:
Okay, so I had this problem too, and there are a few things you want to do.
First off, when you go to draw the point with glVertex4f(), you want to either change it to glVertex3f() or change your w value to 1 (glVertex3f() sets w to 1 by default). OpenGL divides each vertex by its w component, so using the loop index i as w (ranging from 0 up to 50000 or so) wildly rescales each point instead of plotting it where you expect.
Second, after fixing that, you're going to find that the values are way outside your visual range, so you need to scale them down. I would do this at the time you draw the points, so in your case use glVertex3f(x*.05, y*.05, z*.05). If .05 is too large or too small, adjust it to fit your needs.
Finally, make sure that your dt value is .001 and that your starting point is around 1 for x, y, and z.
Then, ideally, you want to put all these points in an array and read that array to draw your points, instead of redoing the calculations each time display is called; a sketch of that split is shown below. So do your calculations elsewhere and just send the points to display. Hope this helped.
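Here's a minimal sketch of that split, assuming the classic Lorenz parameters (sigma = 10, r = 28, b = 8/3); the buffer name, point count, and helper names are mine, so adapt them to your program:

#include <GL/gl.h>

#define NPOINTS 50000

static float pts[NPOINTS][3];  /* hypothetical point buffer, filled once */

/* Integrate the Lorenz system up front (classic parameters assumed). */
void computeLorenz(void)
{
    const float sigma = 10.f, r = 28.f, b = 8.f / 3.f, dt = 0.001f;
    float x = 1.f, y = 1.f, z = 1.f;

    for (int i = 0; i < NPOINTS; i++) {
        float dx = sigma * (y - x);
        float dy = x * (r - z) - y;
        float dz = x * y - b * z;
        x += dx * dt;
        y += dy * dt;
        z += dz * dt;
        pts[i][0] = x;
        pts[i][1] = y;
        pts[i][2] = z;
    }
}

/* Called from display(): draw the precomputed, scaled-down points. */
void drawLorenz(void)
{
    glBegin(GL_POINTS);
    for (int i = 0; i < NPOINTS; i++)
        glVertex3f(pts[i][0] * .05f, pts[i][1] * .05f, pts[i][2] * .05f);
    glEnd();
}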
I've searched SO but I just can't figure this out. The other questions didn't help or I didn't understand them.
The problem is, I have a bunch of points in a 3D image. The points belong to a rectangle, which doesn't look like a rectangle from the 3D camera's view because of perspective. The task is to map the points from that rectangle to the screen. I've seen approaches that some call "quad-to-quad transformations", but most of them map a 2D quadrilateral to another one. Since I've got the X, Y, and Z coordinates of the rectangle in the real world, I'm looking for an easier way. Does anyone know a practical algorithm or method for doing this?
If it helps, my 3d camera is actually a Kinect device with OpenNI and NITE middlewares, and I'm using WPF.
Thanks in advance.
edit:
I also found the 3D projection page on Wikipedia, which uses angles and cosines, but that seems like a difficult approach (finding the angles in the 3D image), and I'm not sure it's the right solution.
You might want to check out projection matrices; that's how any 3D rasterizer "flattens" 3D volumes onto a 2D screen.
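In outline (this is the standard graphics-pipeline math, not anything WPF-specific): transform your world-space point by the camera's view matrix and then by the projection matrix, then divide by the resulting w to get normalized device coordinates in [-1, 1]:

(x', y', z', w') = (X, Y, Z, 1) * viewMatrix * projectionMatrix   (WPF uses row vectors)
x_ndc = x' / w'
y_ndc = y' / w'

Pixel coordinates then follow as x_px = (x_ndc + 1) / 2 * viewportWidth and y_px = (1 - y_ndc) / 2 * viewportHeight (y is flipped because screen y grows downward).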
See this code to get the projection matrix for a given WPF camera:
private static Matrix3D GetProjectionMatrix(OrthographicCamera camera, double aspectRatio)
{
    // This math is identical to what you find documented for
    // D3DXMatrixOrthoRH with the exception that in WPF only
    // the camera's width is specified. Height is calculated
    // from width and the aspect ratio.
    double w = camera.Width;
    double h = w / aspectRatio;
    double zn = camera.NearPlaneDistance;
    double zf = camera.FarPlaneDistance;
    double m33 = 1 / (zn - zf);
    double m43 = zn * m33;

    return new Matrix3D(
        2 / w, 0,     0,   0,
        0,     2 / h, 0,   0,
        0,     0,     m33, 0,
        0,     0,     m43, 1);
}

private static Matrix3D GetProjectionMatrix(PerspectiveCamera camera, double aspectRatio)
{
    // This math is identical to what you find documented for
    // D3DXMatrixPerspectiveFovRH with the exception that in
    // WPF the camera's horizontal rather than the vertical
    // field-of-view is specified.
    double hFoV = MathUtils.DegreesToRadians(camera.FieldOfView);
    double zn = camera.NearPlaneDistance;
    double zf = camera.FarPlaneDistance;
    double xScale = 1 / Math.Tan(hFoV / 2);
    double yScale = aspectRatio * xScale;
    double m33 = (zf == double.PositiveInfinity) ? -1 : (zf / (zn - zf));
    double m43 = zn * m33;

    return new Matrix3D(
        xScale, 0,      0,   0,
        0,      yScale, 0,   0,
        0,      0,      m33, -1,
        0,      0,      m43, 0);
}

/// <summary>
/// Computes the effective projection matrix for the given camera.
/// </summary>
public static Matrix3D GetProjectionMatrix(Camera camera, double aspectRatio)
{
    if (camera == null)
    {
        throw new ArgumentNullException("camera");
    }

    PerspectiveCamera perspectiveCamera = camera as PerspectiveCamera;
    if (perspectiveCamera != null)
    {
        return GetProjectionMatrix(perspectiveCamera, aspectRatio);
    }

    OrthographicCamera orthographicCamera = camera as OrthographicCamera;
    if (orthographicCamera != null)
    {
        return GetProjectionMatrix(orthographicCamera, aspectRatio);
    }

    MatrixCamera matrixCamera = camera as MatrixCamera;
    if (matrixCamera != null)
    {
        return matrixCamera.ProjectionMatrix;
    }

    throw new ArgumentException(String.Format("Unsupported camera type '{0}'.", camera.GetType().FullName), "camera");
}
You could do a basic orthographic projection (I'm thinking in terms of raytracing, so this might not apply to what you're doing):
The code is quite intuitive:
for y in image.height:
    for x in image.width:
        ray = new Ray(x, 0, y, Vector(0, 1, 0))  # origin at the pixel's position on the image plane, pointing forward
        intersection = prism.intersection(ray)   # since you aren't shading, you can check only for intersections
        image.setPixel(x, y, intersection)       # produces a black-and-white image of the prism mapped to the plane
You just shoot vectors with a direction of (0, 1, 0) directly out into space and record which ones hit.
I found this approach; it uses straightforward mathematics instead of matrices. It's called perspective projection, and it converts a 3D vertex to a 2D screen vertex. I used it to help with a 3D program I made.
HorizontalFactor = ScreenWidth / Tan(PI / 4)
VerticalFactor = ScreenHeight / Tan(PI / 4)
ScreenX = ((X * HorizontalFactor) / Y) + HalfWidth
ScreenY = ((Z * VerticalFactor) / Y) + HalfHeight
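Since Tan(PI / 4) = 1, the two factors reduce to the screen dimensions. Note that Y acts as the depth axis here (it's what X and Z are divided by), which is the perspective divide: a point twice as far away lands twice as close to the screen center. Swap axes if your convention differs.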
Hope this could help. I think it's what you were looking for. Sorry about the formatting (I'm new here).
Mapping points in a 3D world to a 2D screen is part of the job of frameworks like OpenGL and Direct3D. It's called rasterisation, as Heandel said. Perhaps you could use Direct3D?