I'm a little stuck. I'm trying to achieve a basic polar-to-rectangular conversion that matches Photoshop's, but I'm not getting the same results.
Converting from rectangular to polar matches Photoshop's, but going from polar back to rectangular does not.
You can see in this image the differences between Photoshop's and mine:
float a, b, ang, dist;
int px, py;
const PI = 3.141592653589793;

// Convert from cartesian to polar
for (y = y_start; y < y_end; ++y)
{
    for (x = x_start; x < x_end; ++x)
    {
        a = (float)(x - X/2);
        b = (float)(y - Y/2);
        dist = (sqr(a*a + b*b) * 2.0);
        ang = atan2(b, -a) * (58);
        ang = fmod(ang + 450.0, 360.0);
        px = (int)(ang * X / 360.0);
        py = (int)(dist);
        pset(x, y, 0, src(px, py, 0));
        pset(x, y, 1, src(px, py, 1));
        pset(x, y, 2, src(px, py, 2));
    }
}

// Convert back to cartesian
for (y = y_start; y < y_end; ++y)
{
    for (x = x_start; x < x_end; ++x)
    {
        ang = ((float)x / X) * PI * 2.0;
        dist = (float)y * 0.5;
        px = (int)(cos(ang) * dist) + X/2;
        py = (int)(sin(ang) * dist) + Y/2;
        pset(x, y, 0, pget(px, py, 0));
        pset(x, y, 1, pget(px, py, 1));
        pset(x, y, 2, pget(px, py, 2));
    }
}
This is my code. I'm sure I've messed something up in the polar-to-cartesian step. The language is based on C.
What am I doing wrong? Any suggestions?
There are two issues with your polar-to-cartesian transformation:
The axes of the coordinate system you use to define the angles point right (x) and down (y), while your cartesian-to-polar transformation used a coordinate system with an upward-pointing x axis and a left-pointing y axis. The code to convert the angle back to cartesian should be (I've added a bit of rounding):
px = round(-sin(ang)*dist + X/2.)
py = round(-cos(ang)*dist + Y/2.)
With that code, as you increase the x coordinate in the final picture, you move from red to green to blue instead of from gray to blue to green.
Assuming that pget and pset operate on the same bitmap, you're overwriting your source image. The loop takes you outward along concentric circles around the center of the source image while filling the target line by line, top to bottom. At some point the circles and the lines start to overlap and you begin reading data you modified earlier (this happens at the apex of the parabola-like shape). It gets even more convoluted because at some point you start reading the transform of that modified data, so it is effectively transformed again (I suspect that causes the irregular triangular region on the right).
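To make that second point concrete, here is a minimal C sketch of the back-conversion loop, written against plain interleaved RGB buffers. The src/dst layout and the polar_pass name are my assumptions, not part of the original code; the loop itself mirrors the question's second pass, uses the corrected angle formula from above, and, crucially, writes to a separate destination buffer so no read ever sees an earlier write:

#include <math.h>

#define PI 3.141592653589793

/* Inverse-mapping pass mirroring the second loop above: for every output
 * pixel (x, y), where x encodes the angle and y the distance, fetch the
 * corresponding pixel from the cartesian source. src and dst are separate
 * 3-channel interleaved buffers of size X * Y. */
void polar_pass(const float *src, float *dst, int X, int Y)
{
    for (int y = 0; y < Y; ++y)
    {
        for (int x = 0; x < X; ++x)
        {
            float ang = ((float)x / X) * PI * 2.0f;
            float dist = (float)y * 0.5f;
            /* Corrected axis convention, matching the cartesian-to-polar pass */
            int px = (int)round(-sin(ang) * dist + X / 2.0);
            int py = (int)round(-cos(ang) * dist + Y / 2.0);
            if (px < 0 || px >= X || py < 0 || py >= Y)
                continue; /* leave out-of-range pixels untouched */
            for (int c = 0; c < 3; ++c)
                dst[(y * X + x) * 3 + c] = src[(py * X + px) * 3 + c];
        }
    }
}

In the scripting environment from the question, the same idea applies: read with src()/pget() from one layer and pset() into a different one, or run the pass on a copy of the image.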
I'm writing a C program to render a Mandelbrot set, and currently I'm stuck trying to figure out how to zoom in properly.
I want the zoom to follow the mouse pointer on the screen, so that the fractal zooms in on the cursor position.
I have a window defined by:
# define WIDTH 800
# define HEIGHT 600
My Re_max, Re_min, Im_Max, Im_Min are defined and initialized as follows:
man->re_max = 2.0;
man->re_min = -2.0;
man->im_max = 2.0;
man->im_min = -2.0;
The interpolation value (more on that later) is defined and initialized as follows:
pos->interp = 1.0;
To map the pixel coordinates to the center of the screen, I'm using the position function:
void position(int x, int y, t_mandel *man)
{
    double *s_x;
    double *s_y;

    s_x = &man->pos->shift_x;
    s_y = &man->pos->shift_y;
    man->c_re = (x / (WIDTH / (man->re_max - man->re_min)) + man->re_min) + *s_x;
    man->c_im = (y / (HEIGHT / (man->im_max - man->re_min)) + man->im_min) + *s_y;
    man->c_im *= 0.8;
}
To zoom in, I first get the coordinates of the mouse pointer and map them to the visible area given by the rectangle defined by the (Re_Max, Re_Min, Im_Max, Im_Min) using this function, where x and y are coordinates of the pointer on a screen:
int mouse_move(int x, int y, void *p)
{
    t_fract  *fract;
    t_mandel *man;

    fract = (t_fract *)p;
    man = fract->mandel;
    fract->mouse->Re = x / (WIDTH / (man->re_max - man->re_min)) + man->re_min;
    fract->mouse->Im = y / (HEIGHT / (man->im_max - man->re_min)) + man->im_min;
    return (0);
}
This function is called when a mouse wheel scroll is registered. The actual zooming is achieved by this function:
void zoom_control(int key, t_fract *fract)
{
    double *interp;

    interp = &fract->mandel->pos->interp;
    if (key == 5) // zoom in
    {
        *interp = 1.0 / 1.03;
        apply_zoom(fract->mandel, fract->mouse->Re, fract->mouse->Im, *interp);
    }
    else if (key == 4) // zoom out
    {
        *interp = 1.0 * 1.03;
        apply_zoom(fract->mandel, fract->mouse->Re, fract->mouse->Im, *interp);
    }
}
Which calls this:
void apply_zoom(t_mandel *man, double m_re, double m_im, double interp)
{
    man->re_min = interpolate(m_re, man->re_min, interp);
    man->im_min = interpolate(m_im, man->im_min, interp);
    man->re_max = interpolate(m_re, man->re_max, interp);
    man->im_max = interpolate(m_im, man->im_max, interp);
}
I have a simple interpolate function to redefine the area bounding rectangle:
double interpolate(double start, double end, double interp)
{
    return (start + ((end - start) * interp));
}
So the problem is:
My code renders the fractal correctly (see the Mandelbrot set image), but when I try to zoom in with the mouse as described, instead of going nicely "in", the image distorts and sort of collapses onto itself rather than actually diving into the fractal.
I would really appreciate help with this one as I've been stuck on it for a while now.
If you could please also explain the actual math behind your solutions, I would be overjoyed!
Thank you!
After quite a bit of headache and a lot of paper wasted on recalculating interpolation methods, I realized that the way I mapped my complex numbers on-screen was incorrect to begin with. Reworking my mapping method solved the problem, so I'll share what I did.
-------------------------------OLD WAY--------------------------------------
I've initialized my Re_max, Re_min, Im_Max, Im_Min values, which define the visible area in the following way:
re_max = 2.0;
re_min = -2.0;
im_max = 2.0;
im_min = -2.0;
Then, I used this method to convert my on-screen coordinates to the complex numbers used to calculate the fractal (note that the coordinates used for mapping the mouse position for zoom interpolation and the coordinates used to calculate the fractal itself use the same method):
Re = x / (WIDTH / (re_max - re_min)) + re_min;
Im = y / (HEIGHT / (im_max - re_min)) + im_min;
However, this way I didn't take the screen ratio into account, and I neglected the fact (due to a lack of knowledge) that the on-screen y coordinate is inverted (at least in my program): the negative direction is up and the positive direction is down.
This way, when I tried to zoom in with my interpolation, the image naturally distorted.
------------------------------CORRECT WAY-----------------------------------
When defining the bounding rectangle of the set, the maximum imaginary part (im_max) should be calculated from the screen ratio, to avoid image distortion when the display window isn't a square:
re_max = 2.0;
re_min = -2.0;
im_min = -2.0;
im_max = im_min + (re_max - re_min) * HEIGHT / WIDTH;
To map the on-screen coordinates to the complex numbers, I first found the *coordinate-to-number* ratio, which is equal to *rectangle length / screen width*:
re_factor = (re_max - re_min) / (WIDTH - 1);
im_factor = (im_max - im_min) / (HEIGHT - 1);
Then, I've mapped my pixel coordinates to the real and imaginary part of a complex number used in calculations like so:
c_re = re_min + x * re_factor;
c_im = im_max - y * im_factor;
After implementing those changes, I was finally able to smoothly zoom into the mouse position without any distortion or image "jumps".
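For reference, here is a compact C sketch that puts the corrected mapping together with the anchored zoom that interpolate()/apply_zoom() already implement. The t_view struct and the two function names are mine, introduced only for illustration; WIDTH, HEIGHT, re_factor, im_factor, and the zoom formula are the ones from the post:

typedef struct s_view
{
    double re_min;
    double re_max;
    double im_min;
    double im_max;
} t_view;

/* Map a pixel (x, y) to a complex number using the corrected factors.
 * Note the inverted on-screen y axis: im_max - y * im_factor. */
void pixel_to_complex(const t_view *v, int x, int y, double *c_re, double *c_im)
{
    double re_factor = (v->re_max - v->re_min) / (WIDTH - 1);
    double im_factor = (v->im_max - v->im_min) / (HEIGHT - 1);

    *c_re = v->re_min + x * re_factor;
    *c_im = v->im_max - y * im_factor;
}

/* Shrink (interp < 1) or grow (interp > 1) the visible area while keeping
 * the complex point under the mouse, (m_re, m_im), fixed on screen.
 * This is exactly what interpolate()/apply_zoom() compute. */
void zoom_at(t_view *v, double m_re, double m_im, double interp)
{
    v->re_min = m_re + (v->re_min - m_re) * interp;
    v->re_max = m_re + (v->re_max - m_re) * interp;
    v->im_min = m_im + (v->im_min - m_im) * interp;
    v->im_max = m_im + (v->im_max - m_im) * interp;
}

The important part is that the same pixel-to-complex mapping is used both for rendering and for converting the mouse position, so the point under the cursor stays put as the bounds shrink.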
If I understand you correctly, you want to make the point where the mouse is located the new center of the image, and change the scale of the image by a factor of 1.03. I would try something like this:
Your position() and mouse_move() functions remain the same.
In zoom_control(), just change the way you set the new value of the interpolation: it should not be a fixed constant, but should be based on its current value. Also, pass the new scaling factor to apply_zoom():
void zoom_control(int key, t_fract *fract)
{
    double *interp;
    double zoom_factor = 1.03;

    interp = &fract->mandel->pos->interp;
    if (key == 5) // zoom in
    {
        *interp /= zoom_factor;
        apply_zoom(fract->mandel, fract->mouse->Re, fract->mouse->Im, 1.0 / zoom_factor);
    }
    else if (key == 4) // zoom out
    {
        *interp *= zoom_factor;
        apply_zoom(fract->mandel, fract->mouse->Re, fract->mouse->Im, zoom_factor);
    }
}
Modify the apply_zoom() function:
void apply_zoom(t_mandel *man, double m_re, double m_im, double zoom_factor)
{
    // Calculate the new ranges along the real and imaginary axes.
    // They are equal to the current ranges multiplied by the zoom_factor.
    double re_range = (man->re_max - man->re_min) * zoom_factor;
    double im_range = (man->im_max - man->im_min) * zoom_factor;

    // Set the new min/max values for the real and imaginary axes with the
    // center at the mouse coordinates m_re and m_im.
    man->re_min = m_re - re_range / 2;
    man->re_max = m_re + re_range / 2;
    man->im_min = m_im - im_range / 2;
    man->im_max = m_im + im_range / 2;
}
I have a TopoDS_Face object which comes from a translation of an IGES file. If I parse the IGES file using my own algorithm (written in C), which searches for the faces, then the loop(s) pointed to by each face, and finally the edges in each loop, I can determine whether the face is planar or non-planar (semi-cylindrical, in bends). This is done by checking whether each edge is a line or an arc, based on the form number of the underlying NURBS (entity 126): a line has form 1 and an arc has form 2.
What methods/functions or other mechanisms can be used in Open Cascade to determine whether a TopoDS_Face is planar or semi-cylindrical (a bend)?
You can use BRepAdaptor_Surface class to get the type of TopoDS_Face surface:
BRepAdaptor_Surface surface = BRepAdaptor_Surface(face);
if (surface.GetType() == GeomAbs_Plane)
{
    // Surface is a plane
}
else
{
    // Surface is not a plane
}
Update:
An alternative way to determine whether a surface is planar is to use a curvature value: for a planar surface, the mean curvature should be equal to 0.
BRepAdaptor_Surface surface = BRepAdaptor_Surface(face);
double u = (surface.FirstUParameter() + surface.LastUParameter()) / 2.0;
double v = (surface.FirstVParameter() + surface.LastVParameter()) / 2.0;
BRepLProp_SLProps surfaceProps(surface, u, v, 2, gp::Resolution());
// Compare against a small tolerance rather than exactly 0.0, since the
// curvature is computed in floating point.
if (fabs(surfaceProps.MeanCurvature()) < Precision::Confusion())
{
    // Surface is a plane
}
else
{
    // Surface is not a plane
}
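Since the question also needs to recognise the semi-cylindrical (bend) faces, the same adaptor can be queried for GeomAbs_Cylinder. A small sketch follows; the classifyFace wrapper is just an illustrative name, and note that if the IGES translator delivered the bend as a generic NURBS surface rather than an analytic cylinder, GetType() will report GeomAbs_BSplineSurface instead:

#include <BRepAdaptor_Surface.hxx>
#include <GeomAbs_SurfaceType.hxx>
#include <TopoDS_Face.hxx>

// Classify a face as planar, cylindrical (a bend), or something else.
void classifyFace(const TopoDS_Face& face, bool& isPlanar, bool& isCylindrical)
{
    BRepAdaptor_Surface surface(face);
    GeomAbs_SurfaceType type = surface.GetType();

    isPlanar      = (type == GeomAbs_Plane);
    isCylindrical = (type == GeomAbs_Cylinder);
}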
I want to get an earth texture on a sphere. My sphere is an icosphere built from many triangles (100+), and I found it confusing to set the UV coordinates for the whole sphere. I tried to use glTexGen and the effect is quite close, but my texture is repeated 8 times (see image). I cannot find a way to make it wrap around the whole object just once. Here is my code where the sphere and texture are created.
glEnable(GL_TEXTURE_2D);
glTexGeni(GL_T, GL_TEXTURE_GEN_MODE, GL_OBJECT_LINEAR);
glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_OBJECT_LINEAR);
glBegin(GL_TRIANGLES);
for (int i = 0; i < new_sphere->NumOfTrians; i++)
{
    Triangle *draw_Trian = new_sphere->Trians + i;
    glVertex3f(draw_Trian->pnts[0].coords[0], draw_Trian->pnts[0].coords[1], draw_Trian->pnts[0].coords[2]);
    glVertex3f(draw_Trian->pnts[1].coords[0], draw_Trian->pnts[1].coords[1], draw_Trian->pnts[1].coords[2]);
    glVertex3f(draw_Trian->pnts[2].coords[0], draw_Trian->pnts[2].coords[1], draw_Trian->pnts[2].coords[2]);
}
glDisable(GL_TEXTURE_2D);
free(new_sphere->Trians);
free(new_sphere);
glEnd();
You need to define how your texture is supposed to map to your triangles. This depends on the texture you're using. There are a multitude of ways to map the surface of a sphere with a texture (since no one mapping is free of singularities). It looks like you have a cylindrical projection texture there. So we will emit cylindrical UV coordinates.
I've tried to give you some code here, but it's assuming that
Your mesh is a unit sphere (i.e., centered at 0 and has radius 1)
pnts.coords is an array of floats
You want to use the second coordinate (coord[1]) as the 'up' direction (or the height in a cylindrical mapping)
Your code would look something like this. I've defined a new function for emitting cylindrical UVs, so you can put that wherever you like.
/* Map [(-1, -1, -1), (1, 1, 1)] into [(0, 0), (1, 1)] cylindrically */
inline void uvCylinder(float* coord) {
    float angle = 0.5f * atan2(coord[2], coord[0]) / 3.14159f + 0.5f;
    float height = 0.5f * coord[1] + 0.5f;
    glTexCoord2f(angle, height);
}

glEnable(GL_TEXTURE_2D);
glBegin(GL_TRIANGLES);
for (int i = 0; i < new_sphere->NumOfTrians; i++) {
    Triangle *t = new_sphere->Trians + i;
    uvCylinder(t->pnts[0].coords);
    glVertex3f(t->pnts[0].coords[0], t->pnts[0].coords[1], t->pnts[0].coords[2]);
    uvCylinder(t->pnts[1].coords);
    glVertex3f(t->pnts[1].coords[0], t->pnts[1].coords[1], t->pnts[1].coords[2]);
    uvCylinder(t->pnts[2].coords);
    glVertex3f(t->pnts[2].coords[0], t->pnts[2].coords[1], t->pnts[2].coords[2]);
}
glEnd();
glDisable(GL_TEXTURE_2D);
free(new_sphere->Trians);
free(new_sphere);
Note on Projections
The reason it's confusing to build UV coordinates for the whole sphere is that there isn't one 'correct' way to do it. Mathematically speaking, there's no such thing as a perfect 2D mapping of a sphere, which is why we have so many different types of projections. When you have a 2D image that's a texture for a spherical object, you need to know what type of projection that image was built for, so that you can emit the correct UV coordinates for that texture.
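One caveat worth noting with the per-vertex cylindrical mapping above: triangles that straddle the atan2 wrap-around (where u jumps from ~1 back to ~0) will get the whole texture smeared across them, producing a visible seam. A common fix is to compute the three u values per triangle and shift the low ones past 1.0, relying on the default GL_REPEAT wrap mode. A rough sketch, assuming the same Triangle/pnts/coords layout as above (the uvOf and emitTriangle helpers are illustrative names):

/* Compute the cylindrical UV of one vertex without emitting it. */
void uvOf(const float *coord, float *u, float *v)
{
    *u = 0.5f * atan2(coord[2], coord[0]) / 3.14159f + 0.5f;
    *v = 0.5f * coord[1] + 0.5f;
}

/* Emit one triangle, unwrapping u across the seam if needed. */
void emitTriangle(const Triangle *t)
{
    float u[3], v[3];
    int k;
    for (k = 0; k < 3; ++k)
        uvOf(t->pnts[k].coords, &u[k], &v[k]);
    for (k = 0; k < 3; ++k)
        if (u[k] < 0.25f && (u[(k + 1) % 3] > 0.75f || u[(k + 2) % 3] > 0.75f))
            u[k] += 1.0f; /* move the low-u vertex past 1.0 */
    for (k = 0; k < 3; ++k)
    {
        glTexCoord2f(u[k], v[k]);
        glVertex3f(t->pnts[k].coords[0], t->pnts[k].coords[1], t->pnts[k].coords[2]);
    }
}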
I have to draw a circle on several images. For each image the radius of curvature is different, with a constant center.
The problem is: no matter how big the circle is, it shouldn't cross into the upper half of the image. It's OK if it becomes invisible or if only a part of it is visible in the lower half.
I am using OpenCV 2.4.4 in C.
The values for the circle are found by:
for (angle1 = 0; angle1 < 360; angle1++)
{
    x[angle1] = r * sin(angle1) + axis_x;
    y[angle1] = r * cos(angle1) + axis_y;
}
FYI:
cvCircle(img, center_circle, r, cvScalar(0, 0, 255, 0), 2, 8, 0);
draws the circle over the entire image, which I don't want to happen.
How can I do it? Reminder: no part of the circle should appear in the upper half of the image.
And the code should use OpenCV's C API.
In MATLAB it's pretty easy; I only have to select the pixels and map them onto the image.
I am new to OpenCV, and operations like img->data.i/f/s/db[50] = 50; are giving errors.
A pretty naive approach is to create a copy of the upper half of the image, draw the complete circle, and then copy the upper half back to the original image. This may not be the best approach, but it works. Here is how it can be achieved:
void drawCircleLowerHalf(IplImage* image, CvPoint center, int radius, CvScalar color, int thickness, int line_type, int shift)
{
    CvRect roi = cvRect(0, 0, image->width, image->height / 2);
    IplImage* upperHalf = cvCreateImage(cvSize(image->width, image->height / 2), image->depth, image->nChannels);

    // Save the upper half of the image.
    cvSetImageROI(image, roi);
    cvCopy(image, upperHalf, NULL);
    cvResetImageROI(image);

    // Draw the full circle on the whole image.
    cvCircle(image, center, radius, color, thickness, line_type, shift);

    // Restore the saved upper half, erasing any part of the circle there.
    cvSetImageROI(image, roi);
    cvCopy(upperHalf, image, NULL);
    cvResetImageROI(image);

    cvReleaseImage(&upperHalf);
}
Just call this function with the same arguments as cvCircle.
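Using the variables from the question, the call would then look something like this (same arguments as the cvCircle call above):

drawCircleLowerHalf(img, center_circle, r, cvScalar(0, 0, 255, 0), 2, 8, 0);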
I am trying to model the Lorenz attractor in 3D space using OpenGL. I have written the following code in my display function:
void display()
{
    // Clear the image
    glClear(GL_COLOR_BUFFER_BIT);
    // Reset previous transforms
    glLoadIdentity();
    // Set view angle
    glRotated(ph, 1, 0, 0);
    glRotated(th, 0, 1, 0);

    glColor3f(1, 1, 0);
    glPointSize(1);

    float x = 0.1, y = 0.1, z = 0.1;
    glBegin(GL_POINTS);
    int i;
    for (i = 0; i < initialIterations; i++) {
        // compute a new point using the strange attractor equations
        float xnew = sigma * (y - x);
        float ynew = x * (r - z) - y;
        float znew = x * y - b * z;
        // save the new point
        x = x + xnew * dt;
        y = y + ynew * dt;
        z = z + znew * dt;
        glVertex4f(x, y, z, i);
    }
    glEnd();

    // Draw axes in white
    glColor3f(1, 1, 1);
    glBegin(GL_LINES);
    glVertex3d(0, 0, 0);
    glVertex3d(1, 0, 0);
    glVertex3d(0, 0, 0);
    glVertex3d(0, 1, 0);
    glVertex3d(0, 0, 0);
    glVertex3d(0, 0, 1);
    glEnd();

    // Label axes
    glRasterPos3d(1, 0, 0);
    Print("X");
    glRasterPos3d(0, 1, 0);
    Print("Y");
    glRasterPos3d(0, 0, 1);
    Print("Z");

    // Display parameters
    glWindowPos2i(5, 5);
    Print("View Angle=%d,%d %s", th, ph, text[mode]);

    // Flush and swap
    glFlush();
    glutSwapBuffers();
}
However, I can't get the right attractor. I believe my equations for x, y, z are correct; I am just not sure how to display it the right way to get the right attractor. Thanks for any help. Below is what my program is currently putting out:
Hello! Okay, so I had this problem, and there are a few things you want to do.
First off, when you go to draw the point with glVertex4f(), you want to either change it to glVertex3f or change your w value to 1; with glVertex3f, w is set to 1 by default. The w value changes the scaling of the points, so you will end up with some points crazily far out once i reaches 50000 or so.
Second, after fixing that, you're going to find that the values are way out of your visual range, so you need to scale them down. I would do this at the time you draw the points, so in your case I would use glVertex3f(x*.05, y*.05, z*.05). If .05 is too large or too small, adjust it to fit your needs.
Finally, make sure that your dt value is .001 and your starting point is around 1 for x, y, and z.
Then, ideally, you want to put all these points in an array and read that array to draw your points, instead of doing the calculations each time you call display. So do your calculations elsewhere and just send the points to display. Hope this helped.
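Putting those points together, a sketch of the change might look like this. NUM_POINTS, lorenz_points, and the two helper functions are names made up for illustration; sigma, r, b, dt, and the 0.05 scale are the values discussed above:

#include <GL/glut.h>

#define NUM_POINTS 50000

static float lorenz_points[NUM_POINTS][3]; /* filled once, not every frame */

/* Integrate the Lorenz system once at startup instead of in display(). */
void compute_lorenz(double sigma, double r, double b, double dt)
{
    double x = 1.0, y = 1.0, z = 1.0; /* start near 1, dt around .001 */
    for (int i = 0; i < NUM_POINTS; i++)
    {
        double dx = sigma * (y - x);
        double dy = x * (r - z) - y;
        double dz = x * y - b * z;
        x += dx * dt;
        y += dy * dt;
        z += dz * dt;
        lorenz_points[i][0] = (float)x;
        lorenz_points[i][1] = (float)y;
        lorenz_points[i][2] = (float)z;
    }
}

/* Called from display(): draw the precomputed points, scaled into view,
 * using glVertex3f so that w stays at its default of 1. */
void draw_lorenz(void)
{
    const float s = 0.05f; /* adjust to fit your viewing volume */
    glColor3f(1, 1, 0);
    glPointSize(1);
    glBegin(GL_POINTS);
    for (int i = 0; i < NUM_POINTS; i++)
        glVertex3f(lorenz_points[i][0] * s,
                   lorenz_points[i][1] * s,
                   lorenz_points[i][2] * s);
    glEnd();
}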