I'm writing a C program to render the Mandelbrot set, and I'm currently stuck trying to figure out how to zoom in properly.
I want the zoom to follow the mouse pointer on the screen, so that the fractal zooms in on the cursor position.
I have a window defined by:
#define WIDTH 800
#define HEIGHT 600
My re_max, re_min, im_max, im_min values are defined and initialized as follows:
man->re_max = 2.0;
man->re_min = -2.0;
man->im_max = 2.0;
man->im_min = -2.0;
The interpolation value (more on this later) is defined and initialized as follows:
pos->interp = 1.0;
To map the pixel coordinates to the center of the screen, I'm using the position function:
void position(int x, int y, t_mandel *man)
{
    double *s_x;
    double *s_y;

    s_x = &man->pos->shift_x;
    s_y = &man->pos->shift_y;
    man->c_re = (x / (WIDTH / (man->re_max - man->re_min)) + man->re_min) + *s_x;
    man->c_im = (y / (HEIGHT / (man->im_max - man->im_min)) + man->im_min) + *s_y;
    man->c_im *= 0.8;
}
To zoom in, I first get the coordinates of the mouse pointer and map them to the visible area given by the rectangle defined by (re_max, re_min, im_max, im_min), using this function, where x and y are the pointer's on-screen coordinates:
int mouse_move(int x, int y, void *p)
{
    t_fract *fract;
    t_mandel *man;

    fract = (t_fract *)p;
    man = fract->mandel;
    fract->mouse->Re = x / (WIDTH / (man->re_max - man->re_min)) + man->re_min;
    fract->mouse->Im = y / (HEIGHT / (man->im_max - man->im_min)) + man->im_min;
    return (0);
}
When a mouse wheel scroll is registered, the actual zooming is performed by this function:
void zoom_control(int key, t_fract *fract)
{
    double *interp;

    interp = &fract->mandel->pos->interp;
    if (key == 5) // zoom in
    {
        *interp = 1.0 / 1.03;
        apply_zoom(fract->mandel, fract->mouse->Re, fract->mouse->Im, *interp);
    }
    else if (key == 4) // zoom out
    {
        *interp = 1.0 * 1.03;
        apply_zoom(fract->mandel, fract->mouse->Re, fract->mouse->Im, *interp);
    }
}
Which calls this:
void apply_zoom(t_mandel *man, double m_re, double m_im, double interp)
{
    man->re_min = interpolate(m_re, man->re_min, interp);
    man->im_min = interpolate(m_im, man->im_min, interp);
    man->re_max = interpolate(m_re, man->re_max, interp);
    man->im_max = interpolate(m_im, man->im_max, interp);
}
I have a simple interpolate function to redefine the area bounding rectangle:
double interpolate(double start, double end, double interp)
{
    return (start + ((end - start) * interp));
}
So the problem is:
My code renders the initial fractal correctly (image: the standard Mandelbrot set view).
But when I try to zoom in with the mouse as described, instead of going nicely "in", the image distorts: it sort of collapses onto itself instead of actually diving into the fractal.
I would really appreciate help with this one as I've been stuck on it for a while now.
If you could also explain the actual math behind your solution, I would be overjoyed!
Thank you!
After quite a bit of headache and a lot of paper wasted on recalculating interpolation methods, I realized that the way I had mapped my complex numbers to the screen was incorrect to begin with. Reworking my mapping method solved the problem, so I'll share what I did.
-------------------------------OLD WAY--------------------------------------
I initialized my re_max, re_min, im_max, im_min values, which define the visible area, in the following way:
re_max = 2.0;
re_min = -2.0;
im_max = 2.0;
im_min = -2.0;
Then, I used this method to convert my on-screen coordinates to the complex numbers used to calculate the fractal (note that the coordinates used to map the mouse position for the zoom interpolation and the coordinates used to calculate the fractal itself are produced by the same method):
Re = x / (WIDTH / (re_max - re_min)) + re_min;
Im = y / (HEIGHT / (im_max - im_min)) + im_min;
However, this way I didn't take the screen aspect ratio into account, and I neglected the fact (due to a lack of knowledge) that the on-screen y coordinate is inverted (at least in my program): the negative direction is up, the positive direction is down.
This way, when I tried to zoom in with my interpolation, the image naturally distorted.
------------------------------CORRECT WAY-----------------------------------
When defining the bounding rectangle of the set, the maximum imaginary part (im_max) should be calculated from the screen aspect ratio, to avoid image distortion when the display window isn't a square:
re_max = 2.0;
re_min = -2.0;
im_min = -2.0;
im_max = im_min + (re_max - re_min) * HEIGHT / WIDTH;
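With the window from my question (WIDTH 800, HEIGHT 600) this gives im_max = -2.0 + 4.0 * 600 / 800 = 1.0, so the vertical range (3.0) and the horizontal range (4.0) are in the same 3:4 proportion as the screen.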
To map the on-screen coordinates to the complex numbers, I first found the "coordinate-to-number" ratio, which is equal to *rectangle side length / screen side length*:
re_factor = (re_max - re_min) / (WIDTH - 1);
im_factor = (im_max - im_min) / (HEIGHT - 1);
Then, I mapped my pixel coordinates to the real and imaginary parts of the complex number used in the calculations like so:
c_re = re_min + x * re_factor;
c_im = im_max - y * im_factor;
After implementing those changes, I was finally able to smoothly zoom into the mouse position without any distortion or image "jumps".
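For anyone who wants it in one place, here is a minimal self-contained sketch of the corrected mapping together with a zoom step anchored at the mouse. The names (t_bounds, pixel_to_complex, zoom_at) are placeholders for this sketch, not the types and functions from my project:

#define WIDTH  800
#define HEIGHT 600

typedef struct { double re_min, re_max, im_min, im_max; } t_bounds;

void pixel_to_complex(const t_bounds *b, int x, int y, double *re, double *im)
{
    double re_factor = (b->re_max - b->re_min) / (WIDTH - 1);
    double im_factor = (b->im_max - b->im_min) / (HEIGHT - 1);

    *re = b->re_min + x * re_factor; /* real axis grows to the right */
    *im = b->im_max - y * im_factor; /* imaginary axis grows upward  */
}

/* Pull every boundary toward the mouse point by the same factor, so the
   point under the cursor stays fixed while the visible area shrinks. */
void zoom_at(t_bounds *b, double m_re, double m_im, double factor)
{
    b->re_min = m_re + (b->re_min - m_re) * factor;
    b->re_max = m_re + (b->re_max - m_re) * factor;
    b->im_min = m_im + (b->im_min - m_im) * factor;
    b->im_max = m_im + (b->im_max - m_im) * factor;
}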
If I understand you correctly, you want to make the point where the mouse is located the new center of the image, and change the scale of the image by a factor of 1.03. I would try something like this:
Your position() and mouse_move() functions remain the same.
In zoom_control(), just change how you set the new value of the interpolation: it should not be a fixed constant, but should be based on its current value. Also, pass the new scaling factor to apply_zoom():
void zoom_control(int key, t_fract *fract)
{
    double *interp;
    double zoom_factor = 1.03;

    interp = &fract->mandel->pos->interp;
    if (key == 5) // zoom in
    {
        *interp /= zoom_factor;
        apply_zoom(fract->mandel, fract->mouse->Re, fract->mouse->Im, 1.0 / zoom_factor);
    }
    else if (key == 4) // zoom out
    {
        *interp *= zoom_factor;
        apply_zoom(fract->mandel, fract->mouse->Re, fract->mouse->Im, zoom_factor);
    }
}
Modify the apply_zoom() function:
void apply_zoom(t_mandel *man, double m_re, double m_im, double zoom_factor)
{
    // Calculate the new ranges along the real and imaginary axes.
    // They are equal to the current ranges multiplied by the zoom_factor.
    double re_range = (man->re_max - man->re_min) * zoom_factor;
    double im_range = (man->im_max - man->im_min) * zoom_factor;

    // Set the new min/max values for the real and imaginary axes with the
    // center at the mouse coordinates m_re and m_im.
    man->re_min = m_re - re_range / 2;
    man->re_max = m_re + re_range / 2;
    man->im_min = m_im - im_range / 2;
    man->im_max = m_im + im_range / 2;
}
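To see what this does numerically: starting from re_min = -2 and re_max = 2 with the mouse at m_re = 0.5 and zoom_factor = 1 / 1.03, re_range becomes 4 / 1.03 ≈ 3.883, so the new real bounds are roughly [-1.442, 2.442]. In a single step the view both shrinks and recenters on the mouse point.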
My problem effectively boils down to accurate mouse movement detection.
I need to create my own implementation of an InkCanvas and have succeeded for the most part, except for drawing strokes accurately.
void OnMouseMove(object sender, MouseEventArgs e)
{
    var position = e.GetPosition(this);
    if (!Rect.Contains(position))
        return;

    var ratio = new Point(Width / PixelDisplay.Size.X, Height / PixelDisplay.Size.Y);
    var intPosition = new IntVector(Math2.FloorToInt(position.X / ratio.X), Math2.FloorToInt(position.Y / ratio.Y));
    DrawBrush.Draw(intPosition, PixelDisplay);
    UpdateStroke(intPosition); // calls CaptureMouse
}
This works. The Bitmap (PixelDisplay) is updated and all is well. However, any kind of quick mouse movement causes large skips in the drawing. I've narrowed down the problem to e.GetPosition(this), which blocks the event long enough to be inaccurate.
There's this question, which is long beyond revival, and its answers are unclear or simply make no noticeable difference.
After some more testing, the stated solution and similar ideas fail specifically because of e.GetPosition.
Having looked through the source, I know InkCanvas uses a similar method: detect the device, and if it's a mouse, get its position and capture it. I see no reason for the same process not to work identically here.
I ended up being able to partially solve this.
var position = e.GetPosition(this);
if (!Rect.Contains(position))
    return;
if (DrawBrush == null)
    return;

// Calculate pixel coordinates based on the control size
var ratio = new Point(Width / PixelDisplay.Size.X, Height / PixelDisplay.Size.Y);
var intPosition = new IntVector(Math2.FloorToInt(position.X / ratio.X), Math2.FloorToInt(position.Y / ratio.Y));

// Uses System.Linq to grab the last stroke point, if it exists
var lastPoint = CurrentStroke?.Points.LastOrDefault(new IntVector(-1, -1));

PixelDisplay.Lock(); // My special locking mechanism, effectively wraps Bitmap.Lock

if (lastPoint != new IntVector(-1, -1)) // Determine if we're in the middle of a stroke
{
    // For the interpolation, calculate 1 / distance (magnitude) between the two points.
    // Magnitude formula: Math.Sqrt(Math.Pow(X, 2) + Math.Pow(Y, 2));
    var alphaAdd = 1d / new IntVector(intPosition.X - lastPoint.Value.X, intPosition.Y - lastPoint.Value.Y).Magnitude;
    var alpha = 0d;
    var xDiff = intPosition.X - lastPoint.Value.X;
    var yDiff = intPosition.Y - lastPoint.Value.Y;

    while (alpha < 1d)
    {
        alpha += alphaAdd;
        // Inch our way towards the current intPosition
        var adjusted = new IntVector(
            Math2.FloorToInt((position.X + (xDiff * alpha)) / ratio.X),
            Math2.FloorToInt((position.Y + (yDiff * alpha)) / ratio.Y));
        DrawBrush.Draw(adjusted, PixelDisplay); // Draw to the bitmap
        UpdateStroke(intPosition);
    }
}

DrawBrush.Draw(intPosition, PixelDisplay); // Draw the original point
UpdateStroke(intPosition);
PixelDisplay.Unlock();
This implementation interpolates between the last point and the current one to fill in any gaps. It's not perfect, for example when using a very small brush size, but it is a solution nonetheless.
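The same gap-filling idea, reduced to a minimal language-neutral sketch (plain C with my own names, not the project code): step a parameter from 0 to 1 in increments of 1 / distance and plot each intermediate point.

#include <math.h>

/* Plot every intermediate point on the segment from (x0, y0) to (x1, y1).
   plot() is a placeholder for the brush-draw call. */
void draw_interpolated(int x0, int y0, int x1, int y1,
                       void (*plot)(int x, int y))
{
    double dx = x1 - x0;
    double dy = y1 - y0;
    double dist = sqrt(dx * dx + dy * dy);

    if (dist == 0.0)
    {
        plot(x0, y0);
        return;
    }
    for (double t = 0.0; t < 1.0; t += 1.0 / dist)
        plot((int)lround(x0 + dx * t), (int)lround(y0 + dy * t));
    plot(x1, y1); /* make sure the endpoint itself is drawn */
}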
Some remarks
IntVector is a lazily implemented Vector2 of my own, just using integers instead.
Math2 is a helper class; FloorToInt is short for (int)MathF.Round(...).
... I can't speak English well.
I have a problem.
I want to resize after rotating, using 8 ResizeThumbs (like PowerPoint).
But when I apply a RotateTransform (origin = 0.5), the resize method works strangely.
The item's ViewModel has properties (X, Y, Angle, Width, Height) like this:
private double _X;
public double X
{
    get
    {
        return _X;
    }
    set
    {
        value = Math.Round(value);
        if (_X == value) { return; }
        _X = value;
        RaisePropertyChanged("X");
    }
}
... and likewise for Y, Angle, Height, and Width.
And I tried binding like this:
Angle is bound to a RotateTransform.
X and Y are bound to Canvas.Left/Top or to a TranslateTransform.
But both cases work strangely.
So when I calculate X and Y after the rotation, I think I have to take the angle into account:
public void ResizeTopCenter(DragDeltaEventArgs e)
{
    Matrix m = Matrix.Identity;
    m.Rotate(Angle * Math.PI / 180);
    Point rotated = m.Transform(new Point(e.HorizontalChange, e.VerticalChange));
    // slightly different for every direction
    ViewModel.X += rotated.X;
    ViewModel.Y += rotated.Y;
    ViewModel.Height += rotated.Y;
}
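For reference, the math I'm trying to apply is the standard 2D rotation, sketched here on its own (plain C, just to show the formula; note that cos/sin expect radians, so degrees must be converted first):

#include <math.h>

/* Rotate the drag delta (dx, dy) by angle_deg degrees:
   x' = x*cos(a) - y*sin(a), y' = x*sin(a) + y*cos(a) */
void rotate_delta(double dx, double dy, double angle_deg,
                  double *out_x, double *out_y)
{
    double a = angle_deg * 3.14159265358979323846 / 180.0; /* to radians */

    *out_x = dx * cos(a) - dy * sin(a);
    *out_y = dx * sin(a) + dy * cos(a);
}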
Another way I tried:
Canvas.SetTop(this.designerItem, Canvas.GetTop(this.designerItem) + (this.transformOrigin.Y * deltaVertical * (1 - Math.Cos(-this.angle))));
Canvas.SetLeft(this.designerItem, Canvas.GetLeft(this.designerItem) - deltaVertical * this.transformOrigin.Y * Math.Sin(-this.angle));
... still strange, as expected.
I can't post the full source because I'm away from work.
I've been struggling with this for eight hours, and it continues tomorrow.
Please help me.
Thank you.
And I'm sorry for the strange English.
I'm a little stuck. I'm trying to do a basic polar-to-rectangular conversion that matches Photoshop's, but I'm not getting the same results.
Converting from rectangular to polar matches Photoshop's, but going from polar back to rectangular does not.
You can see in this image the differences between Photoshop's and mine:
float a, b, ang, dist;
int px, py;
const PI = 3.141592653589793;

// Convert from cartesian to polar
for (y = y_start; y < y_end; ++y)
{
    for (x = x_start; x < x_end; ++x)
    {
        a = (float)(x - X/2);
        b = (float)(y - Y/2);
        dist = (sqr(a*a + b*b) * 2.0);
        ang = atan2(b, -a) * (58);
        ang = fmod(ang + 450.0, 360.0);
        px = (int)(ang * X / 360.0);
        py = (int)(dist);
        pset(x, y, 0, src(px, py, 0));
        pset(x, y, 1, src(px, py, 1));
        pset(x, y, 2, src(px, py, 2));
    }
}
// Convert back to cartesian
for (y = y_start; y < y_end; ++y)
{
    for (x = x_start; x < x_end; ++x)
    {
        ang = ((float)x / X) * PI * 2.0;
        dist = (float)y * 0.5;
        px = (int)(cos(ang) * dist) + X/2;
        py = (int)(sin(ang) * dist) + Y/2;
        pset(x, y, 0, pget(px, py, 0));
        pset(x, y, 1, pget(px, py, 1));
        pset(x, y, 2, pget(px, py, 2));
    }
}
This is my code; I'm sure I've messed something up in the polar-to-cartesian step. The language is based on C.
What am I doing wrong? Any suggestions?
There are two issues with your polar-to-cartesian transformation:
The axes of the coordinate system you use to define angles point right (x) and down (y), while you used a coordinate system with an upward-pointing (x) and left-pointing (y) axis for your cartesian-to-polar transformation. The code to convert the angle back to cartesian coordinates should be (I've added a bit of rounding):
px = round(-sin(ang)*dist + X/2.)
py = round(-cos(ang)*dist + Y/2.)
With that code you move from red to green to blue instead of from gray to blue to green in the final picture when increasing the x coordinate there.
Assuming that pget and pset operate on the same bitmap, you're overwriting your source image. The loop takes you outward along concentric circles around the center of the source image while filling the target line by line, top to bottom. At some point the circles and the lines start to overlap, and you begin reading data you modified earlier (this happens at the apex of the parabola-like shape). It gets even more convoluted because at some point you start reading the transform of that modified data, so it is effectively transformed twice (I suspect that causes the irregular triangular region on the right).
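A minimal sketch of both fixes combined (my own names; src_pixel and dst_pixel are placeholders for whatever pixel accessors your dialect provides): read from an untouched source buffer, write to a separate destination buffer, and use the corrected axis directions.

#include <math.h>

#define X 512 /* image width, placeholder  */
#define Y 512 /* image height, placeholder */

extern float src_pixel(int x, int y, int channel);          /* untouched input */
extern void  dst_pixel(int x, int y, int channel, float v); /* separate output */

void polar_to_cartesian(void)
{
    for (int y = 0; y < Y; ++y)
        for (int x = 0; x < X; ++x)
        {
            float ang  = ((float)x / X) * 2.0f * 3.14159265f;
            float dist = (float)y * 0.5f;
            /* axes flipped to match the cartesian-to-polar pass */
            int px = (int)lroundf(-sinf(ang) * dist + X / 2.0f);
            int py = (int)lroundf(-cosf(ang) * dist + Y / 2.0f);

            if (px < 0 || px >= X || py < 0 || py >= Y)
                continue;
            for (int c = 0; c < 3; ++c)
                dst_pixel(x, y, c, src_pixel(px, py, c));
        }
}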
I'm drawing some Bezier curves in WPF, and for the most part it's working, but I'm getting some faint separations between each segment. As you can see, they even appear in straight sections, so I don't believe the issue is due to an insufficient number of segments. (This image is at 4x magnification.)
I'm using a collection of System.Windows.Shapes.Line objects to paint them. They are instantiated in code like so:
Shapes.Line Line = new Shapes.Line();
Line.Stroke = Brush;
Line.HorizontalAlignment = Windows.HorizontalAlignment.Left;
Line.VerticalAlignment = Windows.VerticalAlignment.Center;
Line.StrokeThickness = 10;
My theory is that this separation is due to the fact that the point where one line ends is the same point where the next begins, but I'm unsure of the best way to fix it. I'm fairly new at this, so I don't want to go hacking away before asking if anyone has any tried and true solutions to make these faint separations disappear.
EDIT:
Here is the code I'm using to generate the segments. The ILine interface is something I created, but its point values are simply translated to the corresponding System.Windows.Shapes.Line values later in the program.
public static void FormBezier(List<ILine> Lines, Point[] pt)
{
    if (Lines.Count == 0) return;

    double t, dt, x0, y0, x1, y1;
    t = 0.0;
    dt = 1.0 / Lines.Count;
    x1 = X(t, new double[] { pt[0].X, pt[1].X, pt[2].X, pt[3].X });
    y1 = X(t, new double[] { pt[0].Y, pt[1].Y, pt[2].Y, pt[3].Y });
    t += dt;
    for (int index = 0; index < Lines.Count - 1; index++)
    {
        x0 = x1;
        y0 = y1;
        x1 = X(t, new double[] { pt[0].X, pt[1].X, pt[2].X, pt[3].X });
        y1 = X(t, new double[] { pt[0].Y, pt[1].Y, pt[2].Y, pt[3].Y });
        Lines[index].Start.X = x0;
        Lines[index].End.X = x1;
        Lines[index].Start.Y = y0;
        Lines[index].End.Y = y1;
        t += dt;
    }
    t = 1.0;
    x0 = x1;
    y0 = y1;
    x1 = X(t, new double[] { pt[0].X, pt[1].X, pt[2].X, pt[3].X });
    y1 = X(t, new double[] { pt[0].Y, pt[1].Y, pt[2].Y, pt[3].Y });
    Lines[Lines.Count - 1].Start.X = x0;
    Lines[Lines.Count - 1].End.X = x1;
    Lines[Lines.Count - 1].Start.Y = y0;
    Lines[Lines.Count - 1].End.Y = y1;
}
public static double X(double t, double[] x)
{
    return
        x[0] * Math.Pow((1 - t), 3) +
        x[1] * 3 * t * Math.Pow((1 - t), 2) +
        x[2] * 3 * Math.Pow(t, 2) * (1 - t) +
        x[3] * Math.Pow(t, 3);
}
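For reference, X() evaluates one coordinate of the cubic Bezier curve in its Bernstein form:

B(t) = (1 - t)^3 * x0 + 3t(1 - t)^2 * x1 + 3t^2(1 - t) * x2 + t^3 * x3, with t running from 0 to 1.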
At a wild guess, it's probably a rounding error. The units used in WPF aren't pixels; they are resolution-independent units. When WPF actually draws something, it has to convert those units to real pixels on whatever screen it's drawing to. If the conversion ends up halfway between real pixels, it'll shade the neighboring pixels to approximate half a pixel in each real pixel. Hence you sometimes get gray pixels around a supposedly black line (anti-aliasing).
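For example, a horizontal black line whose edge lands at device coordinate 10.5 covers half of pixel row 10 and half of row 11, so both rows are painted roughly 50% gray instead of one row being solid black.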
The property SnapsToDevicePixels might help you.
This is clearly a fault in the drawing algorithm. I'm not the best at WPF, but you may want to have a look at this blog post.
I have a canvas for diagramming, and want to join nodes in the diagram by directed lines (arrow ends).
I tried the anchor approach, where lines only attach at specific points on the nodes, but that did not work for me; it looked like crap.
I simply want a line from the centre of each object to the other, stopping at the nodes' edges so that the arrow end shows properly. But finding the edge of a canvas element to test intersections against has proven difficult.
Any ideas?
I got a method working using the bounding box of the element. It is not perfect, since my elements are not perfectly rectangular, but it looks OK.
Basically I find the bounding box of the element in Canvas coordinates by:
private static Rect GetBounds(FrameworkElement element, UIElement visual)
{
    return new Rect(
        element.TranslatePoint(new Point(0, 0), visual),
        element.TranslatePoint(new Point(element.ActualWidth, element.ActualHeight), visual));
}
Then I find the intersection of the centre-to-centre line against each of the four sides of the bounding box, and use that intersection point to connect the two elements by a Line shape.
I found the intersection code at Third Party Ninjas:
http://thirdpartyninjas.com/blog/2008/10/07/line-segment-intersection/
private void ProcessIntersection()
{
    float ua = (point4.X - point3.X) * (point1.Y - point3.Y) - (point4.Y - point3.Y) * (point1.X - point3.X);
    float ub = (point2.X - point1.X) * (point1.Y - point3.Y) - (point2.Y - point1.Y) * (point1.X - point3.X);
    float denominator = (point4.Y - point3.Y) * (point2.X - point1.X) - (point4.X - point3.X) * (point2.Y - point1.Y);

    intersection = coincident = false;

    if (Math.Abs(denominator) <= 0.00001f)
    {
        if (Math.Abs(ua) <= 0.00001f && Math.Abs(ub) <= 0.00001f)
        {
            intersection = coincident = true;
            intersectionPoint = (point1 + point2) / 2;
        }
    }
    else
    {
        ua /= denominator;
        ub /= denominator;

        if (ua >= 0 && ua <= 1 && ub >= 0 && ub <= 1)
        {
            intersection = true;
            intersectionPoint.X = point1.X + ua * (point2.X - point1.X);
            intersectionPoint.Y = point1.Y + ua * (point2.Y - point1.Y);
        }
    }
}
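To use it for this problem, I run the test once per side of the bounding box: point1/point2 are the endpoints of the centre-to-centre segment, point3/point4 are the endpoints of one rectangle side, and the side that reports intersection = true yields the point where the line should stop.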
And voilà! The lines are now drawn as if they go from the centre of each node to the other, but they stop approximately at the node's edge, so the arrow end is visible.
An improvement to this method would be to test against the actual edge of the node itself, for example for elliptical nodes, but I have yet to find a WPF method that provides a Geometry or Path I can test against.