Pseudo 3D walls (top-down raycasting, sort of) - C

I'm not posting much code, because what I need is the logic, math, and algorithms. Here's the situation:
I'm trying to achieve a 3D-looking visual for a top-down tile map using layers and parallax scrolling. The problem is that at the moment I simply set a different "speed" for each layer, which only works for some very specific camera positions; it also gives the blocks virtually infinite height, since they keep "growing" until they are out of the camera's FOV.
Is there a better way (there should be) to achieve the effect? Oh, and I'm using C with Allegro 5.
I thought about limiting each layer's offset, but I have no idea how.
My current method:
That's my current code for the layer "speed" (it repeats for up, down, left and right, changing coordinates):
if (key[ALLEGRO_KEY_UP]) {
    camera_y[0] -= 1;
    camera_y[1] -= 2;
    camera_y[2] -= 3;
}
Then I run a loop to draw the map with the tiles relative to the current layer's offset.
By the way, that's the desired effect (example with 3 layers):

For parallax scrolling, layers that scroll faster must be correspondingly larger:
You can use unscaled tiles stacked on top of each other, offset by a fixed fraction of the distance from the center of the tile to the center of the viewport,
but the tops will not be continuous (unless the bottoms overlap). If all layer tiles are hand-drawn or rendered images, this is not an issue.
If the walls are box-shaped, and you have images of the top and each of the four sides, you can draw them in almost 3D,
where at most two sides of each box wall are drawn, skewed.
In all cases:
If the center of the viewport is at world coordinates (xc, yc), point (x, y, z) maps to coordinates (x', y') relative to the center of the viewport:
x' = (x - xc) × (z + z0) / z0
y' = (y - yc) × (z + z0) / z0
where z0 is a constant that determines the "size" of the parallax or depth effect.
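A minimal C sketch of that mapping (the function name and parameter names are illustrative, not from the question):
/* Project world point (x, y) at height z to screen coordinates relative
   to the viewport center (xc, yc). z0 sets the strength of the effect. */
void project_point(float x, float y, float z,
                   float xc, float yc, float z0,
                   float *xs, float *ys)
{
    float scale = (z + z0) / z0;   /* higher points shift further */
    *xs = (x - xc) * scale;
    *ys = (y - yc) * scale;
}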

I think you're on the right lines, but the "infinite height" issue can be solved by simply giving the camera an "altitude" property and adjusting the "speed" of each layer by calculating:
layer.speed = (layer.altitude / camera.altitude) * ZOOM_FACTOR; //gives a float value.
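For example (a sketch; the layer array, NUM_LAYERS, and camera.x/camera.y fields are assumptions, not from the post), you would recompute each layer's offset from the camera position every frame rather than accumulating per-key speeds, which works for any camera position:
for (int i = 0; i < NUM_LAYERS; i++) {
    float speed = (layer[i].altitude / camera.altitude) * ZOOM_FACTOR;
    layer_offset_x[i] = camera.x * speed;  /* offsets derived from position, */
    layer_offset_y[i] = camera.y * speed;  /* not accumulated key presses    */
}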
Can't really suggest anything more until you show us some of your math code.

Related

Most efficient way to determine circle coordinates?

I am making a function for drawing a circle in 2d space.
For this, I have identified 2 approaches:
1. go through all the possible pixels and run each through a formula that returns whether the pixel coordinates are inside the circle, outside it, or (bonus) intersecting it
2. get all the circle pixels (basically draw the circle)
I tried to look at some math sources, but I ran into some problems:
in the second approach, the resolution at which I increment the angle matters: if the step (or the radius) is too small, there will be unnecessary duplication; on the other hand, if the step (or the radius) is too large, there will be gaps.
The formula I was using is:
#include <math.h>

struct vec2 { int x; int y; };

/* Writes the point on the circle of radius r around (x, y) at `angle`
   radians into *coordinates. */
void get_circle(int x, int y, int r, double angle, struct vec2 *coordinates) {
    coordinates->x = x + (int)lround(r * cos(angle));
    coordinates->y = y + (int)lround(r * sin(angle));
}
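Calling it in a loop looks something like this (a sketch; the 1/r step, which advances roughly one pixel of arc per iteration, is one assumed way to handle the resolution problem described above):
struct vec2 p;
double step = 1.0 / r;                 /* ~one pixel of arc per step */
for (double a = 0.0; a < 2.0 * M_PI; a += step) {
    get_circle(x, y, r, a, &p);
    /* set pixel (p.x, p.y) here */
}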
This is obviously a bit much to run a lot of times.
I also want to make some kind of primitive anti-aliasing, so if I can get a value saying a pixel only half-intersects the circle's line, it would be drawn as a half-pixel.
My final goal is to draw a nice circle with a line that can be thick. The thickness can be achieved with the area approach, where I fill all pixels in the circle's area and then remove the pixels in the inner circle; or it can be several iterations of the circle. I didn't write the array part of the computation, but yes, I would like each pixel identified. If we take a pixel as a rectangle, then I would like no pixel to be drawn if the theoretical circle line covers less than 33% of its surface, a half-pixel for 33-66%, and a full pixel for more than 66%.
Please advise. I need some approach that will be computationally efficient.
First, "most efficient" depends on quite a few things. For most modern OpenGL systems you can usually get away with just computing points around the circumference using sine and cosine (and an appropriate aspect scale) with the native floating-point type, then plotting the points using any decent polyline algorithm.
Once you have things working, profile.
If profiling shows your algorithm to be holding things up (and compared to other normal and common computations, it shouldn't be), only then should you spend time and effort on trickier (read: more complicated) stuff, like the Midpoint Circle Algorithm to generate points to send to your polyline.
Also, don't forget to memoize into a sprite or texture or pixmap or whatever is appropriate for your hardware/software IFF profiling shows a worthwhile improvement.
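For reference, a minimal integer-only sketch of the Midpoint Circle Algorithm (plot_pixel is a placeholder for whatever pixel routine you use):
/* Midpoint/Bresenham circle: walks one octant and mirrors each point
   into the other seven. Integer arithmetic only. */
void midpoint_circle(int cx, int cy, int r)
{
    int x = r, y = 0;
    int d = 1 - r;                       /* decision variable */
    while (y <= x) {
        plot_pixel(cx + x, cy + y); plot_pixel(cx - x, cy + y);
        plot_pixel(cx + x, cy - y); plot_pixel(cx - x, cy - y);
        plot_pixel(cx + y, cy + x); plot_pixel(cx - y, cy + x);
        plot_pixel(cx + y, cy - x); plot_pixel(cx - y, cy - x);
        y++;
        if (d < 0) d += 2 * y + 1;
        else { x--; d += 2 * (y - x) + 1; }
    }
}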

Smooth Zoom with mouse in Mandelbrot set (C)

I've been working on a C Mandelbrot set program for the past few days and I managed to make it work fine. However, my end goal is to be able to smoothly zoom into the set with my mouse, and that's something I haven't been able to do yet, so I might need a bit of help!
Here's part of my code (well, the full mandelbrot function):
//removed to free space
Here's a picture of the output:
(Sorry, it's not very pretty; colors were not my priority, but I'll be sure to work on them as soon as I figure out the zoom!)
Mandelbrot
What I want to be able to do :
left click -> center of the image becomes mouse_x and mouse_y. Then, it starts zooming in as long as left click is held
right click -> [...] it starts zooming out as long as right click is held
move mouse -> if currently zooming in/out, the center of the image moves to the mouse's coordinates with it. Else nothing happens.
(already have a function that gets mouse's position and button being pressed)
Thanks a lot for your help !
The visible area is a rectangle defined by (Re.min, Im.min) and (Re.max, Im.max). When you click on a particular point, you can map the mouse position to a point (mouseRe, mouseIm) by using the same mapping as you use when rendering:
double mouseRe = (double)mouse_x / (WIN_L / (e->Re.max - e->Re.min)) + e->Re.min;
double mouseIm = (double)mouse_y / (WIN_H / (e->Im.max - e->Im.min)) + e->Im.min;
To zoom in, imagine drawing a line from the (mouseRe, mouseIm) zooming centerpoint to each of the corners of the visible area, forming a lopsided X. Based on the zoom amount, find 4 new points a certain fraction of the distance along these lines; these points give you your new rectangle. For example, if you are zooming in by a factor of 3, find a point 1/3rd of the way from the centerpoint to each corner. This produces a new rectangle with sides 1/3rd the size of the original and an area 1/9th the size.
To do this you can define a simple interpolation function:
double interpolate(double start, double end, double interpolation)
{
    return start + ((end - start) * interpolation);
}
Then use the function to find your new points:
void applyZoom(t_fractal* e, double mouseRe, double mouseIm, double zoomFactor)
{
    double interpolation = 1.0 / zoomFactor;
    e->Re.min = interpolate(mouseRe, e->Re.min, interpolation);
    e->Im.min = interpolate(mouseIm, e->Im.min, interpolation);
    e->Re.max = interpolate(mouseRe, e->Re.max, interpolation);
    e->Im.max = interpolate(mouseIm, e->Im.max, interpolation);
}
Based on my description, you might think you need to find 8 values (4 points for the 4 legs of the X, with 2 dimensions each), but in practice there are only 4 unique values, because each of the sides is axis-aligned.
For a smooth zoom, call it each frame with a zoom factor a little over 1.0, e.g. 1.01. To zoom out, pass the inverse, e.g. 1.0 / 1.01.
Alternatively, if you want the center of the view to jump to a certain position when you click the mouse, calculate mouseRe and mouseIm as above and then offset the corners of the view rectangle by the difference between these values and the center of the view rectangle. You could store these values at the time the mouse button was first pressed down, and use them to zoom in as long as it is held.
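Putting it together, a hedged sketch of the per-frame loop (get_mouse_state, render_mandelbrot, and the running flag are assumed helpers, not part of the code above):
while (running) {
    get_mouse_state(&mouse_x, &mouse_y, &left_held, &right_held); /* assumed helper */
    /* map the mouse into the complex plane, as above */
    double mouseRe = (double)mouse_x / (WIN_L / (e->Re.max - e->Re.min)) + e->Re.min;
    double mouseIm = (double)mouse_y / (WIN_H / (e->Im.max - e->Im.min)) + e->Im.min;
    if (left_held)  applyZoom(e, mouseRe, mouseIm, 1.01);        /* zoom in  */
    if (right_held) applyZoom(e, mouseRe, mouseIm, 1.0 / 1.01);  /* zoom out */
    render_mandelbrot(e);                                        /* assumed */
}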

simple plot algorithm with autoscale

I need to implement a simple plotting component in C# (WPF, to be more precise). What I have is a collection of data samples containing a time (X axis) and a value (both double types).
I have a drawing canvas of a fixed size (Width x Height) and a DrawLine method/function that can draw on it. The problem I am facing now is: how do I draw the plot so that it is autoscaled? In other words, how do I map the samples I have to actual pixels on my Width x Height canvas?
One hacky method that may work is to use a Viewbox control. This control will scale the rendering of its content to fit the size available. However, this might lead to your lines and labels looking too thick or thin.
The more sensible method that you're probably after, though, is how to work out what scale to draw your graph at in the first place. To do that, work out the range of values on a given axis (for example, your Y-axis values might range from 0 to 100). Work out the available drawing space on that axis (for example, your canvas might have 400 pixels of height). Your Y-axis "scale factor" when drawing the graph would be <available space> / <data range> - or, in this case, 4.
Your canvas's coordinates start from zero in the top-left, so to calculate the Y-position for a given data point you would do this:
double availableSpace = 400.0; // the size of your canvas
double dataRange = 100.0; // the range of your values
double scaleFactor = availableSpace / dataRange;
double currentValue = 42.0; // the value we're trying to plot
double plottableY = availableSpace - (currentValue * scaleFactor); // the position on screen to draw at
The value of plottableY is the y-coordinate that you would use to draw this point on the canvas.
(Obviously this code would need to be spread out across your drawing method so you're not recalculating all of the values for each point, but it demonstrates the math).
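One caveat: the snippet above assumes the data range starts at zero. If it doesn't, subtract the minimum before scaling; a sketch in the same style (dataMin and dataMax are assumed names for your sample range):
double dataMin = -20.0;  // assumed minimum of your samples
double dataMax = 100.0;  // assumed maximum of your samples
double scaleFactor = availableSpace / (dataMax - dataMin);
double plottableY = availableSpace - ((currentValue - dataMin) * scaleFactor);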

OpenGL: How do I avoid rounding errors when specifying UV co-ordinates

I'm writing a 2D game using OpenGL. When I want to blit part of a texture as a sprite I use glTexCoord2f(u, v) to specify the UV co-ordinates, with u and v calculated like this:
GLfloat u = (GLfloat)xpos_in_texture/(GLfloat)width_of_texture;
GLfloat v = (GLfloat)ypos_in_texture/(GLfloat)height_of_texture;
This works perfectly most of the time, except when I use glScale to zoom the game in or out. Then floating point rounding errors cause some pixels to be drawn one to the right of or one below the intended rectangle within the texture.
What can be done about this? At the moment I'm subtracting an 'epsilon' value from the right and bottom edges of the rectangle, and it seems to work but this seems like a horrible kludge. Are there any better solutions?
Your issue is most likely not coming from rounding errors but from a misunderstanding of how OpenGL maps texels to pixels. If you notice off-by-one errors, it's probably because your UVs, your vertex positions, or your projection matrix/viewport pair are not aligned where they ought to be.
To simplify, I'll just talk about 1D and assume you use a projection and a viewport that map X,Y coordinates to the equivalent pixel location (i.e. glOrtho(0, width, 0, height, zmin, zmax) and glViewport(0, 0, width, height)).
Say you want to draw 5 texels (starting at 0 for simplicity) of your 64-wide texture showing on the 10 pixels (scale of 2) of your screen starting at pixel 20.
To get there, draw the triangle with X coordinates 20 and 30, and U (of the UV pair) of 10/64 and 15/64. The rasterization of OpenGL will generate 10 pixels to shade, with X coordinates 20.5, 21.5, ... 29.5. Note that the positions are not full integers. OpenGL rasterizes in the middle of the pixel.
Likewise, it will generate U coordinates of 10.25/64, 10.75/64, 11.25/64, 11.75/64 ... 14.25/64, 14.75/64. Note again that texel coordinates, brought back to texel positions in the texture space, are not full integers. OpenGL samples from the middle of texel locations, so this is fine.
How the samplers use these UVs to generate texel values depends on the filtering mode, but be it nearest or linear, the pixels should be contained solely inside the texels of interest (0.25 with a size of 0.5 should only use color from 0 to 0.5, which is all inside the first texel).
If you follow these general principles, you should never see artifacts:
Use Ortho and Viewport of exactly your frame buffer size
Use positions of X, X+width exactly
Use UVs that correspond to exactly the texels you want (if you want the 10 texels starting from texel 0, use U = 0/width to U = 10/width).
If you ever have a -1 somewhere in your math, it's likely not correct (for position or UVs).
To get back to your example, it's unclear how you link the UVs you compute to positions (since you don't show the position computation).
It's also unclear how you got xpos_in_texture (you should explain how you computed it for the corners of your sprite). My guess is that you computed it wrong.
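For instance, a sketch of corner UVs for a sprite covering texels [xpos_in_texture, xpos_in_texture + sprite_w) by [ypos_in_texture, ypos_in_texture + sprite_h) of the texture (sprite_w and sprite_h are assumed names), dividing texel edges rather than centers by the full texture size:
GLfloat u0 = (GLfloat)xpos_in_texture / (GLfloat)width_of_texture;
GLfloat v0 = (GLfloat)ypos_in_texture / (GLfloat)height_of_texture;
GLfloat u1 = (GLfloat)(xpos_in_texture + sprite_w) / (GLfloat)width_of_texture;
GLfloat v1 = (GLfloat)(ypos_in_texture + sprite_h) / (GLfloat)height_of_texture;
/* pair u0/v0 with the sprite's left/top vertices and u1/v1 with the
   right/bottom ones; note there is no -1 anywhere, per the rules above */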
A bit late, but for posterity: I was having the same problem, with pixels from adjacent regions of a texture atlas bleeding into sprites/tiles when scaling or zooming the view. I had my glOrtho, glViewport, etc. dimensions all set correctly; then I realized the problem was that I was scaling the view before translating the camera. Even though I was snapping to integer pixels pre-zoom, after the zoom the view would align to a fraction of a pixel and introduce the texel problem.
So if your code looks something like this, where camera.zoom is a non-integer (e.g. 0.75):
glScalef(camera.zoom, camera.zoom, 1.0f);
glTranslatef(camera.x, camera.y, 0.0f);
You'll want to make sure the result of the translation after scaling aligns to whole pixels on the screen, so you can do something like:
glScalef(camera.zoom, camera.zoom, 1.0f);
glTranslatef(
    floor(camera.x * camera.zoom) / camera.zoom,
    floor(camera.y * camera.zoom) / camera.zoom,
    0.0f);
Do the division as a double, round the result down yourself to the desired level of precision, then cast it to GLfloat.
Your xpos/ypos must be based on 0 to (width or height) - 1 and then:
GLfloat u = (GLfloat)xpos_in_texture/(GLfloat)(width_of_texture - 1);
GLfloat v = (GLfloat)ypos_in_texture/(GLfloat)(height_of_texture - 1);

Finding center of 2D triangle?

I've been given a struct for a 2D triangle with x and y coordinates, a rotation variable, and so on. From the point created by those x and y coordinates, I am supposed to draw a triangle around the point and rotate it appropriately using the rotation variable.
I'm familiar with drawing triangles in OpenGL with GL_TRIANGLES. My problem is somehow extracting the middle of a triangle and drawing the vertices around it.
edit: Yes, what I am looking for is the centroid.
There are different "types" of centers of a triangle; for details, see The Centers of a Triangle. A quick method for finding a center of a triangle is to average all your points' coordinates. For example:
GLfloat centerX = (tri[0].x + tri[1].x + tri[2].x) / 3;
GLfloat centerY = (tri[0].y + tri[1].y + tri[2].y) / 3;
When you find the center, you will need to rotate your triangle about the center. To do this, translate so that the center is now at (0, 0). Perform your rotation. Now reverse the translation you performed earlier.
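A self-contained sketch of that translate-rotate-translate step (plain floats; angle in radians; the function name is illustrative):
#include <math.h>

/* Rotate (*px, *py) about (cx, cy) by `angle` radians: translate the
   center to the origin, rotate, then translate back. */
void rotate_about(float *px, float *py, float cx, float cy, float angle)
{
    float tx = *px - cx;
    float ty = *py - cy;
    *px = cx + tx * cosf(angle) - ty * sinf(angle);
    *py = cy + tx * sinf(angle) + ty * cosf(angle);
}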
I guess you mean the centroid of the triangle!?
This can easily be computed as 1/3 (A + B + C), where A, B, and C are the respective points of the triangle.
If you have your points, you can simply multiply them by your rotation matrix as usual. Hope I got you right.
There are several points in a triangle that can be considered to be its center (orthocenter, centroid, etc.). This section of the Wikipedia article on triangles has more information. Just look at the pictures to get a quick overview.
By "middle" do you mean "centroid", a.k.a. the center of gravity if it were a 3D object of constant thickness and density?
If so, then pick two points, and find the midpoint between them. Then take this midpoint and the third point, and find the point 1/3 of the way between them (closer to the midpoint). That's your centroid. I'm not doing the math for you.
