How do I subdivide a triangle in three dimensions? - c

I have a function that takes 3 points and I will use these points to draw a triangle, as if I were using the glVertex function.
But since I want to texture map this triangle while avoiding perspective distortion, I have to subdivide it, and use the vertices for texture mapping and calculation of normals.
I managed to do this for rectangles, spheres, cylinders and tori, but I can't, for the life of me, figure out how to do a triangle.
Every example of triangle mapping I've managed to find is only for 2D space and with predefined points, using glVertex.
As for rectangles, the code I'm using is this one:
void Rectangle::draw(float texS, float texT)
{
    float x1, x2, y1, y2;
    x1 = v.at(0); x2 = v.at(2);
    y1 = v.at(1); y2 = v.at(3);
    //glRectf(x1,y1,x2,y2);
    int _numDivisions = 100;
    float _xDim = abs(x2 - x1);
    float _yDim = abs(y2 - y1);
    float texMultiS, texMultiT;
    texMultiS = _xDim / texS;// / _xDim;
    texMultiT = _yDim / texT;// / _yDim;
    glPushMatrix();
    glTranslatef(x1, y1, 0);
    glRotatef(-90.0,1,0,0);
    glScalef( _xDim * (1.0/(double) _numDivisions), 1 , _yDim * (1.0/(double) _numDivisions));
    glNormal3f(0,-1,0);
    for (int bx = 0; bx<_numDivisions; bx++)
    {
        glBegin(GL_TRIANGLE_STRIP);
        glTexCoord2f((bx * 1.0/_numDivisions) * texMultiS, 0.0 * texMultiT);
        glVertex3f(bx+x1, 0, 0+y1);
        for (int bz = 0; bz<_numDivisions; bz++)
        {
            glTexCoord2f(((bx+1) * 1.0/_numDivisions) * texMultiS, (bz * 1.0/_numDivisions) * texMultiT);
            glVertex3f((bx + 1)+x1, 0, bz+y1);
            glTexCoord2f(((bx+1) * 1.0/_numDivisions) * texMultiS, ((bz+1) * 1.0/_numDivisions) * texMultiT);
            glVertex3f(bx+x1, 0, (bz + 1)+y1);
        }
        glTexCoord2f(((bx+1) * 1.0/_numDivisions) * texMultiS, 1.0 * texMultiT);
        glVertex3d((bx+1)+x1, 0, _numDivisions+y1);
        glEnd();
    }
    glPopMatrix();
}
And this is what I get. It's simple enough, since it's in 2D space. I was aiming for the same kind of logic, but for a triangle in 3D space.
But I can't figure out the calculations needed for the points in a triangle in 3D space.
FOR EXAMPLE:
P1->(0,0,1); P2->(1,0,0); P3->(0,1,0);
My best idea so far is to draw it in 2D space with P1 as the origin: I can make every point on the P1->P2 line converge towards P3, then calculate the angle to rotate about the x-axis and then the angle for a rotation about the y-axis. But is that really the best way to go about it?
EDIT:
As suggested below, a way to rephrase the question might be:
"How do I subdivide a general triangle in three dimensions?"
The objective is to get an algorithm that builds a triangle in 3D space in sections (triangle strips or quad strips), so I can use the vertices for texture mapping and normal calculation.

As suggested in the comments, you could slice up a triangle very similar to a quad. With some fancy ASCII art:
           /|
          / |
         ----
        /| /|
       / |/ |
      -------
     /| /| /|
    / |/ |/ |
   ----------
  /| /| /| /|
 / |/ |/ |/ |
-------------
Decomposing this into triangle strips that run bottom to top, we would have 4 strips. The triangle counts of the strips for 4 strips are:
1, 3, 5, 7
or 2 * i + 1 for strip i, with i from 0 to n - 1.
To calculate the vertices needed for the subdivision, you can compute the points along the bottom edge by linear interpolation between the corresponding two vertices. The same goes for the points along the edge that runs diagonally in the diagram. Then, along each strip, you again linearly interpolate between the corresponding two points, splitting the left side of strip i into i pieces and the right side into i + 1 pieces.
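As a concrete illustration of that interpolation scheme, here is a minimal sketch in plain C, in the same immediate-mode style as the rectangle code in the question. The vec3 type, the lerp3 helper, and the texture-coordinate mapping are my own choices, not part of the original code:

#include <math.h>
#include <GL/gl.h>

typedef struct { float x, y, z; } vec3;

/* Linear interpolation between two points: a + (b - a) * t */
static vec3 lerp3(vec3 a, vec3 b, float t)
{
    vec3 p = { a.x + (b.x - a.x) * t,
               a.y + (b.y - a.y) * t,
               a.z + (b.z - a.z) * t };
    return p;
}

/* Subdivide triangle (p1, p2, p3) into n rows of triangle strips.
 * Row i runs between the points interpolated at i/n and (i+1)/n along the
 * p1->p2 and p1->p3 edges, so row i contains 2*i + 1 triangles. */
static void drawSubdividedTriangle(vec3 p1, vec3 p2, vec3 p3, int n)
{
    /* One flat normal for the whole triangle: normalize((p2 - p1) x (p3 - p1)). */
    vec3 u = { p2.x - p1.x, p2.y - p1.y, p2.z - p1.z };
    vec3 v = { p3.x - p1.x, p3.y - p1.y, p3.z - p1.z };
    vec3 nrm = { u.y * v.z - u.z * v.y,
                 u.z * v.x - u.x * v.z,
                 u.x * v.y - u.y * v.x };
    float len = sqrtf(nrm.x * nrm.x + nrm.y * nrm.y + nrm.z * nrm.z);
    glNormal3f(nrm.x / len, nrm.y / len, nrm.z / len);

    for (int row = 0; row < n; row++)
    {
        float t0 = (float)row / n;          /* upper boundary of this strip */
        float t1 = (float)(row + 1) / n;    /* lower boundary of this strip */
        vec3 a0 = lerp3(p1, p2, t0), b0 = lerp3(p1, p3, t0);
        vec3 a1 = lerp3(p1, p2, t1), b1 = lerp3(p1, p3, t1);

        glBegin(GL_TRIANGLE_STRIP);
        for (int k = 0; k <= row + 1; k++)
        {
            /* The lower boundary is split into row + 1 pieces... */
            float s1 = (float)k / (row + 1);
            vec3 lo = lerp3(a1, b1, s1);
            glTexCoord2f(t1, s1);           /* simple barycentric-style mapping */
            glVertex3f(lo.x, lo.y, lo.z);

            /* ...and the upper boundary into row pieces (a single point when row == 0). */
            if (k <= row)
            {
                float s0 = (row > 0) ? (float)k / row : 0.0f;
                vec3 up = lerp3(a0, b0, s0);
                glTexCoord2f(t0, s0);
                glVertex3f(up.x, up.y, up.z);
            }
        }
        glEnd();
    }
}

For the example points from the question, drawSubdividedTriangle((vec3){0,0,1}, (vec3){1,0,0}, (vec3){0,1,0}, 100) produces the same kind of tessellation as the rectangle code, without any explicit rotations.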

Related

How to calculate the sizes of a rectangle that contains rotated image (potentially with transparent pixels)

Given theta angles in radians, width and height of the rotated image, how do I calculate the new width and height of the outer rectangle that contains the rotated image?
In other words, how do I calculate the new bounding box width/height?
Note that the image could actually be circle and have transparent pixels on the edges.
That would be: x1, y1.
I am actually rotating a pixbuf with the origin at center using cairo_rotate() and I need to know the newly allocated area. What I tried is this:
double geo_rotated_rectangle_get_width (double a, double b, double theta)
{
    return fabs(a*cos(theta)) + fabs(b*sin(theta));
}
And it will work in the sense of always returning sufficient space to contain the rotated image, but it also always returns larger values than it should when the image is not rotated by a multiple of 90° and is a fully opaque image (a square).
EDIT:
This is the image I am rotating:
Interestingly enough, I just tried with a fully opaque image with the same size and it was OK. I use gdk_pixbuf_get_width() to get width and it returns the same value for both regardless. So I assume the formula is correct and the problem is that the transparency is not accounted for. When rotated with a diagonal orientation there are edges from the rectangle of the rotated image that are transparent.
I'll leave the above so that it is helpful to others :)
Now the question becomes how to account for transparent pixels on the edges.
To determine the bbox of the rotated rectangle, you can compute the coordinates of the 4 vertices and take the bbox of these 4 points.
a is the width of the unrotated rectangle and b its height;
let diag = sqrt(a * a + b * b) / 2 be the distance from the center to the top right corner of this rectangle. You can use diag = hypot(a, b) / 2 for better precision;
first compute the angle theta0 of the first diagonal for theta = 0: theta0 = atan(b / a), or better theta0 = atan2(b, a);
the 4 vertices are:
{ diag * cos(theta0 + theta), diag * sin(theta0 + theta) }
{ diag * cos(pi - theta0 + theta), diag * sin(pi - theta0 + theta) }
{ diag * cos(pi + theta0 + theta), diag * sin(pi + theta0 + theta) }
{ diag * cos(-theta0 + theta), diag * sin(-theta0 + theta) }
which can be simplified as:
{ diag * cos(theta + theta0), diag * sin(theta + theta0) }
{ -diag * cos(theta - theta0), -diag * sin(theta - theta0) }
{ -diag * cos(theta + theta0), -diag * sin(theta + theta0) }
{ diag * cos(theta - theta0), diag * sin(theta - theta0) }
which gives x1 and y1:
x1 = diag * fmax(fabs(cos(theta + theta0)), fabs(cos(theta - theta0)))
y1 = diag * fmax(fabs(sin(theta + theta0)), fabs(sin(theta - theta0)))
and the width and height of the rotated rectangle follow:
width = 2 * diag * fmax(fabs(cos(theta + theta0)), fabs(cos(theta - theta0)))
height = 2 * diag * fmax(fabs(sin(theta + theta0)), fabs(sin(theta - theta0)))
This is the geometric solution, but you must take into account the rounding performed by the graphics primitive, so it is much preferable to use the graphics API and retrieve the pixbuf dimensions with gdk_pixbuf_get_width() and gdk_pixbuf_get_height(), which will allow for precise placement.
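Putting the geometric solution above into code, a minimal sketch (the function name and out-parameters are mine):

#include <math.h>

void rotated_bbox_size(double a, double b, double theta,
                       double *width, double *height)
{
    double diag   = hypot(a, b) / 2;   /* distance from the center to a corner */
    double theta0 = atan2(b, a);       /* angle of the first diagonal at theta = 0 */

    *width  = 2 * diag * fmax(fabs(cos(theta + theta0)), fabs(cos(theta - theta0)));
    *height = 2 * diag * fmax(fabs(sin(theta + theta0)), fabs(sin(theta - theta0)));
}

Expanding cos(theta +/- theta0) shows this is the same value as the asker's fabs(a*cos(theta)) + fabs(b*sin(theta)), which is consistent with the transparency, not the formula, being the remaining issue.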
I'd say "let cairo compute those coordinates". If you have access to a cairo_t*, you can do something like the following (untested!):
double x1, y1, x2, y2;
cairo_save(cr);
cairo_rotate(cr, theta); // You can also do cairo_translate() and whatever your heart desires
cairo_new_path(cr);
cairo_rectangle(cr, x, y, width, height);
cairo_restore(cr); // Note: This preserved the path!
cairo_fill_extents(cr, &x1, &y1, &x2, &y2);
cairo_new_path(cr); // Clean up after ourselves
printf("Rectangle is inside of (%g,%g) to (%g,%g) (size %g,%g)\n",
x1, y1, x2, y2, x2 - x1, y2 - y1);
The above code applies some transformation, then constructs a path. This makes cairo apply the transformation to the given coordinates. Afterwards, the transformation is "thrown away" with cairo_restore(). Next, we ask cairo for the area covered by the current path, which it provides in the current coordinate system, i.e. without the transformation.

Culling works with extracting planes from view-projection matrix but not with projection matrix

I have implemented frustum culling by using the plane extraction method explained in this article.
The article mentions that if the matrix is a projection matrix, then the planes will be in view-space. So I need to transform my AABB coordinates to view space to do the culling tests. However, this doesn't work.
But if I extract the planes from the view-projection matrix and test against AABB coordinates in model space, everything works fine.
The only change I've made is updating the frustum planes from the view-projection matrix on every camera motion, and transforming the AABB coordinates to model space instead of view space.
Here is the relevant code. The lines commented with "diff" are the only changes between two versions.
Code for projection matrix based frustum culling:
// called only at initialization
void camera_set_proj_matrix(camera *c, mat4 *proj_matrix)
{
    c->proj_matrix = *proj_matrix;
    // diff
    extract_frustum_planes(&c->frustum_planes, &c->proj_matrix);
}

void camera_update_view_matrix(camera *c)
{
    mat4_init_look(&c->view_matrix, &c->pos, &c->dir, &VEC3_UNIT_Y);
    mat4_mul(&c->vp_matrix, &c->proj_matrix, &c->view_matrix);
    // diff
}

void chunk_render(const chunk *c, chunk_pos pos, const camera *camera, GLuint mvp_matrix_location)
{
    mat4 model_matrix;
    block_pos bp = chunk_pos_to_block_pos(pos);
    mat4_init_translation(&model_matrix, bp.x, bp.y, bp.z);

    mat4 mv_matrix;
    mat4_mul(&mv_matrix, &camera->view_matrix, &model_matrix);

    vec4 min = {0, 0, 0, 1};
    vec4 max = {CHUNK_SIDE, CHUNK_HEIGHT, CHUNK_SIDE, 1};
    // diff: using model view matrix here
    mat4_mul_vec4(&min, &mv_matrix, &min);
    // diff: using model view matrix here
    mat4_mul_vec4(&max, &mv_matrix, &max);

    AABB aabb = {{min.x, min.y, min.z}, {max.x, max.y, max.z}};
    if (AABB_outside_frustum(&aabb, &camera->frustum_planes)) return;

    // draw
}
Looks like it's culling too much. Also note: this abnormal culling only happens when I'm looking in the positive z direction.
Code for view-projection based culling:
void camera_set_proj_matrix(camera *c, mat4 *proj_matrix)
{
    c->proj_matrix = *proj_matrix;
    // diff
}

void camera_update_view_matrix(camera *c)
{
    mat4_init_look(&c->view_matrix, &c->pos, &c->dir, &VEC3_UNIT_Y);
    mat4_mul(&c->vp_matrix, &c->proj_matrix, &c->view_matrix);
    // diff: update frustum planes based on view projection matrix now
    extract_frustum_planes(&c->frustum_planes, &c->vp_matrix);
}

void chunk_render(const chunk *c, chunk_pos pos, const camera *camera, GLuint mvp_matrix_location)
{
    mat4 model_matrix;
    block_pos bp = chunk_pos_to_block_pos(pos);
    mat4_init_translation(&model_matrix, bp.x, bp.y, bp.z);

    mat4 mv_matrix;
    mat4_mul(&mv_matrix, &camera->view_matrix, &model_matrix);

    vec4 min = {0, 0, 0, 1};
    vec4 max = {CHUNK_SIDE, CHUNK_HEIGHT, CHUNK_SIDE, 1};
    // diff: using model matrix now
    mat4_mul_vec4(&min, &model_matrix, &min);
    // diff: using model matrix now
    mat4_mul_vec4(&max, &model_matrix, &max);

    AABB aabb = {{min.x, min.y, min.z}, {max.x, max.y, max.z}};
    if (AABB_outside_frustum(&aabb, &camera->frustum_planes)) return;

    // draw
}
Perfect
I've no idea why the projection-only method behaves strangely when I'm transforming the AABB appropriately :/
What does transforming an AABB actually mean?
Let's look at the problem in 2D. Say I have a 2D AABB (defined by a bottom-left and a top-right corner), and I want to rotate it by 45 degrees.
---------
|       |
|       |   ->   ???
|       |
---------
The actual region of space this represents would obviously look like a diamond:
  / \
 /   \
/     \
\     /
 \   /
  \ /
However, since we want to encode that as an AABB, the resulting AABB would have to look like this:
-------------
|    / \    |
|  /     \  |
|/         \|
|\         /|
|  \     /  |
|    \ /    |
-------------
However, looking at your code:
mat4_mul_vec4(&min, &model_matrix, &min);
// diff: using model matrix now
mat4_mul_vec4(&max, &model_matrix, &max);
AABB aabb = {{min.x, min.y, min.z}, {max.x, max.y, max.z}};
What you are doing is building an AABB whose BL and TR corners are the transformed BL and TR of the original AABB:
    / \
   /   \
  /     \
-----------
  \     /
   \   /
    \ /
What you should be doing is transforming all 8 corners of your original AABB and building a new AABB around those. But working with world-space culling planes is also absolutely fine for most cases.
Alternatively, if your problem lends itself well to bounding spheres, you can save yourself a lot of trouble by using those instead.
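A hedged sketch of the 8-corner approach, written in the style of the chunk_render code from the question: it reuses the mat4/vec4 helpers, the AABB type and the CHUNK_* constants from the question, and it assumes the AABB struct's two members are floats named min and max (adjust fminf/fmaxf and the field names to your actual definitions). It would replace the two mat4_mul_vec4 calls and the AABB line inside chunk_render:

/* needs <float.h> for FLT_MAX and <math.h> for fminf/fmaxf */
vec4 corners[8] = {
    {0,          0,            0,          1},
    {CHUNK_SIDE, 0,            0,          1},
    {0,          CHUNK_HEIGHT, 0,          1},
    {0,          0,            CHUNK_SIDE, 1},
    {CHUNK_SIDE, CHUNK_HEIGHT, 0,          1},
    {CHUNK_SIDE, 0,            CHUNK_SIDE, 1},
    {0,          CHUNK_HEIGHT, CHUNK_SIDE, 1},
    {CHUNK_SIDE, CHUNK_HEIGHT, CHUNK_SIDE, 1},
};

AABB aabb = {{ FLT_MAX,  FLT_MAX,  FLT_MAX},
             {-FLT_MAX, -FLT_MAX, -FLT_MAX}};

for (int i = 0; i < 8; i++)
{
    vec4 p = corners[i];
    /* mv_matrix for view-space frustum planes, model_matrix for world-space planes */
    mat4_mul_vec4(&p, &mv_matrix, &p);
    aabb.min.x = fminf(aabb.min.x, p.x);  aabb.max.x = fmaxf(aabb.max.x, p.x);
    aabb.min.y = fminf(aabb.min.y, p.y);  aabb.max.y = fmaxf(aabb.max.y, p.y);
    aabb.min.z = fminf(aabb.min.z, p.z);  aabb.max.z = fmaxf(aabb.max.z, p.z);
}

if (AABB_outside_frustum(&aabb, &camera->frustum_planes)) return;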

openGL(c) draw square

I need to draw a square using C (OpenGL).
I only have one coordinate, which is the center of the square (let's say 0.5, 0.5), and I need to draw a square ABCD with each side of length 0.2 (AB, BC, CD, DA).
I tried using the following function, but it does not draw anything for some reason:
void drawSquare(double x1,double y1,double radius)
{
    glColor3d(0,0,0);
    glBegin(GL_POLYGON);
    double locationX = x1;
    double locationY = x2;
    double r = radius;
    for(double i=0; i <= 360 ; i+=0.1)
    {
        glVertex2d(locationX + radius*i, locationY + radius*i);
    }
    glEnd();
}
Can someone please tell me why it's not working / point me in the right direction? (I do not want to draw the polygon from 4 coordinates directly, but from only 1 coordinate with a given radius.)
Thanks!
Your code will not even draw a circle. If anything, it will draw a diagonal line extending out of the view area very quickly. A circle plot would need to use sine and cosine, based on the radius and angle.
I have not tried this code, but it needs to be more like this to draw a square.
void drawSquare(double x1, double y1, double sidelength)
{
    double halfside = sidelength / 2;

    glColor3d(0,0,0);
    glBegin(GL_POLYGON);
    glVertex2d(x1 + halfside, y1 + halfside);   // top right
    glVertex2d(x1 + halfside, y1 - halfside);   // bottom right
    glVertex2d(x1 - halfside, y1 - halfside);   // bottom left
    glVertex2d(x1 - halfside, y1 + halfside);   // top left
    glEnd();
}
There are no normals defined: perhaps I should have travelled counter-clockwise.
A simple way to draw a square is to use GL_QUADS and the four vertices for the four corners of the square. Sample code is below:
glBegin(GL_QUADS);
    glVertex2f(-1.0f, 1.0f);   // top left
    glVertex2f(1.0f, 1.0f);    // top right
    glVertex2f(1.0f, -1.0f);   // bottom right
    glVertex2f(-1.0f, -1.0f);  // bottom left
glEnd();
Since in your case you have to draw the square from its mid point, which is the intersection of the two diagonals of the square, you can use the following facts and draw it accordingly:
length of diagonal = x * square root of 2 (x = side of square)
the diagonals of a square are perpendicular
the diagonals of a square are the same length
If your point is at (0.5, 0.5), which is the coordinate of the intersection point, and the side is 0.2, you can easily determine the coordinates of the four corners as in the figure given below and code it accordingly.
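A minimal sketch of that idea (the function name is mine): each corner lies half a diagonal away from the center, and because the diagonals are perpendicular and of equal length, the corners sit at 45, 135, 225 and 315 degrees around it.

#include <math.h>
#include <GL/gl.h>

void drawSquareFromCenter(double cx, double cy, double side)
{
    const double pi = 3.14159265358979;
    double halfDiag = side * sqrt(2.0) / 2.0;   /* diagonal = side * sqrt(2) */

    glColor3d(0, 0, 0);
    glBegin(GL_QUADS);
    for (int i = 0; i < 4; i++)
    {
        double angle = (45.0 + 90.0 * i) * pi / 180.0;   /* 45, 135, 225, 315 degrees */
        glVertex2d(cx + halfDiag * cos(angle), cy + halfDiag * sin(angle));
    }
    glEnd();
}

drawSquareFromCenter(0.5, 0.5, 0.2) then gives the square described in the question.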

OpenGL: Wrapping texture around cylinder

I am trying to add textures to a cylinder to draw a stone well. I'm starting with a cylinder and then mapping a stone texture I found here but am getting some weird results. Here is the function I am using:
void draw_well(double x, double y, double z,
               double dx, double dy, double dz,
               double th)
{
    // Set specular color to white
    float white[] = {1,1,1,1};
    float black[] = {0,0,0,1};
    glMaterialfv(GL_FRONT_AND_BACK,GL_SHININESS,shinyvec);
    glMaterialfv(GL_FRONT_AND_BACK,GL_SPECULAR,white);
    glMaterialfv(GL_FRONT_AND_BACK,GL_EMISSION,black);

    glPushMatrix();
    // Offset
    glTranslated(x,y,z);
    glRotated(th,0,1,0);
    glScaled(dx,dy,dz);

    // Enable textures
    glEnable(GL_TEXTURE_2D);
    glTexEnvi(GL_TEXTURE_ENV,GL_TEXTURE_ENV_MODE,GL_MODULATE);
    glBindTexture(GL_TEXTURE_2D,texture[0]); // Stone texture

    glBegin(GL_QUAD_STRIP);
    for (int i = 0; i <= 359; i++)
    {
        glNormal3d(Cos(i), 1, Sin(i));
        glTexCoord2f(0,0); glVertex3f(Cos(i), -1, Sin(i));
        glTexCoord2f(0,1); glVertex3f(Cos(i), 1, Sin(i));
        glTexCoord2f(1,1); glVertex3f(Cos(i + 1), 1, Sin(i + 1));
        glTexCoord2f(1,0); glVertex3f(Cos(i + 1), -1, Sin(i + 1));
    }
    glEnd();
    glPopMatrix();
    glDisable(GL_TEXTURE_2D);
}
// Later down in the display function
draw_well(0, 0, 0, 1, 1, 1, 0);
and the output I receive is
I'm still pretty new to OpenGL and more specifically textures so my understanding is pretty limited. My thought process here is that I would map the texture to each QUAD used to make the cylinder, but clearly I am doing something wrong. Any explanation on what is causing this weird output and how to fix it would be greatly appreciated.
There are three possible main issues with your draw routine: quad-strip indexing, texture coordinates repeating too often, and possibly incorrect usage of the trig functions.
Trigonometric functions usually accept angles expressed in radians, not degrees. Double-check the parameters of the Sin and Cos functions you are using.
Quad-strip indexing is incorrect. Indexing should go like this...
Notice how the quad is defined in a clockwise fashion, but the diagonal vertices are defined sequentially. You are defining the quad as v0, v1, v3, v2 instead of v0, v1, v2, v3, so swap the last two vertices of the four. This also leads to another error: the vertices are not shared correctly. You are duplicating them along each vertical edge, since you draw the same set of vertices (i + 1) in one loop iteration as you do in the next (once i has been incremented by 1).
Texture coordinates run from 0 to 1 across each quad, which means you are defining a cylinder that is segmented 360 times and the texture is repeated 360 times around it. I'm assuming the texture should be mapped 1:1 to the cylinder and not repeated?
Here is some example code using what you provided. I have reduced the number of segments down to 64; if you still want 360, amend numberOfSegments accordingly.
float pi = 3.141592654f;
unsigned int numberOfSegments = 64;
float angleIncrement = (2.0f * pi) / static_cast<float>(numberOfSegments);
float textureCoordinateIncrement = 1.0f / static_cast<float>(numberOfSegments);

glBegin(GL_QUAD_STRIP);
for (unsigned int i = 0; i <= numberOfSegments; ++i)
{
    float c = cos(angleIncrement * i);
    float s = sin(angleIncrement * i);
    glTexCoord2f(textureCoordinateIncrement * i, 0);    glVertex3f(c, -1.0f, s);
    glTexCoord2f(textureCoordinateIncrement * i, 1.0f); glVertex3f(c,  1.0f, s);
}
glEnd();
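One thing the snippet above drops from the original draw_well is the per-vertex normal. For the side wall of a unit-radius cylinder, the outward unit normal at angle angleIncrement * i is simply (c, 0, s), the values already computed in the loop; the original glNormal3d(Cos(i), 1, Sin(i)) was both unnormalized and tilted 45 degrees upward. If you still want lighting, a hedged one-line addition inside the loop, before the two glTexCoord2f/glVertex3f pairs, would be:

glNormal3f(c, 0.0f, s);   // outward-facing unit normal shared by both vertices of this edge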
N.B. You are using an old version of OpenGL (the use of glBegin/glVertex etc.).

Algorithm for triangulation of point travelling around a circle

Given the following system:
Where:
A: Point that exists anywhere on the edge of a circle with radius r on the xz plane.
θ: The angle between the positive-x-axis and a vector from the origin to point A. This should range from -PI/2 to PI/2.
B1: A point at the intersection of the circle and the positive x-axis at a height of h1.
B2: A point at the intersection of the circle and the positive z-axis at a height of h2.
d1: Distance between B1 and A.
d2: Distance between B2 and A.
Assuming:
h1, h2, and r are known constants.
d1 and d2 are known variables.
How do I find θ?
This will eventually be implemented in C in an embedded system where I have reasonably fast functions for arctan2, sine, and cosine. As such, performance is definitely a priority, and estimations can be used if they are correct to about 3 decimal places (which is how accurate my trig functions are).
However, even given a mathematical algorithm, I'm sure I could work out the specific implementation.
For what it's worth, I got about as far as:
(d1^2 - h1^2) / r = (sin(θ))^2 + (cos(θ))^2
(d2^2 - h2^2) / r = (sin(PI/4 - θ))^2 + (cos(PI/4 - θ))^2
Before I realized that, mathematically, this is way out of my league.
This isn't a full answer but a start of one.
There are two easy simplifications you can make.
Let H1 and H2 be the points in your plane below B1 and B2.
Since you know h1 and d1, h2 and d2, you can calculate the 2 distances A-H1 and A-H2 (with Pythagoras).
Now you have reduced the puzzle to a plane.
Furthermore, you don't really need to look at both H1 and H2. Given the distance A-H1, there are only 2 possible locations for A, which are mirrored in the x-axis. Then you can find which of the two it is by seeing if the A-H2 distance is above or below the threshold distance H2-H1.
That seems to be a good beginning :-)
Employing @Rhialto's answer, with additional simplifications and tests for corner cases:
#include <math.h>

// c1 is the signed chord distance A to (B1 projected to the xz plane)
// c1*c1 + h1*h1 = d1*d1
// c1 = +/- sqrt(d1*d1 - h1*h1)   (choose sign later)
// c1 = Chord(r, theta) = fabs(r*2*sin(theta/2))
// theta = asin(c1/(r*2))*2
//
// theta is < 0 when d2 > sqrt(h2*h2 + sqrt(2)*r*sqrt(2)*r)
// theta is < 0 when d2 > sqrt(h2*h2 + 2*r*r)
// theta is < 0 when d2*d2 > h2*h2 + 2*r*r

#define h1 (0.1)
#define h2 (0.25)
#define r (1.333)
#define h1Squared (h1*h1)
#define rSquared (r*r)
#define rSquaredTimes2 (r*r*2)
#define rTimes2 (r*2)
#define d2Big (h2*h2 + 2*r*r)

// Various steps to avoid issues with d1 < 0, d2 < 0, d1 ~= h1 and theta near pi
double zashu(double d1, double d2) {
    double c1Squared = d1*d1 - h1Squared;
    if (c1Squared < 0.0)
        c1Squared = 0.0; // _May_ be needed when in select times |d1| ~= |h1|
    double a = sqrt(c1Squared) / rTimes2;
    double theta = (a <= 1.0) ? asin(a)*2.0 : asin(1.0)*2.0; // Possible a is _just_ greater than 1.0
    if (d2*d2 > d2Big) // this could be done with fabs(d2) > pre_computed_sqrt(d2Big)
        theta = -theta;
    return theta;
}
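A small self-check of the routine above (my own test harness, not part of the answer): with A = (r cos theta, 0, r sin theta), B1 = (r, h1, 0) and B2 = (0, h2, r), the distances work out to d1*d1 = 2*r*r*(1 - cos theta) + h1*h1 and d2*d2 = 2*r*r*(1 - sin theta) + h2*h2, so you can pick a known theta, compute d1 and d2 forward, and check that zashu() recovers it:

#include <stdio.h>

int main(void)
{
    double theta = -0.75;   /* known angle in radians, within [-pi/2, pi/2] */
    double d1 = sqrt(2*r*r*(1 - cos(theta)) + h1*h1);   /* |A - B1| */
    double d2 = sqrt(2*r*r*(1 - sin(theta)) + h2*h2);   /* |A - B2| */
    printf("expected %f, recovered %f\n", theta, zashu(d1, d2));
    return 0;
}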
