Texturing a sphere in OpenGL with glTexGen - c

I want to map an Earth texture onto a sphere. My sphere is an icosphere built from many triangles (100+), and I found it confusing to set the UV coordinates for the whole sphere. I tried to use glTexGen and the result is quite close, but my texture is repeated 8 times (see image). I cannot find a way to make it wrap the whole object just once. Here is my code where the sphere and textures are created.
glEnable(GL_TEXTURE_2D);
glTexGeni(GL_T, GL_TEXTURE_GEN_MODE, GL_OBJECT_LINEAR);
glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_OBJECT_LINEAR);
glBegin(GL_TRIANGLES);
for (int i = 0; i < new_sphere->NumOfTrians; i++)
{
    Triangle *draw_Trian = new_sphere->Trians+i;
    glVertex3f(draw_Trian->pnts[0].coords[0], draw_Trian->pnts[0].coords[1], draw_Trian->pnts[0].coords[2]);
    glVertex3f(draw_Trian->pnts[1].coords[0], draw_Trian->pnts[1].coords[1], draw_Trian->pnts[1].coords[2]);
    glVertex3f(draw_Trian->pnts[2].coords[0], draw_Trian->pnts[2].coords[1], draw_Trian->pnts[2].coords[2]);
}
glDisable(GL_TEXTURE_2D);
free(new_sphere->Trians);
free(new_sphere);
glEnd();

You need to define how your texture is supposed to map to your triangles. This depends on the texture you're using. There are a multitude of ways to map the surface of a sphere with a texture (since no one mapping is free of singularities). It looks like you have a cylindrical projection texture there. So we will emit cylindrical UV coordinates.
I've tried to give you some code here, but it assumes that:
Your mesh is a unit sphere (i.e., centered at the origin with radius 1)
pnts.coords is an array of floats
You want to use the second coordinate (coords[1]) as the 'up' direction (i.e., the height in a cylindrical mapping)
Your code would look something like this. I've defined a new function for emitting cylindrical UVs, so you can put it wherever you like. (Note that glEnd() must come first in your cleanup: in your snippet, glDisable() and the free() calls sit inside the glBegin/glEnd pair, which is invalid.)
/* Map [(-1, -1, -1), (1, 1, 1)] into [(0, 0), (1, 1)] cylindrically.
   Requires <math.h> for atan2. */
inline void uvCylinder(float* coord) {
    float angle = 0.5f * atan2(coord[2], coord[0]) / 3.14159f + 0.5f;
    float height = 0.5f * coord[1] + 0.5f;
    glTexCoord2f(angle, height);
}
glEnable(GL_TEXTURE_2D);
glBegin(GL_TRIANGLES);
for (int i = 0; i < new_sphere->NumOfTrians; i++) {
    Triangle *t = new_sphere->Trians+i;
    uvCylinder(t->pnts[0].coords);
    glVertex3f(t->pnts[0].coords[0], t->pnts[0].coords[1], t->pnts[0].coords[2]);
    uvCylinder(t->pnts[1].coords);
    glVertex3f(t->pnts[1].coords[0], t->pnts[1].coords[1], t->pnts[1].coords[2]);
    uvCylinder(t->pnts[2].coords);
    glVertex3f(t->pnts[2].coords[0], t->pnts[2].coords[1], t->pnts[2].coords[2]);
}
glEnd();
glDisable(GL_TEXTURE_2D);
free(new_sphere->Trians);
free(new_sphere);
Note on Projections
The reason it's confusing to build UV coordinates for the whole sphere is that there isn't one 'correct' way to do it. Mathematically speaking, there is no perfect 2D mapping of a sphere, which is why we have so many different types of projections. When you have a 2D image that's a texture for a spherical object, you need to know what type of projection that image was built for, so that you can emit the correct UV coordinates for that texture.
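Earth textures, for instance, are often equirectangular (latitude/longitude) maps rather than cylindrical projections; for one of those, the vertical coordinate comes from the latitude instead of the raw height. A sketch under the same unit-sphere assumptions as above:
/* Equirectangular variant: longitude as before, latitude from asin.
   Same assumptions as uvCylinder: unit sphere, <math.h> available. */
inline void uvEquirect(float* coord) {
    float angle = 0.5f * atan2(coord[2], coord[0]) / 3.14159f + 0.5f;
    float lat = asin(coord[1]) / 3.14159f + 0.5f; /* [-pi/2, pi/2] -> [0, 1] */
    glTexCoord2f(angle, lat);
}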

Related

shadertoy GLSL - creating a large matrix and displaying it on the screen

I have a palette of 64 colors. I need to create a 512*512 table, write indexes into the palette into it, and then display everything on the screen. The problem is that GLSL does not support two-dimensional arrays, and it is impossible to save a table between frames.
The closest thing you can do is create a separate buffer and use only a part of it.
Here's an example Buffer A:
void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    if (any(greaterThan(fragCoord, vec2(512)))) return;
    fragCoord -= .5;
    fragColor = vec4(mod(fragCoord.x, 2.), 0, 0, 1); // generate a color at this point
}
Then, in the main shader, you can access a pixel with:
// vec2 p; // p.x and p.y in range(0, 512)
texture(iChannel0, p/iResolution.xy);
If you are using OpenGL instead of Shadertoy, you can use a 2D texture instead.
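If you go that route, one way (a sketch, not tied to the question's code; indices here is a made-up name for a CPU-side array of 512*512 palette indexes) is a single-channel integer texture:
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
// One byte per texel is plenty for an index into a 64-color palette.
glTexImage2D(GL_TEXTURE_2D, 0, GL_R8UI, 512, 512, 0,
             GL_RED_INTEGER, GL_UNSIGNED_BYTE, indices);
// Integer textures must be sampled with nearest filtering.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
In the fragment shader, declare a uniform usampler2D, fetch an index with texelFetch(yourSampler, ivec2(p), 0).r, and use it to look the color up in your palette.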

OpenCV in C - drawing a circle that should appear only in upper half of the image?

I have to draw a circle on several images. For each image the radius of curvature is different, but the center is constant.
The problem is: no matter how big the circle is, it shouldn't cross into the upper half of the image. It's OK if it becomes invisible or if only part of it is visible in the lower half.
I am using OpenCV 2.4.4 in the C language.
The values for the circle are found by:
for (angle1 = 0; angle1 < 360; angle1++)
{
    x[angle1] = r * sin(angle1) + axis_x;
    y[angle1] = r * cos(angle1) + axis_y;
}
FYI:
cvCircle(img, center_circle, r, cvScalar(0, 0, 255, 0), 2, 8, 0);
draws the circle over the entire image, which I don't want to happen.
How can I do it? Reminder: no part of the circle should appear in the upper half of the image, and the code should use OpenCV's C API.
In MATLAB it is pretty easy: I only have to select the pixels and map them onto the image. I am new to OpenCV, and operations like img->data.i/f/s/db[50] = 50; give errors.
A pretty naive approach is to create a copy of the upper half of the image, draw the complete circle, and then copy the upper half back to the original image. This may not be the best approach, but it works. Here is how it can be achieved:
void drawCircleLowerHalf(IplImage* image, CvPoint center, int radius, CvScalar color, int thickness, int line_type, int shift)
{
    CvRect roi = cvRect(0, 0, image->width, image->height/2);
    IplImage* upperHalf = cvCreateImage(cvSize(image->width, image->height/2), image->depth, image->nChannels);
    // Save a copy of the upper half before drawing.
    cvSetImageROI(image, roi);
    cvCopy(image, upperHalf);
    cvResetImageROI(image);
    // Draw the full circle, then restore the saved upper half over it.
    cvCircle(image, center, radius, color, thickness, line_type, shift);
    cvSetImageROI(image, roi);
    cvCopy(upperHalf, image);
    cvResetImageROI(image);
    cvReleaseImage(&upperHalf);
}
Just call this function with the same arguments as cvCircle.
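For example, the call from the question becomes:
drawCircleLowerHalf(img, center_circle, r, cvScalar(0, 0, 255, 0), 2, 8, 0);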

How to correctly make a depth cubemap for shadow mapping?

I have written code to render my scene objects to a cubemap texture of format GL_DEPTH_COMPONENT, and then use this texture in a shader to determine whether a fragment is directly lit or not, for shadowing purposes. However, my cubemap appears to come out all black. I suppose I am not setting up my FBO or rendering context sufficiently, but I fail to see what is missing.
Using GL 3.3 in compatibility profile.
This is my code for creating the FBO and cubemap texture:
glGenFramebuffers(1, &fboShadow);
glGenTextures(1, &texShadow);
glBindTexture(GL_TEXTURE_CUBE_MAP, texShadow);
for (int sideId = 0; sideId < 6; sideId++) {
    // Make sure GL knows what this is going to be.
    glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + sideId, 0, GL_DEPTH_COMPONENT, 512, 512, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
}
// Don't interpolate depth value sampling. Between occluder and occludee there will
// be an instant jump in depth value, not a linear transition.
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
glBindTexture(GL_TEXTURE_CUBE_MAP, 0);
My full rendering function then looks like so:
void render() {
    // --- MAKE DEPTH CUBEMAP ---
    // Set shader program for depth testing
    glUseProgram(progShadow);
    // Get the light for which we want to generate a depth cubemap
    PointLight p = pointLights.at(0);
    // Bind our framebuffer for drawing; clean it up
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fboShadow);
    glClear(GL_DEPTH_BUFFER_BIT);
    // Make 1:1-ratio, 90-degree view frustum for a 512x512 texture.
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(90.0, 1, 16.0, 16384.0);
    glViewport(0, 0, 512, 512);
    glMatrixMode(GL_MODELVIEW);
    // Set modelview and projection matrix uniforms
    setShadowUniforms();
    // Need 6 renderpasses to complete each side of the cubemap
    for (int sideId = 0; sideId < 6; sideId++) {
        // Attach level 0 of the current face of the texShadow cubemap to the framebuffer's depth attachment.
        glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_CUBE_MAP_POSITIVE_X + sideId, texShadow, 0);
        // Check that the framebuffer is complete.
        GLenum status = glCheckFramebufferStatus(GL_DRAW_FRAMEBUFFER);
        if (status != GL_FRAMEBUFFER_COMPLETE) {
            std::cout << "Shadow FBO is broken with code " << status << std::endl;
        }
        // Push modelview matrix stack because we need to rotate and move the camera every time
        glPushMatrix();
        // This does a switch-case with glRotatefs
        rotateCameraForSide(GL_TEXTURE_CUBE_MAP_POSITIVE_X + sideId);
        // Render from the light's position.
        glTranslatef(-p.getX(), -p.getY(), -p.getZ());
        // Render all objects.
        for (ObjectList::iterator it = objectList.begin(); it != objectList.end(); it++) {
            (*it)->render();
        }
        glPopMatrix();
    }
    // --- RENDER SCENE ---
    // Bind default framebuffer
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    // Set up proper projection matrix with 70-degree vertical FOV and aspect ratio according to window frame dimensions.
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(70.0, ((float)vpWidth) / ((float)vpHeight), 16.0, 16384.0);
    glViewport(0, 0, vpWidth, vpHeight);
    glUseProgram(prog);
    glMatrixMode(GL_MODELVIEW);
    glPushMatrix();
    applyCameraPerspective();
    // My PointLight class has both a position (world space) and renderPosition (camera space) Vec3f variable;
    // the lights' renderPositions get transformed with the modelview matrix by this.
    updateLights();
    // And here, among other things, the lights' camera-space coordinates go to the shader.
    setUniforms();
    // Render all objects
    for (ObjectList::iterator it = objectList.begin(); it != objectList.end(); it++) {
        // Object texture goes to texture unit 0
        GLuint usedTexture = glTextureList.find((*it)->getTextureName())->second;
        glActiveTexture(GL_TEXTURE0);
        glBindTexture(GL_TEXTURE_2D, usedTexture);
        glUniform1i(textureLoc, 0);
        // Cubemap goes to texture unit 1
        glActiveTexture(GL_TEXTURE1);
        glBindTexture(GL_TEXTURE_CUBE_MAP, texShadow);
        glUniform1i(shadowLoc, 1);
        (*it)->render();
    }
    glPopMatrix();
    frameCount++;
}
The shader program for rendering depth values ("progShadow") is simple.
Vertex shader:
#version 330
in vec3 position;
uniform mat4 modelViewMatrix, projectionMatrix;
void main() {
    gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1);
}
Fragment shader:
#version 330
void main() {
    // OpenGL sets the depth anyway. Nothing to do here.
}
The shader program for final rendering ("prog") has a fragment shader which looks something like this:
#version 330
#define MAX_LIGHTS 8
in vec3 fragPosition;
in vec3 fragNormal;
in vec2 fragTexCoordinates;
out vec4 fragColor;
uniform sampler2D colorTexture;
uniform samplerCubeShadow shadowCube;
uniform uint activeLightCount;
struct Light {
    vec3 position;
    vec3 diffuse;
    float cAtt;
    float lAtt;
    float qAtt;
};
// Index 0 to (activeLightCount - 1) need to be the active lights.
uniform Light lights[MAX_LIGHTS];
void main() {
    vec3 lightColor = vec3(0, 0, 0);
    vec3 normalFragmentToLight[MAX_LIGHTS];
    float distFragmentToLight[MAX_LIGHTS];
    float distEyeToFragment = length(fragPosition);
    // Accumulate all light in "lightColor" variable
    for (uint i = uint(0); i < activeLightCount; i++) {
        normalFragmentToLight[i] = normalize(lights[i].position - fragPosition);
        distFragmentToLight[i] = distance(fragPosition, lights[i].position);
        float attenuation = (lights[i].cAtt
                             + lights[i].lAtt * distFragmentToLight[i]
                             + lights[i].qAtt * pow(distFragmentToLight[i], 2.0));
        float dotProduct = dot(fragNormal, normalFragmentToLight[i]);
        lightColor += lights[i].diffuse * max(dotProduct, 0.0) / attenuation;
    }
    // Shadow mapping only for light at index 0 for now.
    float distOccluderToLight = texture(shadowCube, vec4(normalFragmentToLight[0], 1));
    // My geometries use inches as units, hence a large bias of 1
    bool isLit = (distOccluderToLight + 1) < distFragmentToLight[0];
    fragColor = texture2D(colorTexture, fragTexCoordinates) * vec4(lightColor, 1.0f) * int(isLit);
}
I have verified that all uniform location variables are set to a proper value (i.e. not -1).
It might be worth noting that I make no call to glBindFragDataLocation() for "progShadow" prior to linking it, because no color value should be written by that shader.
See anything obviously wrong here?
For shadow maps, depth buffer internal format is pretty important (too small and things look awful, too large and you eat memory bandwidth). You should use a sized format (e.g. GL_DEPTH_COMPONENT24) to guarantee a certain size, otherwise the implementation will pick whatever it wants. As for debugging a cubemap shadow map, the easiest thing to do is actually to draw the scene into each cube face and output color instead of depth. Then, where you currently try to use the cubemap to sample depth, write the sampled color to fragColor instead. You can rule out view issues immediately this way.
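For example, requesting a 24-bit depth buffer only changes the internalFormat argument in your setup code:
glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + sideId, 0, GL_DEPTH_COMPONENT24, 512, 512, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);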
There is another much more serious issue, however. You are using samplerCubeShadow, but you have not set GL_TEXTURE_COMPARE_MODE for your cube map. Attempting to sample from a depth texture with this sampler type and without GL_TEXTURE_COMPARE_MODE = GL_COMPARE_REF_TO_TEXTURE will produce undefined results. And even if you did have this mode set properly, the 4th component of the texture coordinate is used as the depth comparison reference -- a constant value of 1.0 is NOT what you want.
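Setting the compare mode is two more glTexParameteri calls alongside the others in your texture setup (GL_LEQUAL is the usual compare function for shadow maps):
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_COMPARE_MODE, GL_COMPARE_REF_TO_TEXTURE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_COMPARE_FUNC, GL_LEQUAL);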
Likewise, the depth buffer does not store linear distance, so you cannot directly compare against the distance you computed here:
distFragmentToLight[i] = distance(fragPosition, lights[i].position);
Instead, something like this will be necessary:
float VectorToDepth(vec3 Vec)
{
    vec3 AbsVec = abs(Vec);
    float LocalZcomp = max(AbsVec.x, max(AbsVec.y, AbsVec.z));
    // Replace f and n with the far and near plane values you used when
    // you drew your cube map.
    const float f = 2048.0;
    const float n = 1.0;
    float NormZComp = (f+n) / (f-n) - (2*f*n)/(f-n)/LocalZcomp;
    return (NormZComp + 1.0) * 0.5;
}
float LightDepth = VectorToDepth(fragPosition - lights[i].position);
float depth_compare = texture(shadowCube, vec4(normalFragmentToLight[0], LightDepth));
* Code for VectorToDepth borrowed from Omnidirectional shadow mapping with depth cubemap
Now depth_compare will be a value between 0.0 (completely in shadow) and 1.0 (completely out of shadow). If you have linear texture filtering enabled, the hardware will sample the depth at 4 points and may give you a form of 2x2 PCF filtering. If you have nearest texture filtering, then it will either be 1.0 or 0.0.
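If you want that PCF behavior, change the filters in your setup code from GL_NEAREST to GL_LINEAR; with GL_TEXTURE_COMPARE_MODE enabled, filtering averages the comparison results rather than the raw depth values:
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR);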

Orthographic Projection with OpenGL and how to implement camera or object movement in space

I have made a cube with a display list using GL_POLYGON. I have initialised it at the origin of the coordinate system, that is, at (0,0,0). In my display function, which is called via glutDisplayFunc, I use this code:
glLoadIdentity();
glOrtho(0,0,0,0,1,1);
glMatrixMode(GL_MODELVIEW);
I want to use orthographic projection using glOrtho. Well, my question is: is it normal that I can still see my cube, considering that my window size is 600x600?
What's more, I would like some guidelines on how to move my cube or my camera with the relevant OpenGL functions. Let's say I would like to move my camera back (along the z axis) or my cube to the front (along the -z axis). How can I do that?
First off, you also need to set glMatrixMode() to GL_PROJECTION before you call glOrtho(), so it would look like this instead. (Also note that glOrtho(0,0,0,0,1,1) is invalid: left must not equal right and bottom must not equal top, or the call generates GL_INVALID_VALUE and changes nothing, which is one reason you can still see your cube.)
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(...); // Replace ... with your values
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
To move the scene you can simply call one or more of the following functions:
glTranslate*()
glRotate*()
glScale*()
You can click the above links to read what each function does and how to use it. But basically:
glTranslate*() translates/moves the currently selected matrix.
glRotate*() rotates the currently selected matrix.
glScale*() scales the currently selected matrix.
You can also use glPushMatrix() and glPopMatrix() to push and pop the current matrix stack.
Extra
Also be aware that you're using old, deprecated functions. You shouldn't use them; instead you're now supposed to calculate and manage your own matrix stack.
Edit
Camera & Objects
Basically you do that by combining the above functions. It might sound harder than it actually is.
I will create an example of 1 camera and 2 objects, basically to give you the idea of how it works.
void render()
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glLoadIdentity();
    // The Camera Rotations & Translation
    glRotatef(camera_pitch, -1.0f, 0.0f, 0.0f);
    glRotatef(camera_yaw, 0.0f, 1.0f, 0.0f);
    glTranslatef(-camera_x, -camera_y, -camera_z);
    // Object 1
    glPushMatrix();
    glRotatef(...);
    glTranslatef(...);
    // Render Object 1
    glPopMatrix();
    // Object 2
    glPushMatrix();
    glRotatef(...);
    glTranslatef(...);
    // Render Object 2
    glPopMatrix();
}
Again replace the ... with your own values.
The reason why we need to translate by the negated camera coordinates is that we aren't really moving a camera; we are actually "pushing" (translating, etc.) everything away from the camera/center (thereby the camera is at the center at all times).
Important: the order in which you rotate then translate, or translate then rotate, matters. For the camera transformations you always need to rotate first, then translate.
Edit
gluLookAt ?
gluLookAt does the same thing as my example.
Example:
// The Camera Rotations & Translation
glRotatef(camera_pitch, -1.0f, 0.0f, 0.0f);
glRotatef(camera_yaw, 0.0f, 1.0f, 0.0f);
glTranslatef(-camera_x, -camera_y, -camera_z);
This is my own function, which does the same as gluLookAt (assuming an up vector of (0, 1, 0)). How do I know? Because I looked at the original gluLookAt function, and then I made the following function.
void lookAt(float eyex, float eyey, float eyez, float centerx, float centery, float centerz)
{
    float dx = eyex - centerx;
    float dy = eyey - centery;
    float dz = eyez - centerz;
    float pitch = (float) Math.atan2(dy, Math.sqrt(dx * dx + dz * dz));
    float yaw = (float) Math.atan2(dz, dx);
    pitch = -pitch;
    yaw = yaw - 1.57079633f;
    // Here you could call glLoadIdentity() if you want to reset the matrix
    // glLoadIdentity();
    glRotatef(Math.toDegrees(pitch), -1f, 0f, 0f);
    glRotatef(Math.toDegrees(yaw), 0f, 1f, 0f);
    glTranslatef(-eyex, -eyey, -eyez);
}
You might need to change the Math.* calls, since the above code isn't written in C.
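For reference, a direct C translation might look like this (an untested sketch; it assumes <math.h> and the GL headers are included):
#include <math.h>

void lookAt(float eyex, float eyey, float eyez,
            float centerx, float centery, float centerz)
{
    const float RAD2DEG = 57.29577951f; /* 180 / pi */
    float dx = eyex - centerx;
    float dy = eyey - centery;
    float dz = eyez - centerz;
    /* Same math as above, with the pitch sign flip folded in. */
    float pitch = -atan2f(dy, sqrtf(dx * dx + dz * dz));
    float yaw = atan2f(dz, dx) - 1.57079633f; /* minus pi/2 */
    glRotatef(pitch * RAD2DEG, -1.0f, 0.0f, 0.0f);
    glRotatef(yaw * RAD2DEG, 0.0f, 1.0f, 0.0f);
    glTranslatef(-eyex, -eyey, -eyez);
}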

Lorenz Attractor in OpenGL

I am trying to model the Lorenz attractor in 3D space using OpenGL. I have written the following code in my display function:
void display()
{
    // Clear the image
    glClear(GL_COLOR_BUFFER_BIT);
    // Reset previous transforms
    glLoadIdentity();
    // Set view angle
    glRotated(ph, 1, 0, 0);
    glRotated(th, 0, 1, 0);
    glColor3f(1, 1, 0);
    glPointSize(1);
    float x = 0.1, y = 0.1, z = 0.1;
    glBegin(GL_POINTS);
    int i;
    for (i = 0; i < initialIterations; i++) {
        // compute a new point using the strange attractor equations
        float xnew = sigma*(y-x);
        float ynew = x*(r-z) - y;
        float znew = x*y - b*z;
        // save the new point
        x = x + xnew*dt;
        y = y + ynew*dt;
        z = z + znew*dt;
        glVertex4f(x, y, z, i);
    }
    glEnd();
    // Draw axes in white
    glColor3f(1, 1, 1);
    glBegin(GL_LINES);
    glVertex3d(0, 0, 0);
    glVertex3d(1, 0, 0);
    glVertex3d(0, 0, 0);
    glVertex3d(0, 1, 0);
    glVertex3d(0, 0, 0);
    glVertex3d(0, 0, 1);
    glEnd();
    // Label axes
    glRasterPos3d(1, 0, 0);
    Print("X");
    glRasterPos3d(0, 1, 0);
    Print("Y");
    glRasterPos3d(0, 0, 1);
    Print("Z");
    // Display parameters
    glWindowPos2i(5, 5);
    Print("View Angle=%d,%d %s", th, ph, text[mode]);
    // Flush and swap
    glFlush();
    glutSwapBuffers();
}
However, I can't get the right attractor. I believe my equations for x, y, z are correct; I am just not sure how to display it the right way to get the correct attractor. Thanks for any help. Below is what my program currently puts out:
Okay, so I had this problem too, and there are a few things you want to do.
First off, when you go to draw the point with glVertex4f(), you want to either change it to glVertex3f or change your w value to 1 (glVertex3f sets w to 1 by default). The w value divides the point's coordinates, so with i running up to 50000 or so you end up with points scaled by wildly different amounts.
Second, after fixing that, you're going to find that the values are way outside your visual range, so you need to scale them down. I would do this at the time you draw the points, so in your case I would use glVertex3f(x*.05, y*.05, z*.05). If .05 is too large or too small, adjust it to fit your needs.
Finally, make sure that your dt value is .001 and that your starting point is around 1 for x, y, and z.
Then ideally you want to put all these points in an array and read that array to draw your points, instead of redoing the calculations every time display is called; there's a sketch of this below. So do your calculations elsewhere and just send the points to display. Hope this helped.
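A minimal sketch of that idea, assuming sigma, r, b, and dt are the same globals as in the question (N and computeAttractor are names made up here): fill the array once at startup, then display only walks it.
#define N 50000

static float points[N][3];

/* Call once, e.g. from your init code, before the first display(). */
void computeAttractor(void)
{
    float x = 1.0f, y = 1.0f, z = 1.0f;
    for (int i = 0; i < N; i++) {
        float dx = sigma * (y - x);
        float dy = x * (r - z) - y;
        float dz = x * y - b * z;
        x += dx * dt;
        y += dy * dt;
        z += dz * dt;
        /* Store pre-scaled so the drawing loop stays trivial. */
        points[i][0] = x * 0.05f;
        points[i][1] = y * 0.05f;
        points[i][2] = z * 0.05f;
    }
}
In display(), the point drawing then reduces to glBegin(GL_POINTS); for (int i = 0; i < N; i++) glVertex3fv(points[i]); glEnd();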
