I am trying to draw the intersection between two GLUT objects. I managed to draw each object separately, but I was wondering: is it possible to draw only the intersection of the two?
My code below draws a solid sphere and a solid cube:
/* draw a solid sphere */
glTranslatef( 0.0, 0.0, 30.0 );
glutSolidSphere(30, 12, 6);
/* draw a solid cube */
glTranslatef( 0.0, 0.0, 30.0 );
glutSolidCube(30);
Since OpenGL is not a scene graph (i.e. it doesn't maintain any kind of scene representation) but only draws simple primitives (points, lines, triangles) one at a time, this is not directly possible. There are, however, methods to do this in image space, using multipass stencil buffer trickery. Here's a nice explanation: ftp://ftp.sgi.com/opengl/contrib/blythe/advanced99/notes/node22.html
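Below is a rough sketch of one pass of that stencil technique, in the spirit of the SGI notes linked above: it renders the part of object A's surface that lies inside object B, by counting how many of B's surfaces sit in front of A's depth values. A second, symmetric pass with A and B swapped completes the intersection, and the color buffer should be cleared once before the two passes. The drawA/drawB callbacks are hypothetical stand-ins for your glutSolidSphere/glutSolidCube calls; the window must be created with a stencil buffer (e.g. GLUT_STENCIL in glutInitDisplayMode), and both objects are assumed to be closed surfaces.
/* Pass 1 of image-space CSG intersection: draw the part of A inside B. */
void drawAInsideB(void (*drawA)(void), void (*drawB)(void))
{
    /* 1. Lay down the depth of A's front-facing surface, color writes off. */
    glClear(GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    glEnable(GL_DEPTH_TEST);
    glEnable(GL_CULL_FACE);
    glCullFace(GL_BACK);
    drawA();

    /* 2. Toggle the stencil for every face of B (front and back) that is
     * closer than A's surface. For a closed B, the low stencil bit ends
     * up 1 exactly where A's surface is inside B (odd crossing count). */
    glDepthMask(GL_FALSE);
    glDisable(GL_CULL_FACE);
    glEnable(GL_STENCIL_TEST);
    glStencilFunc(GL_ALWAYS, 0, 0x1);
    glStencilOp(GL_KEEP, GL_KEEP, GL_INVERT);
    drawB();

    /* 3. Re-draw A's front faces, but only where the stencil says "inside B". */
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glDepthMask(GL_TRUE);
    glEnable(GL_CULL_FACE);
    glStencilFunc(GL_EQUAL, 1, 0x1);
    glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
    glDepthFunc(GL_EQUAL);
    drawA();

    /* Restore state for the next pass. */
    glDepthFunc(GL_LESS);
    glDisable(GL_STENCIL_TEST);
}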
If you look at this picture:
You can see that the left and right walls are brighter than the others, along with the faces of the chair.
I was wondering: is this an issue with the normals, or could it simply be the position of the light illuminating these surfaces?
In my main method I just do this:
// enable lighting
glEnable(GL_LIGHTING);
glEnable(GL_LIGHT0);
// set up the lighting (note: despite its name, lightColor is passed to
// glMaterialfv, so it actually sets the material color)
float lightColor[] = {1.0f, 0.8f, 0.8f, 1.0f};
glMaterialfv(GL_FRONT_AND_BACK, GL_AMBIENT_AND_DIFFUSE, lightColor);
GLfloat lightpos[] = {2, 2, 4, 4};
glLightfv(GL_LIGHT0, GL_POSITION, lightpos);
If you need to see the normals I can upload them, but I'm not sure whether they are the problem.
It seems your normals are not computed as they should be. Notice how sides of different objects that face the same direction are lit differently.
I would guess that:
you are not transforming the normals correctly when transforming your objects;
your normals are not normalized to unit length (do you have glEnable(GL_NORMALIZE) in your code?); or
the normals are computed incorrectly in some other way (e.g. you round the values before sending them to the renderer).
It is hard to suggest more possible causes without seeing your actual code.
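If it helps, here is a minimal sketch of what well-formed normals look like in immediate mode; the quad stands in for one of your walls (its geometry is made up for illustration), and glEnable(GL_NORMALIZE) makes OpenGL renormalize normals after scaling transforms:
glEnable(GL_NORMALIZE);             /* renormalize after glScalef & friends */

glBegin(GL_QUADS);
    glNormal3f(0.0f, 0.0f, 1.0f);   /* one unit-length normal per flat face */
    glVertex3f(-1.0f, -1.0f, 0.0f);
    glVertex3f( 1.0f, -1.0f, 0.0f);
    glVertex3f( 1.0f,  1.0f, 0.0f);
    glVertex3f(-1.0f,  1.0f, 0.0f);
glEnd();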
I am new to OpenGL, and I am having trouble displaying a simple cube on the screen. The problem is that sides of the cube that should be hidden behind other sides still show through. I feel the answer should be to enable GL_DEPTH_TEST, but doing that turns the screen into a completely white canvas with nothing on it. Here's a sample from a run I have done:
Each side is just a random color.
Here is a snippet of my code:
glutInit(&argc, argv);
glutInitWindowSize(600, 400);
glutInitDisplayMode(GLUT_RGB | GLUT_SINGLE | GLUT_DEPTH);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(50.0, 1.5, 1.0f, 100.0);
gluLookAt(10.0, 5.0, 2.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0);
//glEnable(GL_DEPTH_TEST);
glMatrixMode(GL_MODELVIEW);
glClearColor(1.0, 1.0, 1.0, 1.0); /* white */
I have commented out glEnable(GL_DEPTH_TEST) for now.
What else should I be doing so that there is no overlapping on this cube?
Thank you for any help!
Enable GL_DEPTH_TEST, and clear the depth buffer before rendering with glClear(GL_DEPTH_BUFFER_BIT);.
If you're already calling glClear(GL_COLOR_BUFFER_BIT);, the two can be combined as glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
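For example, a minimal display callback for the single-buffered mode from the question (GLUT_RGB | GLUT_SINGLE | GLUT_DEPTH) might look like this; drawCube() is a hypothetical stand-in for your cube-drawing code:
void display(void)
{
    glEnable(GL_DEPTH_TEST);                            /* can also go in your init code */
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); /* clear color AND depth */
    drawCube();                                         /* your cube-drawing code */
    glFlush();                                          /* glutSwapBuffers() if double-buffered */
}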
In addition to what @immibis said about depth buffers, you can also turn on back-face culling and make sure that you draw your primitives facing the correct direction. To do that:
glEnable (GL_CULL_FACE);
glCullFace (GL_BACK);
glFrontFace (GL_CW); // This says you define your primitives in clockwise order
That causes faces pointing away from the camera not to be drawn, which can improve performance and can eliminate this particular problem as well. (But you probably want to use a depth buffer too.)
I have a program that renders a 3D wire mesh model using this code fragment in a loop.
glBegin(GL_LINES);
glColor3f(0.0f, 0.0f, 0.0f);
glVertex3d(xs, ys, zs);
glVertex3d(xe, ye, ze);
glEnd();
I need to add functionality so that the vertices where a line starts and ends can be rendered if the user desires, probably as small shaded circles. Each circle should be a constant screen size, probably 4-6 pixels across, rendered at a size independent of where the camera is or how close it is.
Can anyone suggest how to render such a vertex?
You can use GL_POINTS in your glBegin together with the glPointSize function.
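A minimal sketch, reusing the xs/ys/zs and xe/ye/ze variables from the question; GL_POINT_SMOOTH (plus blending) turns the default square points into the small circles you describe:
glPointSize(5.0f);            /* ~4-6 pixels across, independent of camera distance */
glEnable(GL_POINT_SMOOTH);    /* round points instead of squares */
glEnable(GL_BLEND);           /* point smoothing needs blending to look right */
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

glBegin(GL_POINTS);
    glColor3f(0.3f, 0.3f, 0.3f);  /* a gray dot; pick any color */
    glVertex3d(xs, ys, zs);
    glVertex3d(xe, ye, ze);
glEnd();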
There have been many tutorials, each suggesting gluPerspective or glFrustum in combination with other things, yet I've had difficulty setting up the right matrix. What code do I need to set up a 45° perspective view looking down the +z axis?
So far I have:
glShadeModel(GL_SMOOTH);
glClearColor(0,0,0,0);
glClearDepth(1);
glDepthFunc(GL_LEQUAL);
glViewport(0,0,width,height);
glEnable(GL_DEPTH_TEST);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(45,1,0.1,100);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
But that doesn't seem to work. All I get is a black screen when I attempt to draw things.
EDIT: Here's the minimal drawing code:
glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT);
glColor3ub(255,255,255);
glBegin(GL_TRIANGLE_STRIP);
glVertex3f(20,20,20);
glVertex3f(20,30,20);
glVertex3f(30,20,20);
glVertex3f(30,30,20);
glEnd();
I have also tried points such as (1,1,1) and (2,50,23); they do not appear either.
Well, there's your problem. The default OpenGL camera has the +Z axis pointing towards the camera, and since the camera sits at Z=0, any position whose Z coordinate is greater than 0 is behind the camera.
Move your points in front of the camera; they need to have a negative Z position.
Your vertex coordinates lie way outside the viewing volume. First, OpenGL by default "looks" down the negative Z axis, so your Z coordinates must satisfy -100 < z < -0.1 for your chosen near and far clip planes.
But even if you flipped the sign of the Z coordinate, your vertices would still lie outside the 45° FOV: (20, 0, 20) is 45° away from the viewing axis, and (30, 0, 20) even farther. Try centering your vertex coordinates around (0, 0, -5), e.g. (±1, ±1, -5).
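For instance, with the gluPerspective(45,1,0.1,100) setup from the question and an identity modelview matrix, a quad centered on (0, 0, -5) does show up:
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glColor3ub(255, 255, 255);
glBegin(GL_TRIANGLE_STRIP);
    glVertex3f(-1.0f, -1.0f, -5.0f);  /* negative Z: in front of the camera */
    glVertex3f(-1.0f,  1.0f, -5.0f);
    glVertex3f( 1.0f, -1.0f, -5.0f);
    glVertex3f( 1.0f,  1.0f, -5.0f);
glEnd();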
How can I draw a single pixel at window coordinates (5, 5)? By (5, 5) I mean exactly the fifth row and fifth column.
I find it very hard to draw things using screen coordinates; all coordinates in OpenGL are relative, usually ranging from -1.0 to 1.0. Why are programmers prevented from using screen/window coordinates?
The simplest way is probably to set the projection to match the pixel dimensions of the rendering space via glOrtho. Then vertices can be in pixel coordinates. The downside is that resizing the window could cause problems and you're mostly wasting the accelerated transforms.
Assuming a window that is 640x480:
// You can reverse the 0, 480 arguments depending on your Y-axis
// direction preference
glOrtho(0, 640, 0, 480, -1, 1);
Frame buffer objects and textures are another avenue, but you'll have to create your own rasterization routines (draw line, circle, bitmap, etc.). There are probably libs for this.
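Putting it together, a minimal sketch for a 640x480 window, with (0, 0) at the top-left corner as in most 2D APIs:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0, 640, 480, 0, -1, 1);   /* bottom/top swapped => top-left origin */
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();

/* One point at pixel (5, 5); the 0.5 offsets center the point
 * on the pixel rather than on its corner. */
glBegin(GL_POINTS);
    glVertex2f(5.5f, 5.5f);
glEnd();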
@dandan78 OpenGL is not a vector graphics renderer; it is a rasterizer. More precisely, it is a standard described by means of a C language interface. A rasterizer maps objects represented in 3D coordinate spaces (a car, a tree, a sphere, a dragon) onto a 2D coordinate space (say a plane, your app window or your display), where the 2D coordinates belong to a discrete coordinate plane. The counterpart rendering method to rasterization is ray tracing.
Vector graphics is a way to represent, by means of mathematical functions, a set of curves, lines or similar geometric primitives in a non-discrete way. So vector graphics belongs to the "model representation" field rather than the "rendering" field.
You can just change the "camera" to make 3D coordinates match screen coordinates by setting the modelview matrix to identity and the projection to an orthographic projection (see my answer on this question). Then you can just draw a single point primitive at the required screen coordinates.
You can also set the raster position with glWindowPos (which works in screen coordinates, unlike glRasterPos) and then just use glDrawPixels to draw a 1x1 pixel image.
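A minimal sketch of that second approach (glWindowPos2i requires OpenGL 1.4), drawing one white pixel at (5, 5) measured from the window's lower-left corner:
const GLubyte white[4] = { 255, 255, 255, 255 };
glWindowPos2i(5, 5);
glDrawPixels(1, 1, GL_RGBA, GL_UNSIGNED_BYTE, white);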
glEnable( GL_SCISSOR_TEST );
glScissor( 5, 5, 1, 1 );                  // position of the pixel
glClearColor( 1.0f, 1.0f, 1.0f, 0.0f );   // color of the pixel
glClear( GL_COLOR_BUFFER_BIT );
glDisable( GL_SCISSOR_TEST );
By changing the last two arguments of glScissor you can also draw a pixel-perfect rectangle.
I did a bit of 3D programming several years back and, while I'm far from an expert, I think you are overlooking a very important difference between classical bitmapped DrawPixel(x, y) graphics and the type of graphics done with Direct3D and OpenGL.
Back in the days before 3D, computer graphics was mostly about bitmaps, which is to say collections of colored dots. These dots had a 1:1 relationship with the pixels on your monitor.
However, that had numerous drawbacks, including making 3D very difficult and requiring bitmaps of different sizes for different display resolutions.
In OpenGL/D3D, you are dealing with vector graphics. Lines are defined by points in a 3-dimensional coordinate space, shapes are defined by lines and so on. Surfaces can have textures, lights can be added, as can various types of lighting effects etc. This entire scene, or a part of it, can then be viewed through a virtual camera.
What you 'see' through this virtual camera is a projection of the scene onto a 2D surface. We're still dealing with vector graphics at this point. However, since computer displays consist of discrete pixels, this vector image has to be rasterized, which transforms the vector data into a bitmap with actual pixels.
To summarize, you can't use screen/window coordinates because OpenGL is based on vector graphics.
I know I'm very late to the party, but just in case someone has this question in the future: I converted screen coordinates to OpenGL's normalized device coordinates (the -1.0 to 1.0 range mentioned above) using these:
double converterX (double x, int window_width) {
    // map [0, window_width] to [-1, 1], left to right
    return 2 * (x / window_width) - 1;
}
double converterY (double y, int window_height) {
    // map [0, window_height] to [1, -1]; the sign flip accounts for
    // screen Y growing downwards while OpenGL Y grows upwards
    return -2 * (y / window_height) + 1;
}
These are basically rescaling functions.
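For example, with a hypothetical 640x480 window:
double glx = converterX(320.0, 640);  /* 0.0 -> horizontal center */
double gly = converterY(120.0, 480);  /* 0.5 -> upper half (Y is flipped) */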