No display (transparent window) with OpenGL 2 and primusrun on single buffer

I’m trying to make a minimalist OpenGL program to run on both my Intel chipset (Mesa) and NVIDIA card through Bumblebee (Optimus).
My source code (using FreeGLUT):
#include <GL/freeglut.h>
void display(void);
void resized(int w, int h);
int main(int argc, char** argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_RGBA | GLUT_SINGLE);
    glutInitContextVersion(2, 1);
    glutInitContextProfile(GLUT_CORE_PROFILE);
    glutInitWindowSize(640, 480);
    glutCreateWindow("Hello, triangle!");
    glutReshapeFunc(resized);
    glutDisplayFunc(display);
    glClearColor(0.3, 0.3, 0.3, 1.0);
    glutMainLoop();
    return 0;
}

void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT);
    glColor3f(1.0, 1.0, 1.0);
    glBegin(GL_TRIANGLES);
    glVertex3f(0, 0.75, 0.0);
    glVertex3f(-0.75, -0.75, 0.0);
    glVertex3f(0.75, -0.75, 0.0);
    glEnd();
    glFlush();
}

void resized(int w, int h)
{
    glViewport(0, 0, w, h);
    glutPostRedisplay();
}
When I launch the program directly (./a.out) on the Intel chipset, everything works. I don't have that luck with primusrun ./a.out, which displays a transparent window:
It is not really transparent; the image behind it stays even when I move the window.
What's interesting is that when I switch to a double color buffer (using GLUT_DOUBLE instead of GLUT_SINGLE, and glutSwapBuffers() instead of glFlush()), it works both on Intel and under primusrun.
Here's my glxinfo: http://pastebin.com/9DADif6X
and my primusrun glxinfo: http://pastebin.com/YCHJuWAA
Am I doing it wrong or is it a Bumblebee-related bug?

The window is probably not really transparent; it probably just shows whatever was beneath it when it appeared. Try moving it around and watch whether it "drags" the picture along.
When using a compositor, single-buffered windows are a bit tricky, because there's no cue for the compositor to know when the program is done rendering. A double-buffered window performing a buffer swap does give the compositor that additional information.
In addition, to finish a single-buffered drawing you call glFinish, not glFlush; glFinish also acts as a cue that drawing has been, well, finished.
Note that there's little use for single-buffered drawing these days. The only argument against double buffering used to be the lack of available graphics memory; in times where GPUs have several hundred megabytes of RAM, that is no longer a weighty argument.
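Concretely, the two workable variants look like this (a sketch against the question's code): either keep GLUT_SINGLE and end display() with glFinish() instead of glFlush(), or go double buffered:

glutInitDisplayMode(GLUT_RGBA | GLUT_DOUBLE); /* instead of GLUT_SINGLE */

void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT);
    /* ... draw the triangle as before ... */
    glutSwapBuffers(); /* instead of glFlush(); the swap cues the compositor */
}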

Related

Infinite Loop on OpenGL

I decided that whenever the user tries to resize the window, it must snap back to a preset size, which makes my life easier for drawing the graph nodes. On Mac my application works properly, but on Linux the resize function enters an infinite loop, and I don't know why. After a number of loop iterations I get a Segmentation fault (core dumped).
Here's my OpenGL configuration (main function):
glutInit(&argc, argv);
glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB | GLUT_DEPTH);
glutInitWindowSize(WINDOW_WIDTH, WINDOW_HEIGHT);
glutInitWindowPosition(100, 100);
glutCreateWindow(APP_NAME);
glClearColor(0.0, 0.0, 0.0, 0.0);        // black background
glMatrixMode(GL_PROJECTION);             // setup viewing projection
glLoadIdentity();                        // start with identity matrix
glOrtho(0.0, 50.0, 0.0, 50.0, 0.0, 0.1); // setup a 50x50 viewing world
glutDisplayFunc(display);
glutReshapeFunc(resize);
glutMainLoop();
And here are my display and resize function implementations:
void display() {
    Matrix* distanceMatrix = NULL;
    PalleteNodePosition* nodesPositions = NULL;
    distanceMatrix = fromFile(inputFileName);
    printf("Finish input parsing...\n");
    nodesPositions = calculateNodesPositions(distanceMatrix);
    printf("Finish calculating nodes position on screen...\n");
    glClear(GL_COLOR_BUFFER_BIT);
    drawNodes(nodesPositions, distanceMatrix->width);
    drawLink(10, 10, 18, 18);
    glFlush();
}

void resize(int w, int h) {
    glutReshapeWindow(WINDOW_WIDTH, WINDOW_HEIGHT);
}
When I print the values inside the resize callback, w equals WINDOW_WIDTH and h equals WINDOW_HEIGHT as expected, so why does the app keep resizing the window every time?
Your resize() callback will be indirectly called by itself, even if in an asynchronous way: you ask the windowing system to resize your window, then later you receive the event that says your window has been resized, then your callback is triggered, which leads to a new resize request...
If nobody stops this loop (apparently the windowing system does not detect that the resize is not actually needed), it is infinite.
Maybe you should consider comparing w and h to the expected values inside your resize() callback, and only invoke glutReshapeWindow() if it is actually needed?
You should also be aware that the inner size and the outer size of the window are probably different (border, title bar...).
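A minimal sketch of such a guard (bearing in mind the caveat just mentioned about inner vs. outer size):

void resize(int w, int h) {
    /* only request a resize when the size actually deviates,
       which breaks the event feedback loop */
    if (w != WINDOW_WIDTH || h != WINDOW_HEIGHT)
        glutReshapeWindow(WINDOW_WIDTH, WINDOW_HEIGHT);
}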

Opengl hide parts of the screen

(Code snippet: I know it's ugly, but I wanted to make it work before making it better, so please don't pay too much attention to the structure.)
I slightly modified the glfw example present in the documentation to have a triangle that rotates when pressing the right arrow key and draws a circle described by the position of one of its vertices (the blue one in this case).
I clear GL_COLOR_BUFFER_BIT only when initializing the window, to avoid having to store all the coordinates needed to draw the line (there would be hundreds of thousands in the final program). That means that every time I press the right arrow, a "copy" of the triangle is drawn rotated by 12 degrees, and a line is drawn connecting the old blue vertex position to the new one.
The problem now is that I want to be able to press the escape key (GLFW_KEY_ESCAPE) and "delete" the triangles while keeping the lines drawn.
I tried using a z-buffer to hide the triangles behind a black rectangle, but then only the last line drawn is visible (I think this is because OpenGL doesn't know the z of the previous lines, since I don't store them).
Is there a way to do what I want without having to store all the point coordinates and then clearing the whole screen and redrawing only the lines? If this is the case, what would be the best way to store them?
Here is part of the code I have so far:
bool check = 0;
Vertex blue  = { 0.f,   0.6f, 0.5f};
Vertex green = { 0.6f, -0.4f, 0.5f};
Vertex red   = {-0.6f, -0.4f, 0.5f};
Vertex line  = { 0.f,   0.6f, 0.f};
Vertex line2 = { 0.f,   0.6f, 0.f};

static void
key_callback(GLFWwindow *window, int key, int scancode, int action, int mods) {
    if (key == GLFW_KEY_ESCAPE && action == GLFW_PRESS)
        check = !check;
    if (key == GLFW_KEY_RIGHT && action == GLFW_PRESS) {
        line.x = line2.x;
        line.y = line2.y;
        rotation -= 12;
        rad = DegToRad(-12);
        double x = line.x * cos(rad) - line.y * sin(rad);
        double y = line.y * cos(rad) + line.x * sin(rad);
        line2.x = x;
        line2.y = y;
    }
}
int main(void) {
    GLFWwindow *window;
    glfwSetErrorCallback(error_callback);
    if (!glfwInit())
        exit(EXIT_FAILURE);
    window = glfwCreateWindow(1280, 720, "Example", NULL, NULL);
    if (!window) {
        glfwTerminate();
        exit(EXIT_FAILURE);
    }
    glfwMakeContextCurrent(window);
    glfwSetKeyCallback(window, key_callback);
    glClear(GL_COLOR_BUFFER_BIT);
    while (!glfwWindowShouldClose(window)) {
        glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);
        float ratio;
        int width, height;
        glfwGetFramebufferSize(window, &width, &height);
        ratio = width / (float) height;
        glViewport(0, 0, width, height);
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        glOrtho(-ratio, ratio, -1.f, 1.f, 1.f, -1.f);
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
        glRotatef(rotation, 0.f, 0.f, 1.f);
        glBegin(GL_TRIANGLES);
        glColor3f(1.f, 0.f, 0.f);
        glVertex3f(red.x, red.y, red.z);
        glColor3f(0.f, 1.f, 0.f);
        glVertex3f(green.x, green.y, green.z);
        glColor3f(0.f, 0.f, 1.f);
        glVertex3f(blue.x, blue.y, blue.z);
        glEnd();
        glLoadIdentity();
        glLineWidth(1.0);
        glColor3f(1.0, 0.0, 0.0);
        glBegin(GL_LINES);
        glVertex3f(line.x, line.y, line.z);
        glVertex3f(line2.x, line2.y, line2.z);
        glEnd();
        if (check) {
            //hide the triangles but not the lines
        }
        glfwSwapBuffers(window);
        glfwPollEvents();
    }
    glfwDestroyWindow(window);
    glfwTerminate();
    exit(EXIT_SUCCESS);
}
I clear the GL_COLOR_BUFFER_BIT only when initializing the window
That's your problem right there. It's idiomatic in OpenGL to always start with a clear of the main framebuffer's color bits, because you don't know the state of your window's main framebuffer when the operating system asks for a redraw. For all you know it could have been replaced with cat pictures in the background without your program knowing. Seriously: if you have a cat video running and the OS feels the need to rearrange your window's framebuffer memory, that is what you might end up with.
Is there a way to do what i want without having to store all the point coordinates and then clearing the whole screen and redrawing only the lines?
For all intents and purposes: no. In theory one could come up with a contraption made out of a convoluted series of stencil buffer operations to implement that, but this would be barking up a very wrong tree.
Here's something for you to try out: draw a bunch of triangles like you do, then resize your window down so that nothing remains, then resize it back to its original size… you see the problem? There's a way to address this particular problem, but that's not what you should do here.
The correct thing is to redraw everything. If you feel that that's too slow, you have to optimize your drawing process. On current-generation hardware it's possible to churn out on the order of 100 million triangles per second.
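As an illustrative sketch of that store-and-redraw approach (assuming <stdlib.h> for realloc and reusing the question's Vertex type; the helper name and growth policy are made up here):

/* grow a flat array of blue-corner positions; redraw the whole trail every frame */
static Vertex *trail = NULL;
static size_t trailLen = 0, trailCap = 0;

static void trail_push(Vertex v) {
    if (trailLen == trailCap) {
        trailCap = trailCap ? trailCap * 2 : 64; /* amortized O(1) growth */
        trail = realloc(trail, trailCap * sizeof *trail);
    }
    trail[trailLen++] = v;
}

/* in the render loop: clear, draw the triangle only while !check, then the trail */
glClear(GL_COLOR_BUFFER_BIT);
glBegin(GL_LINE_STRIP);
for (size_t i = 0; i < trailLen; ++i)
    glVertex3f(trail[i].x, trail[i].y, trail[i].z);
glEnd();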

Drawing a 2D texture in OpenGL [closed]

I had a drawing function called DrawImage, but it's really confusing and only works with one specific form of the reshape function, so I have two questions:
How do I draw a texture in OpenGL? I just want to create a function that takes a texture, x, y, width, height, and maybe an angle, and draws it according to those arguments. I want to draw it as a GL_QUAD as usual, but I'm not sure how to do that anymore .-. People say I should use SDL or SFML to do so; is that recommended? If it is, can you give me a simple function that loads a texture and one that draws it? I'm currently using SOIL to load textures.
The function is as follows:
void DrawImage(char filename, int xx, int yy, int ww, int hh, int angle)
{
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, filename);
    glLoadIdentity();
    glTranslatef(xx, yy, 0.0);
    glRotatef(angle, 0.0, 0.0, 1.0);
    glTranslatef(-xx, -yy, 0.0);
    // Draw a textured quad
    glBegin(GL_QUADS);
    glTexCoord2f(0, 0); glVertex2f(xx, yy);
    glTexCoord2f(0, 1); glVertex2f(xx, yy + hh);
    glTexCoord2f(1, 1); glVertex2f(xx + ww, yy + hh);
    glTexCoord2f(1, 0); glVertex2f(xx + ww, yy);
    glDisable(GL_TEXTURE_2D);
    glPopMatrix();
    glMatrixMode(GL_PROJECTION);
    glPopMatrix();
    glMatrixMode(GL_MODELVIEW);
    glEnd();
}
Someone told me that you can't call glDisable, glPopMatrix, or glMatrixMode between glBegin and glEnd. The problem is, the code won't work without them. Any idea how to make it work without them?
2. About glutReshapeFunc: the documentation says it takes a pointer to a function with two args, width and height. So far I've written a function that takes void; any idea how to write a reshape function that takes a width and height and actually does what reshape needs to do?
And one minor question: how much better is C++ than C when it comes to GUIs like OpenGL? As far as I can see, only OOP matters, and I haven't run into any problem that OOP could solve and C couldn't (in OpenGL, I mean).
No need to answer all of the questions; question number 1 is basically the most important to me :P
Your DrawImage function looks pretty much just fine. Although, yes, you shouldn't be calling glMatrixMode etc. before glEnd, so remove them. I believe the issue is simply with setting up your projection matrix, and the added calls just happen to mask an issue that shouldn't be there in the first place. glutReshapeFunc is used to capture window resize events, so until you need it you don't have to use it.
SDL gives you a lot more control over events than glut, but takes a little longer to set up. GLFW is also a good alternative. I guess it's not that important to change unless you see a feature you need. These are libs to create a GL context and do some event handling; SOIL can be used with them all.
OpenGL is a graphics API that gives a common interface for doing hardware-accelerated 3D graphics; it is not a GUI lib, though there are GUI libs written on top of OpenGL.
Yes, I believe many take OOP to the extreme. I like the idea of C++ as a better C, rather than something that completely restructures the way you code. Maybe just keep using C, but with a C++ compiler; when you see a feature you like, use it. Eventually you may find you're using lots of them and have a better appreciation for why they exist and when to use them, rather than blindly following coding practices. Just IMO; this is all very subjective.
So, the projection matrix...
To draw stuff in 3D on a 2D screen, you "project" the 3D points onto a plane. I'm sure you've seen images like this:
This lets you define an arbitrary 3D coordinate system. For drawing stuff in 2D, however, it's natural to want to use pixel coordinates directly; after all, that's what your monitor displays. So you want a kind of bypass projection that doesn't do any perspective scaling and matches pixels in scale and aspect ratio.
The default projection (or "viewing volume") is an orthographic -1 to 1 cube. To change it:
glMatrixMode(GL_PROJECTION); //from now on all glOrtho, glTranslate etc. affect the projection matrix
glLoadIdentity();            //reset first, in case this runs more than once (e.g. in a resize handler)
glOrtho(0, widthInPixels, 0, heightInPixels, -1, 1);
glMatrixMode(GL_MODELVIEW);  //good to leave it in edit-modelview mode
Call this anywhere, really, but since the only inputs are the window width/height, it's normal to put it in some initialization code or, if you plan on resizing your window, in a resize event handler such as:
void reshape(int x, int y) {... do stuff with x/y ...}
...
glutReshapeFunc(reshape); //give glut the callback
This will make the lower left corner of the screen the origin and values passed to glVertex can now be in pixels.
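Filling in that callback with the projection setup above might look like this (a sketch):

void reshape(int x, int y)
{
    glViewport(0, 0, x, y);     /* map GL output to the new window size */
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();           /* reset before applying a fresh ortho */
    glOrtho(0, x, 0, y, -1, 1); /* pixel coordinates, origin bottom-left */
    glMatrixMode(GL_MODELVIEW);
}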
A couple more things: instead of glTranslatef(-xx, -yy, 0.0); you could just use glVertex2f(0, 0) etc. afterwards. Push/pop matrix calls should always be paired within a function, so the caller isn't expected to match them.
I'll finish with a full example:
#include <GL/glut.h>
#include <GL/gl.h>
#include <stdio.h>

int main(int argc, char** argv)
{
    //create GL context
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_RGBA);
    glutInitWindowSize(800, 600);
    glutCreateWindow("windowname");

    //create test checker image
    unsigned char texDat[64];
    for (int i = 0; i < 64; ++i)
        texDat[i] = ((i + (i / 8)) % 2) * 128 + 127;

    //upload to GPU texture
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, 8, 8, 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, texDat);
    glBindTexture(GL_TEXTURE_2D, 0);

    //match projection to window resolution (could be in reshape callback)
    glMatrixMode(GL_PROJECTION);
    glOrtho(0, 800, 0, 600, -1, 1);
    glMatrixMode(GL_MODELVIEW);

    //clear and draw quad with texture (could be in display callback)
    glClear(GL_COLOR_BUFFER_BIT);
    glBindTexture(GL_TEXTURE_2D, tex);
    glEnable(GL_TEXTURE_2D);
    glBegin(GL_QUADS);
    glTexCoord2i(0, 0); glVertex2i(100, 100);
    glTexCoord2i(0, 1); glVertex2i(100, 500);
    glTexCoord2i(1, 1); glVertex2i(500, 500);
    glTexCoord2i(1, 0); glVertex2i(500, 100);
    glEnd();
    glDisable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, 0);

    glFlush(); //don't need this with GLUT_DOUBLE and glutSwapBuffers
    getchar(); //pause so you can see what just happened
    //System("pause"); //I think this works on windows
    return 0;
}
If you're OK with using OpenGL 3.0 or higher, an easier way to draw a texture is glBlitFramebuffer(). It doesn't support rotation, only copying the texture to a rectangle within your framebuffer, scaling if necessary.
I haven't tested this code, but it would look something like this, with tex being your texture id:
GLuint readFboId = 0;
glGenFramebuffers(1, &readFboId);
glBindFramebuffer(GL_READ_FRAMEBUFFER, readFboId);
glFramebufferTexture2D(GL_READ_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, tex, 0);
glBlitFramebuffer(0, 0, texWidth, texHeight,
                  0, 0, winWidth, winHeight,
                  GL_COLOR_BUFFER_BIT, GL_LINEAR);
glBindFramebuffer(GL_READ_FRAMEBUFFER, 0);
glDeleteFramebuffers(1, &readFboId);
You can of course reuse the same FBO if you want to draw textures repeatedly. I only create/destroy it here to make the code self-contained.

What could cause polygons in OpenGL to be rendered out of order?

I'm trying to get some hands-on experience with OpenGL so I've been writing a few basic programs. The short program below is my first attempt at rendering a solid object --a rotating cube-- but for some reason some back polygons seem to be getting drawn over front polygons. My question is what could cause this? Does it have something to do with the depth buffer? I've found that enabling face culling will hide the effect in this case, but why should that be necessary? Shouldn't a face which is occluded by a nearer face be hidden regardless?
#include <GL/gl.h>
#include <GL/glu.h>
#include <GL/glut.h>
typedef struct {
    int width;
    int height;
    char * title;
} window;

window win;
float theta = 0;
const float rotRate = 0.05; //complete rotations per second
int lastTime;

const float verts[][3] = {
    {0.0, 0.0, 0.0},
    {1.0, 0.0, 0.0},
    {0.0, 1.0, 0.0},
    {0.0, 0.0, 1.0},
    {0.0, 1.0, 1.0},
    {1.0, 0.0, 1.0},
    {1.0, 1.0, 0.0},
    {1.0, 1.0, 1.0}};

const int faceIndices[][4] = {
    {3,5,7,4},  //front
    {1,0,2,6},  //back
    {4,7,6,2},  //top
    {0,1,5,3},  //bottom
    {5,1,6,7},  //right
    {0,3,4,2}}; //left
void display() {
    //timing and rotation
    int currentTime = glutGet(GLUT_ELAPSED_TIME);
    int dt = currentTime - lastTime;
    lastTime = currentTime;
    theta += (float)dt / 1000.0 * rotRate * 360.0;
    if (theta > 360.0) theta += -360.0;

    //draw
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glTranslatef(0.0, 0.0, -5.0);
    glRotatef(theta, 0.0, 1.0, 0.0);
    glTranslatef(-1.0, -1.0, -1.0);
    glScalef(2.0, 2.0, 2.0);
    int f;
    for (f = 0; f < 6; f++) {
        glBegin(GL_POLYGON);
        int v;
        for (v = 0; v < 4; v++) {
            glColor3fv(verts[faceIndices[f][v]]);
            glVertex3fv(verts[faceIndices[f][v]]);
        }
        glEnd();
    }
    glutSwapBuffers();
}
void initializeGLUT(int * argc, char ** argv) {
    glutInit(argc, argv);
    glutInitDisplayMode(GLUT_RGB | GLUT_DEPTH | GLUT_DOUBLE);
    glutInitWindowSize(win.width, win.height);
    glutCreateWindow("OpenGL Cube");
    glutDisplayFunc(display);
    glutIdleFunc(display);
}

void initializeGL() {
    //Setup Viewport matrix
    glViewport(0, 0, win.width, win.height);
    //Setup Projection matrix
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(45, (float) win.width / win.height, 0.1, 100.0);
    //Initialize Modelview matrix
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    //Other
    glClearColor(0.0, 0.0, 0.0, 0.0);
    glClearDepth(1.0);
}

int main(int argc, char** argv) {
    win.width = 640;
    win.height = 480;
    initializeGLUT(&argc, argv);
    initializeGL();
    glutMainLoop();
    return 0;
}
Does it have something to do with the depth buffer?
Yes, this is a depth buffer issue. You requested a depth buffer in your code (GLUT_DEPTH), but you missed some steps. To use the depth buffer:
Enable depth testing by calling glEnable(GL_DEPTH_TEST).
Set the depth test function with glDepthFunc; glDepthFunc(GL_LEQUAL) is a sound choice for most cases, and the default value is GL_LESS.
Call glClearDepth to set the clear value; the initial value is 1.0, so this step is not mandatory if the default suits you.
Don't forget to clear the depth bits before drawing: glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
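Put together, a minimal sketch of the setup:

/* one-time setup */
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LEQUAL); /* optional; GL_LESS is the default */
glClearDepth(1.0);      /* optional; 1.0 is already the initial value */

/* every frame, before drawing */
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);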
I've found that enabling face culling will hide the effect in this case, but why should that be necessary?
By default, OpenGL doesn't cull any faces. The recommended approach is:
Define vertices in clockwise order for back faces (this matches OpenGL's default, where counter-clockwise winding marks a front face).
Cull back faces when necessary.
In your case you defined the polygon vertices all in CCW order, so they are all front faces by default; you just need to cull the back faces to prevent them from being drawn. The following code also solves your problem:
glEnable(GL_CULL_FACE);
glFrontFace(GL_CCW);
glCullFace(GL_BACK);
Shouldn't a face which is occluded by a nearer face be hidden regardless?
Yes, that makes sense to us as human beings, but for the computer it's your responsibility to tell it how to do that.
References:
Depth buffer
Face culling

Basic rendering of comet Wild 2 shape data using OpenGL

I want to learn OpenGL, and decided to start with a very simple example - rendering the shape of comet Wild 2 as inferred from measurements from the Stardust spacecraft (details about the data in: http://nssdc.gsfc.nasa.gov/nmc/masterCatalog.do?ds=PSSB-00133). Please keep in mind that I know absolutely NOTHING about OpenGL. Some Google-fu helped me get as far as the code presented below. Despite my best efforts, my comet sucks:
I would like for it to look prettier, and I have no idea how to proceed (besides reading the Red book, or similar). For example:
How can I make a very basic "wireframe" rendering of the shape?
Suppose the Sun is along the "bottom" direction (i.e., along -Y), how can I add the light and see the shadow on the other side?
How can I add "mouse events" so that I can rotate my view by, and zoom in/out?
How can I make this monster look prettier? Any references to on-line tutorials, or code examples?
I placed the source code, data, and makefile (for OS X) in bitbucket:
hg clone https://arrieta#bitbucket.org/arrieta/learning-opengl
The data consists of 8,761 triplets (the vertices, in a body-fixed frame) and 17,518 triangles (each triangle is a triplet of integers referring to one of the 8,761 vertex triplets).
#include <stdio.h>
#include <stdlib.h>
#include <OpenGL/gl.h>
#include <OpenGL/glu.h>

// I added this in case you want to "copy/paste" the program into a
// non-Mac computer
#ifdef __APPLE__
# include <GLUT/glut.h>
#else
# include <GL/glut.h>
#endif
/* I hardcoded the data and use globals. I know it sucks, but I was in
   a hurry. */
#define NF 17518
#define NV 8761
unsigned int fs[3 * NF];
float vs[3 * NV];
float angle = 0.0f;

/* callback when the window changes size (copied from Internet example) */
void changeSize(int w, int h) {
    if (h == 0) h = 1;
    float ratio = w * 1.0 / h;
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glViewport(0, 0, w, h);
    gluPerspective(45.0f, ratio, 0.2f, 50000.0f); /* 45 degrees fov in Y direction; 50km z-clipping */
    glMatrixMode(GL_MODELVIEW);
}

/* this renders and updates the scene (mostly copied from Internet examples) */
void renderScene() {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glLoadIdentity();
    gluLookAt(0.0f, 0.0f, 10000.0f, /* eye is looking down along the Z-direction at 10km */
              0.0f, 0.0f, 0.0f,     /* center at (0, 0, 0) */
              0.0f, 1.0f, 0.0f);    /* y direction along natural y-axis */

    /* just add a simple rotation */
    glRotatef(angle, 0.0f, 0.0f, 1.0f);

    /* use the facets and vertices to insert triangles in the buffer */
    glBegin(GL_TRIANGLES);
    unsigned int counter;
    for (counter = 0; counter < 3 * NF; ++counter) {
        glVertex3fv(vs + 3 * fs[counter]); /* here is where I'm loading
                                              the data - why do I need to
                                              load it every time? */
    }
    glEnd();

    angle += 0.1f; /* update the rotation angle */
    glutSwapBuffers();
}

int main(int argc, char* argv[]) {
    FILE *fp;
    unsigned int counter;

    /* load vertices */
    fp = fopen("wild2.vs", "r");
    counter = 0;
    while (fscanf(fp, "%f", &vs[counter++]) > 0);
    fclose(fp);

    /* load facets */
    fp = fopen("wild2.fs", "r");
    counter = 0;
    while (fscanf(fp, "%d", &fs[counter++]) > 0);
    fclose(fp);

    /* this initialization and "configuration" is mostly copied from Internet */
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DEPTH | GLUT_DOUBLE | GLUT_RGBA);
    glutInitWindowPosition(0, 0);
    glutInitWindowSize(1024, 1024);
    glutCreateWindow("Wild-2 Shape");

    glEnable(GL_LIGHTING);
    glEnable(GL_LIGHT0);
    glEnable(GL_DEPTH_TEST);

    GLfloat mat_specular[] = { 1.0, 1.0, 1.0, 1.0 };
    GLfloat mat_shininess[] = { 30.0 };
    GLfloat light_position[] = { 3000.0, 3000.0, 3000.0, 0.0 };
    glClearColor(0.0, 0.0, 0.0, 0.0);
    glShadeModel(GL_SMOOTH);
    glMaterialfv(GL_FRONT, GL_SPECULAR, mat_specular);
    glMaterialfv(GL_FRONT, GL_SHININESS, mat_shininess);
    glLightfv(GL_LIGHT0, GL_POSITION, light_position);

    glutDisplayFunc(renderScene);
    glutReshapeFunc(changeSize);
    glutIdleFunc(renderScene);
    glutMainLoop();
    return 0;
}
EDIT
It is starting to look better, and I have now plenty of resources to look into for the time being. It still sucks, but my questions have been answered!
I added the normals, and can switch back and forth between the "texture" and the wireframe:
PS. The repository shows the changes made as per SeedmanJ's suggestions.
It's really easy to change to a wireframe rendering in OpenGL; you'll have to use
glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);
and to switch back to a fill rendering,
glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);
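For instance, the back-and-forth switching mentioned in the question's EDIT could be driven by a GLUT keyboard callback (a sketch; the 'w' key binding is an arbitrary choice):

int wireframe = 0;
void keyboard(unsigned char key, int x, int y) {
    if (key == 'w') {
        wireframe = !wireframe;
        glPolygonMode(GL_FRONT_AND_BACK, wireframe ? GL_LINE : GL_FILL);
        glutPostRedisplay(); /* ask GLUT to redraw with the new mode */
    }
}
/* in main(): glutKeyboardFunc(keyboard); */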
About the lights: the OpenGL fixed-function pipeline gives you a limited number of lights (at least 8; the exact count is GL_MAX_LIGHTS), producing the final rendering from your normals and materials. You can activate lighting with:
glEnable(GL_LIGHTING);
and then activate each of your lights with either:
glEnable(GL_LIGHT0);
glEnable(GL_LIGHT1);
To change a light property, such as its position, please look at
http://linux.die.net/man/3/gllightfv
You'll have to set up normals for each vertex you define if you're using the glBegin() method. With VBO rendering it's the same, except the normals are also stored in VRAM. With the glBegin() method you can call, for example,
glNormal3f(x, y, z);
before each vertex you define.
And for more information about what you can do, the Red Book is a good way to begin.
Moving your "scene" is one more thing OpenGL indirectly allows you to do. As it all works with matrix,
you can either use
glTranslate3f(x, y, z);
glRotate3f(num, x, y, z);
....
Managing key events and mouse events has (i'm almost sure about that) nothing to do with OpenGL, it depends on the lib your using, for example glut/SDL/... so you'll have to refer to their own documentations.
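With GLUT, which the question uses, a drag-to-rotate handler might look like this (a sketch reusing the question's global angle; the 0.5 degrees-per-pixel factor is arbitrary):

static int lastX = 0;
void onMouse(int button, int state, int x, int y) {
    if (button == GLUT_LEFT_BUTTON && state == GLUT_DOWN)
        lastX = x; /* remember where the drag started */
}
void onMotion(int x, int y) {
    angle += (x - lastX) * 0.5f; /* horizontal drag spins the model */
    lastX = x;
    glutPostRedisplay();
}
/* in main(): glutMouseFunc(onMouse); glutMotionFunc(onMotion); */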
Finally, for more information about some of the functions you can use, see http://www.opengl.org/sdk/docs/man/; there's also a tutorials section leading you to different interesting websites.
Hope this helps!
How can I make a very basic "wireframe" rendering of the shape?
glPolygonMode( GL_FRONT, GL_LINE );
Suppose the Sun is along the "bottom" direction (i.e., along -Y), how can I add the light and see the shadow on the other side?
Good shadows are hard, especially with the fixed-function pipeline.
But before that you need normals to go with your vertices. You can calculate per-face normals pretty easily.
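A per-face normal is just the normalized cross product of two edge vectors; a sketch (assuming <math.h> for sqrtf):

/* normal of triangle (a, b, c): n = normalize((b - a) x (c - a)) */
void faceNormal(const float a[3], const float b[3], const float c[3], float n[3]) {
    float u[3] = { b[0] - a[0], b[1] - a[1], b[2] - a[2] };
    float v[3] = { c[0] - a[0], c[1] - a[1], c[2] - a[2] };
    n[0] = u[1] * v[2] - u[2] * v[1];
    n[1] = u[2] * v[0] - u[0] * v[2];
    n[2] = u[0] * v[1] - u[1] * v[0];
    float len = sqrtf(n[0] * n[0] + n[1] * n[1] + n[2] * n[2]);
    if (len > 0.0f) { n[0] /= len; n[1] /= len; n[2] /= len; }
}
/* pass it with glNormal3fv(n) before the face's glVertex3fv calls */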
How can I add "mouse events" so that I can rotate my view by, and zoom in/out?
Try the mouse handlers I did here.
Though some like to say "start with something simpler", I think sometimes you need to "dive in" to get a good understanding on a short time span. Well done!
Also, if you would like an example, please ask: I have written a WELL DOCUMENTED, efficient, but readable pure Win32 (no .NET or MFC) OpenGL FPS!
Though it appears other people answered most of your questions, I can help you if you would like, maybe make a cool texture (if you don't have one)...
To answer this question:
glBegin(GL_TRIANGLES);
unsigned int counter;
for (counter = 0; counter < 3 * NF; ++counter) {
    glVertex3fv(vs + 3 * fs[counter]); /* here is where I'm loading
                                          the data - why do I need to
                                          load it every time? */
}
glEnd();
That renders the vertices of the 3D model (in case the view has changed) and, using the DC (device context), BitBlts it onto the window!
It has to be done repeatedly, in case something has caused the window contents to be cleared...
