I'm having trouble getting a texture loaded with SOIL to show up properly on this quad. In case it's not clear, I'm writing a little 2D sprite engine, and this is the rendering portion (it no doubt needs some optimization). I haven't done any OpenGL in a couple of months, and I'm admittedly quite rusty.
#include <OpenGL/OpenGL.h>
#include <GLUT/GLUT.h>
#include "SOIL.h"
#include <stdio.h>
GLuint linktex;
void drawSprite(GLint left, GLint right, GLint bottom, GLint top, GLuint texture){
//Draw clockwise
glColor3f(1.0, 1.0, 1.0);
glBindTexture(GL_TEXTURE_2D, texture);
glBegin(GL_QUADS);
glTexCoord2i(1,1); glVertex2i(right , top);
glTexCoord2i(1,0); glVertex2i(right , bottom);
glTexCoord2i(0,0); glVertex2i(left , bottom);
glTexCoord2i(0,1); glVertex2i(left , top);
glEnd();
}
void display(void){
glClearColor(0.0, 0.0, 0.0, 1.0);
glClear(GL_COLOR_BUFFER_BIT);
glLoadIdentity();
glOrtho(0.0, 240.0, 0.0, 160.0, -1.0, 1.0);
drawSprite(50, 82, 50, 82, linktex);
glFlush();
}
void reshape(int w, int h){
glViewport(0, 0, (GLsizei)w, (GLsizei)h);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glMatrixMode(GL_MODELVIEW);
}
void init(){
glEnable(GL_TEXTURE_2D);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glFrontFace(GL_CW);
GLuint linktex = SOIL_load_OGL_texture(
"link.png",
SOIL_LOAD_AUTO,
SOIL_CREATE_NEW_ID,
SOIL_FLAG_INVERT_Y
);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR_MIPMAP_LINEAR);
if( 0 == linktex )
{
printf( "SOIL loading error: '%s'\n", SOIL_last_result());
}
}
int main (int argc, char **argv) {
glutInit (&argc, argv);
glutInitDisplayMode (GLUT_SINGLE);
glutInitWindowSize (240, 160);
glutInitWindowPosition (100, 100);
glutCreateWindow ("Test");
glutDisplayFunc (display);
glutReshapeFunc (reshape);
glutMainLoop();
init();
return 0;
}
It looks like when you load the texture, you are assigning the id to a local variable linktex instead of the global you declared at the top of the file.
So when you reference linktex in display(), the global texture id is still uninitialized.
Try changing your texture-loading call to:
// comment out the type declaration, to assign to the global instead of a local
/*GLuint*/ linktex = SOIL_load_OGL_texture(
"link.png",
SOIL_LOAD_AUTO,
SOIL_CREATE_NEW_ID,
SOIL_FLAG_INVERT_Y
);
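With that change, the id returned by SOIL is stored in the global linktex, so the glBindTexture call in drawSprite binds the loaded texture instead of an uninitialized id.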
Related
I do not understand how this main function works. I have a display function that uses glDrawArrays, but I never see it being called directly; I only see it being passed as a parameter to glutDisplayFunc.
Here is my main:
int main(int argc, char** argv){
// Set up the window
glutInit(&argc, argv);
glutInitDisplayMode(GLUT_DOUBLE|GLUT_RGB);
glutInitWindowSize(800, 600);
glutCreateWindow("Hello Triangle");
// Tell glut where the display function is
glutDisplayFunc(display);
// A call to glewInit() must be done after glut is initialized!
GLenum res = glewInit();
// Check for any errors
if (res != GLEW_OK) {
fprintf(stderr, "Error: '%s'\n", glewGetErrorString(res));
return 1;
}
// Set up your objects and shaders
init();
// Begin infinite event loop
glutMainLoop();
return 0;
}
The problem is, I need to create two different triangles, in the same window, using separate VAOs and VBOs. I've created the separate VAO and VBO for my second triangle. However, I do not see how I am meant to generate and link my buffers, draw my arrays, switch to my second buffer, and draw those arrays, when I do not even know when my display function is being called.
My display function looks like this:
void display(){
glClear(GL_COLOR_BUFFER_BIT);
// NB: Make the call to draw the geometry in the currently activated vertex buffer. This is where the GPU starts to work!
glDrawArrays(GL_TRIANGLES, 0, 3);
glutSwapBuffers();
}
All the drawing operations can go in a separate function, named however you like, that you register from main with glutDisplayFunc. For example:
#include <GL/glut.h>
void displayMe(void); // forward declaration so main can pass it to glutDisplayFunc
int main(int argc, char** argv)
{
glutInit(&argc, argv);
glutInitDisplayMode(GLUT_SINGLE);
glutInitWindowSize(300, 300);
glutInitWindowPosition(100, 100);
glutCreateWindow("Hello world :D");
glutDisplayFunc(displayMe); // => drawing happens in the displayMe function below
glutMainLoop();
return 0;
}
void displayMe(void)
{
glClear(GL_COLOR_BUFFER_BIT);
glBegin(GL_POLYGON);
glVertex3f(0.0, 0.0, 0.0);
glVertex3f(0.5, 0.0, 0.0);
glVertex3f(0.5, 0.5, 0.0);
glVertex3f(0.0, 0.5, 0.0);
glEnd();
// a second shape
glBegin(GL_POLYGON);
glVertex3f(0.0, 0.0, 0.0);
glVertex3f(-0.5, 0.0, 0.0);
glVertex3f(-0.5, -0.5, 0.0);
glVertex3f(0.0, -0.5, 0.0);
glEnd();
glFlush();
}
As a complement, for the VAO and buffer:
1- Init (declare the VAO, declare the vertex buffer, ...)
GLuint VaoID;
glGenVertexArrays(1, &VaoID);
glBindVertexArray(VaoID);
// An array of 3 vectors which represents 3 vertices
static const GLfloat g_vertex_buffer_data[] = {
-1.0f, -1.0f, 0.0f,
1.0f, -1.0f, 0.0f,
0.0f, 1.0f, 0.0f,
};
One time only:
// This will identify our vertex buffer
GLuint vertexbuffer;
// Generate 1 buffer, put the resulting identifier in vertexbuffer
glGenBuffers(1, &vertexbuffer);
// The following commands will talk about our 'vertexbuffer' buffer
glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer);
// Give our vertices to OpenGL.
glBufferData(GL_ARRAY_BUFFER, sizeof(g_vertex_buffer_data), g_vertex_buffer_data, GL_STATIC_DRAW);
2- Use it (bind and draw in the display function)
// 1st attribute buffer : vertices
glEnableVertexAttribArray(0);
glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer);
glVertexAttribPointer(
0, // attribute 0. No particular reason for 0, but must match the layout in the shader.
3, // size
GL_FLOAT, // type
GL_FALSE, // normalized?
0, // stride
(void*)0 // array buffer offset
);
// Draw the triangle !
glDrawArrays(GL_TRIANGLES, 0, 3); // Starting from vertex 0; 3 vertices total -> 1 triangle
glDisableVertexAttribArray(0);
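To address the two-triangles part of the question: if step 2's glVertexAttribPointer setup is done once at init time while each VAO is bound, the display function only needs to bind a VAO, draw, then switch to the other one and draw again. Here is a minimal sketch, where vao2 is a hypothetical second VAO built the same way as VaoID:
glClear(GL_COLOR_BUFFER_BIT);
// first triangle: its VAO remembers the buffer/attribute setup done at init
glBindVertexArray(VaoID);
glDrawArrays(GL_TRIANGLES, 0, 3);
// second triangle: switch to the (hypothetical) second VAO and draw again
glBindVertexArray(vao2);
glDrawArrays(GL_TRIANGLES, 0, 3);
glutSwapBuffers();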
I'm trying my hand at making a CHIP-8 emulator in plain C. I have most of the opcodes implemented, so now I'm working on the display. I decided on GLUT since it seemed quick to set up. The main idea is to draw the CHIP-8 sprites onto a 2D texture. I got it to display some digits through some opcode execution, but it's flickering. I'm using double buffering since, from what I read, that's the best way to make sure a complete frame is ready to be displayed, but I'm still getting flickering. Any ideas on what can cause it?
Here's all the code for OpenGL and GLUT for the display.
int main(int argc, char * argv[]) {
// if(argc < 2){
// printf("Usage: %s <bin file>\n", argv[0]);
// exit(0);
// }
//initialize chip 8
chip8_Init();
//load file if exists
chip8_load("test6.bin");
//setup graphics
glutInit(&argc, argv);
glutInitWindowSize(display_width, display_height);
glutInitWindowPosition(320, 320);
glutInitDisplayMode(GLUT_RGB|GLUT_DOUBLE|GLUT_DEPTH);
glutCreateWindow("Chip8 GL");
setupTexture();
glutDisplayFunc(renderScene);
glutReshapeFunc(changeSize);
glutIdleFunc(renderScene);
glutKeyboardFunc(chip8_keypad);
glutKeyboardUpFunc(chip8_keypadUp);
glutMainLoop();
return 0;
}
void setupTexture(){
glClearColor (0.0, 0.0, 0.0, 0.0);
glShadeModel(GL_FLAT);
glEnable(GL_DEPTH_TEST);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glGenTextures(1, &texName);
glBindTexture(GL_TEXTURE_2D, texName);
//set texture parameters
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER,
GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER,
GL_NEAREST);
//update texture
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, SCREEN_WIDTH,
SCREEN_HEIGHT, 0, GL_RGB, GL_UNSIGNED_BYTE,
screenData);
}
void updateTexture(){ //display()
//update pixels
for (int i = 0; i < SCREEN_HEIGHT; i++) {
for (int j = 0; j < SCREEN_WIDTH; j++) {
if(chip8.gfx[(i * SCREEN_WIDTH)+j] == 0){
screenData[i][j][0] = 0;
screenData[i][j][1] = 0;
screenData[i][j][2] = 0;
}
else{
screenData[i][j][0] = 255;
screenData[i][j][1] = 255;
screenData[i][j][2] = 255;
}
}
}
//update texture
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, SCREEN_WIDTH,
SCREEN_HEIGHT, 0, GL_RGB, GL_UNSIGNED_BYTE,
screenData);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glEnable(GL_TEXTURE_2D);
glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_DECAL);
glBindTexture(GL_TEXTURE_2D, texName);
glBegin(GL_QUADS);
glTexCoord2f(0.0, 0.0); glVertex2d(0.0, 0.0);
glTexCoord2f(1.0, 0.0); glVertex2d(display_width, 0.0);
glTexCoord2f(1.0, 1.0); glVertex2d(display_width, display_height);
glTexCoord2f(0.0, 1.0); glVertex2d(0.0, display_height);
glEnd();
glutSwapBuffers();
}
void renderScene(){
chip8_emulateCycle();
if(drawFlag == 1){
updateTexture();
drawFlag = 0;
}
}
void changeSize(int w, int h){
glClearColor(0.0f, 0.0f, 0.5f, 0.0f);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluOrtho2D(0, w, h, 0);
glMatrixMode(GL_MODELVIEW);
glViewport(0, 0, w, h);
// Resize quad
display_width = w;
display_height = h;
}
Edit: I might have figured it out. The draw opcode uses XOR, so the first time the draw command runs the sprite is displayed, and when it runs again the sprite disappears, and so on.
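In other words (a rough sketch of the standard CHIP-8 draw behaviour, not my exact code), each sprite pixel is XOR'd into gfx, so drawing the same sprite twice at the same position erases it:
chip8.gfx[index] ^= 1; // first draw: pixel turns on
chip8.gfx[index] ^= 1; // identical second draw: pixel turns off again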
My OpenGL is very rusty, but I noticed a few points in your code:
You should enable GL_TEXTURE_2D before initializing the texture:
void setupTexture()
{
glClearColor (0.0, 0.0, 0.0, 0.0);
glShadeModel(GL_FLAT);
glEnable(GL_DEPTH_TEST);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
/* enable texturing */
glEnable(GL_TEXTURE_2D);
/* create texture */
glGenTextures(1, &texName);
/* select texture */
glBindTexture(GL_TEXTURE_2D, texName);
/* select behavior */
glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_DECAL);
/* ... */
}
You should also bind the texture before modifying it:
void updateTexture(){
/* update pixels */
/* ... */
/*select texture */
glBindTexture(GL_TEXTURE_2D, texName);
/*update texture */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, SCREEN_WIDTH,
SCREEN_HEIGHT, 0, GL_RGB, GL_UNSIGNED_BYTE,
screenData);
/* clear screen */
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
/* draw texture */
glBegin(GL_QUADS);
glTexCoord2f(0.0, 0.0); glVertex2d(0.0, 0.0);
glTexCoord2f(1.0, 0.0); glVertex2d(display_width, 0.0);
glTexCoord2f(1.0, 1.0); glVertex2d(display_width, display_height);
glTexCoord2f(0.0, 1.0); glVertex2d(0.0, display_height);
glEnd();
glutSwapBuffers();
}
Moreover, you should check the results of your GL calls by using glGetError():
void check_for_error(void)
{
GLenum err = glGetError();
if (GL_NO_ERROR != err)
{
fprintf(stderr, "Error %u\n", err);
exit(EXIT_FAILURE);
}
}
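For example (hypothetical placement), calling it right after the texture upload would catch a bad parameter early:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, SCREEN_WIDTH,
SCREEN_HEIGHT, 0, GL_RGB, GL_UNSIGNED_BYTE,
screenData);
check_for_error();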
That said, I don't know what data is inside chip8.gfx[]; it may not be what you expect.
I'm having issues loading a texture onto my triangle strips. I'm following Anton Gerdelan's tutorial, and after failing with my main program, I went back to the basics and just tried to make a plain square and put his texture on it (the skull and crossbones).
I completely copied and pasted code from his "Hello Triangle" page, which worked, but once I tried to fit in code from his texture tutorial (and changed the triangle to a square), all I get is a big white square with no texture.
I've checked the status of my shaders with glGetShaderiv() and they report success, and I checked the loaded image to confirm the pixel data is sensible, so I believe my error is in how I declare my VBOs, or in the order/parameters in which I use them.
Here's the complete code I put together, which compiles fine in Visual Studio 2013, except the output isn't what is expected.
I am using the static libraries of GLEW and GLFW, along with the STBI Image header
#include <GL/glew.h> // include GLEW and new version of GL on Windows
#include <GL/glfw3.h> // GLFW helper library
#include <stdio.h>
#define STB_IMAGE_IMPLEMENTATION
#include <stb/stb_image.h>
const char* vertex_shader =
"#version 400\n"
"in vec3 vp;"
"layout (location=1) in vec2 vt; // per-vertex texture co-ords"
"out vec2 texture_coordinates; "
"void main () {"
" gl_Position = vec4 (vp, 1.0);"
" texture_coordinates = vt; "
"}";
const char* fragment_shader =
"#version 400\n"
"in vec2 texture_coordinates;"
"uniform sampler2D basic_texture;"
"out vec4 frag_colour;"
"void main () {"
"vec4 texel = texture(basic_texture, texture_coordinates);"
"frag_colour = texel; "
"}";
float points[] = {
-0.5f, -0.5f, 0.0f,
-0.5f, 0.5f, 0.0f,
0.5f, -0.5f, 0.0f,
0.5f, 0.5f, 0.0f
};
float texcoords[] = {
0.0f, 1.0f,
0.0f, 0.0f,
1.0, 0.0,
1.0, 0.0,
1.0, 1.0,
0.0, 1.0
};
GLFWwindow* window;
unsigned int vt_vbo;
unsigned int tex = 0;
GLuint vao = 0;
GLuint vbo = 0;
GLuint shader_programme;
void initializeGL(){
// start GL context and O/S window using the GLFW helper library
if (!glfwInit()) {
printf("ERROR: could not start GLFW3\n");
return;
}
window = glfwCreateWindow(640, 480, "Texture Test", NULL, NULL);
if (!window) {
printf("ERROR: could not open window with GLFW3\n");
glfwTerminate();
return;
}
glfwMakeContextCurrent(window);
// start GLEW extension handler
glewExperimental = GL_TRUE;
glewInit();
// get version info
const GLubyte* renderer = glGetString(GL_RENDERER); // get renderer string
const GLubyte* version = glGetString(GL_VERSION); // version as a string
printf("Renderer: %s\n", renderer);
printf("OpenGL version supported %s\n", version);
// tell GL to only draw onto a pixel if the shape is closer to the viewer
glEnable(GL_DEPTH_TEST); // enable depth-testing
glDepthFunc(GL_LESS); // depth-testing interprets a smaller value as "closer"
}
void startShaders(){
GLuint vs = glCreateShader(GL_VERTEX_SHADER);
glShaderSource(vs, 1, &vertex_shader, NULL);
glCompileShader(vs);
GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
glShaderSource(fs, 1, &fragment_shader, NULL);
glCompileShader(fs);
shader_programme = glCreateProgram();
glAttachShader(shader_programme, fs);
glAttachShader(shader_programme, vs);
glLinkProgram(shader_programme);
GLint vsstat;
glGetShaderiv(vs, GL_COMPILE_STATUS, &vsstat);
GLint fsstat;
glGetShaderiv(fs, GL_COMPILE_STATUS, &fsstat);
printf("%i\n%i\n", vsstat, fsstat);
}
void loadImage(){
int x, y, n;
int force_channels = 4;
unsigned char* image_data = stbi_load("skulluvmap.png", &x, &y, &n, force_channels);
if (!image_data) {
printf("ERROR: could not load %s\n", "skulluvmap.png");
}
int width_in_bytes = x * 4;
unsigned char *top = NULL;
unsigned char *bottom = NULL;
unsigned char temp = 0;
int half_height = y / 2;
for (int row = 0; row < half_height; row++) {
top = image_data + row * width_in_bytes;
bottom = image_data + (y - row - 1) * width_in_bytes;
for (int col = 0; col < width_in_bytes; col++) {
temp = *top;
*top = *bottom;
*bottom = temp;
top++;
bottom++;
}
}
printf("first 4 bytes are: %i %i %i %i\n",
image_data[0], image_data[1], image_data[2], image_data[3]
);
glGenTextures(1, &tex);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, x, y, 0, GL_RGBA, GL_UNSIGNED_BYTE, image_data);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
}
void generateBuffers(){
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, 12 * sizeof(float), points, GL_STATIC_DRAW);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, NULL);
glEnableVertexAttribArray(0); // don't forget this!
glGenBuffers(1, &vt_vbo);
glBindBuffer(GL_ARRAY_BUFFER, vt_vbo);
glBufferData(GL_ARRAY_BUFFER, 12 * sizeof(float), texcoords, GL_STATIC_DRAW);
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 0, NULL);
glEnableVertexAttribArray(1); // don't forget this!
}
void mainLoop(){
while (!glfwWindowShouldClose(window)) {
// wipe the drawing surface clear
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
int tex_loc = glGetUniformLocation(shader_programme, "basic_texture");
glUseProgram(shader_programme);
glUniform1i(tex_loc, 0); // use active texture 0
// draw points 0-4 from the currently bound VAO with current in-use shader
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
// update other events like input handling
glfwPollEvents();
// put the stuff we've been drawing onto the display
glfwSwapBuffers(window);
}
}
int main() {
initializeGL();
startShaders();
loadImage();
generateBuffers();
mainLoop();
// close GL context and any other GLFW resources
glfwTerminate();
return 0;
}
You're misusing your second buffer, which is supposed to be the buffer with texcoords. What you really want is a pair of texture coordinates for every vertex. That means your texcoords array should in fact store 4 pairs, because you have 4 triples in the points array. So that's the first fix. You probably want it to look like:
float texcoords[] = {
0.0f, 1.0f,
0.0f, 0.0f,
1.0, 0.0,
1.0, 1.0,
};
Then, in generateBuffers, the data upload for your vt_vbo is wrong. It should be passed this way:
glBufferData(GL_ARRAY_BUFFER, 8 * sizeof(float), texcoords, GL_STATIC_DRAW);
because you only want to pass 8 values there: 2 texture coordinates for each of the 4 vertices.
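Alternatively, once texcoords has the right number of elements, you can pass sizeof(texcoords) instead of hard-coding the count, since it is a file-scope array (the same pattern used for g_vertex_buffer_data earlier):
glBufferData(GL_ARRAY_BUFFER, sizeof(texcoords), texcoords, GL_STATIC_DRAW);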
Edit:
This, however, doesn't fully explain why your texture doesn't appear at all. I initially thought there might be a problem with your texcoords pointer, but that doesn't seem to be the case.
I have a small program that initially draws a square, but when I maximize the window it turns into a rectangle. I know this has to do with the aspect ratio, and when I add the glutReshapeFunc(Reshape); call it works perfectly: even after maximizing the window, it remains a square. The reshape func is called every time the window is resized, and before the first display as well.
What I don't understand is how merely adding the reshape func maintains the aspect ratio. Please help me understand this. I am copying my code here:
void display()
{
glClear(GL_COLOR_BUFFER_BIT);
glColor3f(0.5, 0.5, 1.0);
glBegin(GL_POLYGON);
glVertex2f(-0.5, -0.5);
glVertex2f(0.5, -0.5);
glVertex2f(0.5, 0.5);
glVertex2f(-0.5, 0.5);
glEnd();
glutSwapBuffers();
glFlush();
}
void Reshape(int w, int h) {
glutPostRedisplay();
}
void init()
{
glClearColor(1.0, 0.0, 1.0, 0.0);
glColor3f(1.0, 1.0, 1.0);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluOrtho2D(-1.0, 1.0, -1.0, 1.0);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
}
int main(int argc, char** argv)
{
glutInit(&argc, argv);
glutInitDisplayMode(GLUT_RGBA | GLUT_DOUBLE);
glutInitWindowSize(500, 500);
glutInitWindowPosition(200, 200);
glutCreateWindow("basics");
glutDisplayFunc(display);
// If I comment this, it will become rectangle.
glutReshapeFunc(Reshape);
init();
glutMainLoop();
}
Your problem is related to the use of gluOrtho2D (...). If you want to preserve aspect ratio, your projection matrix needs to be defined based on the dimensions of your window.
I suggest you do this in your reshape function:
GLdouble aspect = (GLdouble)w / (GLdouble)h;
glMatrixMode (GL_PROJECTION);
glLoadIdentity ();
gluOrtho2D (-1.0 * aspect, 1.0 * aspect, -1.0, 1.0);
glMatrixMode (GL_MODELVIEW);
glViewport (0, 0, w, h);
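Putting it together, a complete reshape callback following this suggestion might look like the sketch below (the guard against h == 0 avoids a division by zero when the window is minimized):
void Reshape(int w, int h)
{
if (h == 0) h = 1; // avoid division by zero
GLdouble aspect = (GLdouble)w / (GLdouble)h;
glViewport(0, 0, w, h);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluOrtho2D(-1.0 * aspect, 1.0 * aspect, -1.0, 1.0);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
}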
This program should allow the user to select the material of a teapot, which changes dynamically, so I made a menu.
I use setRGB (declared in a utility file) to set the color of an array of 3 GLfloats.
createMenu just creates the menu, reading the entries from a va_list.
setMaterial sets the material according to the value passed from an enumeration:
typedef enum
{
BlackPlastic= 0,
Brass,
Bronze,
Chrome,
Copper,
Gold,
Peweter,
Silver,
PolishedSilver
}MaterialType;
I will omit the bodies of these functions, since I have tested them and they work:
int createMenu(void (*callback) (int),int key, const char* const first, ...)
{
// creates a menu, the number of entries depends on the list length,
// the value starts from zero
}
void setRGB( GLfloat* color, GLfloat red, GLfloat green, GLfloat blue)
{
// Sets the color
}
void setMaterial (GLfloat** material, MaterialType type)
{
// Sets the material color (ambient, diffuse, specular).
}
That's the whole program. My fear is that I am doing something wrong that puts OpenGL into an invalid state, so the teapot isn't drawn.
The problem is that sometimes I don't see the teapot drawn in the window; I just get a black window. Incredibly, sometimes it works: I see the teapot and I am able to change the material colors.
#include <OpenGL/OpenGL.h>
#include <GLUT/GLUT.h>
#include "utility.h"
#include <stdlib.h>
GLfloat width=500, height=500;
GLfloat** material;
GLfloat light[3][3]= { {1,1,0}, {1,0.5,0}, {1,0,0} };
void menuCallback (int choice)
{
setMaterial((GLfloat**)material, choice);
glutPostRedisplay();
}
void init()
{
glEnable(GL_DEPTH_TEST);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(-width/2, width/2, -height/2, height/2, 1, 1000);
material= malloc(4*sizeof(GLfloat*));
for(GLuint i=0; i<4; i++)
{
material[i]=malloc(3*sizeof(GLfloat));
}
setRGB(material[3], 1, 1, 0);
setMaterial(material, BlackPlastic);
}
void display()
{
glClearColor(0, 0, 0, 0);
glClear(GL_COLOR_BUFFER_BIT);
glShadeModel(GL_FLAT);
glMaterialfv(GL_FRONT, GL_AMBIENT, material[0] );
glMaterialfv(GL_FRONT, GL_DIFFUSE, material[1]);
glMaterialfv(GL_FRONT, GL_SPECULAR, material[2]);
glMaterialfv(GL_FRONT, GL_SHININESS, material[3] );
glLightfv(GL_LIGHT0, GL_AMBIENT, light[0]);
glLightfv(GL_LIGHT0, GL_DIFFUSE, light[1]);
glLightfv(GL_LIGHT0, GL_SPECULAR, light[2]);
glEnable(GL_LIGHT0);
glEnable(GL_LIGHTING);
glutSolidTeapot(100);
glutSwapBuffers();
}
void keyboard(unsigned char key, int x, int y)
{
}
int main(int argc,char * argv[])
{
glutInit(&argc, argv);
glutInitDisplayMode(GLUT_RGBA | GLUT_DOUBLE );
glutInitWindowPosition(100, 100);
glutInitWindowSize(width,height);
glutCreateWindow(*argv);
createMenu(menuCallback, GLUT_LEFT_BUTTON, "Black Plastic", "Brass", "Bronze", "Chrome", "Copper", "Gold", "Peweter", "Silver", "Polished Silver", NULL);
glutDisplayFunc(display);
glutKeyboardFunc(keyboard);
init();
glutMainLoop();
return 0;
}
You're drawing a solid teapot? I presume you enabled depth testing (though your glutInitDisplayMode lacks the depth buffer bit). Anyway, you should probably also clear the depth buffer. Right now you're clearing only the color buffer
glClear(GL_COLOR_BUFFER_BIT);
Change it to:
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
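And since, as noted above, the display mode lacks the depth buffer bit, you also need to request one for the depth test to have anything to work with; a sketch of the adjusted call in main:
glutInitDisplayMode(GLUT_RGBA | GLUT_DOUBLE | GLUT_DEPTH);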