I had a drawing function called DrawImage, but it's really confusing and only works with a specific form of the reshape function, so I have two questions:
1. How do I draw a texture in OpenGL? I just want a function that takes a texture, x, y, width, height and maybe an angle, and draws it according to those arguments. I want to draw it as a GL_QUAD as usual, but I'm not sure how to do that anymore. People say I should use SDL or SFML for this - is that recommended? If it is, can you give me a simple function that loads a texture and one that draws it? I'm currently using SOIL to load textures.
The function is as follows:
void DrawImage(GLuint texture, int xx, int yy, int ww, int hh, int angle)
{
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, texture);

    glLoadIdentity();
    glTranslatef(xx, yy, 0.0);
    glRotatef(angle, 0.0, 0.0, 1.0);
    glTranslatef(-xx, -yy, 0.0);

    // Draw a textured quad
    glBegin(GL_QUADS);
    glTexCoord2f(0, 0); glVertex2f(xx, yy);
    glTexCoord2f(0, 1); glVertex2f(xx, yy + hh);
    glTexCoord2f(1, 1); glVertex2f(xx + ww, yy + hh);
    glTexCoord2f(1, 0); glVertex2f(xx + ww, yy);

    glDisable(GL_TEXTURE_2D);
    glPopMatrix();
    glMatrixMode(GL_PROJECTION);
    glPopMatrix();
    glMatrixMode(GL_MODELVIEW);

    glEnd();
}
Someone told me that you can't call glDisable, glPopMatrix or glMatrixMode between glBegin and glEnd. The problem is that the code won't work without those calls. Any idea how to make it work without them?
2. About glutReshapeFunc: the documentation says it takes a pointer to a function with two args, width and height. So far I've only written a reshape function that takes void - any idea how to write a reshape function that takes a width and height and actually does what reshape needs to do?
And one minor question: how much better is C++ than C when it comes to GUIs like OpenGL? As far as I can see, only OOP matters, and I haven't run into any problem that OOP could solve and C couldn't (in OpenGL, I mean).
No need to answer all of the questions - question number 1 is basically the most important to me :P
Your DrawImage function looks pretty much fine. Although, yes, you shouldn't be calling glMatrixMode etc. before glEnd, so remove them. I believe the issue is simply with setting up your projection matrix, and the added calls just happen to fix a problem that shouldn't be there in the first place. glutReshapeFunc is used to capture window resize events, so until you need it you don't have to use it.
SDL gives you a lot more control over events than GLUT, but takes a little longer to set up. GLFW is also a good alternative. I guess it's not that important to change unless you see a feature you need. These are libs to create a GL context and do some event handling. SOIL can be used with all of them.
OpenGL is a graphics API and gives a common interface for doing hardware accelerated 3D graphics, not a GUI lib. There are GUI libs written for OpenGL though.
Yes, I believe many take OOP to the extreme. I like the idea of C++ as a better C, rather than as something that completely restructures the way you code. Maybe just keep using C, but with a C++ compiler. Then, when you see a feature you like, use it. Eventually you may find you're using lots of them and have a better appreciation for why they exist and when to use them, rather than blindly following coding practices. Just IMO, this is all very subjective.
So, the projection matrix...
To draw stuff in 3D on a 2D screen you "project" the 3D points onto a plane. I'm sure you've seen images like this:
This allows you to define your own arbitrary 3D coordinate system. For drawing in 2D, though, it's natural to want to use pixel coordinates directly - after all, that's what your monitor displays. So you want a kind of bypass projection that doesn't do any perspective scaling and matches pixels in scale and aspect ratio.
The default projection (or "viewing volume") is an orthographic -1 to 1 cube. To change it:
glMatrixMode(GL_PROJECTION); //from now on all glOrtho, glTranslate etc affect projection
glOrtho(0, widthInPixels, 0, heightInPixels, -1, 1);
glMatrixMode(GL_MODELVIEW); //good to leave in edit-modelview mode
You can call this pretty much anywhere, but since the only variables that affect it are the window width/height, it's normal to put it in some initialization code or, if you plan on resizing your window, in a resize event handler such as:
void reshape(int x, int y) {... do stuff with x/y ...}
...
glutReshapeFunc(reshape); //give glut the callback
This will make the lower left corner of the screen the origin and values passed to glVertex can now be in pixels.
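For the 2D, pixel-coordinate setup described above, a minimal reshape callback might look roughly like this (an untested sketch, not code from the question):
void reshape(int w, int h)
{
    glViewport(0, 0, w, h);        //render to the whole window
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0, w, 0, h, -1, 1);    //map GL units 1:1 to pixels, origin at the bottom left
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
}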
A couple more things: instead of glTranslatef(-xx, -yy, 0.0) you could just use glVertex2f(0, 0) (and so on) after the translate/rotate. Also, glPushMatrix/glPopMatrix should always be paired within a function so the caller isn't expected to match them.
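Putting those fixes together, a cleaned-up DrawImage could look roughly like this (an untested sketch; it assumes the pixel projection above is already set and that you pass a texture id rather than a filename):
void DrawImage(GLuint tex, float x, float y, float w, float h, float angle)
{
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, tex);

    glMatrixMode(GL_MODELVIEW);
    glPushMatrix();                    //push/pop paired inside the function
    glTranslatef(x, y, 0.0f);
    glRotatef(angle, 0.0f, 0.0f, 1.0f);

    glBegin(GL_QUADS);                 //only vertex attributes between Begin/End
    glTexCoord2f(0, 0); glVertex2f(0, 0);
    glTexCoord2f(0, 1); glVertex2f(0, h);
    glTexCoord2f(1, 1); glVertex2f(w, h);
    glTexCoord2f(1, 0); glVertex2f(w, 0);
    glEnd();

    glPopMatrix();
    glDisable(GL_TEXTURE_2D);
}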
I'll finish with a full example:
#include <GL/glut.h>
#include <GL/gl.h>
#include <stdio.h>
int main(int argc, char** argv)
{
    //create GL context
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_RGBA);
    glutInitWindowSize(800, 600);
    glutCreateWindow("windowname");

    //create test checker image
    unsigned char texDat[64];
    for (int i = 0; i < 64; ++i)
        texDat[i] = ((i + (i / 8)) % 2) * 128 + 127;

    //upload to GPU texture
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, 8, 8, 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, texDat);
    glBindTexture(GL_TEXTURE_2D, 0);

    //match projection to window resolution (could be in reshape callback)
    glMatrixMode(GL_PROJECTION);
    glOrtho(0, 800, 0, 600, -1, 1);
    glMatrixMode(GL_MODELVIEW);

    //clear and draw quad with texture (could be in display callback)
    glClear(GL_COLOR_BUFFER_BIT);
    glBindTexture(GL_TEXTURE_2D, tex);
    glEnable(GL_TEXTURE_2D);
    glBegin(GL_QUADS);
    glTexCoord2i(0, 0); glVertex2i(100, 100);
    glTexCoord2i(0, 1); glVertex2i(100, 500);
    glTexCoord2i(1, 1); glVertex2i(500, 500);
    glTexCoord2i(1, 0); glVertex2i(500, 100);
    glEnd();
    glDisable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, 0);

    glFlush();  //don't need this with GLUT_DOUBLE and glutSwapBuffers
    getchar();  //pause so you can see what just happened
    //system("pause"); //I think this works on Windows

    return 0;
}
If you're OK with using OpenGL 3.0 or higher, an easier way to draw a texture is glBlitFramebuffer(). It doesn't support rotation; it only copies the texture to a rectangle within your framebuffer, scaling if necessary.
I haven't tested this code, but it would look something like this, with tex being your texture id:
GLuint readFboId = 0;
glGenFramebuffers(1, &readFboId);
glBindFramebuffer(GL_READ_FRAMEBUFFER, readFboId);
glFramebufferTexture2D(GL_READ_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, tex, 0);
glBlitFramebuffer(0, 0, texWidth, texHeight,
                  0, 0, winWidth, winHeight,
                  GL_COLOR_BUFFER_BIT, GL_LINEAR);
glBindFramebuffer(GL_READ_FRAMEBUFFER, 0);
glDeleteFramebuffers(1, &readFboId);
You can of course reuse the same FBO if you want to draw textures repeatedly. I only create/destroy it here to make the code self-contained.
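Wrapped into a small helper it might look something like this (again untested; the function name and the idea of keeping the FBO around are just an illustration):
//sketch: create the read FBO once, then blit any texture through it
static GLuint readFbo = 0;

void DrawTextureBlit(GLuint tex, int texWidth, int texHeight, int winWidth, int winHeight)
{
    if (readFbo == 0)
        glGenFramebuffers(1, &readFbo);
    glBindFramebuffer(GL_READ_FRAMEBUFFER, readFbo);
    glFramebufferTexture2D(GL_READ_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, tex, 0);
    glBlitFramebuffer(0, 0, texWidth, texHeight,
                      0, 0, winWidth, winHeight,
                      GL_COLOR_BUFFER_BIT, GL_LINEAR);
    glBindFramebuffer(GL_READ_FRAMEBUFFER, 0);
}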
I read this tutorial online: http://www.opengl-tutorial.org/beginners-tutorials/tutorial-3-matrices/. More precisely the chapter about projection matrices.
I am trying to apply what I read to a specific part of what I drew. However, I don't see anything different. Did I misunderstand something, or am I just applying it incorrectly?
I am actually drawing 2 houses. And I would like to create this effect: http://www.opengl-tutorial.org/assets/images/tuto-3-matrix/homogeneous.png
My code:
void rescale()
{
    glViewport(0, 0, 500, 500);
    glLoadIdentity();
    glMatrixMode(GL_PROJECTION);
    glOrtho(-6.500, 6.500, -6.500, 6.500, -6.00, 12.00);
}

void setup()
{
    glClearColor(1.0, 1.0, 1.0, 1.0);
    glEnable(GL_DEPTH_TEST);
}

void draw()
{
    glPushMatrix();
    //draw something in the orthogonal projection
    glPopMatrix();

    glPushMatrix();
    gluPerspective(0.0, 1, -12, 30); //perspective projection
    glPushMatrix();
    glScalef(0.2, 0.2, 0.2);
    glTranslatef(-1, 0.5, -5);
    glColor3f(1.0, 0.0, 0.0);
    drawVertexCube();
    glPopMatrix();

    glPushMatrix();
    glScalef(0.2, 0.2, 0.2);
    glTranslatef(-1, 0.5, -40);
    glColor3f(0, 1.0, 0.0);
    drawVertexCube();
    glPopMatrix();
    glPopMatrix();

    glFlush();
}
This is the result:
As you can see there is no (visible) result. I would like to accentuate the effect much more.
Your gluPerspective call is invalid:
gluPerspective(0.0, 1, -12, 30);
The near and far parameters must both be positive; otherwise the command just generates a GL error and leaves the current matrix unmodified, so your ortho projection is still in use. Your glPushMatrix()/glPopMatrix() calls won't help, because you never enclosed the glOrtho() call in such a block, so you never restore a state where that matrix is not already applied. Furthermore, the glMatrixMode() call in between glLoadIdentity() and glOrtho() also looks quite suspicious.
You should also be aware that all GL matrix functions besides the glLoad* ones will multiply the current matrix by a new matrix. Since you still have the ortho matrix applied, you would get the product of both matrices, which will totally screw up your results.
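A corrected setup might look roughly like this (an untested sketch; the field of view and clip planes are placeholder values, not taken from your scene):
void rescale()
{
    glViewport(0, 0, 500, 500);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();                       //start from identity instead of multiplying onto the old matrix
    gluPerspective(45.0, 1.0, 0.1, 100.0);  //fov in degrees, aspect ratio, positive near/far
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
}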
Finally, you should be aware that all of the GL matrix stack functionality is deprecated and completely removed in the core profile of modern OpenGL. If you are learning OpenGL nowadays, you should really consider learning the "modern" way (which is itself already about a decade old).
I am using CUDA to generate this ABGR output image. The image in question is stored in a uchar4 array. Each element of the array represents the color of each pixel in the image. Obviously, this output array is a 2D image but it is allocated in CUDA as a linear memory of interleaved bytes.
I know that CUDA can easily map this array to an OpenGL Vertex Buffer Object. My question is, assuming that I have the RGB value of every pixel in an image, along with the width and height of the image, how can I draw this image to screen using OpenGL?
I know that some kind of shader must be involved but since my knowledge is very little, I have no idea how a shader can use the color of each pixel, but map it to correct screen pixels.
I know I should increase my knowledge in OpenGL, but this seems like a trivial task.
If there is an easy way for me to draw this image, I'd rather not spend much time learning OpenGL.
I finally figured out an easy way to do what I wanted. Unfortunately, I did not know about the existence of the sample that Robert was talking about on NVIDIA's website.
Long story short, the easiest way to draw the image was to define a Pixel Buffer Object in OpenGL, register the buffer with CUDA and pass it as an output array of uchar4 to the CUDA kernel. Here is a quick pseudo-code based on JOGL and JCUDA that shows the steps involved. Most of the code was obtained from the sample on NVIDIA's website:
1) Creating the OpenGL buffers
GL2 gl = drawable.getGL().getGL2();
int[] buffer = new int[1];
// Generate buffer
gl.glGenBuffers(1, IntBuffer.wrap(buffer));
glBuffer = buffer[0];
// Bind the generated buffer
gl.glBindBuffer(GL2.GL_ARRAY_BUFFER, glBuffer);
// Specify the size of the buffer (no data is pre-loaded in this buffer)
gl.glBufferData(GL2.GL_ARRAY_BUFFER, imageWidth * imageHeight * 4, (Buffer)null, GL2.GL_DYNAMIC_DRAW);
gl.glBindBuffer(GL2.GL_ARRAY_BUFFER, 0);
// The bufferResource is of type CUgraphicsResource and is defined as a class field
this.bufferResource = new CUgraphicsResource();
// Register buffer in CUDA
cuGraphicsGLRegisterBuffer(bufferResource, glBuffer, CUgraphicsMapResourceFlags.CU_GRAPHICS_MAP_RESOURCE_FLAGS_NONE);
2) Initialize the texture and set texture parameters
GL2 gl = drawable.getGL().getGL2();
int[] texture = new int[1];
gl.glGenTextures(1, IntBuffer.wrap(texture));
this.glTexture = texture[0];
gl.glBindTexture(GL2.GL_TEXTURE_2D, glTexture);
gl.glTexParameteri(GL2.GL_TEXTURE_2D, GL2.GL_TEXTURE_MIN_FILTER, GL2.GL_LINEAR);
gl.glTexParameteri(GL2.GL_TEXTURE_2D, GL2.GL_TEXTURE_MAG_FILTER, GL2.GL_LINEAR);
gl.glTexImage2D(GL2.GL_TEXTURE_2D, 0, GL2.GL_RGBA8, imageWidth, imageHeight, 0, GL2.GL_BGRA, GL2.GL_UNSIGNED_BYTE, (Buffer)null);
gl.glBindTexture(GL2.GL_TEXTURE_2D, 0);
3) Run the CUDA kernel and display the results in OpenGL's display loop.
this.runCUDA();
GL2 gl = drawable.getGL().getGL2();
gl.glBindBuffer(GL2.GL_PIXEL_UNPACK_BUFFER, glBuffer);
gl.glBindTexture(GL2.GL_TEXTURE_2D, glTexture);
gl.glTexSubImage2D(GL2.GL_TEXTURE_2D, 0, 0, 0,
imageWidth, imageHeight,
GL2.GL_RGBA, GL2.GL_UNSIGNED_BYTE, 0); //The last argument must be ZERO! NOT NULL! :-)
gl.glBindBuffer(GL2.GL_PIXEL_PACK_BUFFER, 0);
gl.glBindBuffer(GL2.GL_PIXEL_UNPACK_BUFFER, 0);
gl.glBindTexture(GL2.GL_TEXTURE_2D, glTexture);
gl.glEnable(GL2.GL_TEXTURE_2D);
gl.glDisable(GL2.GL_DEPTH_TEST);
gl.glDisable(GL2.GL_LIGHTING);
gl.glTexEnvf(GL2.GL_TEXTURE_ENV, GL2.GL_TEXTURE_ENV_MODE, GL2.GL_REPLACE);
gl.glMatrixMode(GL2.GL_PROJECTION);
gl.glPushMatrix();
gl.glLoadIdentity();
gl.glOrtho(-1.0, 1.0, -1.0, 1.0, -1.0, 1.0);
gl.glMatrixMode(GL2.GL_MODELVIEW);
gl.glLoadIdentity();
gl.glViewport(0, 0, imageWidth, imageHeight);
gl.glBegin(GL2.GL_QUADS);
gl.glTexCoord2f(0.0f, 1.0f);
gl.glVertex2f(-1.0f, -1.0f);
gl.glTexCoord2f(1.0f, 1.0f);
gl.glVertex2f(1.0f, -1.0f);
gl.glTexCoord2f(1.0f, 0.0f);
gl.glVertex2f(1.0f, 1.0f);
gl.glTexCoord2f(0.0f, 0.0f);
gl.glVertex2f(-1.0f, 1.0f);
gl.glEnd();
gl.glMatrixMode(GL2.GL_PROJECTION);
gl.glPopMatrix();
gl.glDisable(GL2.GL_TEXTURE_2D);
3.5) The CUDA call:
public void runCuda(GLAutoDrawable drawable) {
    devOutput = new CUdeviceptr();
    // Map the OpenGL buffer to a resource and then obtain a CUDA pointer to that resource
    cuGraphicsMapResources(1, new CUgraphicsResource[]{bufferResource}, null);
    cuGraphicsResourceGetMappedPointer(devOutput, new long[1], bufferResource);
    // Setup the kernel parameters making sure that the devOutput pointer is passed to the kernel
    Pointer kernelParams = ...
    int gridSize = (int) Math.ceil(imageWidth * imageHeight / (double) DESC_BLOCK_SIZE);
    cuLaunchKernel(function,
            gridSize, 1, 1,
            DESC_BLOCK_SIZE, 1, 1,
            0, null,
            kernelParams, null);
    cuCtxSynchronize();
    // Unmap the buffer so that it can be used in OpenGL
    cuGraphicsUnmapResources(1, new CUgraphicsResource[]{bufferResource}, null);
}
PS: I thank Robert for providing the link to the sample. I also thank the people who downvoted my question without any useful feedback!
I’m trying to make a minimalist OpenGL program to run on both my Intel chipset (Mesa) and NVIDIA card through Bumblebee (Optimus).
My source code (using FreeGLUT):
#include <GL/freeglut.h>

void display(void);
void resized(int w, int h);

int main(int argc, char** argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_RGBA | GLUT_SINGLE);
    glutInitContextVersion(2, 1);
    glutInitContextProfile(GLUT_CORE_PROFILE);
    glutInitWindowSize(640, 480);
    glutCreateWindow("Hello, triangle!");

    glutReshapeFunc(resized);
    glutDisplayFunc(display);

    glClearColor(0.3, 0.3, 0.3, 1.0);
    glutMainLoop();
    return 0;
}

void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT);
    glColor3f(1.0, 1.0, 1.0);
    glBegin(GL_TRIANGLES);
    glVertex3f(0, 0.75, 0.0);
    glVertex3f(-0.75, -0.75, 0.0);
    glVertex3f(0.75, -0.75, 0.0);
    glEnd();
    glFlush();
}

void resized(int w, int h)
{
    glViewport(0, 0, w, h);
    glutPostRedisplay();
}
When I launch the program directly (./a.out) on the Intel chipset, everything works. I don't have that luck with primusrun ./a.out, which displays a transparent window:
It is not really transparent; the image behind it stays even if I move the window.
What's interesting is that when I switch to a double color buffer (using GLUT_DOUBLE instead of GLUT_SINGLE, and glutSwapBuffers() instead of glFlush()), it works both on Intel and with primusrun.
Here's my glxinfo: http://pastebin.com/9DADif6X
and my primusrun glxinfo: http://pastebin.com/YCHJuWAA
Am I doing it wrong or is it a Bumblebee-related bug?
The window is probably not really transparent; it probably just shows whatever was beneath it when it appeared. Try moving it around and watch whether it "drags" the picture along.
When using a compositor, single-buffered windows are a bit tricky, because there's no cue for the compositor to know when the program is done rendering. Using a double-buffered window and performing a buffer swap does give the compositor that additional information.
In addition, to finish a single-buffered drawing you call glFinish, not glFlush; glFinish also acts as a cue that drawing has been, well, finished.
Note that there's little use for single buffered drawing these days. The only argument against double buffering was lack of available graphics memory. In times where GPUs have several hundreds of megabytes of RAM available this is no longer a grave argument.
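For reference, a minimal double-buffered variant of the program above might look roughly like this (an untested sketch showing just the two changes, GLUT_DOUBLE and glutSwapBuffers):
#include <GL/freeglut.h>

void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT);
    glColor3f(1.0, 1.0, 1.0);
    glBegin(GL_TRIANGLES);
    glVertex3f(0, 0.75, 0.0);
    glVertex3f(-0.75, -0.75, 0.0);
    glVertex3f(0.75, -0.75, 0.0);
    glEnd();
    glutSwapBuffers();                            //replaces glFlush()/glFinish()
}

int main(int argc, char** argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_RGBA | GLUT_DOUBLE); //GLUT_DOUBLE instead of GLUT_SINGLE
    glutInitWindowSize(640, 480);
    glutCreateWindow("Hello, triangle!");
    glutDisplayFunc(display);
    glClearColor(0.3, 0.3, 0.3, 1.0);
    glutMainLoop();
    return 0;
}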
I want to learn OpenGL, and decided to start with a very simple example - rendering the shape of comet Wild 2 as inferred from measurements from the Stardust spacecraft (details about the data in: http://nssdc.gsfc.nasa.gov/nmc/masterCatalog.do?ds=PSSB-00133). Please keep in mind that I know absolutely NOTHING about OpenGL. Some Google-fu helped me get as far as the code presented below. Despite my best efforts, my comet sucks:
I would like for it to look prettier, and I have no idea how to proceed (besides reading the Red book, or similar). For example:
How can I make a very basic "wireframe" rendering of the shape?
Suppose the Sun is along the "bottom" direction (i.e., along -Y), how can I add the light and see the shadow on the other side?
How can I add "mouse events" so that I can rotate my view by, and zoom in/out?
How can I make this monster look prettier? Any references to on-line tutorials, or code examples?
I placed the source code, data, and makefile (for OS X) in bitbucket:
hg clone https://arrieta#bitbucket.org/arrieta/learning-opengl
The data consists of 8,761 triplets (the vertices, in a body-fixed frame) and 17,518 triangles (each triangle is a triplet of integers referring to one of the 8,761 vertex triplets).
#include<stdio.h>
#include<stdlib.h>
#include<OpenGL/gl.h>
#include<OpenGL/glu.h>
// I added this in case you want to "copy/paste" the program into a
// non-Mac computer
#ifdef __APPLE__
# include <GLUT/glut.h>
#else
# include <GL/glut.h>
#endif
/* I hardcoded the data and use globals. I know it sucks, but I was in
a hurry. */
#define NF 17518
#define NV 8761
unsigned int fs[3 * NF];
float vs[3 * NV];
float angle = 0.0f;
/* callback when the window changes size (copied from Internet example) */
void changeSize(int w, int h) {
    if (h == 0) h = 1;
    float ratio = w * 1.0 / h;

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glViewport(0, 0, w, h);
    gluPerspective(45.0f, ratio, 0.2f, 50000.0f); /* 45 degrees fov in Y direction; 50km z-clipping */
    glMatrixMode(GL_MODELVIEW);
}

/* this renders and updates the scene (mostly copied from Internet examples) */
void renderScene() {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glLoadIdentity();
    gluLookAt(0.0f, 0.0f, 10000.0f, /* eye is looking down along the Z-direction at 10km */
              0.0f, 0.0f, 0.0f,     /* center at (0, 0, 0) */
              0.0f, 1.0f, 0.0f);    /* y direction along natural y-axis */

    /* just add a simple rotation */
    glRotatef(angle, 0.0f, 0.0f, 1.0f);

    /* use the facets and vertices to insert triangles in the buffer */
    glBegin(GL_TRIANGLES);
    unsigned int counter;
    for (counter = 0; counter < 3 * NF; ++counter) {
        glVertex3fv(vs + 3 * fs[counter]); /* here is where I'm loading the data -
                                              why do I need to load it every time? */
    }
    glEnd();

    angle += 0.1f; /* update the rotation angle */
    glutSwapBuffers();
}

int main(int argc, char* argv[]) {
    FILE *fp;
    unsigned int counter;

    /* load vertices */
    fp = fopen("wild2.vs", "r");
    counter = 0;
    while (fscanf(fp, "%f", &vs[counter++]) > 0);
    fclose(fp);

    /* load facets */
    fp = fopen("wild2.fs", "r");
    counter = 0;
    while (fscanf(fp, "%d", &fs[counter++]) > 0);
    fclose(fp);

    /* this initialization and "configuration" is mostly copied from Internet */
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DEPTH | GLUT_DOUBLE | GLUT_RGBA);
    glutInitWindowPosition(0, 0);
    glutInitWindowSize(1024, 1024);
    glutCreateWindow("Wild-2 Shape");

    glEnable(GL_LIGHTING);
    glEnable(GL_LIGHT0);
    glEnable(GL_DEPTH_TEST);

    GLfloat mat_specular[] = { 1.0, 1.0, 1.0, 1.0 };
    GLfloat mat_shininess[] = { 30.0 };
    GLfloat light_position[] = { 3000.0, 3000.0, 3000.0, 0.0 };
    glClearColor(0.0, 0.0, 0.0, 0.0);
    glShadeModel(GL_SMOOTH);
    glMaterialfv(GL_FRONT, GL_SPECULAR, mat_specular);
    glMaterialfv(GL_FRONT, GL_SHININESS, mat_shininess);
    glLightfv(GL_LIGHT0, GL_POSITION, light_position);

    glutDisplayFunc(renderScene);
    glutReshapeFunc(changeSize);
    glutIdleFunc(renderScene);

    glutMainLoop();
    return 0;
}
EDIT
It is starting to look better, and I have now plenty of resources to look into for the time being. It still sucks, but my questions have been answered!
I added the normals, and can switch back and forth between the "texture" and the wireframe:
PS. The repository shows the changes made as per SeedmanJ's suggestions.
It's really easy to switch to wireframe rendering in OpenGL; just use
glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);
and to switch back to filled rendering:
glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);
About the lights: the fixed-function pipeline gives you up to 8 lights (GL_LIGHT0 through GL_LIGHT7), which are combined with your normals and materials to produce the final shading. You enable lighting with:
glEnable(GL_LIGHTING);
and then activate each of your lights with either:
glEnable(GL_LIGHT0);
glEnable(GL_LIGHT1);
To change a light property such as its position, see glLightfv:
http://linux.die.net/man/3/gllightfv
You'll have to set up a normal for each vertex you define if you're using the glBegin() method. With VBO rendering it's the same, except the normals are also stored in video memory. With the glBegin() method you can simply call, for example, glNormal3f(x, y, z); before each vertex you define.
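For your triangle loop, per-face normals can be computed from the cross product of two edges. A rough sketch, reusing the vs/fs arrays from your code (whether the normals point outward depends on the winding order of your data, so you may need to flip them):
#include <math.h>

void faceNormal(const float* a, const float* b, const float* c, float n[3])
{
    /* cross product of the two edge vectors, then normalize */
    float u[3] = { b[0] - a[0], b[1] - a[1], b[2] - a[2] };
    float v[3] = { c[0] - a[0], c[1] - a[1], c[2] - a[2] };
    float len;
    n[0] = u[1] * v[2] - u[2] * v[1];
    n[1] = u[2] * v[0] - u[0] * v[2];
    n[2] = u[0] * v[1] - u[1] * v[0];
    len = sqrtf(n[0] * n[0] + n[1] * n[1] + n[2] * n[2]);
    if (len > 0.0f) { n[0] /= len; n[1] /= len; n[2] /= len; }
}

/* inside renderScene, one normal per triangle: */
glBegin(GL_TRIANGLES);
for (unsigned int f = 0; f < NF; ++f) {
    const float* a = vs + 3 * fs[3 * f + 0];
    const float* b = vs + 3 * fs[3 * f + 1];
    const float* c = vs + 3 * fs[3 * f + 2];
    float n[3];
    faceNormal(a, b, c, n);
    glNormal3fv(n);
    glVertex3fv(a);
    glVertex3fv(b);
    glVertex3fv(c);
}
glEnd();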
For more information about what you can do, the Red Book is a good place to begin.
Moving your "scene" is one more thing OpenGL lets you do indirectly. Since it all works with matrices, you can use, for example:
glTranslatef(x, y, z);
glRotatef(angle, x, y, z);
....
Managing key and mouse events has (I'm almost sure about this) nothing to do with OpenGL; it depends on the library you're using, for example GLUT/SDL/..., so you'll have to refer to their own documentation.
Finally, for further information about the functions you can use, see http://www.opengl.org/sdk/docs/man/; there's also a tutorial section leading you to several interesting websites.
Hope this helps!
How can I make a very basic "wireframe" rendering of the shape?
glPolygonMode( GL_FRONT, GL_LINE );
Suppose the Sun is along the "bottom" direction (i.e., along -Y), how can I add the light and see the shadow on the other side?
Good shadows are hard, especially with the fixed-function pipeline.
But before that you need normals to go with your vertices. You can calculate per-face normals pretty easily.
How can I add "mouse events" so that I can rotate my view by, and zoom in/out?
Try the mouse handlers I did here.
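If that link isn't handy, a minimal GLUT rotate/zoom handler might look roughly like this (an untested sketch; the globals and sensitivity values are illustrative, and the wheel arriving as buttons 3/4 is a freeglut/X11 convention):
static int lastX, lastY, dragging = 0;
static float rotX = 0.0f, rotY = 0.0f, zoom = 10000.0f;

void mouse(int button, int state, int x, int y)
{
    if (button == GLUT_LEFT_BUTTON) {
        dragging = (state == GLUT_DOWN);
        lastX = x; lastY = y;
    } else if (button == 3) zoom *= 0.9f;   /* wheel up: zoom in */
    else if (button == 4) zoom *= 1.1f;     /* wheel down: zoom out */
    glutPostRedisplay();
}

void motion(int x, int y)
{
    if (dragging) {
        rotY += (x - lastX) * 0.5f;         /* horizontal drag spins around Y */
        rotX += (y - lastY) * 0.5f;         /* vertical drag spins around X */
        lastX = x; lastY = y;
        glutPostRedisplay();
    }
}

/* register in main(): glutMouseFunc(mouse); glutMotionFunc(motion);
   then use zoom for the eye distance in gluLookAt and rotX/rotY in glRotatef calls */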
Though some like to say "start with something simpler", I think sometimes you need to dive in to get a good understanding in a short time span! Well done!
Also if you would like an example, please ask...
I have written a WELL DOCUMENTED, efficient, but readable, pure Win32 (no .NET or MFC) OpenGL FPS!
Though it appears other people have answered most of your questions...
I can help you if you would like, maybe make a cool texture (if you don't have one)...
To answer this question:
glBegin(GL_TRIANGLES);
unsigned int counter;
for (counter = 0; counter < 3 * NF; ++counter) {
    glVertex3fv(vs + 3 * fs[counter]); /* here is where I'm loading the data -
                                          why do I need to load it every time? */
}
glEnd();
That is re-rendering the vertices of the 3D model (in case the view has changed) and, using the DC (device context), blitting the result onto the window.
It has to be done repeatedly (in case something has caused the window to be cleared)...
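If re-submitting every vertex each frame bothers you, you can also keep the data on the GPU and just issue a draw call per frame. A rough sketch using vertex buffer objects with the vs/fs arrays from the question (assumes your context/headers expose GL 1.5 or newer; otherwise you'd need an extension loader):
/* one-time setup: upload vertices and indices to buffer objects */
GLuint vbo, ibo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(vs), vs, GL_STATIC_DRAW);
glGenBuffers(1, &ibo);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(fs), fs, GL_STATIC_DRAW);

/* each frame: draw from the buffers instead of glBegin/glEnd */
glEnableClientState(GL_VERTEX_ARRAY);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glVertexPointer(3, GL_FLOAT, 0, (void*)0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
glDrawElements(GL_TRIANGLES, 3 * NF, GL_UNSIGNED_INT, (void*)0);
glDisableClientState(GL_VERTEX_ARRAY);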
I've written a small tiling game engine with OpenGL and C, and I can't seem to figure out what the problem is. My main loop looks like this:
void main_game_loop()
{
    (poll for events and respond to them)
    glClear(GL_COLOR_BUFFER_BIT);
    glPushMatrix();
    draw_block(WALL, 10, 10);
}
draw_block:
void draw_block(block b, int x, int y)
{
    (load b's texture from a hash and store it in GLuint tex)
    glPushMatrix();
    glTranslatef(x, y, 0);
    glBindTexture(GL_TEXTURE_2D, tex);
    glBegin(GL_QUADS);
    //BLOCK_DIM is 32, the width and height of the texture
    glTexCoord2i(0, 0); glVertex3f(0, 0, 0);
    glTexCoord2i(1, 0); glVertex3f(BLOCK_DIM, 0, 0);
    glTexCoord2i(1, 1); glVertex3f(BLOCK_DIM, BLOCK_DIM, 0);
    glTexCoord2i(0, 1); glVertex3f(0, BLOCK_DIM, 0);
    glEnd();
    glPopMatrix();
}
initialization function: (called before main_game_loop)
void init_gl()
{
    glViewport(0, 0, SCREEN_WIDTH, SCREEN_HEIGHT);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluOrtho2D(0, SCREEN_WIDTH, SCREEN_HEIGHT, 0);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glClearColor(0, 0, 0, 0);
    glEnable(GL_TEXTURE_2D);
    glEnable(GL_BLEND);
    glDisable(GL_DEPTH_TEST);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
}
When run, this displays a black screen. However, if I remove the glViewport call, it seemingly displays the texture, but huge and in the corner of the window. Screenshot:
The texture IS being drawn correctly, because if I scale out by a huge factor, I can see the entire image. The y-axis also seems to be flipped from what I used in the gluOrtho2D call (discovered by making events add or subtract from x/y coordinates of the image, subtracting from the y coordinate causes the image to move downward). I'm starting to get frustrated, because this is the simplest possible example I can think of. I'm using SDL, and am passing SDL_OPENGL to SDL_SetVideoMode. What is going on here?
Looks like a problem with glViewport, but just to be sure, did you try clearing the color buffer to purple?
I've always thought of glViewport as a video/windowing function, not actually part of OpenGL itself, because it is the intermediate between the window manager and the OpenGL subsystem, and it uses window coordinates. As such, you should probably look at it along with the other SDL video calls. I suggest updating the question with the full code, or at least with those parts relevant to the video/window subsystem.
Or did you simply forget to call glViewport after a resize?
You should also try your code without SDL_FULLSCREEN and/or with a smaller window. I usually start with a 512x512 or 640x480 window until I get the viewport and some basic controls right.
The first two parameters of glViewport specify the lower left corner of the viewport:
http://www.opengl.org/sdk/docs/man/xhtml/glViewport.xml
You can try
glViewport(0, SCREEN_HEIGHT, SCREEN_WIDTH, SCREEN_HEIGHT);
For gluOrtho2D, the parameters are left, right, bottom, top,
so I would probably use
gluOrtho2D(0, SCREEN_WIDTH, 0, SCREEN_HEIGHT);
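Putting the viewport and projection together, a resize-aware 2D setup might look roughly like this (an untested sketch, not verified against the asker's SDL code; pick whichever gluOrtho2D orientation you want):
static void setup_2d_view(int w, int h)
{
    glViewport(0, 0, w, h);  /* lower-left corner at (0,0), covering the whole window */
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluOrtho2D(0, w, 0, h);  /* origin bottom-left; use gluOrtho2D(0, w, h, 0) for a top-left origin */
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
}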