OpenGL Lighting and Normals - C

I'm currently experimenting with OpenGL and GLUT.
Since I have next to no idea what I'm doing, I've completely messed up the lighting.
The complete compilable file can be found here: main.c
I have a display loop which currently operates as follows:
glutDisplayFunc(also idle func):
glClear GL_COLOR_BUFFER_BIT and GL_DEPTH_BUFFER_BIT
switch to modelview matrix
load identity
first Rotate and then Translate according to my keyboard and mouse inputs for the camera
draw a ground plane with glNormal 0,1,0 and glMaterial on front and back,
encapsulated by push/popmatrix
pushmatrix
translate
glutSolidTeapot
popmatrix
do lighting related things, glEnable GL_LIGHTING and LIGHT0 and passing
float pos[] = {0.1,0.1,-0.1,0.1};
glLightfv( GL_LIGHT0, GL_POSITION, pos );
swap the buffers
the function associated with
glutReshapeFunc operates as follows (this is from the lighthouse3d.com GLUT tutorial):
calculate the ratio of the screen
switch to projection matrix
loadidentity
set the viewport
set the perspective
switch to modelview matrix
This all seems to work somehow,
but as soon as I enable lighting, the normals seem to get completely messed up.
My GL_LIGHT0 seems to stay where it should, as I can see the light spot on the ground
staying still as I move around.
But the teapot's texture seems to move when I move my camera,
while the teapot itself stands still.
Here is some visual material to illustrate it;
I apologize for my bad English.
Link to YouTube video describing it visually

You have a series of mistakes in your code:
You don't properly set the properties of your OpenGL window:
glutCreateWindow (WINTITLE);
glutInitDisplayMode (GLUT_RGB | GLUT_DOUBLE | GLUT_DEPTH);
The glutInitDisplayMode call only affects windows you create after it. You should swap those two lines.
You never enable the depth test. You should add glEnable(GL_DEPTH_TEST) after creating the window. Not using the depth test explains the weird "see-through" effect you get with the teapot.
You have the following code
glEnable (GL_CULL_FACE | GL_CULL_FACE_MODE);
This is wrong in two ways: these GLenums are not single bits, but plain values. You cannot OR them together and expect anything useful to happen. I don't know if this particular call will enable something you don't expect or just generate an error.
The second issue is that GL_CULL_FACE_MODE isn't even a valid enum to pass to glEnable.
In your case, you either skip the CULL_FACE completely, or you should write
glEnable (GL_CULL_FACE);
glFrontFace(GL_CW);
The latter call changes the front-face orientation from OpenGL's default counterclockwise rule to the clockwise one, as the GLUT teapot is defined that way. Interestingly, your floor is also drawn following that rule, so it will fit your whole scene. At least for now.
You have not fully understood how GL's state machine works. You draw the scene and then set the lighting, but this has no effect on objects that are already drawn. It only affects the next thing you draw, which here is the next frame. Furthermore, the lighting of the fixed-function pipeline works in eye space. That means that if you want a light source located at a fixed position in the world, and not at a fixed position relative to the camera, you have to re-specify the light position, with the updated modelview matrix applied, every time the camera moves. In your case, the light source will lag one frame behind the camera. This is probably not noticeable, but still wrong in principle. You should reorder your display() function to
glClear (GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glMatrixMode (GL_MODELVIEW);
glLoadIdentity();
control (WALKSPEED, MOUSESPEED, mousein);
lightHandler();
drawhelpgrid(1, 20);
drawTeapod();
glutSwapBuffers();
With those changes, I can actually get the expected result of a lit teapot on my system. But as a side note, I feel obliged to warn you that almost all of your code relies on deprecated OpenGL features. This stuff has been removed from modern versions of OpenGL. If you start learning OpenGL now, you should consider learning the modern programmable pipeline, and not decades-old obsolete stuff.

Related

Transparency issue with Scenekit renderer on arkit

I see odd behaviour under certain conditions when I have two geometries one above the other and the first one, at least, has an alpha value != 0.
I am using a custom program for the SCNMaterial; the program is written in Metal and does simple stuff, the output color being of the form
float4(color.x * alpha, color.y * alpha, color.z * alpha, alpha)
The geometry doesn't really influence the behaviour, but for the sake of explanation I have an SCNPlane in the back and an SCNCylinder in the front.
It all looks good while the camera is within roughly +/- 30 degrees of perpendicular; see below.
As you can see, there is a bluish background and a cylinder a few centimeters in front of it; the cylinder is semi-transparent and the bluish color clearly shows through it.
Whenever the camera gets past +/- 30 degrees from perpendicular, we can see through the background, because it is no longer rendered where the cylinder covers it; see the one on the left.
Basically, when the camera is roughly perpendicular, the two layers are blended and give the correct result; once it enters that cone of angles, only the top layer is rendered.
What am I missing here to make it render as it should?
I am not using any rendering priorities, as far as I am aware.
Thank you.

Performance drop in SceneKit with custom Metal shader

I have a scene with 4000 objects (1000 of them visible), all using the same material (by assigning each custom-created SCNGeometry's firstMaterial property to the same SCNMaterial object), running at 60 FPS (1000 draw calls, 150k triangles, Metal flush ~12 ms).
Now I want to change how my material is rendered. Using a shader modifier everything works fine and performance is the same, but I need to completely replace SceneKit's rendering, so I am using an SCNProgram with Metal vertex and fragment shaders (I tried both very basic ones and dumps of SceneKit's default shaders). When I assign that program to my material everything works as expected, except for a huge performance drop: statistics show 20 FPS, 4000 draw calls, 600k triangles, Metal flush ~30 ms.
So it seems that some work is being done several times. Does anyone have an idea where the root of the problem might be?
EDIT:
It seems I have narrowed down the problem. It affects both OpenGL and Metal shaders: I'm losing frustum culling when using a custom SCNProgram. In SceneKit's default Metal shader I see:
typedef struct {
...
#ifdef USE_BOUNDINGBOX
float2x3 boundingBox;
#endif
...
} commonprofile_node;
This is passed to the vertex and fragment shaders but is not used anywhere. Since the default shader does not contain frustum culling code, it is presumably done elsewhere.

Blitting with OpenCL

I am making an OpenCL raycaster, and I'm looking to blit pixels to the screen with the least overhead possible (every tick counts), ideally as cheap as calling glClear() every tick. I thought of creating a framebuffer to draw to, passing it to OpenCL, and then blitting with glBlitFramebuffer(), but having OpenCL draw to the screen directly would be even better. So: is there a way to draw pixels directly from OpenCL? Hacky stuff is OK.
The best thing I can do right now is check how glClear does it...
The usual approach is to use OpenCL to draw to a shared OpenGL/OpenCL texture object (created with the clCreateFromGLTexture() function) and then draw it to the screen with OpenGL by rendering a full-screen quad with that texture.
Edit: I've written a small example which uses OpenCL to calculate a mandelbrot fractal and then renders it directly from the GPU to the screen with OpenGL. The code is here: mandelbrot.cpp.

OpenGL; Overlapping Alpha-Transparent Particles

I am writing an OpenGL program in C that implements alpha-transparent billboarded particles that use a PNG (with transparency) as their texture via pnglib. However, I have discovered that a particle's transparent areas still overwrite particles drawn before it that lie behind it. In other words, newly drawn particles, though transparent in some areas, completely cover some previously drawn particles that should instead show through the transparent regions.
In order to visualize the effect this is having, I am attaching a few images to display the problem:
Initially I am calling the particles from oldest-to-newest:
However when the view is changed, the overlapping effect is apparent:
When I decide to reverse the call order I get the opposite:
I believe a solution would involve drawing the particles in order from farthest from the camera to nearest. However, it is computationally heavy to go through every active particle, order them from farthest to nearest, and then draw each one every display frame. I am hoping to find an easier, more efficient solution. I've already tried my hand at glBlendFunc(), but no sfactor or dfactor combination seems to work.
Draw all non-transparent geometry first. Then, before drawing the particles, disable depth-buffer writes by calling glDepthMask(GL_FALSE).
That will fix most of the rendering problems.
Sorting the particles by distance from the camera is still a good idea. With today's CPU power, that shouldn't be much of a problem.

OpenGL total beginner and 2D animation project?

I have installed GLUT and Visual Studio 2010 and found some tutorials on OpenGL basics (www.opengl-tutorial.org) and 2D graphics programming. I have advanced knowledge of C but no experience with graphics programming...
For a project (astronomy, time scales), I must create one object in the center of the window and make five other objects (circles, dots...) rotate around the centered object according to some equations (which I can implement and solve). The equations calculate the coordinates of those five objects, and all of them take a parameter t (time). To create the animation I will vary t from 0 to 2*pi in some step and get the coordinates at different moments. If the task were just to print the new coordinates it would be easy for me, but the problem is how to animate the graphics. Can I use some OpenGL functions for rotation/translation? How do I make an object move to a desired location with coordinates determined by an equation? Or can I redraw the object at new coordinates every millisecond? The first thing I thought of was to draw all objects, calculate new coordinates, clear the screen, draw all objects at the new coordinates, and repeat that indefinitely... (it would be primitive, but would it work?)
Here is a screenshot of those objects: http://i.snag.gy/ht7tG.jpg. My question is how to make the animation by calculating new coordinates for the objects each step and moving them to their new locations. Can I do that with OpenGL basics and good knowledge of C and geometry? Any ideas where to start? Thanks.
Or i can redraw object in new coordinates every millisecond? First
thing i thought was to draw all objects, calculate new coordinates,
clear screen and draw all objects in new coordinates and repeat that
infinitely..
This is indeed the way to go. I would further suggest that you don't bother with shaders and vertex buffers, which are the OpenGL 3/4 way. The easiest approach is what is called "immediate mode": deprecated in OpenGL 3/4, but available in 1/2/3. It's easy:
glPushMatrix(); //save modelview matrix
glTranslatef(obj->x, obj->y, obj->z); //move origin to object center
glBegin(GL_TRIANGLES); //start drawing triangles
glColor3f(1.0f, 0.0f, 0.0f); //a nice red one
glVertex3f(0.0f, +0.6f, 0.0f);
glVertex3f(-0.4f, 0.0f, 0.0f);
glVertex3f(+0.4f, 0.0f, 0.0f); //almost equilateral
glEnd();
glPopMatrix(); //restore modelview matrix/origin
Do look into helper libraries glu (useful for setting up the camera / the projection matrix) and glut (should make it very easy to set up a window and basic controls and drawing).
It would probably take you longer to set it up (display a rotating triangle) than to figure out how to use it. In fact, here's some code to help you get started. Your first challenge could be to set up a 2D orthogonal projection matrix that projects along the Z-axis, so you can use the 2D functions (glVertex2).
First thing i thought was to draw all objects, calculate new coordinates, clear screen and draw all objects in new coordinates and repeat that infinitely..(it would be primitive but will work?)
That's exactly how it works. With GLUT, you set a display function that gets called when GLUT thinks it's time to draw a new frame. In this function, clear the screen, draw the objects and flush it to the screen. Then just instruct GLUT to draw another frame, and you're animating!
You might want to keep track of the time between frames so you can animate things smoothly, but I'm sure you can figure that part out.
OpenGL is really just a drawing library. It doesn't do animation, that's up to you to implement. Clear/draw/flush is the commonly used approach for it though.
Note: with 'flush' I mean glFlush(), although GLUT in multi-buffer mode requires glutSwapBuffers()
The red book explains the proper way to draw models that can first be translated, rotated, scaled and so on: http://www.glprogramming.com/red/chapter03.html
Basically, you load the identity, perform transforms/rotations/scales (which one you want first matters - again the book explains it), draw the model as though it was at the origin at normal scale and it'll be placed in its new position. Then you can load identity and proceed with the next one. Every frame of an animation, you glClear() and recalculate/redraw everything. (It sounds expensive, but there's usually not much you can cache between draws).
