OpenGL - Nothing appearing on screen - C

I'm using Windows 7 with VC++ 2010
I'm trying to draw a simple point on the screen, but it's not showing.
The screen clears to black, so I know that I have a valid OpenGL context, etc.
Basically my OpenGL code boils down to this (I don't have a depth buffer at this point):
glClear( GL_COLOR_BUFFER_BIT );
glMatrixMode( GL_PROJECTION );
glLoadIdentity();
gluPerspective( 45.0, 1018.0 / 743.0, 5.0, 999.0 );
glMatrixMode( GL_MODELVIEW );
glLoadIdentity();
glColor4f( 1, 1, 1, 1 );
glPointSize( 100 );
glBegin( GL_POINTS );
glVertex2i( 0, 0 );
glEnd();
SwapBuffers( hdc );
The initialization code for OpenGL is this:
glClearColor( 0, 0, 0, 1 );
glShadeModel( GL_SMOOTH );
glHint( GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST );
The problem is that nothing appears on the screen, the only thing that happens is the screen gets cleared.

Go through the following checklist (the general OpenGL troubleshooting checklist from delphigl.com (German), which we usually give people to go through when they don't see anything):
Is your object accidentally painted in black? Try changing the glClearColor.
Do you accidentally have texturing enabled? Disable it before drawing with glDisable(GL_TEXTURE_2D).
Try disabling the following tests:
GL_DEPTH_TEST
GL_CULL_FACE
GL_ALPHA_TEST
Check whether your glViewport is set up correctly.
Try translating your modelview matrix past the near clipping plane (5.0 in your case) with glTranslatef(0, 0, -6.0).
There are several potential issues. The main problem is how you are using the gluPerspective projection. gluPerspective sets up a perspective view, and as such it will not display anything at (0, 0, 0) in eye coordinates. With your setup, nothing closer than 5 units in front of the camera (the near clipping plane) is drawn. I suggest setting your point to glVertex3f(0.0f, 0.0f, -10.0f) and trying again; note that in OpenGL's default eye space the camera looks down the negative z axis. Another solution would be to use glTranslatef to move your eye coordinates by more than 5 units along -z.
Also, glPointSize will probably not accept your value of 100, as common implementations clamp the point size to a maximum of around 64 (you can query the actual limit with GL_ALIASED_POINT_SIZE_RANGE).
For a good start with OpenGL, I'd also recommend reading NeHe's tutorials. They might not be state of the art, but they cover everything you're facing right now.

The problem was that I had called glDepthRange while misunderstanding what it actually does: I was calling it as glDepthRange( nearPlane, farPlane ), i.e. with 5.0f and 999.0f. When I removed this call, everything drew correctly. Thank you very much for your help. :)

glClear before XMapWindow (prevent undefined graphical buffer)

I'm struggling to handle the case where, upon XMapWindow with GLX, the buffer is undefined, so the window shows undefined data before the first glXSwapBuffers completes.
I vaguely remember GL operations being meaningless before glXMakeCurrent, and glXMakeCurrent being meaningless before XMapWindow. Under these constraints, how does one control what is drawn to an X11 GLX window when it is mapped?
It seems like I would want to write the calls in the following order, but I still get undefined data (is this a video-driver-specific issue that I should simply ignore?):
glXMakeCurrent(d, w, ctx);
glClearColor(0.0, 0.0, 0.0, 1.0);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glXSwapBuffers(d, w);
XMapWindow(d, w);
Looking at a Khronos reference example, the same behavior occurs there.
With a small patch, we can exercise the undefined behavior every time:
--- OpenGL 3.0 Context Creation (GLX).orig.cc 2020-03-23 22:32:41.421765953 -0700
+++ OpenGL 3.0 Context Creation (GLX).edit.cc 2020-03-23 22:32:46.113753021 -0700
@@ -163,6 +163,8 @@
printf( "Mapping window\n" );
XMapWindow( display, win );
+ XSync( display, win );
+ sleep(2);
// Get the default screen's GLX extension list
const char *glxExts = glXQueryExtensionsString( display,
It looks like there is simply a de facto race condition between XMapWindow plus GLX initialization on one side and glXSwapBuffers on the other; the best thing to do is ensure your first frame is swapped in as soon as possible and hope that you usually beat X to the punch.

Problems with attaching textures of different sizes to FBO

Today I faced a strange problem while I was developing my OpenGL 4.5 application. I attempted to attach two textures of different sizes to one FBO as color attachments in order to create a bloom shader. As far as I know, in modern OpenGL versions this should be possible.
This is the code I'm using:
//Create textures
GLuint tex[2];
glCreateTextures( GL_TEXTURE_2D, 2, tex );
glTextureStorage2D( tex[0], 1, GL_RGB8, 2048, 2048 );
glTextureStorage2D( tex[1], 1, GL_RGB8, 1024, 1024 );
//Create FBO
GLuint fbo;
glCreateFramebuffers( 1, &fbo );
glNamedFramebufferTexture( fbo, GL_COLOR_ATTACHMENT0, tex[0], 0 );
glNamedFramebufferTexture( fbo, GL_COLOR_ATTACHMENT1, tex[1], 0 );
//Check completeness
GLenum comp = glCheckNamedFramebufferStatus( fbo, GL_FRAMEBUFFER );
I'd expect comp to be GL_FRAMEBUFFER_COMPLETE, however, in my case glCheckNamedFramebufferStatus returns GL_FRAMEBUFFER_INCOMPLETE_DIMENSIONS_EXT.
I'm afraid it might be some driver bug, based on these two threads, since apparently INCOMPLETE_DIMENSIONS has been removed from newer OpenGL versions:
http://forum.lwjgl.org/index.php?topic=4207.0
devtalk.nvidia.com topic
Here's the full code to illustrate the issue - https://pastebin.com/c9Hqzzky.
My output is:
0x8cd9
0x8cd9 - GL_FRAMEBUFFER_INCOMPLETE_DIMENSIONS_EXT
fbotest: fbotest.c:41: main: Assertion `comp != GL_FRAMEBUFFER_INCOMPLETE_DIMENSIONS_EXT' failed.
Aborted (core dumped)
I have Nvidia GTX 1060 graphics card, Ubuntu 18.04LTS and Nvidia driver version 390.67.
Has anyone seen similar behavior before? If so, what are possible workarounds?
Thank you for your help in advance.
This is actually an Nvidia driver bug with ARB DSA (which I filed last February); if you use the non-Named versions, it does not give a validation error.

give positions for sphere drawn using gluSphere()

Here's my code, but my sphere always stays at the origin; my glTranslatef() doesn't change the position of the sphere. Please give answers with an explanation.
glColor3f(1,0,0);
GLUquadric *quad;
quad = gluNewQuadric();
gluSphere(quad,25,100,20);
glTranslatef(2,2,2);
You're drawing the sphere before doing the translation, so of course the translation has no effect.
Move your glTranslatef call above gluSphere:
glColor3f(1,0,0);
GLUquadric *quad;
quad = gluNewQuadric();
glTranslatef(2,2,2);
gluSphere(quad,25,100,20);
(Also note that the GLU library is quite old and tied to the deprecated fixed-function pipeline, so you should probably avoid it.)

glXSwapbuffers appear not to have swapped (?)

My situation is like this: I wrote code that checks a group of windows to see whether their contents are eligible to be swapped (that is, all redrawing has completed successfully on the window and all of its children after a resizing event). When the conditions are met, I call glXSwapBuffers for that window and all of its children. My aim was a flicker-free resizing system. The child windows are arranged in a tiled fashion and do not overlap; between them, the approach appeared to work. My issue, however, is with the parent: sometimes during resizing its content flickers. So far, this is what I have implemented.
All events such as ConfigureNotify or Expose are already compressed as needed.
The window's background_pixmap is set to None.
Since window content is lost whenever an Expose event is generated, I always keep a copy of the finished redraw in my own allocated buffer (neither a pixmap nor an FBO, but it suffices for now).
My logic for each call to glXSwapBuffers() is this.
void window_swap( Window *win ) {
    Window *child;
    if ( win ) {
        for ( child = win->child; child; child = child->next )
            window_swap( child );
        if ( isValidForSwap( win ) ) {
            glXMakeCurrent( dpy, win->drawable, win->ctx );
            glDrawBuffer( GL_BACK );
            RedrawWindowFromBuffer( win, win->backing_store );
            glXSwapBuffers( dpy, win->drawable );
        }
    }
}
This should ensure the content is always restored before a call to swap. Sadly, it did not work out that way in practice. For debugging, I adjusted the above code to dump what is in each buffer, as follows:
void window_swap( Window *win ) {
    if ( win ) {
        if ( isValidForSwap( win ) ) {
            glXMakeCurrent( dpy, win->drawable, win->ctx );
            glDrawBuffer( GL_BACK );
            OutputWindowBuffer( "back.jpg", GL_BACK );
            RedrawWindowFromBuffer( win, win->backing_store );
            glXSwapBuffers( dpy, win->drawable );
            glDrawBuffer( GL_BACK );
            glClearColor( 1.0, 1.0, 1.0, 1.0 );
            glClear( GL_COLOR_BUFFER_BIT );
            OutputWindowBuffer( "front_after.jpg", GL_FRONT );
            OutputWindowBuffer( "back_after.jpg", GL_BACK );
        }
    }
}
The OutputWindowBuffer() function uses standard glReadPixels() to read the buffer content and write it out as an image; which buffer is read is determined by the parameter passed in. What I found in the output pictures is this:
The picture of the back buffer after RedrawWindowFromBuffer() is what was expected.
The picture of the back buffer after the swap is filled with the clear colour, as expected. So it is not the case that glReadPixels lags in its execution when called on the front buffer, as a once-reported Intel driver bug seemed to suggest.
The picture of the front buffer after the swap shows mostly black artifacts (my window's colour is always cleared to a different colour before each drawing).
Are there other plausible explanations for why swapping the buffers does not appear to swap them? Are there other routes I should look into to implement flicker-free resizing? I have read an article suggesting the use of WinGravity, but I'm afraid I don't quite comprehend it yet.
If your windows have a background pixmap set, then at every resizing step they get filled with it before the actual OpenGL redraw commences; this is one source of flicker. The other problem is glXSwapBuffers not being synced to the vertical retrace, which you can enable with glXSwapIntervalEXT (or glXSwapIntervalSGI).
So there are two things to do for flicker-free resizing: set a None background pixmap, and sync glXSwapBuffers to the vertical retrace (swap interval 1).

DirectX9 Use Geometry Instancing for a Mesh with multiple materials

I am trying to have a flexible Geometry Instancing code able to handle meshes with multiple materials. For a mesh with one material everything is fine. I manage to render as many instances as I want with a single draw call.
Things get a bit more complicated with multiple materials. My mesh comes from an .x file. It has one vertex buffer and one index buffer but several materials. The index ranges to render for each subset (material) are stored in an attribute array.
Here is the code I use:
d3ddev->SetVertexDeclaration( m_vertexDeclaration );
d3ddev->SetIndices( m_indexBuffer );
d3ddev->SetStreamSourceFreq( 0, ( D3DSTREAMSOURCE_INDEXEDDATA | m_numInstancesToDraw ) );
d3ddev->SetStreamSource( 0, m_vertexBuffer, 0, D3DXGetDeclVertexSize( m_geometryElements, 0 ) );
d3ddev->SetStreamSourceFreq( 1, ( D3DSTREAMSOURCE_INSTANCEDATA | 1ul ) );
d3ddev->SetStreamSource( 1, m_instanceBuffer, 0, D3DXGetDeclVertexSize( m_instanceElements, 1 ) );
m_effect->Begin( NULL, NULL ); // begin using the effect
m_effect->BeginPass( 0 );      // begin the pass
for( DWORD i = 0; i < m_numMaterials; ++i ) // loop through each subset
{
    d3ddev->SetMaterial( &m_materials[i] ); // set the material for the subset
    if( m_textures[i] != NULL )
    {
        d3ddev->SetTexture( 0, m_textures[i] );
    }
    d3ddev->DrawIndexedPrimitive(
        D3DPT_TRIANGLELIST,            // Type
        0,                             // BaseVertexIndex
        m_attributes[i].VertexStart,   // MinIndex
        m_attributes[i].VertexCount,   // NumVertices
        m_attributes[i].FaceStart * 3, // StartIndex
        m_attributes[i].FaceCount      // PrimitiveCount
    );
}
m_effect->EndPass();
m_effect->End();
d3ddev->SetStreamSourceFreq( 0, 1 );
d3ddev->SetStreamSourceFreq( 1, 1 );
This code works for the first material only. By "first" I mean the one at index 0, because if I start my loop at the second material, it is not rendered either. However, when debugging the vertex buffer in PIX, I can see all my materials being processed properly, so something happens after the vertex shader.
Another weird issue, all my materials will be rendered if I set my stream source containing the instance data to be a vertex size of zero.
So Instead of this:
d3ddev->SetStreamSource(1, m_instanceBuffer, 0, D3DXGetDeclVertexSize( m_instanceElements, 1 ) );
I replace it by:
d3ddev->SetStreamSource(1, m_instanceBuffer, 0, 0 );
But of course, with this code, all my instances render at the same position, since the same instance data is reused over and over.
One last point: everything works fine if I create my device with D3DCREATE_SOFTWARE_VERTEXPROCESSING. Only hardware vertex processing has the issue, and unfortunately DirectX does not report any problem in debug mode.
See the Shader Model 3 docs:
If you are implementing shaders in hardware, you may not use vs_3_0 or ps_3_0 with any other shader versions, and you may not use either shader type with the fixed function pipeline. These changes make it possible to simplify drivers and the runtime. The only exception is that software-only vs_3_0 shaders may be used with any pixel shader version.
I had the same problem, and in my case it was the memory pool of the instance mesh. I originally had that mesh in D3DPOOL_SYSTEMMEM but the instanced mesh in D3DPOOL_DEFAULT; when I moved the instancing mesh into default memory, everything worked as desired.
Hope it helps.
