glClear before XMapWindow (prevent undefined graphical buffer) - c

I'm struggling to handle the case where, upon XMapWindow with GLX, the window's buffer contents are undefined, so the window shows undefined data until the first glXSwapBuffers completes.
I vaguely remember that GL operations are meaningless before glXMakeCurrent, and that glXMakeCurrent is meaningless before XMapWindow. Under these constraints, how does one control what is drawn to an X11 GLX window when it is mapped?
It seems like I would want to order the calls as below, but I still get undefined data (is this a video-driver-specific issue that I should simply ignore?):
glXMakeCurrent(d, w, ctx);
glClearColor(0.0, 0.0, 0.0, 1.0);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glXSwapBuffers(d, w);
XMapWindow(d, w);

The same behavior occurs in a Khronos reference example. With a small patch, we can exercise the undefined behavior every time:
--- OpenGL 3.0 Context Creation (GLX).orig.cc 2020-03-23 22:32:41.421765953 -0700
+++ OpenGL 3.0 Context Creation (GLX).edit.cc 2020-03-23 22:32:46.113753021 -0700
@@ -163,6 +163,8 @@
printf( "Mapping window\n" );
XMapWindow( display, win );
+ XSync( display, False );
+ sleep(2);
// Get the default screen's GLX extension list
const char *glxExts = glXQueryExtensionsString( display,
It looks like there is simply a de facto race condition between XMapWindow and the GLX initialization leading up to glXSwapBuffers; the best thing to do is to ensure your first frame is swapped in as soon as possible and hope that you usually beat X to the punch.
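One way to narrow that window (a minimal sketch, assuming the window was created with StructureNotifyMask selected; the server may still paint the window briefly before the swap) is to block until the MapNotify event arrives before issuing the first clear and swap:
XSelectInput(d, w, StructureNotifyMask);
XMapWindow(d, w);
/* Wait for the server to confirm the map before the first draw. */
XEvent ev;
do {
    XNextEvent(d, &ev);
} while (ev.type != MapNotify || ev.xmap.window != w);
glXMakeCurrent(d, w, ctx);
glClearColor(0.0, 0.0, 0.0, 1.0);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glXSwapBuffers(d, w);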

How do I use SDL_LockTexture() to update dirty rectangles?

I'm migrating an application from SDL 1.2 to 2.0, and it keeps an array of dirty rectangles to determine which parts of its SDL_Surface to draw to the screen. I'm trying to find the best way to integrate this with SDL 2's SDL_Texture.
Here's how the SDL 1.2 driver is working: https://gist.github.com/nikolas/1bb8c675209d2296a23cc1a395a32a0d
And here's how I'm getting changes from the surface to the texture in SDL 2:
void *pixels;
int pitch;

SDL_LockTexture(_sdl_texture, NULL, &pixels, &pitch);
memcpy(pixels, _sdl_surface->pixels, pitch * _sdl_surface->h);
SDL_UnlockTexture(_sdl_texture);

for (int i = 0; i < num_dirty_rects; i++) {
    SDL_RenderCopy(_sdl_renderer, _sdl_texture,
                   &_dirty_rects[i], &_dirty_rects[i]);
}
SDL_RenderPresent(_sdl_renderer);
I'm just updating the entire surface, but then taking advantage of the dirty rectangles in the RenderCopy(). Is there a better way to do things here, only updating the dirty rectangles? Will I run into problems calling SDL_LockTexture and UnlockTexture up to a hundred times every frame, or is that how they're meant to be used?
SDL_LockTexture accepts an SDL_Rect param which I could use here, but then it's unclear to me how to get the appropriate rect from _sdl_surface->pixels. How would I copy out just a small rect from this pixel data of the entire screen?
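For what it's worth, here is a minimal sketch of copying a single dirty rectangle, assuming the surface and texture share the same 32-bit pixel format (the names mirror the question's, but the helper itself is hypothetical):
void upload_dirty_rect(SDL_Texture *tex, SDL_Surface *surf, const SDL_Rect *r) {
    void *dst;
    int dst_pitch;
    const int bpp = surf->format->BytesPerPixel; /* assumed to be 4 */

    if (SDL_LockTexture(tex, r, &dst, &dst_pitch) != 0)
        return; /* real code should report SDL_GetError() */

    /* Start of the rect inside the surface's pixel data. */
    const Uint8 *src = (const Uint8 *)surf->pixels
                     + r->y * surf->pitch + r->x * bpp;

    /* Copy row by row; surface and texture pitches may differ. */
    for (int row = 0; row < r->h; row++) {
        memcpy((Uint8 *)dst + row * dst_pitch,
               src + row * surf->pitch, (size_t)r->w * bpp);
    }
    SDL_UnlockTexture(tex);
}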

Problems with attaching textures of different sizes to FBO

Today I faced a strange problem while I was developing my OpenGL 4.5 application. I attempted to attach two textures of different sizes to one FBO as color attachments in order to create a bloom shader. As far as I know, in modern OpenGL versions this should be possible.
This is the code I'm using:
//Create textures
GLuint tex[2];
glCreateTextures( GL_TEXTURE_2D, 2, tex );
glTextureStorage2D( tex[0], 1, GL_RGB8, 2048, 2048 );
glTextureStorage2D( tex[1], 1, GL_RGB8, 1024, 1024 );
//Create FBO
GLuint fbo;
glCreateFramebuffers( 1, &fbo );
glNamedFramebufferTexture( fbo, GL_COLOR_ATTACHMENT0, tex[0], 0 );
glNamedFramebufferTexture( fbo, GL_COLOR_ATTACHMENT1, tex[1], 0 );
//Check completeness
GLenum comp = glCheckNamedFramebufferStatus( fbo, GL_FRAMEBUFFER );
I'd expect comp to be GL_FRAMEBUFFER_COMPLETE; however, in my case glCheckNamedFramebufferStatus returns GL_FRAMEBUFFER_INCOMPLETE_DIMENSIONS_EXT.
Based on these two threads, I'm afraid it might be a driver bug, since INCOMPLETE_DIMENSIONS has apparently been removed from newer OpenGL versions:
http://forum.lwjgl.org/index.php?topic=4207.0
devtalk.nvidia.com topic
Here's the full code to illustrate the issue - https://pastebin.com/c9Hqzzky.
My output is:
0x8cd9
0x8cd9 - GL_FRAMEBUFFER_INCOMPLETE_DIMENSIONS_EXT
fbotest: fbotest.c:41: main: Assertion `comp != GL_FRAMEBUFFER_INCOMPLETE_DIMENSIONS_EXT' failed.
Aborted (core dumped)
I have an Nvidia GTX 1060 graphics card, Ubuntu 18.04 LTS, and Nvidia driver version 390.67.
Has anyone seen similar behavior before? If so, what are possible workarounds?
Thank you in advance for your help.
This is actually an Nvidia bug in their ARB DSA implementation (which I filed last February); if you use the non-Named versions, it will not give a validation error.
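For illustration, a sketch of that workaround using the classic bind-to-edit entry points in place of the Named (DSA) ones:
//Workaround sketch: same attachments via the non-Named calls
GLuint fbo;
glGenFramebuffers( 1, &fbo );
glBindFramebuffer( GL_FRAMEBUFFER, fbo );
glFramebufferTexture( GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, tex[0], 0 );
glFramebufferTexture( GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, tex[1], 0 );
GLenum comp = glCheckFramebufferStatus( GL_FRAMEBUFFER );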

Processing WM_PAINT

I have read many examples on the internet but I'm still stuck. I'm trying to process the WM_PAINT message sent to my application.
In my application, I always draw into the same DC, named g_hDC, and it works perfectly. When WM_PAINT is received, I just try to draw the content of my g_hDC into the DC returned by BeginPaint. I assume g_hDC contains the last bitmap that I drew, so I just want to restore it.
case WM_PAINT:
    PAINTSTRUCT ps;
    int ret;
    HDC compatDC;
    HDC currentDC;
    HDC paintDC;
    HBITMAP compatBitmap;
    HGDIOBJ oldBitmap;

    paintDC = BeginPaint(g_hWnd, &ps);
    currentDC = GetDC(g_hWnd);
    compatDC = CreateCompatibleDC(paintDC);
    compatBitmap = CreateCompatibleBitmap(paintDC, CONFIG_WINDOW_WIDTH, CONFIG_WINDOW_HEIGHT);
    oldBitmap = SelectObject(compatDC, compatBitmap);
    ret = BitBlt(compatDC,
                 ps.rcPaint.left,
                 ps.rcPaint.top,
                 ps.rcPaint.right - ps.rcPaint.left,
                 ps.rcPaint.bottom - ps.rcPaint.top,
                 currentDC,
                 ps.rcPaint.left,
                 ps.rcPaint.top,
                 SRCCOPY);
    ret = BitBlt(paintDC,
                 ps.rcPaint.left,
                 ps.rcPaint.top,
                 ps.rcPaint.right - ps.rcPaint.left,
                 ps.rcPaint.bottom - ps.rcPaint.top,
                 compatDC,
                 ps.rcPaint.left,
                 ps.rcPaint.top,
                 SRCCOPY);
    DeleteObject(SelectObject(compatDC, oldBitmap));
    DeleteDC(compatDC);
    ReleaseDC(g_hWnd, currentDC); // a DC from GetDC is released with ReleaseDC, not DeleteDC
    EndPaint(g_hWnd, &ps);
    break;
But it just draws a white rectangle ... I tried many possibilities and nothing works. Can you please help me?
There are a number of things you are doing wrong.
First, saving g_hDC relies on an implementation detail: you notice that the handles are the same, and so you save the handle. This may work in the short term for a variety of reasons related to optimizations on GDI's part (for instance, there is a DC cache), but it will stop working eventually, when it is least convenient. Or you may be tempted to use the DC handle when you don't own the DC, and will scribble over something else (or fail to, due to GDI object thread affinity).
The correct way to access the DC of a window outside its WM_PAINT is by calling GetDC(hwnd).
CreateCompatibleDC() creates an in-memory DC compatible with hdc. Drawing to compatDC is not enough to draw to hdc; you need to draw back to hdc after you draw to compatDC. For your case, you will need two BitBlt() calls; the second one blits back from compatDC onto hdc. See this sample code for details.
You cannot DeleteObject() a bitmap while you have it selected into a DC. Your SelectObject(compatDC, oldBitmap) call needs to come before DeleteObject(compatBitmap). (This is what i486 was trying to get at in his answer.)
(I'm sure this answer is misleading or incomplete in some places; please let me know if it is.)
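To make the usual pattern concrete, here is a minimal sketch; g_hMemDC is a hypothetical memory DC, created once at startup with a compatible bitmap selected into it, that all of the application's drawing goes into:
case WM_PAINT:
{
    PAINTSTRUCT ps;
    HDC hdc = BeginPaint(g_hWnd, &ps);
    // Blit only the invalid region from the off-screen bitmap.
    BitBlt(hdc,
           ps.rcPaint.left,
           ps.rcPaint.top,
           ps.rcPaint.right - ps.rcPaint.left,
           ps.rcPaint.bottom - ps.rcPaint.top,
           g_hMemDC,
           ps.rcPaint.left,
           ps.rcPaint.top,
           SRCCOPY);
    EndPaint(g_hWnd, &ps);
    break;
}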
Use this to delete the bitmap: DeleteObject( SelectObject(compatDC, oldBitmap) ); - without a separate DeleteObject on the previous line. SelectObject returns the current (old) selection as its return value, and that is what you delete. In your case you are trying to delete a still-selected bitmap.
PS: I don't see CreateCompatibleDC - where are you creating compatDC? Add compatDC = CreateCompatibleDC( hdc ); before CreateCompatibleBitmap.

glXSwapBuffers appears not to have swapped (?)

My situation is like this. I wrote code that checks a group of windows to see whether their contents are eligible to be swapped (that is, all redrawing has completed successfully on the given window and all its children after a resizing event). Should the conditions be met, I call glXSwapBuffers for that window and all its children. My aim is a flicker-free resizing system. The child windows are arranged in a tiled fashion and do not overlap; among them, the approach appears to work. My issue, however, arises with the parent: sometimes during resizing its content flickers. So far, this is what I have implemented:
All events such as ConfigureNotify or Expose are already compressed as needed.
The window's background_pixmap is set to None.
Understanding that window content can be lost whenever an Expose event is generated, I always keep a copy of each finished redraw in my own allocated buffer (neither a pixmap nor an FBO, but it suffices for now).
My logic for each call to glXSwapBuffers() is this:
void window_swap( Window *win ) {
    Window *child;
    if ( win ) {
        for ( child = win->child; child; child = child->next )
            window_swap( child );
        if ( isValidForSwap( win ) ) {
            glXMakeCurrent( dpy, win->drawable, win->ctx );
            glDrawBuffer( GL_BACK );
            RedrawWindowFromBuffer( win, win->backing_store );
            glXSwapBuffers( dpy, win->drawable );
        }
    }
}
This should ensure that the content is always restored before a call to swap. Sadly, that does not appear to be the case in practice. For debugging purposes, I adjusted the code above to output what should be in each buffer, as follows:
void window_swap( Window *win ) {
    if ( win ) {
        if ( isValidForSwap( win ) ) {
            glXMakeCurrent( dpy, win->drawable, win->ctx );
            glDrawBuffer( GL_BACK );
            OutputWindowBuffer( "back.jpg", GL_BACK );
            RedrawWindowFromBuffer( win, win->backing_store );
            glXSwapBuffers( dpy, win->drawable );
            glDrawBuffer( GL_BACK );
            glClearColor( 1.0, 1.0, 1.0, 1.0 );
            glClear( GL_COLOR_BUFFER_BIT );
            OutputWindowBuffer( "front_after.jpg", GL_FRONT );
            OutputWindowBuffer( "back_after.jpg", GL_BACK );
        }
    }
}
The function OutputWindowBuffer() uses a standard glReadPixels() call to read the buffer contents and output them as an image; which buffer is read is determined by the parameter passed into the function. What I've found from the output pictures is this:
The picture of the back buffer after RedrawWindowFromBuffer() is what was expected.
The picture of the back buffer after the swap is filled with the cleared colour, as expected. So it is not the case that glReadPixels is lagging when called for the front buffer, as a previously reported Intel bug once suggested.
The picture of the front buffer after the swap shows mostly black artifacts (my window is always cleared to a different colour before each drawing).
Are there other plausible explanations as to why swapping the buffers does not appear to actually swap them? Are there other routes I should be looking into to implement flicker-free resizing? I have read an article suggesting the use of window gravity (WinGravity), but I'm afraid I don't quite comprehend it yet.
If your windows have a background pixmap set, then at every resizing step they get filled with that before the actual OpenGL redraw commences. This is one source of flicker. The other problem is glXSwapBuffers not being synced to the vertical retrace; you can set this using one of the glXSwapInterval extensions (e.g. glXSwapIntervalEXT).
So there are two things to do for flicker-free resizing: set a None background pixmap, and have glXSwapBuffers sync to the vertical retrace (swap interval 1).
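A minimal sketch of both steps, assuming the server supports the GLX_EXT_swap_control extension (the function-pointer typedef is spelled out here for self-containment):
/* No background fill on resize, and vsynced swaps. */
XSetWindowBackgroundPixmap( dpy, win->drawable, None );

typedef void (*SwapIntervalProc)( Display *, GLXDrawable, int );
SwapIntervalProc swapInterval = (SwapIntervalProc)
    glXGetProcAddressARB( (const GLubyte *)"glXSwapIntervalEXT" );
if ( swapInterval )
    swapInterval( dpy, win->drawable, 1 );  /* one swap per retrace */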

OpenGL - Nothing appearing on screen

I'm using Windows 7 with VC++ 2010
I'm trying to draw a simple point to a screen but it's not showing.
The screen is clearing to black so I know that I have a valid OpenGL context etc...
Basically my OpenGL code boils down to this (I don't have a depth buffer at this point):
glClear( GL_COLOR_BUFFER_BIT );
glMatrixMode( GL_PROJECTION );
glLoadIdentity();
gluPerspective( 45.0, 1018.0 / 743.0, 5.0, 999.0 );
glMatrixMode( GL_MODELVIEW );
glLoadIdentity();
glColor4f( 1, 1, 1, 1 );
glPointSize( 100 );
glBegin( GL_POINTS );
glVertex2i( 0, 0 );
glEnd();
SwapBuffers( hdc );
The initialization code for OpenGL is this:
glClearColor( 0, 0, 0, 1 );
glShadeModel( GL_SMOOTH );
glHint( GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST );
The problem is that nothing appears on the screen, the only thing that happens is the screen gets cleared.
Go through the following checklist (the general OpenGL checklist from delphigl.com (de_DE), which we usually give people when they don't see anything):
Is your object accidentally painted black? Try changing the glClearColor.
Do you have texturing accidentally enabled? Disable it before drawing with glDisable(GL_TEXTURE_2D).
Try disabling the following tests:
GL_DEPTH_TEST
GL_CULL_FACE
GL_ALPHA_TEST
Check whether your glViewport is setup correctly.
Try translating your modelview matrix out beyond the near clipping plane (5.0 in your case) with glTranslatef(0, 0, -6.0).
There are several potential issues. The main problem is how you are using the gluPerspective projection. gluPerspective sets up a perspective view and, as such, won't display anything at (0, 0, 0) in view coordinates. In your setup, anything closer to the eye than 5.0 units (your near clipping plane) is clipped away, and the eye looks down the negative z axis, so a point at the origin can never be visible. I suggest setting your point to glVertex3f(0., 0., -10.) and trying again. Another solution would be to use glTranslatef to move your view coordinates back by more than 5 units.
Also, glPointSize will probably not accept your value of 100, as common implementations limit point size to around 64.
For a good start with OpenGL, I'd also recommend reading up on NeHe's tutorials. They might not be state of the art, but they cover everything you're facing right now.
The problem was that I had called glDepthRange while misunderstanding what it actually does. I was calling it like this: glDepthRange( nearPlane, farPlane ), with values 5.0f and 999.0f. When I removed this call, everything drew correctly. Thank you very much for your help. :)
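For reference, glDepthRange maps normalized device depth to window depth and expects values in [0, 1]; the scene's near/far distances belong in the projection instead. A sketch of the corrected pairing:
glDepthRange( 0.0, 1.0 );  /* window depth mapping; this is the default */
gluPerspective( 45.0, 1018.0 / 743.0, 5.0, 999.0 );  /* near/far belong here */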
