Committing first page of sparse texture crashes at glfwDestroyWindow

The following OpenGL/GLFW/GLAD code (MCVE version) crashes the process with the following access violation exception in nvoglv64.dll:
0x00007FFC731F586F (nvoglv64.dll) in ConsoleApplication2.exe: 0xC0000005: Access violation reading location 0x0000000000024AA8.
at the glfwDestroyWindow(window) call at the end of the code listing below, and I don't know why. The call does not crash when I either don't call glTexPageCommitmentARB at all or call it with commit set to GL_FALSE/0.
I've read the ARB_sparse_texture specification three times now, checking the conditions on every argument of glTexPageCommitmentARB (x, y, z, width, height, depth must be multiples of the page sizes of the internal format), and I am fairly sure that I've followed all instructions on how to use that function correctly. I've also checked for OpenGL errors with a debug context and a debug message callback beforehand; there were no outputs.
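For reference, the debug setup I used looks roughly like this (a sketch, not part of the MCVE below):
static void APIENTRY debugCallback(GLenum source, GLenum type, GLuint id,
                                   GLenum severity, GLsizei length,
                                   const GLchar *message, const void *userParam) {
    fprintf(stderr, "GL debug: %s\n", message);
}
// before glfwCreateWindow:
glfwWindowHint(GLFW_OPENGL_DEBUG_CONTEXT, GLFW_TRUE);
// after gladLoadGLLoader:
glEnable(GL_DEBUG_OUTPUT);
glEnable(GL_DEBUG_OUTPUT_SYNCHRONOUS);
glDebugMessageCallback(debugCallback, NULL);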
Further information:
OS: Windows 10 x64 (20H2)
Driver: Nvidia 461.09 (DCH)
GLAD Configuration: OpenGL 4.3 Core + ARB_sparse_texture + ARB_sparse_texture2
GLFW: Fresh local MSVC x64 Release build from Git commit 0b9e48fa3df9c18
The fprintf right before the glTexPageCommitmentARB call in the code below prints the following output before the crash at the end of the program:
256 128 1 32768
#include <stdlib.h>
#include <stdio.h>
#include <glad/glad.h>
#include <GLFW/glfw3.h>

int main(int argc, char** argv) {
    glfwInit();
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 4);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
    glfwWindowHint(GLFW_VISIBLE, GLFW_FALSE);
    GLFWwindow* window = glfwCreateWindow(800, 600, "", NULL, NULL);
    if (window == NULL) {
        fprintf(stderr, "%s\n", "GLFW window NULL");
        exit(1);
    }
    glfwMakeContextCurrent(window);
    if (!gladLoadGLLoader((GLADloadproc)glfwGetProcAddress)) {
        fprintf(stderr, "%s\n", "gladLoadGLLoader failed");
        exit(2);
    }
    if (!GLAD_GL_ARB_sparse_texture || !GLAD_GL_ARB_sparse_texture2) {
        fprintf(stderr, "%s\n", "GL_ARB_sparse_texture or GL_ARB_sparse_texture2 unsupported");
        exit(3);
    }
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    // activate sparse allocation for this texture
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_SPARSE_ARB, GL_TRUE);
    // use default page size index 0 (as per ARB_sparse_texture2)
    glTexParameteri(GL_TEXTURE_2D, GL_VIRTUAL_PAGE_SIZE_INDEX_ARB, 0);
    GLint width, height, depth, maxSparseTexSize;
    // query page size for width, height and depth
    // R16UI is supported as sparse texture internal format as per
    // https://www.khronos.org/registry/OpenGL/extensions/ARB/ARB_sparse_texture.txt
    glGetInternalformativ(GL_TEXTURE_2D, GL_R16UI, GL_VIRTUAL_PAGE_SIZE_X_ARB, 1, &width);
    glGetInternalformativ(GL_TEXTURE_2D, GL_R16UI, GL_VIRTUAL_PAGE_SIZE_Y_ARB, 1, &height);
    glGetInternalformativ(GL_TEXTURE_2D, GL_R16UI, GL_VIRTUAL_PAGE_SIZE_Z_ARB, 1, &depth);
    glGetIntegerv(GL_MAX_SPARSE_TEXTURE_SIZE_ARB, &maxSparseTexSize);
    fprintf(stderr, "%d %d %d %d\n", width, height, depth, maxSparseTexSize);
    glTexStorage2D(
        /*target*/GL_TEXTURE_2D,
        /*levels*/1,
        /*internalFormat*/GL_R16UI,
        /*width*/maxSparseTexSize,
        /*height*/maxSparseTexSize);
    // commit one page at (0, 0, 0)
    glTexPageCommitmentARB(GL_TEXTURE_2D,
        /*level*/0,
        /*xoffset*/0,
        /*yoffset*/0,
        /*zoffset*/0,
        /*width*/width,
        /*height*/height,
        /*depth*/depth,
        /*commit*/GL_TRUE);
    glfwDestroyWindow(window);
    glfwTerminate();
}
Am I missing anything here or is it just a driver or GLFW bug?
EDIT: Some more information:
The crash happens in a driver thread (from nvoglv64.dll) right after the main thread has called wglDeleteContext(...) for the GL context. When the Visual Studio 2019 debugger halts on the exception, I can see three driver threads (all from nvoglv64.dll), one of which raised the access violation, while the main thread is halted right after wglDeleteContext(...).
EDIT2: Experimenting more with this, the crash always occurred when the texture had not been dereferenced (ref count dropped to 0) and disposed of by the driver before the GL context was destroyed. For example, in the code above, the crash does not occur (reproducibly) when I call glDeleteTextures(1, &tex); right before glfwDestroyWindow(window). In the actual app I distilled the above MCVE from, the texture was also attached as a color attachment of an FBO. When I did not delete the FBO but only decremented the ref count of the texture (by calling glDeleteTextures(...)), the crash would still occur. Only when I also dereferenced the FBO via glDeleteFramebuffers(...), and the texture along with it, did the crash no longer occur.
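In code, the workaround boils down to this ordering at the end of the MCVE (the FBO line applies only to the full app; fbo is a hypothetical framebuffer handle):
/* Workaround: drop every reference to the sparse texture so the driver
   disposes of it before the GL context is destroyed. */
/* glDeleteFramebuffers(1, &fbo); */ /* full app only: FBO referencing tex */
glDeleteTextures(1, &tex);
glfwDestroyWindow(window);
glfwTerminate();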

Related

SDL2 Texture sometimes empty after loading multiple 8 bit surfaces

I'll try to make this question as concise as possible, but don't hesitate to ask for clarification.
I'm dealing with legacy code, and I'm trying to load thousands of 8-bit images from disk to create a texture for each.
I've tried multiple things, and I'm at the point where I'm trying to load my 8-bit images into a 32-bit surface and then create a texture from that surface.
The problem: while loading an 8-bit image into a 32-bit surface works, when I try to SDL_CreateTextureFromSurface, I end up with a lot of textures that are completely blank (full of transparent pixels, 0x00000000).
Not all textures are wrong, though. Each time I run the program, I get different "bad" textures. Sometimes there are more, sometimes fewer. And when I step through the program, I always end up with a correct texture (is that a timing problem?).
I know that loading into the SDL_Surface works, because I'm saving all the surfaces to disk, and they're all correct. But I inspected the textures using NVidia NSight Graphics, and more than half of them are blank.
Here's the offending code:
int __cdecl IMG_SavePNG(SDL_Surface*, const char*);

SDL_Texture* Resource8bitToTexture32(SDL_Renderer* renderer, SDL_Color* palette, int paletteSize, void* dataAddress, int Width, int Height)
{
    u32 uiCurrentOffset;
    u32 uiSourceLinearSize = (Width * Height);
    const u8* pSourceData = (const u8*)dataAddress; // the 8-bit source pixels (palette indices)
    SDL_Color *currentColor;
    char strSurfacePath[500];
    // The texture we're creating
    SDL_Texture* newTexture = NULL;
    // Load image at specified address
    SDL_Surface* tempSurface = SDL_CreateRGBSurface(0x00, Width, Height, 32, 0x00FF0000, 0x0000FF00, 0x000000FF, 0xFF000000);
    SDL_SetSurfaceBlendMode(tempSurface, SDL_BLENDMODE_NONE);
    if (SDL_MUSTLOCK(tempSurface))
        SDL_LockSurface(tempSurface);
    for (uiCurrentOffset = 0; uiCurrentOffset < uiSourceLinearSize; uiCurrentOffset++)
    {
        currentColor = &palette[pSourceData[uiCurrentOffset]];
        if (pSourceData[uiCurrentOffset] != PC_COLOR_TRANSPARENT)
        {
            ((u32*)tempSurface->pixels)[uiCurrentOffset] = (u32)((currentColor->a << 24) + (currentColor->r << 16) + (currentColor->g << 8) + (currentColor->b << 0));
        }
    }
    if (SDL_MUSTLOCK(tempSurface))
        SDL_UnlockSurface(tempSurface);
    // Create texture from surface pixels
    newTexture = SDL_CreateTextureFromSurface(renderer, tempSurface);
    // Save the surface to disk for verification only
    sprintf(strSurfacePath, "c:\\tmp\\surfaces\\%s.png", GenerateUniqueName());
    IMG_SavePNG(tempSurface, strSurfacePath);
    // Get rid of old loaded surface
    SDL_FreeSurface(tempSurface);
    return newTexture;
}
Note that in the original code, I'm checking for boundaries, and for NULL after the SDL_Create*. I'm also aware that it would be better to have a spritesheet for the textures instead of loading each texture individually.
EDIT:
Here's a sample of what I'm observing in NSight if I capture a frame and use the Resources View.
The first 3186 textures are correct. Then I get 43 empty textures. Then I get 228 correct textures. Then 100 bad ones. Then 539 correct ones. Then 665 bad ones. It goes on randomly like that, and it changes each time I run my program.
Again, each time, the surfaces saved by IMG_SavePNG are correct. This seems to indicate that something happens when I call SDL_CreateTextureFromSurface, but at this point I don't want to rule anything out, because it's a very weird problem and it smells of undefined behaviour all over the place. But I just can't find the problem.
With the help of @mark-benningfield, I was able to find the problem.
TL;DR
There's a bug (or at least an undocumented feature) in SDL with the DX11 renderer. There's a workaround; see the end of this answer.
CONTEXT
I'm trying to load around 12,000 textures when my program starts. I know it's not a good idea, but I was planning on using that as a stepping stone to another, more sane system.
DETAILS
What I realized while debugging this problem is that the SDL renderer for DirectX 11 does this when it creates a texture:
result = ID3D11Device_CreateTexture2D(rendererData->d3dDevice,
                                      &textureDesc,
                                      NULL,
                                      &textureData->mainTexture
                                      );
Microsoft's ID3D11Device::CreateTexture2D documentation indicates that:
If you don't pass anything to pInitialData, the initial content of the memory for the resource is undefined. In this case, you need to write the resource content some other way before the resource is read.
If we're to believe that article:
Default Usage
The most common type of usage is default usage. To fill a default texture (one created with D3D11_USAGE_DEFAULT) you can:
[...]
After calling ID3D11Device::CreateTexture2D, use ID3D11DeviceContext::UpdateSubresource to fill the default texture with data from a pointer provided by the application.
So it looks like D3D11_CreateTexture is using the second method of the default usage to initialize a texture and its content.
But right after that, SDL calls SDL_UpdateTexture (without checking the return value; I'll get to that later). If we dig down to the D3D11 renderer, we find this:
static int
D3D11_UpdateTextureInternal(D3D11_RenderData *rendererData, ID3D11Texture2D *texture, int bpp, int x, int y, int w, int h, const void *pixels, int pitch)
{
    ID3D11Texture2D *stagingTexture;
    [...]
    /* Create a 'staging' texture, which will be used to write to a portion of the main texture. */
    ID3D11Texture2D_GetDesc(texture, &stagingTextureDesc);
    [...]
    result = ID3D11Device_CreateTexture2D(rendererData->d3dDevice, &stagingTextureDesc, NULL, &stagingTexture);
    [...]
    /* Get a write-only pointer to data in the staging texture: */
    result = ID3D11DeviceContext_Map(rendererData->d3dContext, (ID3D11Resource *)stagingTexture, 0, D3D11_MAP_WRITE, 0, &textureMemory);
    [...]
    /* Commit the pixel buffer's changes back to the staging texture: */
    ID3D11DeviceContext_Unmap(rendererData->d3dContext, (ID3D11Resource *)stagingTexture, 0);
    /* Copy the staging texture's contents back to the texture: */
    ID3D11DeviceContext_CopySubresourceRegion(rendererData->d3dContext, (ID3D11Resource *)texture, 0, x, y, 0, (ID3D11Resource *)stagingTexture, 0, NULL);
    SAFE_RELEASE(stagingTexture);
    return 0;
}
Note: code snipped for conciseness.
This seems to indicate, based on that article I mentioned, that SDL is using the second method of the Default Usage to allocate the texture memory on the GPU, but uses the Staging Usage to upload the actual pixels.
I don't know that much about DX11 programming, but that mixing up of techniques got my programmer's sense tingling.
I contacted a game programmer I know and explained the problem to him. He told me the following interesting bits:
The driver gets to decide where it stores staging textures. They usually live in CPU RAM.
It's much better to specify a pInitialData pointer, as the driver can then decide to upload the textures asynchronously (see the sketch below).
If you load too many staging textures without committing them to the GPU, you can fill up the RAM.
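For illustration, passing the pixels at texture-creation time would look roughly like this, reusing the names from the SDL snippet above (pixels and pitch are assumed to point at the application's pixel data; this is a sketch, not SDL's actual code):
D3D11_SUBRESOURCE_DATA initialData;
initialData.pSysMem          = pixels;   /* CPU-side pixel data */
initialData.SysMemPitch      = pitch;    /* bytes per row */
initialData.SysMemSlicePitch = 0;        /* only used for 3D textures */
result = ID3D11Device_CreateTexture2D(rendererData->d3dDevice,
                                      &textureDesc,
                                      &initialData,  /* instead of NULL */
                                      &textureData->mainTexture);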
I then wondered why SDL didn't return an "out of memory" error at the time I called SDL_CreateTextureFromSurface, and I found out why (again, snipped for concision):
SDL_Texture *
SDL_CreateTextureFromSurface(SDL_Renderer * renderer, SDL_Surface * surface)
{
    [...]
    SDL_Texture *texture;
    [...]
    texture = SDL_CreateTexture(renderer, format, SDL_TEXTUREACCESS_STATIC,
                                surface->w, surface->h);
    if (!texture) {
        return NULL;
    }
    [...]
    if (SDL_MUSTLOCK(surface)) {
        SDL_LockSurface(surface);
        SDL_UpdateTexture(texture, NULL, surface->pixels, surface->pitch);
        SDL_UnlockSurface(surface);
    } else {
        SDL_UpdateTexture(texture, NULL, surface->pixels, surface->pitch);
    }
    [...]
    return texture;
}
If the creation of the texture is successful, it doesn't care whether or not updating the texture succeeded (there's no check on SDL_UpdateTexture's return value).
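If you want those failures to be visible in your own code, one option is to create the texture and do the update yourself, checking the return value SDL ignores (a sketch; SDL_PIXELFORMAT_ARGB8888 matches the surface masks used earlier):
SDL_Texture *tex = SDL_CreateTexture(renderer, SDL_PIXELFORMAT_ARGB8888,
                                     SDL_TEXTUREACCESS_STATIC,
                                     tempSurface->w, tempSurface->h);
if (!tex) { /* handle SDL_GetError() */ }
/* SDL_UpdateTexture returns 0 on success, a negative error code on failure */
if (SDL_UpdateTexture(tex, NULL, tempSurface->pixels, tempSurface->pitch) != 0)
    fprintf(stderr, "texture upload failed: %s\n", SDL_GetError());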
WORKAROUND
The poor man's workaround to this problem is to call SDL_RenderPresent each time you call SDL_CreateTextureFromSurface.
It's probably fine to do it once every hundred textures, depending on your texture size. But just be aware that calling SDL_CreateTextureFromSurface repeatedly without updating the renderer will steadily fill up system RAM, and SDL won't return any error condition you can check for.
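A sketch of what that looks like in a loading loop (resources, textureCount, and the flush interval of 64 are made up for illustration):
for (int i = 0; i < textureCount; i++) {
    textures[i] = Resource8bitToTexture32(renderer, palette, 256,
                                          resources[i].data,
                                          resources[i].w, resources[i].h);
    /* Flush pending staging textures to the GPU every 64 textures
       so they don't accumulate in system RAM. */
    if ((i % 64) == 63)
        SDL_RenderPresent(renderer);
}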
The irony of this is that had I implemented a "correct" loading loop with a percentage-of-completion display on screen, I would never have had this problem. But fate had me implement it the quick-and-dirty way, as a proof of concept for a bigger system, and I got sucked into this problem.

cvShowImage makes the system throw exceptions

I have some code in C that uses the OpenCV library.
Here is the code:
#include <stdio.h>
#include <stdlib.h>
#include <opencv2\highgui\highgui_c.h>

int main(void)
{
    cvNamedWindow("Display window", CV_WINDOW_AUTOSIZE); //create a window
    //create an image
    IplImage* image = cvLoadImage("C:\\Users\\magshimim\\Desktop\\Mummy.png", 1);
    if (!image)//The image is empty.
    {
        printf("could not open image\n");
    }
    else
    {
        cvShowImage("Display window", image);
        cvWaitKey(0);
        system("pause");
        cvReleaseImage(&image);
    }
    getchar();
    return 0;
}
At the line cvShowImage("Display window", image); the system throws an exception that says:
Exception thrown at 0xAD76406A in Q4.exe: 0xC0000008: An invalid handle was specified
The OpenCV installation is fine, and other functions work, but this code (which works on other computers) just crashes every time.
How can I fix this?
cvShowImage is part of the old C-style naming convention in OpenCV. This old API has been fully deprecated and is not compatible with OpenCV 3.0 and up.
Instead of cvShowImage, try using the C++ imshow (note that it takes a cv::Mat, so convert the IplImage* first, or load with the C++ API):
imshow("Display window", cv::cvarrToMat(image));

Why is initializing gl3w so much faster than initializing GLEW?

I'm using GLFW to set up the OpenGL context, and then I test the speed of each library by initializing it multiple times, with all optimization flags on.
On my machine, gl3w can be initialized 100 times in about 0.5 seconds:
#include "gl3w.h"
#include <GLFW/glfw3.h>
int main(void)
{
if (!glfwInit()) return 1;
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
GLFWwindow *win = glfwCreateWindow(960, 540, "Title", NULL, NULL);
if (!win) return 2;
glfwMakeContextCurrent(win);
for (int i = 0; i < 100; ++i) if (gl3wInit()) return 3;
if (!gl3wIsSupported(3, 3)) return 4;
glfwDestroyWindow(win);
glfwTerminate();
return 0;
}
While initializing GLEW 100 times takes about 2.5 seconds, making it about five times slower:
#include <GL/glew.h>
#include <GLFW/glfw3.h>

int main(void)
{
    if (!glfwInit()) return 1;
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
    GLFWwindow *win = glfwCreateWindow(960, 540, "Title", NULL, NULL);
    if (!win) return 2;
    glfwMakeContextCurrent(win);
    glewExperimental = GL_TRUE;
    for (int i = 0; i < 100; ++i) if (glewInit()) return 3;
    glfwDestroyWindow(win);
    glfwTerminate();
    return 0;
}
I am very surprised by this, since both libraries serve the same purpose. Could someone explain what the difference is all about?
By the way, omitting glewExperimental = GL_TRUE; cuts the time to 0.3 seconds, but then GLEW isn't initializing correctly: a glBindVertexArray(0); just afterwards causes a segmentation fault when it shouldn't.
Because GLEW is doing more work than GL3W.
GL3W consists of only the functions from the OpenGL header glcorearb.h. This contains the functions in OpenGL 4.5, as well as many ARB extensions. This means it does not include other extension functions or non-core profile stuff.
GLEW gives you every function in the OpenGL registry. Also, there's this:
glewExperimental = GL_TRUE;
Normally, GLEW will check whether a particular extension is supported before trying to load its functions. By using that switch, you're telling GLEW not to do that check. Instead, it will try to load every function regardless of whether the extension is advertised or not.
You may have been told that you have to use this switch. That's not true anymore.
See, OpenGL 3.0 changed how you test for extensions. The old method was deprecated, then removed in 3.1 and in the 3.2+ core profile.
However, GLEW kept using the old extension-testing functionality. As such, you had to use that switch if you wanted to use GLEW with a core profile.
GLEW 2.0, eight years later, finally fixed this... long after about a half dozen much better OpenGL loading libraries solved the problem many times over. But in any case, the point is that you shouldn't use this switch with GLEW 2.0.
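For reference, the core-profile way to test for an extension looks like this (a minimal sketch):
#include <string.h>

/* GL 3.0+ extension check: enumerate the extension list with
   glGetStringi instead of parsing glGetString(GL_EXTENSIONS). */
int hasExtension(const char *name)
{
    GLint n = 0;
    glGetIntegerv(GL_NUM_EXTENSIONS, &n);
    for (GLint i = 0; i < n; ++i) {
        const char *ext = (const char *)glGetStringi(GL_EXTENSIONS, (GLuint)i);
        if (ext && strcmp(ext, name) == 0)
            return 1;
    }
    return 0;
}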

cvCreateFileCapture strange error

I'm trying to create a simple OpenCV program in C that creates a file capture from an .avi and plays it in a window, highlighting faces. I'm running a self-compiled version of OpenCV (I already tried the same with a JPEG image and it works).
Building goes well, no errors, no warnings, but when I launch it, the console outputs this:
Unknown parameter encountered: "server role"
Ignoring unknown parameter "server role"
And the program simply stops.
Previously it was complaining about a missing /home/#user/.smb/smb.conf file, so I tried installing Samba (even though I still have no idea what Samba has to do with any of this).
Here is my code:
int main() {
    printf("Ciao!");
    cvNamedWindow("window", CV_WINDOW_AUTOSIZE);
    cvWaitKey(0);
    printf("ok");
    CvCapture* capture = cvCreateFileCapture("monsters.avi");
    CvHaarClassifierCascade* cascade = load_object_detector("haarcascade_frontalface_alt.xml");
    CvMemStorage* storage = cvCreateMemStorage(0);
    //List of the faces
    CvSeq* faces;
    while (0<10) {
        CvArr* image = cvQueryFrame(capture);
        double scale = 1;
        faces = cvHaarDetectObjects(image, cascade, storage, 1.2, 2, CV_HAAR_DO_CANNY_PRUNING, cvSize(1,1), cvSize(300,300));
        int i;
        for (i = 0; i < faces->total; i++)
        {
            CvRect face_rect = *(CvRect*)cvGetSeqElem(faces, i);
            cvRectangle(image,
                cvPoint(face_rect.x*scale, face_rect.y*scale),
                cvPoint((face_rect.x+face_rect.width)*scale, (face_rect.y+face_rect.height)*scale),
                CV_RGB(255,0,0), 3, 8, 0);
        }
        cvReleaseMemStorage(&storage);
        cvShowImage("window", image);
    }
    cvWaitKey(0);
    printf("Ciao!");
}
Thank you for your answer; I switched to C++ for my trials. Now I did this:
#include <opencv2/opencv.hpp>
using namespace cv;

int main() {
    namedWindow("Video", CV_WINDOW_FREERATIO);
    VideoCapture cap("sintel.mp4");
    if (!cap.isOpened()) // check if we succeeded
        return -1;
    Mat edges;
    for (;;) {
        Mat frame;
        cap >> frame;
        cvtColor(frame, edges, CV_BGR2GRAY);
        GaussianBlur(edges, edges, Size(7,7), 1.5, 1.5);
        Canny(edges, edges, 0, 30, 3);
        imshow("Video", edges);
        //cvWaitKey(0);
    }
    return 0;
}
Now it successfully loads the video and queries a frame; every time I press a key, it obviously queries another frame and everything works fine. But if I comment out the waitKey(), the program simply hangs for a bit and crashes if I try to close the window. I'm starting to think there is a problem with codecs or something like that...
There are so many potential problems in the code, most of them related to not coding defensively.
What is the cvWaitKey(0); after cvNamedWindow() doing? It's unnecessary; remove it!
What happens if the capture was unsuccessful? Code defensively:
CvCapture* capture = cvCreateFileCapture("monsters.avi");
if (!capture)
{
    // File not found, handle error and possibly quit the application
}
and you should use this technique for every pointer that you receive from OpenCV, ok?
One of the major problems is that you allocate memory for the CvMemStorage before the loop, but inside the loop you release it, which means that after the first loop iteration there is no longer a valid CvMemStorage* storage, and that's a HUGE problem.
Either move the allocation to the beginning of the loop, so memory is allocated/deallocated on every iteration, or move the cvReleaseMemStorage(&storage); call out of the loop.
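Putting both fixes together, the face-detection loop could look roughly like this (a sketch; it reuses the storage with cvClearMemStorage, releases it once after the loop, and pumps GUI events with cvWaitKey so cvShowImage can actually draw):
CvMemStorage* storage = cvCreateMemStorage(0);
while (1) {
    IplImage* image = cvQueryFrame(capture);
    if (!image)
        break;                      // end of the video
    cvClearMemStorage(storage);     // reuse the storage, don't release it here
    CvSeq* faces = cvHaarDetectObjects(image, cascade, storage, 1.2, 2,
                                       CV_HAAR_DO_CANNY_PRUNING,
                                       cvSize(1, 1), cvSize(300, 300));
    for (int i = 0; i < faces->total; i++) {
        CvRect r = *(CvRect*)cvGetSeqElem(faces, i);
        cvRectangle(image, cvPoint(r.x, r.y),
                    cvPoint(r.x + r.width, r.y + r.height),
                    CV_RGB(255, 0, 0), 3, 8, 0);
    }
    cvShowImage("window", image);
    if (cvWaitKey(30) >= 0)         // also lets highgui process window events
        break;
}
cvReleaseMemStorage(&storage);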
Now it works fine; I replaced cvWaitKey() with this:
if(waitKey(30) >= 0) break;
I don't understand exactly why, but now everything works as it should :)

screenshot using openGL and/or X11

I am trying to get a screenshot of the screen or a window. I tried using functions from X11,
and it works fine. The problem is that getting the pixels from the XImage takes a lot of time.
Then I tried to look for some answers on how to do it using OpenGL. Here's what I've got:
#include <stdlib.h>
#include <stdio.h>
#include <cstdio>
#include <GL/glut.h>
#include <GL/gl.h>
#include <GL/glx.h>
#include <X11/Xlib.h>

int main(int argc, char **argv)
{
    int width=1200;
    int height=800;
    //_____________________________----
    Display *dpy;
    Window root;
    GLint att[] = { GLX_RGBA, GLX_DEPTH_SIZE, 24, GLX_DOUBLEBUFFER, None };
    XVisualInfo *vi;
    GLXContext glc;
    dpy = XOpenDisplay(NULL);
    if (!dpy) {
        printf("\n\tcannot connect to X server\n\n");
        exit(0);
    }
    root = DefaultRootWindow(dpy);
    vi = glXChooseVisual(dpy, 0, att);
    if (!vi) {
        printf("\n\tno appropriate visual found\n\n");
        exit(0);
    }
    glXMakeCurrent(dpy, root, glc);
    glc = glXCreateContext(dpy, vi, NULL, GL_TRUE);
    printf("vendor: %s\n", (const char*)glGetString(GL_VENDOR));
    //____________________________________________
    glXMakeCurrent(dpy, root, glc);
    glEnable(GL_DEPTH_TEST);
    GLubyte* pixelBuffer = new GLubyte[sizeof(GLubyte)*width*height*3*3];
    glReadBuffer(GL_FRONT);
    GLint ReadBuffer;
    glGetIntegerv(GL_READ_BUFFER, &ReadBuffer);
    glPixelStorei(GL_READ_BUFFER, GL_RGB);
    GLint PackAlignment;
    glGetIntegerv(GL_PACK_ALIGNMENT, &PackAlignment);
    glPixelStorei(GL_PACK_ALIGNMENT, 1);
    glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_INT, pixelBuffer);
    int i;
    for (i=0; i<100; i++) printf("%u\n", ((unsigned int *)pixelBuffer)[i]);
    return 0;
}
When I run the program, it returns an error:
X Error of failed request: BadAccess (attempt to access private resource denied)
Major opcode of failed request: 199 ()
Minor opcode of failed request: 26
Serial number of failed request: 20
Current serial number in output stream: 20
If I comment out the glXMakeCurrent(dpy, root, glc); line before glc = glXCreateContext(dpy, vi, NULL, GL_TRUE);, it returns no errors, but all the pixels are 0.
How should I go about this problem? I am new to OpenGL, and maybe I am missing something important here. Is there perhaps another way of getting the pixels of the screen or of a specific window?
I don't think what you are trying to do is possible. You can't use OpenGL to read pixels from a window you don't own, and which probably doesn't even use OpenGL. You need to stick to X11.
If you have an XImage, you can get the raw pixels from ximage->data. Just make sure you are reading them in the correct format.
http://tronche.com/gui/x/xlib/graphics/images.html
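For a 32-bpp ZPixmap image, that can be as simple as this sketch (check bits_per_pixel and the channel masks on your setup, or just use the XGetPixel() macro):
/* Read one pixel straight out of a 32-bpp ZPixmap XImage. */
unsigned long pixel_at(XImage *img, int x, int y)
{
    char *p = img->data + y * img->bytes_per_line
                        + x * (img->bits_per_pixel / 8);
    unsigned long px = *(unsigned int *)p;
    /* interpret channels via img->red_mask, img->green_mask, img->blue_mask */
    return px;
}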
You can use XShmGetImage, but you have to query the X11 server's extensions first to make sure the MIT-SHM extension is available. You also need to know how to set up and use a shared memory segment for this.
Querying the Extension:
http://git.videolan.org/?p=ffmpeg.git;a=blob;f=libavdevice/x11grab.c#l224
Getting the image:
http://git.videolan.org/?p=ffmpeg.git;a=blob;f=libavdevice/x11grab.c#l537
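For reference, a minimal sketch of that path (assumes a 32-bpp ZPixmap-capable default visual; error handling omitted):
/* gcc shmgrab.c -o shmgrab -lX11 -lXext */
#include <stdio.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <X11/Xlib.h>
#include <X11/extensions/XShm.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    if (!dpy || !XShmQueryExtension(dpy))
        return 1;                          /* fall back to XGetImage */
    int scr    = DefaultScreen(dpy);
    int width  = DisplayWidth(dpy, scr);
    int height = DisplayHeight(dpy, scr);
    /* Create an XImage backed by a shared memory segment */
    XShmSegmentInfo shminfo;
    XImage *img = XShmCreateImage(dpy, DefaultVisual(dpy, scr),
                                  DefaultDepth(dpy, scr), ZPixmap,
                                  NULL, &shminfo, width, height);
    shminfo.shmid    = shmget(IPC_PRIVATE, img->bytes_per_line * img->height,
                              IPC_CREAT | 0600);
    shminfo.shmaddr  = img->data = shmat(shminfo.shmid, NULL, 0);
    shminfo.readOnly = False;
    XShmAttach(dpy, &shminfo);
    /* Grab the whole root window; pixels land directly in the shared segment */
    XShmGetImage(dpy, RootWindow(dpy, scr), img, 0, 0, AllPlanes);
    printf("first pixel: %08lx\n", XGetPixel(img, 0, 0));
    /* Clean up */
    XShmDetach(dpy, &shminfo);
    XDestroyImage(img);
    shmdt(shminfo.shmaddr);
    shmctl(shminfo.shmid, IPC_RMID, NULL);
    XCloseDisplay(dpy);
    return 0;
}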
The following runs at about 140 fps on my platform. xcb_image_get() (called with XCB_IMAGE_FORMAT_Z_PIXMAP) will store all pixels in ximg->data, pixel by pixel. On my platform, each pixel is 32 bits and each channel is 8 bits, with 3 channels used (so 8 bits per pixel are unused).
/*
gcc ss.c -o ss -lxcb -lxcb-image && ./ss
*/
#include <stdio.h>
#include <xcb/xcb_image.h>

xcb_screen_t* xcb_get_screen(xcb_connection_t* connection){
    const xcb_setup_t* setup = xcb_get_setup(connection); // I think we don't need to free/destroy this!
    return xcb_setup_roots_iterator(setup).data;
}

void xcb_image_print(xcb_image_t* ximg){
    printf("xcb_image_print() Printing a (%u,%u) `xcb_image_t` of %u bytes\n\n", ximg->height, ximg->width, ximg->size);
    for(int i=0; i < ximg->size; i += 4){
        printf(" ");
        printf("%02x", ximg->data[i+3]);
        printf("%02x", ximg->data[i+2]);
        printf("%02x", ximg->data[i+1]);
        printf("%02x", ximg->data[i+0]);
    }
    puts("\n");
}

int main(){
    // Connect to the X server
    xcb_connection_t* connection = xcb_connect(NULL, NULL);
    xcb_screen_t* screen = xcb_get_screen(connection);
    // Get pixels!
    xcb_image_t* ximg = xcb_image_get(connection, screen->root, 0, 0, screen->width_in_pixels, screen->height_in_pixels, 0xffffffff, XCB_IMAGE_FORMAT_Z_PIXMAP);
    // ... Now all pixels are in ximg->data!
    xcb_image_print(ximg);
    // Clean-up
    xcb_image_destroy(ximg);
    xcb_disconnect(connection);
}
