Problems with attaching textures of different sizes to FBO - c

Today I faced a strange problem while I was developing my OpenGL 4.5 application. I attempted to attach two textures of different sizes to one FBO as color attachments in order to create a bloom shader. As far as I know, in modern OpenGL versions this should be possible.
This is the code I'm using:
//Create textures
GLuint tex[2];
glCreateTextures( GL_TEXTURE_2D, 2, tex );
glTextureStorage2D( tex[0], 1, GL_RGB8, 2048, 2048 );
glTextureStorage2D( tex[1], 1, GL_RGB8, 1024, 1024 );
//Create FBO
GLuint fbo;
glCreateFramebuffers( 1, &fbo );
glNamedFramebufferTexture( fbo, GL_COLOR_ATTACHMENT0, tex[0], 0 );
glNamedFramebufferTexture( fbo, GL_COLOR_ATTACHMENT1, tex[1], 0 );
//Check completeness
GLenum comp = glCheckNamedFramebufferStatus( fbo, GL_FRAMEBUFFER );
I'd expect comp to be GL_FRAMEBUFFER_COMPLETE; however, in my case glCheckNamedFramebufferStatus returns GL_FRAMEBUFFER_INCOMPLETE_DIMENSIONS_EXT.
I'm afraid it might be a driver bug, based on these two threads, since GL_FRAMEBUFFER_INCOMPLETE_DIMENSIONS was apparently removed from newer OpenGL versions:
http://forum.lwjgl.org/index.php?topic=4207.0
devtalk.nvidia.com topic
Here's the full code to illustrate the issue - https://pastebin.com/c9Hqzzky.
My output is:
0x8cd9
0x8cd9 - GL_FRAMEBUFFER_INCOMPLETE_DIMENSIONS_EXT
fbotest: fbotest.c:41: main: Assertion `comp != GL_FRAMEBUFFER_INCOMPLETE_DIMENSIONS_EXT' failed.
Aborted (core dumped)
I have an Nvidia GTX 1060 graphics card, Ubuntu 18.04 LTS, and Nvidia driver version 390.67.
Has anyone seen similar behavior before? If so, what are possible workarounds?
Thank you in advance for your help.

This is actually an Nvidia bug with ARB DSA (which I filed last February); if you use the non-Named versions, it will not give a validation error.
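For reference, here is a rough sketch of that workaround using the classic bind-to-target calls instead of the glNamedFramebuffer* entry points (assuming the textures were created as in the question):
//Workaround sketch: same attachments, non-DSA path
GLuint fbo;
glGenFramebuffers( 1, &fbo );
glBindFramebuffer( GL_FRAMEBUFFER, fbo );
glFramebufferTexture2D( GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, tex[0], 0 );
glFramebufferTexture2D( GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, GL_TEXTURE_2D, tex[1], 0 );
GLenum comp = glCheckFramebufferStatus( GL_FRAMEBUFFER ); //expect GL_FRAMEBUFFER_COMPLETE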

Related

glClear before XMapWindow (prevent undefined graphical buffer)

I'm struggling to handle the case where, upon XMapWindow with GLX, the buffer contents are undefined, so the window shows undefined data until the first glXSwapBuffers.
I vaguely remember GL operations being meaningless before glXMakeCurrent, and glXMakeCurrent being meaningless before XMapWindow. Under these constraints, how does one control what is drawn to an X11 GLX window when it is mapped?
It seems like I would want to order the calls as follows, but I still get undefined data (is this a video-driver-specific issue that I should simply ignore?):
glXMakeCurrent(d, w, ctx);
glClearColor(0.0, 0.0, 0.0, 1.0);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glXSwapBuffers(d, w);
XMapWindow(d, w);
Looking at a Khronos reference example, the same behavior occurs there.
With a small patch, we can exercise the undefined behavior every time:
--- OpenGL 3.0 Context Creation (GLX).orig.cc 2020-03-23 22:32:41.421765953 -0700
+++ OpenGL 3.0 Context Creation (GLX).edit.cc 2020-03-23 22:32:46.113753021 -0700
@@ -163,6 +163,8 @@
printf( "Mapping window\n" );
XMapWindow( display, win );
+ XSync( display, False );
+ sleep(2);
// Get the default screen's GLX extension list
const char *glxExts = glXQueryExtensionsString( display,
It looks like there is simply a de facto race condition between XMapWindow and GLX initialization / the first glXSwapBuffers; the best thing to do is to ensure your first frame is swapped in as soon as possible and hope that you usually beat X to the punch.
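A minimal sketch of that ordering (assuming d, w and ctx are the Display*, Window and GLXContext from the question, and that StructureNotifyMask was selected when the window was created): map the window, wait for MapNotify, then clear and swap immediately.
XMapWindow(d, w);
// Wait until the server has actually mapped the window.
XEvent ev;
do {
    XNextEvent(d, &ev);
} while (ev.type != MapNotify);
// Push the first defined frame as soon as possible.
glXMakeCurrent(d, w, ctx);
glClearColor(0.0, 0.0, 0.0, 1.0);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glXSwapBuffers(d, w);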

How do I use SDL_LockTexture() to update dirty rectangles?

I'm migrating an application from SDL 1.2 to 2.0, and it keeps an array of dirty rectangles to determine which parts of its SDL_Surface to draw to the screen. I'm trying to find the best way to integrate this with SDL 2's SDL_Texture.
Here's how the SDL 1.2 driver is working: https://gist.github.com/nikolas/1bb8c675209d2296a23cc1a395a32a0d
And here's how I'm getting changes from the surface to the texture in SDL 2:
void *pixels;
int pitch;
// Upload the whole surface (this memcpy assumes the surface pitch matches the texture pitch).
SDL_LockTexture(_sdl_texture, NULL, &pixels, &pitch);
memcpy(pixels, _sdl_surface->pixels, pitch * _sdl_surface->h);
SDL_UnlockTexture(_sdl_texture);
for (int i = 0; i < num_dirty_rects; i++) {
    SDL_RenderCopy(
        _sdl_renderer, _sdl_texture, &_dirty_rects[i], &_dirty_rects[i]);
}
SDL_RenderPresent(_sdl_renderer);
I'm just updating the entire surface, but then taking advantage of the dirty rectangles in the RenderCopy(). Is there a better way to do things here, only updating the dirty rectangles? Will I run into problems calling SDL_LockTexture and UnlockTexture up to a hundred times every frame, or is that how they're meant to be used?
SDL_LockTexture accepts an SDL_Rect param which I could use here, but then it's unclear to me how to get the appropriate rect from _sdl_surface->pixels. How would I copy out just a small rect from this pixel data of the entire screen?
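Not an authoritative answer, but here is a minimal sketch of copying a single dirty rect row by row, assuming the surface and the streaming texture share the same pixel format:
// Copy one dirty rect from the surface into the texture.
SDL_Rect *r = &_dirty_rects[i];
void *pixels;
int pitch;
SDL_LockTexture(_sdl_texture, r, &pixels, &pitch);   // pixels points at the locked rect
const int bpp = _sdl_surface->format->BytesPerPixel;
const Uint8 *src = (const Uint8 *)_sdl_surface->pixels
                 + r->y * _sdl_surface->pitch + r->x * bpp;
Uint8 *dst = (Uint8 *)pixels;
for (int row = 0; row < r->h; ++row) {
    memcpy(dst, src, r->w * bpp);
    src += _sdl_surface->pitch;
    dst += pitch;
}
SDL_UnlockTexture(_sdl_texture);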

DirectX9 Use Geometry Instancing for a Mesh with multiple materials

I am trying to write flexible geometry instancing code able to handle meshes with multiple materials. For a mesh with one material everything is fine: I manage to render as many instances as I want with a single draw call.
Things get a bit more complicated with multiple materials. My mesh comes from an .x file. It has one vertex buffer and one index buffer, but several materials. The indices to render for each subset (material) are stored in an attribute array.
Here is the code I use:
d3ddev->SetVertexDeclaration( m_vertexDeclaration );
d3ddev->SetIndices( m_indexBuffer );
d3ddev->SetStreamSourceFreq(0, (D3DSTREAMSOURCE_INDEXEDDATA | m_numInstancesToDraw ));
d3ddev->SetStreamSource(0, m_vertexBuffer, 0, D3DXGetDeclVertexSize( m_geometryElements, 0 ) );
d3ddev->SetStreamSourceFreq(1, (D3DSTREAMSOURCE_INSTANCEDATA | 1ul));
d3ddev->SetStreamSource(1, m_instanceBuffer, 0, D3DXGetDeclVertexSize( m_instanceElements, 1 ) );
m_effect->Begin(NULL, NULL); // begin using the effect
m_effect->BeginPass(0); // begin the pass
for( DWORD i = 0; i < m_numMaterials; ++i ) // loop through each subset
{
    d3ddev->SetMaterial(&m_materials[i]); // set the material for the subset
    if(m_textures[i] != NULL)
    {
        d3ddev->SetTexture( 0, m_textures[i] );
    }
    d3ddev->DrawIndexedPrimitive(
        D3DPT_TRIANGLELIST,            // Type
        0,                             // BaseVertexIndex
        m_attributes[i].VertexStart,   // MinIndex
        m_attributes[i].VertexCount,   // NumVertices
        m_attributes[i].FaceStart * 3, // StartIndex
        m_attributes[i].FaceCount      // PrimitiveCount
    );
}
m_effect->EndPass();
m_effect->End();
d3ddev->SetStreamSourceFreq(0,1);
d3ddev->SetStreamSourceFreq(1,1);
This code works for the first material only. By "the first" I mean the one at index 0: if I start my loop at the second material, nothing is rendered. However, when debugging the vertex buffer in PIX, I can see all my materials being processed properly, so something must be going wrong after the vertex shader.
Another weird issue: all my materials are rendered if I set the stream source containing the instance data to a vertex stride of zero.
So instead of this:
d3ddev->SetStreamSource(1, m_instanceBuffer, 0, D3DXGetDeclVertexSize( m_instanceElements, 1 ) );
I replace it with:
d3ddev->SetStreamSource(1, m_instanceBuffer, 0, 0 );
But of course, with this code all my instances are rendered at the same position, since I reuse the same instance data over and over.
One last point: everything works fine if I create my device with D3DCREATE_SOFTWARE_VERTEXPROCESSING. Only hardware vertex processing has the issue, and unfortunately DirectX does not report any problem in debug mode.
See the Shader Model 3 docs
If you are implementing shaders in hardware, you may not use vs_3_0 or ps_3_0 with any other shader versions, and you may not use either shader type with the fixed function pipeline. These changes make it possible to simplify drivers and the runtime. The only exception is that software-only vs_3_0 shaders may be used with any pixel shader version.
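In other words, if the instancing vertex shader is vs_3_0, the pass also needs a ps_3_0 pixel shader rather than falling back to the fixed-function pixel pipeline; one way to do that is to feed the per-subset texture to the effect and sample it in that pixel shader. A rough sketch of the inner loop under that assumption (g_DiffuseTexture is a made-up parameter name; your .fx file would define its own):
// Hand the subset's texture to the effect; g_DiffuseTexture is hypothetical.
m_effect->SetTexture( "g_DiffuseTexture", m_textures[i] );
m_effect->CommitChanges(); // required when changing parameters inside BeginPass/EndPass
d3ddev->DrawIndexedPrimitive(
    D3DPT_TRIANGLELIST, 0,
    m_attributes[i].VertexStart, m_attributes[i].VertexCount,
    m_attributes[i].FaceStart * 3, m_attributes[i].FaceCount );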
I had the same problem, and in my case it was the memory pool of the instancing data. I originally had that buffer in D3DPOOL_SYSTEMMEM while the instanced mesh was in D3DPOOL_DEFAULT. When I changed the instancing data to sit in default memory as well, everything worked as desired.
Hope it helps.
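For illustration, a sketch of what that pool change looks like when creating the instance stream's vertex buffer (instanceDataSize is a placeholder for the real size in bytes):
// Create the per-instance stream's vertex buffer in the default pool
// instead of system memory.
d3ddev->CreateVertexBuffer(
    instanceDataSize,    // total size of the per-instance data, in bytes
    D3DUSAGE_WRITEONLY,  // usage
    0,                   // not an FVF buffer
    D3DPOOL_DEFAULT,     // was D3DPOOL_SYSTEMMEM
    &m_instanceBuffer,
    NULL );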

OpenGL - Nothing appearing on screen

I'm using Windows 7 with VC++ 2010
I'm trying to draw a simple point to a screen but it's not showing.
The screen is clearing to black so I know that I have a valid OpenGL context etc...
Basically my OpenGL code boils down to this (I don't have a depth buffer at this point):
glClear( GL_COLOR_BUFFER_BIT );
glMatrixMode( GL_PROJECTION );
glLoadIdentity();
gluPerspective( 45.0, 1018.0 / 743.0, 5.0, 999.0 );
glMatrixMode( GL_MODELVIEW );
glLoadIdentity();
glColor4f( 1, 1, 1, 1 );
glPointSize( 100 );
glBegin( GL_POINTS );
glVertex2i( 0, 0 );
glEnd();
SwapBuffers( hdc );
The initialization code for OpenGL is this:
glClearColor( 0, 0, 0, 1 );
glShadeModel( GL_SMOOTH );
glHint( GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST );
The problem is that nothing appears on the screen, the only thing that happens is the screen gets cleared.
Go through the following checklist (the general OpenGL troubleshooting list from delphigl.com (de_DE), which we usually give people to go through when they don't see anything); a combined code sketch follows the list:
Is your object accidentally painted in black? Try changing the glClearColor.
Do you have texturing enabled accidentally? Disable it before drawing with glDisable(GL_TEXTURE_2D).
Try disabling the following tests:
GL_DEPTH_TEST
GL_CULL_FACE
GL_ALPHA_TEST
Check whether your glViewport is setup correctly.
Try translating your Model View Matrix out of the near-clipping-plane (5.0 in your case) with glTranslatef(0, 0, -6.0)
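Put together, the state-reset part of that checklist looks roughly like this (windowWidth/windowHeight are placeholders for your actual client size):
// Rule out state left over from elsewhere in the program.
glDisable( GL_TEXTURE_2D );
glDisable( GL_DEPTH_TEST );
glDisable( GL_CULL_FACE );
glDisable( GL_ALPHA_TEST );
glViewport( 0, 0, windowWidth, windowHeight );
glClearColor( 0.2f, 0.2f, 0.2f, 1.0f ); // non-black, so black geometry still shows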
There are several potential issues. The main one is how you are using the gluPerspective projection. gluPerspective sets up a perspective view, and as such it won't display anything at (0, 0, 0) in view coordinates. With your setup, nothing closer to the camera than 5 units (the near clipping plane) is displayed. I suggest setting your point to glVertex3f(0.0f, 0.0f, -10.0f) and trying again. Another solution would be to use glTranslatef to move your view coordinates back by more than 5 units.
Also, glPointSize will probably not accept your value of 100, as common implementations are limited to a point size of 64.
For a good start with OpenGL, I'd also recommend reading NeHe's tutorials. They might not be state of the art, but they cover everything you're facing right now.
The problem was that I had called glDepthRange while misunderstanding what it actually does; I was calling it as glDepthRange( nearPlane, farPlane ) with 5.0f and 999.0f, but glDepthRange takes depth values normalized to [0, 1], not the eye-space distances passed to gluPerspective. When I removed the call, everything drew correctly. Thank you very much for your help. :)

Increasing camera capture resolution in OpenCV

In my C/C++ program, I'm using OpenCV to capture images from my webcam. The camera (Logitech QuickCam IM) can capture at resolutions 320x240, 640x480 and 1280x960. But, for some strange reason, OpenCV gives me images of resolution 320x240 only. Calls to change the resolution using cvSetCaptureProperty() with other resolution values just don't work. How do I capture images with the other resolutions possible with my webcam?
I'm using OpenCV 1.1pre1 under Windows (the videoinput library is used by default by this version of OpenCV under Windows).
With these instructions I can set camera resolution. Note that I call the old cvCreateCameraCapture instead of cvCaptureFromCam.
capture = cvCreateCameraCapture(cameraIndex);
cvSetCaptureProperty( capture, CV_CAP_PROP_FRAME_WIDTH, 640 );
cvSetCaptureProperty( capture, CV_CAP_PROP_FRAME_HEIGHT, 480 );
videoFrame = cvQueryFrame(capture);
I've tested it with Logitech, Trust and Philips webcams.
There doesn't seem to be a solution. The resolution can be increased to 640x480 using this hack shared by lifebelt77. Here are the details reproduced:
Add to highgui.h:
#define CV_CAP_PROP_DIALOG_DISPLAY 8
#define CV_CAP_PROP_DIALOG_FORMAT 9
#define CV_CAP_PROP_DIALOG_SOURCE 10
#define CV_CAP_PROP_DIALOG_COMPRESSION 11
#define CV_CAP_PROP_FRAME_WIDTH_HEIGHT 12
Add the function icvSetPropertyCAM_VFW to cvcap.cpp:
static int icvSetPropertyCAM_VFW( CvCaptureCAM_VFW* capture, int property_id, double value )
{
    int result = -1;
    CAPSTATUS capstat;
    CAPTUREPARMS capparam;
    BITMAPINFO btmp;
    switch( property_id )
    {
    case CV_CAP_PROP_DIALOG_DISPLAY:
        result = capDlgVideoDisplay(capture->capWnd);
        //SendMessage(capture->capWnd,WM_CAP_DLG_VIDEODISPLAY,0,0);
        break;
    case CV_CAP_PROP_DIALOG_FORMAT:
        result = capDlgVideoFormat(capture->capWnd);
        //SendMessage(capture->capWnd,WM_CAP_DLG_VIDEOFORMAT,0,0);
        break;
    case CV_CAP_PROP_DIALOG_SOURCE:
        result = capDlgVideoSource(capture->capWnd);
        //SendMessage(capture->capWnd,WM_CAP_DLG_VIDEOSOURCE,0,0);
        break;
    case CV_CAP_PROP_DIALOG_COMPRESSION:
        result = capDlgVideoCompression(capture->capWnd);
        break;
    case CV_CAP_PROP_FRAME_WIDTH_HEIGHT:
        // The value packs both dimensions as width*1000 + height.
        capGetVideoFormat(capture->capWnd, &btmp, sizeof(BITMAPINFO));
        btmp.bmiHeader.biWidth = floor(value/1000);
        btmp.bmiHeader.biHeight = value-floor(value/1000)*1000;
        btmp.bmiHeader.biSizeImage = btmp.bmiHeader.biHeight *
            btmp.bmiHeader.biWidth * btmp.bmiHeader.biPlanes *
            btmp.bmiHeader.biBitCount / 8;
        capSetVideoFormat(capture->capWnd, &btmp, sizeof(BITMAPINFO));
        break;
    default:
        break;
    }
    return result;
}
and edit captureCAM_VFW_vtable as following:
static CvCaptureVTable captureCAM_VFW_vtable =
{
6,
(CvCaptureCloseFunc)icvCloseCAM_VFW,
(CvCaptureGrabFrameFunc)icvGrabFrameCAM_VFW,
(CvCaptureRetrieveFrameFunc)icvRetrieveFrameCAM_VFW,
(CvCaptureGetPropertyFunc)icvGetPropertyCAM_VFW,
(CvCaptureSetPropertyFunc)icvSetPropertyCAM_VFW, // was NULL
(CvCaptureGetDescriptionFunc)0
};
Now rebuild highgui.dll.
I've done image processing in Linux before and skipped OpenCV's built-in camera functionality because (as you've discovered) it's incomplete.
Depending on your OS you may have more luck going straight to the hardware through normal channels rather than through OpenCV. If you are using Linux, video4linux or video4linux2 should give you relatively trivial access to USB webcams, and you can use libavc1394 for FireWire. Depending on the device and the quality of the example code you follow, you should be able to get the device running with the parameters you want in an hour or two.
Edited to add: you are on your own if it's Windows. I imagine it's not much more difficult, but I've never done it.
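For a rough idea of the video4linux2 route mentioned above (error handling omitted; /dev/video0 and the resolution are placeholders):
#include <fcntl.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

// Ask the driver for 640x480 YUYV frames directly via V4L2.
int fd = open("/dev/video0", O_RDWR);
struct v4l2_format fmt;
memset(&fmt, 0, sizeof(fmt));
fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
fmt.fmt.pix.width       = 640;
fmt.fmt.pix.height      = 480;
fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_YUYV;
fmt.fmt.pix.field       = V4L2_FIELD_ANY;
ioctl(fd, VIDIOC_S_FMT, &fmt); // the driver may adjust these values
// fmt now holds the width/height the driver actually granted.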
I strongly suggest using the videoInput lib; it supports any DirectShow device (even multiple devices at the same time) and is more configurable. You'll spend five minutes making it play with OpenCV.
Check this ticket out:
https://code.ros.org/trac/opencv/ticket/376
"The solution is to use the newer libv4l-based wrapper.
install libv4l-dev (this is how it's called in Ubuntu)
rerun cmake, you will see "V4L/V4L2: Using libv4l"
rerun make. now the resolution can be changed. tested with built-in isight on MBP."
This fixed it for me on Ubuntu and might work for you as well.
Code I finally got working in Python once Aaron Haun pointed out I needed to define the arguments of the set function before using them.
#Camera_Get_Set.py
#By Forrest L. Erickson of VRX Company Inc. 8-31-12.
#Opens the camera and reads and reports the settings.
#Then tries to set for higher resolution.
#Workes with Logitech C525 for resolutions 960 by 720 and 1600 by 896
import cv2.cv as cv
import numpy
CV_CAP_PROP_POS_MSEC = 0
CV_CAP_PROP_POS_FRAMES = 1
CV_CAP_PROP_POS_AVI_RATIO = 2
CV_CAP_PROP_FRAME_WIDTH = 3
CV_CAP_PROP_FRAME_HEIGHT = 4
CV_CAP_PROP_FPS = 5
CV_CAP_PROP_POS_FOURCC = 6
CV_CAP_PROP_POS_FRAME_COUNT = 7
CV_CAP_PROP_BRIGHTNESS = 8
CV_CAP_PROP_CONTRAST = 9
CV_CAP_PROP_SATURATION = 10
CV_CAP_PROP_HUE = 11
# Plain tuple keeps the order aligned with the names list below.
CV_CAPTURE_PROPERTIES = (
    CV_CAP_PROP_POS_MSEC,
    CV_CAP_PROP_POS_FRAMES,
    CV_CAP_PROP_POS_AVI_RATIO,
    CV_CAP_PROP_FRAME_WIDTH,
    CV_CAP_PROP_FRAME_HEIGHT,
    CV_CAP_PROP_FPS,
    CV_CAP_PROP_POS_FOURCC,
    CV_CAP_PROP_POS_FRAME_COUNT,
    CV_CAP_PROP_BRIGHTNESS,
    CV_CAP_PROP_CONTRAST,
    CV_CAP_PROP_SATURATION,
    CV_CAP_PROP_HUE)
CV_CAPTURE_PROPERTIES_NAMES = [
    "CV_CAP_PROP_POS_MSEC",
    "CV_CAP_PROP_POS_FRAMES",
    "CV_CAP_PROP_POS_AVI_RATIO",
    "CV_CAP_PROP_FRAME_WIDTH",
    "CV_CAP_PROP_FRAME_HEIGHT",
    "CV_CAP_PROP_FPS",
    "CV_CAP_PROP_POS_FOURCC",
    "CV_CAP_PROP_POS_FRAME_COUNT",
    "CV_CAP_PROP_BRIGHTNESS",
    "CV_CAP_PROP_CONTRAST",
    "CV_CAP_PROP_SATURATION",
    "CV_CAP_PROP_HUE"]
capture = cv.CaptureFromCAM(0)
print ("\nCamera properties before query of frame.")
for i in range(len(CV_CAPTURE_PROPERTIES_NAMES)):
    # camera_valeus =[CV_CAPTURE_PROPERTIES_NAMES, foo]
    foo = cv.GetCaptureProperty(capture, CV_CAPTURE_PROPERTIES[i])
    camera_values =[CV_CAPTURE_PROPERTIES_NAMES[i], foo]
    # print str(camera_values)
    print str(CV_CAPTURE_PROPERTIES_NAMES[i]) + ": " + str(foo)
print ("\nOpen a window for display of image")
cv.NamedWindow("Camera", 1)
while True:
    img = cv.QueryFrame(capture)
    cv.ShowImage("Camera", img)
    if cv.WaitKey(10) == 27:
        break
cv.DestroyWindow("Camera")
#cv.SetCaptureProperty(capture, CV_CAP_PROP_FRAME_WIDTH, 1024)
#cv.SetCaptureProperty(capture, CV_CAP_PROP_FRAME_HEIGHT, 768)
cv.SetCaptureProperty(capture, CV_CAP_PROP_FRAME_WIDTH, 1600)
cv.SetCaptureProperty(capture, CV_CAP_PROP_FRAME_HEIGHT, 896)
print ("\nCamera properties after query and display of frame.")
for i in range(len(CV_CAPTURE_PROPERTIES_NAMES)):
    # camera_valeus =[CV_CAPTURE_PROPERTIES_NAMES, foo]
    foo = cv.GetCaptureProperty(capture, CV_CAPTURE_PROPERTIES[i])
    camera_values =[CV_CAPTURE_PROPERTIES_NAMES[i], foo]
    # print str(camera_values)
    print str(CV_CAPTURE_PROPERTIES_NAMES[i]) + ": " + str(foo)
print ("/nOpen a window for display of image")
cv.NamedWindow("Camera", 1)
while True:
    img = cv.QueryFrame(capture)
    cv.ShowImage("Camera", img)
    if cv.WaitKey(10) == 27:
        break
cv.DestroyWindow("Camera")
I am using Debian and Ubuntu, and I had the same problem: I couldn't change the resolution of the video input using CV_CAP_PROP_FRAME_WIDTH and CV_CAP_PROP_FRAME_HEIGHT.
It turned out that the reason was a missing library.
I installed libv4l-dev through Synaptic, rebuilt OpenCV, and the problem is SOLVED!
I am posting this to ensure that no one else wastes time on this set-property function. I spent two days on this before seeing that nothing seemed to be working, so I dug out the code (I had installed the library the first time around). This is what actually happens: cvSetCaptureProperty calls setProperty inside the CvCapture class, and lo and behold, setProperty does nothing; it just returns false.
Instead, I'll use another library to feed OpenCV captured video/images. I am using OpenCV 2.2.
cvSetCaptureProperty( capture, CV_CAP_PROP_FRAME_WIDTH, WIDTH );
cvSetCaptureProperty( capture, CV_CAP_PROP_FRAME_HEIGHT, HEIGHT);
cvQueryFrame(capture);
That will not work with OpenCV 2.2, but if you use OpenCV 2.1 it will work fine!
If you are on windows platform, try DirectShow (IAMStreamConfig).
http://msdn.microsoft.com/en-us/library/dd319784%28v=vs.85%29.aspx
Under Windows try to use VideoInput library:
http://robocraft.ru/blog/computervision/420.html
I find that in Windows (from Win98 to WinXP SP3), OpenCV will often use Microsoft's VFW library for camera access. The problem with this is that it is often very slow (say, a maximum of 15 FPS frame capture) and buggy (which is why cvSetCaptureProperty often doesn't work). Luckily, you can usually change the resolution in other software (particularly "AMCAP", a demo program that is easily available) and it will affect the resolution that OpenCV uses. For example, you can run AMCAP to set the resolution to 640x480, and OpenCV will use that by default from that point onwards!
Better yet, you can use a different Windows camera access library such as the "videoInput" library http://muonics.net/school/spring05/videoInput/, which accesses the camera using very efficient DirectShow (part of DirectX). Or, if you have a professional-quality camera, it will often come with a custom API that lets you access it, and you can use that for fast access with the ability to change the resolution and many other things.
One piece of information that could be valuable for people having difficulty changing the default capture resolution (640 x 480)! I ran into such a problem myself with OpenCV 2.4.x and a Logitech camera, and found a workaround!
The behaviour I observed is that the default format is set up as the initial parameters when camera capture is started (cvCreateCameraCapture), and all later requests to change the height or width:
cvSetCaptureProperty( capture, CV_CAP_PROP_FRAME_WIDTH, ...
or
cvSetCaptureProperty( capture, CV_CAP_PROP_FRAME_HEIGHT, ...
are not possible afterwards! In fact, by checking the return errors of the ioctl calls, I discovered that the V4L2 driver returns EBUSY for those requests!
Therefore, one workaround is to change the default values directly in highgui/cap_v4l.cpp:
#define DEFAULT_V4L_WIDTH 1280 // Originally 640
#define DEFAULT_V4L_HEIGHT 720 // Originally 480
After that, I just recompiled OpenCV ... and got 1280 x 720 without any problem! Of course, a better fix would be to stop the acquisition, change the parameters, and restart the stream afterwards, but I'm not familiar enough with OpenCV to do that!
Hope it will help.
Michel BEGEY
Try this:
capture = cvCreateCameraCapture(-1);
//set resolution
cvSetCaptureProperty(capture, CV_CAP_PROP_FRAME_WIDTH, frameWidth);
cvSetCaptureProperty(capture, CV_CAP_PROP_FRAME_HEIGHT, frameHeight);
cvQueryFrame(capture);
cvSetCaptureProperty( capture, CV_CAP_PROP_FRAME_WIDTH, any_supported_size );
cvSetCaptureProperty( capture, CV_CAP_PROP_FRAME_HEIGHT, any_supported_size);
cvQueryFrame(capture);
should be just enough!
