SIFT Assertion Failed error

I'm trying to use SIFT to match two images, and I'm using the code below:
cv::initModule_nonfree();
cv::Mat matFrame(frame);
cv::Mat matFrameAnt(frameAnterior);
cv::SiftFeatureDetector detector(400); // I've tried different values here
cv::SiftDescriptorExtractor extractor(400); // but I always get the same error
std::vector<cv::KeyPoint> keypoints1;
std::vector<cv::KeyPoint> keypoints2;
detector.detect( matFrame, keypoints1 );
detector.detect( matFrameAnt, keypoints2 );
cv::Mat feat1;
cv::Mat feat2;
cv::Mat descriptor1;
cv::Mat descriptor2;
extractor.compute( matFrame, keypoints1, descriptor1 );
extractor.compute( matFrameAnt, keypoints2, descriptor2 );
std::vector<cv::DMatch> matches;
cv::BFMatcher matcher(cv::NORM_L2, false);
matcher.match(descriptor1,descriptor2, matches);
cv::Mat result;
cv::drawMatches( matFrame, keypoints1, matFrameAnt, keypoints2, matches, result );
cv::namedWindow("SIFT", CV_WINDOW_AUTOSIZE );
cv::imshow("SIFT", result);
I get this error when I run the code (it compiles perfectly):
"OpenCV Error: Assertion failed (firstOctave >= -1 && actualNlayers <= nOctaveLayers) in unknown function, file ......\src\opencv\modules\nonfree\src\sift.cpp, line 755".
I understand that the function is getting a non-positive value, so I printed all the relevant values from my code and found out that the sizes of my two keypoint vectors are -616431 and -616422.
The two images I'm using are black-and-white images, with a black background and my hand (in white) in the middle.
What's happening? Am I using invalid images? Am I using cv::SiftFeatureDetector and cv::SiftDescriptorExtractor incorrectly?

It seems you have no clue what you are doing. This feature is fairly undocumented, so try digging into the source, or let me tell you what you did.
cv::SiftFeatureDetector detector(50)
This means you will get at most 50 keypoints.
cv::SiftDescriptorExtractor extractor(400);
This means your magnification for extraction is 400x. This parameter should be on the order of "1" for normal results.
The rest of the documentation is here: http://docs.opencv.org/2.3/modules/features2d/doc/common_interfaces_of_feature_detectors.html#SiftFeatureDetector
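For reference, here is a minimal sketch of the intended usage with the default parameters (this assumes OpenCV 2.4.x with the nonfree module, and that matFrame and matFrameAnt are valid 8-bit images as in the question):
#include <opencv2/core/core.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/nonfree/nonfree.hpp>

cv::initModule_nonfree();
// Default constructors: nfeatures = 0 means "keep every detected feature",
// and the remaining SIFT parameters fall back to the library defaults.
cv::SiftFeatureDetector detector;
cv::SiftDescriptorExtractor extractor;
std::vector<cv::KeyPoint> keypoints1, keypoints2;
detector.detect(matFrame, keypoints1);
detector.detect(matFrameAnt, keypoints2);
cv::Mat descriptor1, descriptor2;
extractor.compute(matFrame, keypoints1, descriptor1);
extractor.compute(matFrameAnt, keypoints2, descriptor2);
// Only match if both images actually produced keypoints.
if (!keypoints1.empty() && !keypoints2.empty()) {
    cv::BFMatcher matcher(cv::NORM_L2, false);
    std::vector<cv::DMatch> matches;
    matcher.match(descriptor1, descriptor2, matches);
}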

Related

SDL2 Texture sometimes empty after loading multiple 8 bit surfaces

I'll try to make this question as concise as possible, but don't hesitate to ask for clarification.
I'm dealing with legacy code, and I'm trying to load thousands of 8-bit images from disk to create a texture for each.
I've tried multiple things, and I'm at the point where I'm trying to load my 8-bit images into a 32-bit surface and then create a texture from that surface.
The problem: while loading an 8-bit image onto a 32-bit surface works, when I then call SDL_CreateTextureFromSurface, I end up with a lot of textures that are completely blank (full of transparent pixels, 0x00000000).
Not all textures are wrong, though. Each time I run the program, I get different "bad" textures. Sometimes there are more, sometimes fewer. And when I step through the program in a debugger, I always end up with a correct texture (is that a timing problem?)
I know that the loading into the SDL_Surface is working, because I'm saving all the surfaces to disk, and they're all correct. But I inspected the textures using NVIDIA Nsight Graphics, and more than half of them are blank.
Here's the offending code:
int __cdecl IMG_SavePNG(SDL_Surface*, const char*);
SDL_Texture* Resource8bitToTexture32(SDL_Renderer* renderer, SDL_Color* palette, int paletteSize, void* dataAddress, int Width, int Height)
{
u32 uiCurrentOffset;
u32 uiSourceLinearSize = (Width * Height);
u8* pSourceData = (u8*)dataAddress; // byte view of the 8-bit indexed source pixels
SDL_Color *currentColor;
char strSurfacePath[500];
// The texture we're creating
SDL_Texture* newTexture = NULL;
// Load image at specified address
SDL_Surface* tempSurface = SDL_CreateRGBSurface(0x00, Width, Height, 32, 0x00FF0000, 0x0000FF00, 0x000000FF, 0xFF000000);
SDL_SetSurfaceBlendMode(tempSurface, SDL_BLENDMODE_NONE);
if(SDL_MUSTLOCK(tempSurface))
SDL_LockSurface(tempSurface);
for(uiCurrentOffset = 0; uiCurrentOffset < uiSourceLinearSize; uiCurrentOffset++)
{
currentColor = &palette[pSourceData[uiCurrentOffset]];
if(pSourceData[uiCurrentOffset] != PC_COLOR_TRANSPARENT)
{
((u32*)tempSurface->pixels)[uiCurrentOffset] = (u32)((currentColor->a << 24) + (currentColor->r << 16) + (currentColor->g << 8) + (currentColor->b << 0));
}
}
if(SDL_MUSTLOCK(tempSurface))
SDL_UnlockSurface(tempSurface);
// Create texture from surface pixels
newTexture = SDL_CreateTextureFromSurface(renderer, tempSurface);
// Save the surface to disk for verification only
sprintf(strSurfacePath, "c:\\tmp\\surfaces\\%s.png", GenerateUniqueName());
IMG_SavePNG(tempSurface, strSurfacePath);
// Get rid of old loaded surface
SDL_FreeSurface(tempSurface);
return newTexture;
}
Note that in the original code, I'm checking for boundaries, and for NULL after the SDL_Create*. I'm also aware that it would be better to have a spritesheet for the textures instead of loading each texture individually.
EDIT:
Here's a sample of what I'm observing in NSight if I capture a frame and use the Resources View.
The first 3186 textures are correct. Then I get 43 empty textures. Then I get 228 correct textures. Then 100 bad ones. Then 539 correct ones. Then 665 bad ones. It goes on randomly like that, and it changes each time I run my program.
Again, the surfaces saved by IMG_SavePNG are correct every time. This seems to indicate that something happens when I call SDL_CreateTextureFromSurface, but at this point I don't want to rule anything out, because it's a very weird problem and it smells of undefined behaviour all over the place. But I just can't find the problem.
With the help of @mark-benningfield, I was able to find the problem.
TL;DR
There's a bug (or at least an undocumented feature) in SDL with the DX11 renderer. There's a workaround; see the end of this answer.
CONTEXT
I'm trying to load around 12,000 textures when my program starts. I know it's not a good idea, but I was planning on using that as a stepping stone to a saner system.
DETAILS
What I realized while debugging this problem is that the SDL renderer for DirectX 11 does this when it creates a texture:
result = ID3D11Device_CreateTexture2D(rendererData->d3dDevice,
&textureDesc,
NULL,
&textureData->mainTexture
);
Microsoft's documentation for the ID3D11Device::CreateTexture2D method indicates that:
If you don't pass anything to pInitialData, the initial content of the memory for the resource is undefined. In this case, you need to write the resource content some other way before the resource is read.
If we're to believe that article:
Default Usage
The most common type of usage is default usage. To fill a default texture (one created with D3D11_USAGE_DEFAULT) you can:
[...]
After calling ID3D11Device::CreateTexture2D, use ID3D11DeviceContext::UpdateSubresource to fill the default texture with data from a pointer provided by the application.
So it looks like D3D11_CreateTexture is using the second method of default usage to initialize a texture and its content.
But right after that, SDL calls SDL_UpdateTexture (without checking the return value; I'll get to that later). If we dig down to the D3D11 renderer, we find this:
static int
D3D11_UpdateTextureInternal(D3D11_RenderData *rendererData, ID3D11Texture2D *texture, int bpp, int x, int y, int w, int h, const void *pixels, int pitch)
{
ID3D11Texture2D *stagingTexture;
[...]
/* Create a 'staging' texture, which will be used to write to a portion of the main texture. */
ID3D11Texture2D_GetDesc(texture, &stagingTextureDesc);
[...]
result = ID3D11Device_CreateTexture2D(rendererData->d3dDevice, &stagingTextureDesc, NULL, &stagingTexture);
[...]
/* Get a write-only pointer to data in the staging texture: */
result = ID3D11DeviceContext_Map(rendererData->d3dContext, (ID3D11Resource *)stagingTexture, 0, D3D11_MAP_WRITE, 0, &textureMemory);
[...]
/* Commit the pixel buffer's changes back to the staging texture: */
ID3D11DeviceContext_Unmap(rendererData->d3dContext, (ID3D11Resource *)stagingTexture, 0);
/* Copy the staging texture's contents back to the texture: */
ID3D11DeviceContext_CopySubresourceRegion(rendererData->d3dContext, (ID3D11Resource *)texture, 0, x, y, 0, (ID3D11Resource *)stagingTexture, 0, NULL);
SAFE_RELEASE(stagingTexture);
return 0;
}
Note : code snipped for conciseness.
This seems to indicate, based on that article I mentioned, that SDL is using the second method of the Default Usage to allocate the texture memory on the GPU, but uses the Staging Usage to upload the actual pixels.
I don't know that much about DX11 programming, but that mix of techniques set my programmer's sense tingling.
I contacted a game programmer I know and explained the problem to him. He told me the following interesting bits:
The driver gets to decide where it stores staging textures. They usually end up in CPU RAM.
It's much better to specify a pInitialData pointer, as the driver can then decide to upload the textures asynchronously (see the sketch below).
If you load too many staging textures without committing them to the GPU, you can fill up the RAM.
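To illustrate the second point, here is a rough sketch of what passing initial data to CreateTexture2D looks like with the same C-style COM macros SDL uses (this is not SDL's actual code; surfacePixels and surfacePitch are hypothetical placeholders for the CPU-side pixel data):
D3D11_SUBRESOURCE_DATA initialData;
SDL_zero(initialData);
initialData.pSysMem = surfacePixels;       /* pointer to the CPU-side pixels */
initialData.SysMemPitch = surfacePitch;    /* bytes per row of the source data */
initialData.SysMemSlicePitch = 0;          /* only used for 3D textures */

result = ID3D11Device_CreateTexture2D(rendererData->d3dDevice,
                                      &textureDesc,
                                      &initialData,   /* instead of NULL */
                                      &textureData->mainTexture);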
I then wondered why SDL didn't return an "out of memory" error when I called SDL_CreateTextureFromSurface, and I found out why (again, snipped for conciseness):
SDL_Texture *
SDL_CreateTextureFromSurface(SDL_Renderer * renderer, SDL_Surface * surface)
{
[...]
SDL_Texture *texture;
[...]
texture = SDL_CreateTexture(renderer, format, SDL_TEXTUREACCESS_STATIC,
surface->w, surface->h);
if (!texture) {
return NULL;
}
[...]
if (SDL_MUSTLOCK(surface)) {
SDL_LockSurface(surface);
SDL_UpdateTexture(texture, NULL, surface->pixels, surface->pitch);
SDL_UnlockSurface(surface);
} else {
SDL_UpdateTexture(texture, NULL, surface->pixels, surface->pitch);
}
[...]
return texture;
}
If the creation of the texture is successful, it doesn't care whether the texture update succeeded (the return value of SDL_UpdateTexture is never checked).
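If you want to see that failure yourself, a minimal sketch is to do the two steps manually and check the second one (this assumes the surface is already in the texture's pixel format, which is true for the ARGB8888 surface created in the question):
SDL_Texture* texture = SDL_CreateTexture(renderer, SDL_PIXELFORMAT_ARGB8888,
                                         SDL_TEXTUREACCESS_STATIC,
                                         tempSurface->w, tempSurface->h);
if (texture == NULL) {
    SDL_Log("SDL_CreateTexture failed: %s", SDL_GetError());
}
else if (SDL_UpdateTexture(texture, NULL, tempSurface->pixels, tempSurface->pitch) != 0) {
    /* SDL_CreateTextureFromSurface ignores this return value; checking it here
       at least reports the failure instead of silently keeping a blank texture. */
    SDL_Log("SDL_UpdateTexture failed: %s", SDL_GetError());
}
This only makes the failure visible, though; the memory pressure itself is what the workaround below addresses.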
WORKAROUND
The poor man's workaround for this problem is to call SDL_RenderPresent each time you call SDL_CreateTextureFromSurface.
It's probably fine to do it once every hundred textures, depending on your texture sizes. Just be aware that calling SDL_CreateTextureFromSurface repeatedly without presenting the renderer will actually fill up system RAM, and SDL won't return any error condition you can check for.
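In practice, the loading loop can look something like this rough sketch (textures, resourceData, textureCount, Width and Height are hypothetical placeholders; the batch size of 100 is arbitrary and depends on your texture sizes):
for (int i = 0; i < textureCount; i++) {
    textures[i] = Resource8bitToTexture32(renderer, palette, paletteSize,
                                          resourceData[i], Width, Height);
    /* Present every 100 textures so the staging textures get committed to the
       GPU instead of piling up in system RAM. */
    if ((i % 100) == 99) {
        SDL_RenderPresent(renderer);
    }
}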
The irony of this is that had I implemented a "correct" loading loop with a percentage-of-completion display on screen, I would never have had this problem. But fate had me implement this the quick-and-dirty way, as a proof of concept for a bigger system, and I got sucked into this problem.

How to do automatic OpenGL error checking using GLEW?

I was recently trying to implement automatic error checking after each OpenGL function call. I considered wrapping each OpenGL function in a caller like this:
CheckForErrors(glCreateBuffers(1, &VBO));
But I saw that GLEW already uses its own function wrapper:
#define GLEW_GET_FUN(x) x
So I decided to edit it instead of writing my own function wrapper:
#ifndef GLEW_GET_FUN
#ifdef DEBUG
#define GLEW_GET_FUN(x) while (glGetError() != GL_NO_ERROR);\
x; {\
GLenum error = glGetError();\
if (error != GL_NO_ERROR) {\
printf("[GLEW]: OpenGL error(s) occured while calling %s in %s (line %s):", #x, __FILE__, __LINE__);\
do printf(" %d", error); while (error = glGetError());\
printf("\n");\
__debugbreak();\
}
#else
#define GLEW_GET_FUN(x) x
#endif
#endif
Unfortunately, this doesn't compile. For example this function call:
GLuint vertexShaderID = glCreateShader(GL_VERTEX_SHADER);
Gets changed to this by the preprocessor:
GLuint vertexShaderID = while (glGetError() != GL_NO_ERROR); __glewCreateShader; { GLenum error = glGetError(); if (error != 0) { printf("[GLEW]: OpenGL error(s) occured while calling %s in %s (line %s):", "__glewCreateShader", "main.cpp", 51); do printf(" %d", error); while (error = glGetError()); printf("\n"); __debugbreak(); }(GL_VERTEX_SHADER);
There are 2 problems here:
The statement starts with a while loop, so it cannot return a value.
The parentheses with function parameters are placed after the whole thing and not right after the function call.
I don't know how to overcome those problems and I would appreciate any help.
Notes
I am aware of the glDebugMessageCallback() function, but it is only available in OpenGL 4.3+, which is a rather new version that is not yet universally supported.
I cannot remove the while loop at the beginning, because I have to clear all pending errors before calling the function (unless there is a different way to do this).
I am trying to do something like this, but without using a separate function wrapper.
I don't know how to overcome those problems
You can't. What you want to do is simply not viable in the way you want to do it. You cannot turn an expression (which is what a function call is) into a statement (or rather, a series of statements) and have that work everywhere. It will only work in circumstances where the expression is used as a statement.
If you are unwilling to just regularly insert error checking code into your application, and are unable to use the modern debug messaging API, then the standard solution is to use an external tool to find and report errors. RenderDoc can detect OpenGL errors, for example. It allows you to log every OpenGL call and can report errors anytime they occur.
As Nicol Bolas said, it is impossible to do it the way I originally wanted, but I will describe why this is the case and what can be done instead.
The Problem
GLEW wraps only the name of the function with GLEW_GET_FUN(), so function parameters will always be placed after the end of the define as they are not included in it:
//In code:
glGenBuffers(1, &VBO);
//After preprocessing:
{stuff generated by GLEW_GET_FUN}(1, &VBO);
The preprocessor isn't very intelligent, so it just puts the function parameters at the end.
Other Solutions
As described in the question, one could use glDebugMessageCallback() if it is available.
Wrap each function call with a custom wrapper. Not automatic at all, but if someone is interested, here is a great tutorial on how to make one.
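For completeness, here is a minimal sketch of such a wrapper, using the usual macro-plus-helper approach rather than modifying GLEW (GLCall, GLClearErrors and GLLogErrors are made-up names):
#include <cstdio>
#include <GL/glew.h>

// Drain any stale errors so the next check only reports the wrapped call.
static void GLClearErrors()
{
    while (glGetError() != GL_NO_ERROR) {}
}

// Print every error raised by the wrapped call.
static void GLLogErrors(const char* call, const char* file, int line)
{
    GLenum error;
    while ((error = glGetError()) != GL_NO_ERROR) {
        std::printf("[OpenGL] error 0x%04X in %s (%s:%d)\n", error, call, file, line);
    }
}

// Works for calls used as statements; for calls whose return value is needed,
// call GLLogErrors on the line after the call instead.
#define GLCall(x) do { GLClearErrors(); x; GLLogErrors(#x, __FILE__, __LINE__); } while (0)

// Usage:
//   GLCall(glGenBuffers(1, &VBO));
//   GLuint shader = glCreateShader(GL_VERTEX_SHADER);  // value-returning call
//   GLLogErrors("glCreateShader", __FILE__, __LINE__);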

cvWatershed unsupported format or combination of formats

I'm working with OpenCV 2.4.11 in C on Code::Blocks, in particular through the O'Reilly book Learning OpenCV. The section on the watershed algorithm was a bit short, so I thought I'd play with it a bit to see how exactly it works. However, every time I call the function I get the following error:
OpenCV Error: Unsupported format or combination of formats (Only 32-bit, 1-channel output images are supported) in cvWatershed
My program so far is very simple:
int main(int argc, char** argv) {
//open windows
cvNamedWindow("Input", 1 );
cvNamedWindow("Markings", 1 );
//load images
IplImage* input = cvLoadImage("ActualDoorPhoto.jpg", CV_LOAD_IMAGE_COLOR);
assert(input != NULL);
IplImage* markingstemp = cvLoadImage("ActualMarkingTest.jpg", CV_LOAD_IMAGE_COLOR);
assert(markingstemp != NULL);
//prepare markings
IplImage* markings = cvCreateImage(cvGetSize(markingstemp), 32, 1);
CvMat* markmat = cvCreateMat(input->width, input->height, CV_32FC1);
cvWatershed(input, markmat);
cvShowImage("Input", input);
cvShowImage("Markings", markings);
cvWaitKey(0);
return 0;
}
I have tried putting both markings and markmat as the second argument for cvWatershed, as well as several other things (notably markings with the contours of markingstemp drawn onto it), but every time I get the same error. Can anyone tell me what I'm doing wrong?
You're inverting the dimensions of the output matrix. It should be:
CvMat* markmat = cvCreateMat(input->height, input->width, CV_32FC1);
The format should also probably be changed to CV_32SC1.
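For anyone following along, a minimal sketch of the corrected setup might look like this (the seed positions and labels are hypothetical; real markers usually come from user-drawn regions or contours):
IplImage* input = cvLoadImage("ActualDoorPhoto.jpg", CV_LOAD_IMAGE_COLOR);
// cvWatershed wants a 3-channel 8-bit input and a 32-bit single-channel
// (CV_32SC1) marker matrix of the same size: rows = height, cols = width.
CvMat* markmat = cvCreateMat(input->height, input->width, CV_32SC1);
cvZero(markmat);
// Hypothetical seeds: label one background pixel 1 and one object pixel 2.
CV_MAT_ELEM(*markmat, int, 10, 10) = 1;
CV_MAT_ELEM(*markmat, int, input->height / 2, input->width / 2) = 2;
cvWatershed(input, markmat);
// Afterwards, markmat holds the region label for each pixel, or -1 on boundaries.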

Error in output.println() on Processing

I'm trying to read some data from Processing and write it to a file. The data is correct, since I can plot it without problem. However, when I attempt to write it to a file it throws me the following error:
Error, disabling serialEvent() for /dev/ttyACM0
null
Specifically, I've found out where the problem is. It's in this function:
void serialEvent(Serial myPort) {
int inByte = myPort.read();
if (inByte >= 0 && inByte <= 255)
{
// This is what makes the problem arise
output.println("test: " + inByte);
// ...
}
}
I've even replaced the output.println() line with this, and then the same function works, printing to the correct file (but it's obviously not what I want):
// This does work
point(mouseX, mouseY);
output.println(mouseX);
Any idea where the problem might be? I'm using an Arduino, and it passes values from 0 to 255 over serial. The values seem correct, since I can plot them without problem. I've also tried changing println() to print(), with no luck.
EDIT. After some testing, I find this really odd. This works:
point(mouseX, mouseY);
output.println(inByte);
Without the point(), it doesn't work (same error). As a temporary solution, I can put the output.println() at the end of the function, but this is obviously not a long-term solution.
I ended up doing a simple check in my code:
if (output != null) {
output.println(t + " " + inByte);
}
It just works. However, I think polymorphism would be even better here: an object that swallows the text if the output is not initialized, and writes it to the file once it is.
I had the same problem, but here is my solution, and it works perfectly.
At the beginning of the code, write something like this:
boolean outputInitialized = false;
After output = createWriter("filename.txt"); put the following:
outputInitialized = true;
The output.println() call has to be wrapped in an if statement:
if (outputInitialized) {
output.println(filename);
}

Call to XCreateColormap Creates Segmentation Fault

For some reason, my call to XCreateColormap in Xlib is giving me a segmentation fault. The funny thing is that most of the code I'm using is almost identical to code I've seen on the net showing how to create a window and an OpenGL context using Xlib and GLX.
In terms of details, I have a struct called OVI_UnixDisplayData, which basically acts as a container for all of the relevant X Window/GLX data used to create a window and assign it a context. I initially have a function which is designed to create a context and then return a pointer to that data struct. That struct is referred to as just dat (for data).
Occurrence of the SegFault
dat->fbConfigs = glXChooseFBConfig( dat->display, DefaultScreen( dat->display ), visualAttr, &dat->framebuffCount );
if ( !dat->fbConfigs || dat->framebuffCount < 1 )
{
puts( OVI_ERR_GLX_FRAME_BUF_CFG );
exit( 1 );
}
printf( OVI_STAT_GLX_FRAME_BUFF_CFG_COUNT, dat->framebuffCount );
dat->visualinfo = glXGetVisualFromFBConfig( dat->display, dat->fbConfigs[ dat->fbCountId ] );
printf( OVI_STAT_GLX_FRAME_BUFF_VIS_ID, dat->visualinfo->visualid );
puts( OVI_STAT_X_COLORMAP_CREATE );
dat->setwinatt->colormap = XCreateColormap(
dat->display,
RootWindow( dat->display, dat->visualinfo->screen ),
dat->visualinfo->visual, AllocNone );
I've checked my GLX version, which is reported as 1.4, so that can't be the problem. On top of that, in my debugger I can see that dat->visualinfo->visual->ext_data holds the value 0x0, so I wouldn't be surprised if that has something to do with it. The problem is that I don't know how (if at all) it is related, and I wouldn't know what function to call to get it properly initialized, as it appears to behave like a C-style linked list.
Can someone shed some light on this? While a Google search turned up other people experiencing segfaults from this function call, none of them had a cause even remotely similar to mine.
If it means anything, I'm running GLX 1.4, and OpenGL 4.2
The segmentation fault was due to the fact that I had XSetWindowAttributes held as a pointer rather than as an actual struct. The issue was resolved when I chose to allocate it on the stack instead.
Consider this issue resolved.
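For anyone hitting the same crash, here is a rough sketch of the change (swa is a made-up local name; in the original code the struct presumably lives inside OVI_UnixDisplayData):
/* Before (crashes): setwinatt was only a pointer that never pointed at real
   storage, so this write goes through an invalid pointer:
       dat->setwinatt->colormap = XCreateColormap( ... );
   After (works): keep the struct itself, for example on the stack. */
XSetWindowAttributes swa = {0};
swa.colormap = XCreateColormap(dat->display,
                               RootWindow(dat->display, dat->visualinfo->screen),
                               dat->visualinfo->visual,
                               AllocNone);
swa.event_mask = ExposureMask | KeyPressMask;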
