cvCreateFileCapture strange error - c

I'm trying to create a simple OpenCV program in C that creates a file capture from an .avi and plays it in a window, highlighting faces. I'm running a self-compiled version of OpenCV (I already tried the same with a JPEG image and it works).
Building goes well, no errors, no warnings, but when I launch it the console outputs this:
Unknown parameter encountered: "server role"
Ignoring unknown parameter "server role"
And the program simply stops.
Previously it was complaining about a missing /home/#user/.smb/smb.conf file, so I tried installing Samba (even though I still have no idea what Samba has to do with any of this).
Here is my code:
main(){
    printf("Ciao!");
    cvNamedWindow("window", CV_WINDOW_AUTOSIZE);
    cvWaitKey(0);
    printf("ok");

    CvCapture* capture = cvCreateFileCapture("monsters.avi");
    CvHaarClassifierCascade* cascade = load_object_detector("haarcascade_frontalface_alt.xml");
    CvMemStorage* storage = cvCreateMemStorage(0);

    //List of the faces
    CvSeq* faces;

    while (0<10) {
        CvArr* image = cvQueryFrame(capture);
        double scale = 1;
        faces = cvHaarDetectObjects(image, cascade, storage, 1.2, 2, CV_HAAR_DO_CANNY_PRUNING,
                                    cvSize(1,1), cvSize(300,300));
        int i;
        for(i = 0; i < faces->total; i++ )
        {
            CvRect face_rect = *(CvRect*)cvGetSeqElem( faces, i );
            cvRectangle( image,
                         cvPoint(face_rect.x*scale, face_rect.y*scale),
                         cvPoint((face_rect.x+face_rect.width)*scale, (face_rect.y+face_rect.height)*scale),
                         CV_RGB(255,0,0), 3, 8, 0);
        }
        cvReleaseMemStorage( &storage );
        cvShowImage("window", image);
    }
    cvWaitKey(0);
    printf("Ciao!");
}
Thank you for your answer; I switched to C++ for my tests. Now I have this:
int main(){
    namedWindow("Video", CV_WINDOW_FREERATIO);
    VideoCapture cap("sintel.mp4");
    if(!cap.isOpened()) // check if we succeeded
        return -1;

    Mat edges;
    for(;;){
        Mat frame;
        cap >> frame;
        cvtColor(frame, edges, CV_BGR2GRAY);
        GaussianBlur(edges, edges, Size(7,7), 1.5, 1.5);
        Canny(edges, edges, 0, 30, 3);
        imshow("Video", edges);
        //cvWaitKey(0);
    }
    return(0);
}
Now it successfully loads the video and queries a frame; every time I press a key it queries another frame and everything works fine. But if I comment out the waitKey(), the program simply hangs for a bit and crashes if I try to close the window. I'm starting to think there is a problem with codecs or something like that...

There are so many potential problems in the code, most of them related to not coding defensively.
What is cvWaitKey(0); doing after cvNamedWindow()? It's unnecessary, remove it!
What happens if the capture was unsuccessful? Code defensively:
CvCapture* capture = cvCreateFileCapture("monsters.avi");
if (!capture)
{
    // File not found, handle error and possibly quit the application
}
and you should use this technique for every pointer that you receive from OpenCV, ok?
One of the major problems is that you allocate memory for CvMemStorage before the loop, but inside the loop you release it, which means that after the first iteration there is no longer a valid CvMemStorage* storage, and that's a HUGE problem.
Either move the allocation to the beginning of the loop, so memory is allocated/deallocated on every iteration, or move the cvReleaseMemStorage( &storage ); call out of the loop.
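For reference, here is a rough sketch of the capture loop with those fixes applied. It keeps the file names from the question, uses cvLoad instead of the load_object_detector helper (which isn't part of OpenCV itself), and the include paths may differ depending on how OpenCV was installed:

#include <stdio.h>
#include <opencv/cv.h>
#include <opencv/highgui.h>

int main(void)
{
    cvNamedWindow("window", CV_WINDOW_AUTOSIZE);

    CvCapture* capture = cvCreateFileCapture("monsters.avi");
    CvHaarClassifierCascade* cascade =
        (CvHaarClassifierCascade*)cvLoad("haarcascade_frontalface_alt.xml", 0, 0, 0);
    if (!capture || !cascade)
        return -1;                                 /* video or cascade not found */

    CvMemStorage* storage = cvCreateMemStorage(0);

    while (1) {
        IplImage* frame = cvQueryFrame(capture);   /* owned by the capture, don't release it */
        if (!frame)
            break;                                 /* end of the video */

        cvClearMemStorage(storage);                /* reuse the storage instead of releasing it */
        CvSeq* faces = cvHaarDetectObjects(frame, cascade, storage, 1.2, 2,
                                           CV_HAAR_DO_CANNY_PRUNING,
                                           cvSize(1,1), cvSize(300,300));
        int i;
        for (i = 0; i < (faces ? faces->total : 0); i++) {
            CvRect r = *(CvRect*)cvGetSeqElem(faces, i);
            cvRectangle(frame, cvPoint(r.x, r.y),
                        cvPoint(r.x + r.width, r.y + r.height),
                        CV_RGB(255,0,0), 3, 8, 0);
        }

        cvShowImage("window", frame);
        if (cvWaitKey(30) >= 0)                    /* also lets highgui process window events */
            break;
    }

    cvReleaseMemStorage(&storage);
    cvReleaseCapture(&capture);
    cvDestroyWindow("window");
    return 0;
}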

Now it works fine. I replaced cvWaitKey() with this:
if(waitKey(30) >= 0) break;
I don't understand exactly why, but now everything works as it should :)


CSFML Vertex Array and drawing

I've been working on a project for my school for a few weeks now, and I need to work on particles. I've been looking at vertices and they seem like a good way to make them.
I started by trying to draw at least one vertex on the screen, but I don't know what I'm doing wrong.
CSFML is a fairly restricted library, as not many people use it, so finding SFML examples and figuring out the C equivalents of the functions is quite hard and is giving me some trouble.
Here's my code:
{
    sfVertex a;
    sfVector2f apos = {200, 100};
    a.color = sfRed;
    a.position = apos;

    sfVertexArray *array = sfVertexArray_create();
    sfVertexArray_setPrimitiveType(array, sfPoints);
    sfVertexArray_append(array, a);
    sfRenderWindow_drawVertexArray(window, array, 0);
}
In this example, I'm trying to create a vertex, give it a position and a color, then create a vertex array that takes point vertices and append my vertex to it. I think the only problem is actually drawing it on the screen, since sfRenderWindow_drawVertexArray(window, array, 0); doesn't draw anything, and if I set the render state to 1 my program just crashes before even opening my window.
I tried to find examples and explanations about this function, but I'm pretty much lost now.
I think your error was that you did not set sfPoints in your code. Here is a simple piece of code that draws 4 points.
#include <iostream>
#include <SFML/Graphics.hpp>

int main(){
    sf::RenderWindow window(sf::VideoMode(200, 200), "SFML works!");

    while (window.isOpen()){
        sf::Event event;
        while (window.pollEvent(event)){
            if (event.type == sf::Event::Closed)
                window.close();
        }

        sf::VertexArray vertexArray(sf::Points, 4);
        vertexArray[0].position = sf::Vector2f(10, 10);
        vertexArray[1].position = sf::Vector2f(10, 20);
        vertexArray[2].position = sf::Vector2f(20, 10);
        vertexArray[3].position = sf::Vector2f(20, 20);

        // Set colour for all vertices
        for(int i = 0; i < 4; i++){
            vertexArray[i].color = sf::Color::Yellow;
        }

        window.clear();
        window.draw(vertexArray);
        window.display();
    }
    return 0;
}
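Since the question is about CSFML rather than SFML, here is a rough C translation of the same idea (a sketch, assuming the CSFML 2.x API; also note that the third argument of sfRenderWindow_drawVertexArray is a const sfRenderStates* pointer, so passing 1 means passing an invalid pointer, which would explain the crash; pass NULL to use the default states):

#include <SFML/Graphics.h>

int main(void)
{
    sfVideoMode mode = {200, 200, 32};
    sfRenderWindow *window = sfRenderWindow_create(mode, "CSFML works!", sfClose, NULL);

    sfVertexArray *array = sfVertexArray_create();
    sfVertexArray_setPrimitiveType(array, sfPoints);

    /* Append four point vertices */
    sfVector2f positions[4] = {{10, 10}, {10, 20}, {20, 10}, {20, 20}};
    int i;
    for (i = 0; i < 4; i++) {
        sfVertex v;
        v.position = positions[i];
        v.color = sfYellow;
        v.texCoords.x = 0;
        v.texCoords.y = 0;
        sfVertexArray_append(array, v);
    }

    while (sfRenderWindow_isOpen(window)) {
        sfEvent event;
        while (sfRenderWindow_pollEvent(window, &event)) {
            if (event.type == sfEvtClosed)
                sfRenderWindow_close(window);
        }

        sfRenderWindow_clear(window, sfBlack);
        sfRenderWindow_drawVertexArray(window, array, NULL); /* NULL = default render states */
        sfRenderWindow_display(window);
    }

    sfVertexArray_destroy(array);
    sfRenderWindow_destroy(window);
    return 0;
}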

SDL2 Texture sometimes empty after loading multiple 8 bit surfaces

I'll try to make this question as concise as possible, but don't hesitate to ask for clarification.
I'm dealing with legacy code, and I'm trying to load thousands of 8-bit images from disk to create a texture for each.
I've tried multiple things, and I'm at the point where I'm trying to load my 8-bit images into a 32-bit surface and then create a texture from that surface.
The problem: while loading an 8-bit image into a 32-bit surface works, when I then call SDL_CreateTextureFromSurface, I end up with a lot of textures that are completely blank (full of transparent pixels, 0x00000000).
Not all textures are wrong, though. Each time I run the program, I get different "bad" textures. Sometimes there are more, sometimes fewer. And when I trace through the program, I always end up with a correct texture (is that a timing problem?).
I know that the loading into the SDL_Surface is working, because I'm saving all the surfaces to disk and they're all correct. But I inspected the textures using NVIDIA Nsight Graphics, and more than half of them are blank.
Here's the offending code:
int __cdecl IMG_SavePNG(SDL_Surface*, const char*);

SDL_Texture* Resource8bitToTexture32(SDL_Renderer* renderer, SDL_Color* palette, int paletteSize, void* dataAddress, int Width, int Height)
{
    u32 uiCurrentOffset;
    u32 uiSourceLinearSize = (Width * Height);
    u8* pSourceData = (u8*)dataAddress;  /* 8-bit palette indices (u8/u32 being the project's typedefs) */
    SDL_Color *currentColor;
    char strSurfacePath[500];

    // The texture we're creating
    SDL_Texture* newTexture = NULL;

    // Load image at specified address
    SDL_Surface* tempSurface = SDL_CreateRGBSurface(0x00, Width, Height, 32, 0x00FF0000, 0x0000FF00, 0x000000FF, 0xFF000000);
    SDL_SetSurfaceBlendMode(tempSurface, SDL_BLENDMODE_NONE);

    if(SDL_MUSTLOCK(tempSurface))
        SDL_LockSurface(tempSurface);

    for(uiCurrentOffset = 0; uiCurrentOffset < uiSourceLinearSize; uiCurrentOffset++)
    {
        currentColor = &palette[pSourceData[uiCurrentOffset]];
        if(pSourceData[uiCurrentOffset] != PC_COLOR_TRANSPARENT)
        {
            ((u32*)tempSurface->pixels)[uiCurrentOffset] = (u32)((currentColor->a << 24) + (currentColor->r << 16) + (currentColor->g << 8) + (currentColor->b << 0));
        }
    }

    if(SDL_MUSTLOCK(tempSurface))
        SDL_UnlockSurface(tempSurface);

    // Create texture from surface pixels
    newTexture = SDL_CreateTextureFromSurface(renderer, tempSurface);

    // Save the surface to disk for verification only
    sprintf(strSurfacePath, "c:\\tmp\\surfaces\\%s.png", GenerateUniqueName());
    IMG_SavePNG(tempSurface, strSurfacePath);

    // Get rid of old loaded surface
    SDL_FreeSurface(tempSurface);

    return newTexture;
}
Note that in the original code, I'm checking boundaries and checking for NULL after each SDL_Create* call. I'm also aware that it would be better to have a spritesheet for the textures instead of loading each texture individually.
EDIT:
Here's a sample of what I'm observing in NSight if I capture a frame and use the Resources View.
The first 3186 textures are correct. Then I get 43 empty textures. Then I get 228 correct textures. Then 100 bad ones. Then 539 correct ones. Then 665 bad ones. It goes on randomly like that, and it changes each time I run my program.
Again, the surfaces saved by IMG_SavePNG are correct every time. This seems to indicate that something happens when I call SDL_CreateTextureFromSurface, but at this point I don't want to rule anything out, because it's a very weird problem and it smells of undefined behaviour all over the place. I just can't find the problem.
With the help of #mark-benningfield, I was able to find the problem.
TL;DR
There's a bug (or at least an undocumented feature) in SDL with the DX11 renderer. There's a workaround; see the end of this answer.
CONTEXT
I'm trying to load around 12,000 textures when my program starts. I know it's not a good idea, but I was planning on using that as a stepping stone to another, more sane system.
DETAILS
What I realized while debugging this problem is that the SDL renderer for DirectX 11 does this when it creates a texture:
result = ID3D11Device_CreateTexture2D(rendererData->d3dDevice,
                                      &textureDesc,
                                      NULL,
                                      &textureData->mainTexture
                                      );
Microsoft's ID3D11Device::CreateTexture2D documentation page indicates that:
If you don't pass anything to pInitialData, the initial content of the memory for the resource is undefined. In this case, you need to write the resource content some other way before the resource is read.
If we're to believe that article:
Default Usage
The most common type of usage is default usage. To fill a default texture (one created with D3D11_USAGE_DEFAULT) you can:
[...]
After calling ID3D11Device::CreateTexture2D, use ID3D11DeviceContext::UpdateSubresource to fill the default texture with data from a pointer provided by the application.
So it looks like D3D11_CreateTexture is using the second method of the default usage to initialize a texture and its content.
But right after that, SDL calls SDL_UpdateTexture (without checking the return value; I'll get to that later). If we dig down to the D3D11 renderer, we get this:
static int
D3D11_UpdateTextureInternal(D3D11_RenderData *rendererData, ID3D11Texture2D *texture, int bpp, int x, int y, int w, int h, const void *pixels, int pitch)
{
    ID3D11Texture2D *stagingTexture;
    [...]
    /* Create a 'staging' texture, which will be used to write to a portion of the main texture. */
    ID3D11Texture2D_GetDesc(texture, &stagingTextureDesc);
    [...]
    result = ID3D11Device_CreateTexture2D(rendererData->d3dDevice, &stagingTextureDesc, NULL, &stagingTexture);
    [...]
    /* Get a write-only pointer to data in the staging texture: */
    result = ID3D11DeviceContext_Map(rendererData->d3dContext, (ID3D11Resource *)stagingTexture, 0, D3D11_MAP_WRITE, 0, &textureMemory);
    [...]
    /* Commit the pixel buffer's changes back to the staging texture: */
    ID3D11DeviceContext_Unmap(rendererData->d3dContext, (ID3D11Resource *)stagingTexture, 0);

    /* Copy the staging texture's contents back to the texture: */
    ID3D11DeviceContext_CopySubresourceRegion(rendererData->d3dContext, (ID3D11Resource *)texture, 0, x, y, 0, (ID3D11Resource *)stagingTexture, 0, NULL);

    SAFE_RELEASE(stagingTexture);

    return 0;
}
Note: code snipped for conciseness.
This seems to indicate, based on the article I mentioned, that SDL is using the second method of the Default Usage to allocate the texture memory on the GPU, but uses the Staging Usage to upload the actual pixels.
I don't know that much about DX11 programming, but that mixing of techniques got my programmer's sense tingling.
I contacted a game programmer I know and explained the problem to him. He told me the following interesting bits:
The driver gets to decide where it stores staging textures. They usually sit in CPU RAM.
It's much better to specify a pInitialData pointer, as the driver can decide to upload the textures asynchronously.
If you load too many staging textures without committing them to the GPU, you can fill up the RAM.
I then wondered why SDL didn't return an "out of memory" error when I called SDL_CreateTextureFromSurface, and I found out why (again, snipped for concision):
SDL_Texture *
SDL_CreateTextureFromSurface(SDL_Renderer * renderer, SDL_Surface * surface)
{
    [...]
    SDL_Texture *texture;
    [...]
    texture = SDL_CreateTexture(renderer, format, SDL_TEXTUREACCESS_STATIC,
                                surface->w, surface->h);
    if (!texture) {
        return NULL;
    }
    [...]
    if (SDL_MUSTLOCK(surface)) {
        SDL_LockSurface(surface);
        SDL_UpdateTexture(texture, NULL, surface->pixels, surface->pitch);
        SDL_UnlockSurface(surface);
    } else {
        SDL_UpdateTexture(texture, NULL, surface->pixels, surface->pitch);
    }
    [...]
    return texture;
}
If the creation of the texture is successful, it doesn't care whether or not it succeeded in updating the texture (there is no check on SDL_UpdateTexture's return value).
WORKAROUND
The poor man's workaround to this problem is to call SDL_RenderPresent each time you call SDL_CreateTextureFromSurface.
It's probably fine to do it once every hundred textures, depending on your texture size. But just be aware that calling SDL_CreateTextureFromSurface repeatedly without updating the renderer will actually fill up the system RAM, and SDL won't return any error condition you can check for.
The irony of this is that had I implemented a "correct" loading loop with a percentage-of-completion display on screen, I would never have had this problem. But fate had me implement it the quick-and-dirty way, as a proof of concept for a bigger system, and I got sucked into this problem.
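For what it's worth, a minimal sketch of that workaround in a loading loop (the Resource record, the array names and the batch size of 100 are made up for illustration; only SDL_RenderPresent and the Resource8bitToTexture32 function above are real):

/* Hypothetical per-image record, for illustration only. */
typedef struct {
    SDL_Color palette[256];
    void     *pixels;          /* 8-bit palette indices */
    int       width, height;
} Resource;

static void LoadAllTextures(SDL_Renderer *renderer, Resource *resources,
                            SDL_Texture **textures, int count)
{
    int i;
    for (i = 0; i < count; i++)
    {
        textures[i] = Resource8bitToTexture32(renderer, resources[i].palette, 256,
                                              resources[i].pixels,
                                              resources[i].width, resources[i].height);

        /* Flush every hundred textures or so, so the staging textures get
           committed to the GPU instead of piling up in system RAM. */
        if ((i % 100) == 99)
            SDL_RenderPresent(renderer);
    }
}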

cvWatershed unsupported format or combination of formats

I'm working with OpenCV 2.4.11 in C on Code::Blocks, in particular working through the O'Reilly book Learning OpenCV. The section on the watershed algorithm was a bit short, so I thought I'd play with it a bit to see exactly how it works. However, every time I call the function I get the following error:
OpenCV Error: Unsupported format or combination of formats (Only 32-bit, 1-chann
el output images are supported) in cvWatershed
My program so far is very simple:
int main(int arg, int arg2) {
    //open windows
    cvNamedWindow("Input", 1 );
    cvNamedWindow("Markings", 1 );

    //load images
    IplImage* input = cvLoadImage("ActualDoorPhoto.jpg", CV_LOAD_IMAGE_COLOR);
    assert(input != NULL);
    IplImage* markingstemp = cvLoadImage("ActualMarkingTest.jpg", CV_LOAD_IMAGE_COLOR);
    assert(markingstemp != NULL);

    //prepare markings
    IplImage* markings = cvCreateImage(cvGetSize(markingstemp), 32, 1);
    CvMat* markmat = cvCreateMat(input->width, input->height, CV_32FC1);

    cvWatershed(input, markmat);

    cvShowImage("Input", input);
    cvShowImage("Markings", markings);
    cvWaitKey(0);
    return 0;
}
I have tried passing both markings and markmat as the second argument to cvWatershed, as well as several other things (notably markings with the contours of markingstemp drawn onto it), but every time I get the same error. Can anyone tell me what I'm doing wrong?
You're inverting the dimensions of the output matrix. It should be:
CvMat* markmat = cvCreateMat(input->height, input->width, CV_32FC1);
The format should also probably be changed to CV_32SC1.
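A sketch of the corrected setup (keep in mind that cvWatershed also expects the marker matrix to already contain labelled seed regions, so you still have to draw your markers into it before the call; that drawing step is only hinted at here):

// Markers must be 32-bit signed, single channel, and sized rows x cols (height x width)
CvMat* markmat = cvCreateMat(input->height, input->width, CV_32SC1);
cvZero(markmat);

// Draw the seed regions into markmat here (e.g. with cvDrawContours or cvCircle),
// giving each region a distinct positive integer label...

cvWatershed(input, markmat);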

SDL window close because of SDL_Flip with an image surface array

I am at the beginning of a brick-breaker type game and I'm stuck at the SDL_Flip step. My Code::Blocks compiler says nothing and the console doesn't crash, but the SDL window shuts down and the console reports process returned code 3. When I ran the debugger it said:
SDL_Flip()
Display(Bricks=0x28f69c, screen=0x0)
and the error in Display was reported at the line of my SDL_Flip(screen);
Here's a glimpse of my code. My Brick_Coordinates and Brick_Surface struct members are already initialized (the coordinates for Brick_Coordinates and NULL for Brick_Surface) by another function before this one:
void Display(BrickStruct Bricks[12][10], SDL_Surface *screen)
{
    int i=0, j=0;
    for(j=0; j<10; j++)
    {
        if( (j+1)%2==0 ) // If we are on even lines, display only 11 bricks
        {
            for(i=0; i<11; i++)
            {
                Bricks[i][j].Brick_Surface = IMG_Load("BrickTest1.png");
                SDL_BlitSurface(Bricks[i][j].Brick_Surface, NULL, screen, &Bricks[i][j].Brick_Coordinates);
                SDL_Flip(screen);
            }
        }
        else // If we are on odd lines, display the 12 bricks
        {
            for(i=0; i<12; i++)
            {
            }
        }
    }
}
My Structure looks like this:
typedef struct BrickStruct
{
    int type;
    SDL_Rect Brick_Coordinates;
    SDL_Surface *Brick_Surface;
} BrickStruct;
In my main, my code is like this:
SDL_Surface *screen = NULL;
BrickStruct Bricks[12][10]; // I create my 2D array of struct named Bricks

Display(Bricks, screen);
I've already checked the values of my initialized coordinates with an fprintf. They are good. And apparently my SDL_BlitSurface is working, but the SDL_Flip isn't. My screen surface is big enough for all my images (480x540, and my images are 40x20). I was wondering if the problem has to do with Blit being unable to place an image on top of another, but SDL_Flip doesn't even work when I try with only one image (without my loops).
Can somebody please be so kind as to point out where my problem is?
Thanks in advance
The reason is that you didn't save screen into the global variable.
You probably have a line in your SDL initialisation similar to this:
SDL_Surface *screen = SDL_SetVideoMode(640, 480, 8, SDL_SWSURFACE);
This creates a new local variable called screen. Since you wanted to save it into the global one, you should change it to:
screen = SDL_SetVideoMode(640, 480, 8, SDL_SWSURFACE);
According to your debugger and your example code, your screen structure is NULL, so your call to SDL_BlitSurface will fail. The reason it probably works when you do your Display call inside your Initialize function is that you've just initialized the screen and use it right after.
You need to store the surface you are writing to and use it again when you're blitting.
Also, as others have recommended, you should take a look at an SDL tutorial, and perhaps some more C tutorials to reinforce a few concepts.
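A minimal sketch of that idea, building on the code from the question (the Initialize function and the video-mode parameters are placeholders for whatever your own setup code does):

SDL_Surface *screen = NULL;      /* surface shared by initialization and Display */

SDL_Surface* Initialize(void)
{
    SDL_Init(SDL_INIT_VIDEO);
    /* Return the surface instead of hiding it in a local variable */
    return SDL_SetVideoMode(480, 540, 32, SDL_SWSURFACE);
}

int main(int argc, char *argv[])
{
    BrickStruct Bricks[12][10];

    screen = Initialize();
    if (screen == NULL)
        return 1;                /* SDL_SetVideoMode failed */

    /* ... initialize the Bricks coordinates here ... */

    Display(Bricks, screen);     /* Display now receives a valid surface */

    SDL_Delay(2000);
    SDL_Quit();
    return 0;
}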

mysterious crash after load_bitmap from Allegro

I am new to Allegro. We have to use it in our studies.
I have a problem with my code, which should load a bitmap and draw it.
#include <allegro.h>

int main( void )
{
    allegro_init();
    install_keyboard();
    set_color_depth(16);
    set_gfx_mode( GFX_AUTODETECT, 640, 480, 0, 0);

    BITMAP *Bild;
    if( (Bild = load_bitmap("Spielfeld_Rand.bmp", NULL)) == NULL )
    {
        allegro_message( "Error" );
        return 1;
    }

    while( !key[KEY_ESC])
    {
        draw_sprite(screen, Bild, 0, 0);
    }

    destroy_bitmap(Bild);
    return 0;
}
END_OF_MAIN()
The code crashes. I do not see any error message, my screen turns black, and I can't do anything. I also tried entering the full path of the picture, but it doesn't help.
But if I remove the if around the load_bitmap, the program aborts and returns to the screen.
Can anyone help me with this mysterious crash?
Thanks a lot.
set_gfx_mode will change your screen resolution to 640x480 and show a black screen.
The manual says not to use allegro_message in graphics mode. It is probably being called and is locking up the program.
In text mode, allegro_message will put up a dialog box with "Error" in it. The program then won't exit until OK is selected.
You should also call allegro_exit before exiting, or your screen will be left at 640x480 resolution.
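Putting those points together, here is a sketch of the same program with the error path dropping back to text mode before showing the message, and allegro_exit called on the way out (the bitmap name is the one from the question):

#include <allegro.h>

int main(void)
{
    allegro_init();
    install_keyboard();
    set_color_depth(16);
    set_gfx_mode(GFX_AUTODETECT, 640, 480, 0, 0);

    BITMAP *Bild = load_bitmap("Spielfeld_Rand.bmp", NULL);
    if (Bild == NULL)
    {
        /* Drop back to text mode before putting up the message box */
        set_gfx_mode(GFX_TEXT, 0, 0, 0, 0);
        allegro_message("Error loading Spielfeld_Rand.bmp");
        allegro_exit();
        return 1;
    }

    while (!key[KEY_ESC])
    {
        draw_sprite(screen, Bild, 0, 0);
        rest(10);   /* don't hog the CPU while waiting for ESC */
    }

    destroy_bitmap(Bild);
    allegro_exit(); /* restores the original screen mode */
    return 0;
}
END_OF_MAIN()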
