Very Slow Processing of My OpenCV Application - C

I am building an OpenCV application that captures video from a camera, removes the background, and overlays the result on another video.
I cannot achieve a reasonable speed: the output plays at about 1 fps, even though my background removal alone runs at 3 fps.
Is there a way to display the background video at its normal speed and overlay the processed video at 3 fps?
I tried commenting out parts of my code and realized that the problem lies mainly in the rendering itself. When I displayed the video alongside my webcam feed, I noticed a clear drop in both the video's frame rate and the actual capture rate when rendered with OpenCV.
Here is the sample code:
void main()
{
    CvCapture* capture, *Vcap;
    capture = cvCaptureFromCAM(0);
    if(!capture)
    {
        printf("Video Load Error");
    }
    Vcap = cvCaptureFromAVI("bgDemo.mp4");
    //printf("\nEntered BGR");
    if(!Vcap)
    {
        printf("Video Load Error");
    }
    while(1)
    {
        IplImage* src = cvQueryFrame(Vcap);
        if(!src)
        {
            Vcap = cvCaptureFromAVI("bgDemo.mp4");
            continue;
        }
        IplImage* bck1 = cvCreateImage(cvGetSize(src),8,3);
        cvResize(src,bck1,CV_INTER_LINEAR);
        cvShowImage("BCK",bck1);
        cvWaitKey(1);
    }
}

The main problem is that you are allocating a new image at every iteration of the loop without releasing it at the end of the loop. In other words, you have a beautiful memory leak.
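The quick fix, if you kept the per-iteration allocation, would be to release the image at the bottom of the loop (a sketch against the code above):

    // inside the while loop, after cvWaitKey(1):
    cvReleaseImage(&bck1); // frees the image allocated this iteration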
A better approach is to simply grab a frame of the video before the loop starts. This will let you create bck1 with the right size just once.
There are other problems with your code as well, so I'm sharing a fixed version below; pay attention to every line to see what changed. I haven't had time to test it, but I'm sure you'll figure it out:
int main()
{
    // I know what you are doing, just one capture interface is enough
    CvCapture* capture = NULL;
    capture = cvCaptureFromCAM(0);
    if(!capture)
    {
        printf("Ooops! Camera Error");
    }

    capture = cvCaptureFromAVI("bgDemo.mp4");
    if(!capture)
    {
        printf("Ooops! Video Error");
        // If it failed here, it means both methods for loading a video stream failed.
        // It makes no sense to let the application continue, so we return.
        return -1;
    }

    // Retrieve a single frame from the camera
    IplImage* src = cvQueryFrame(capture);
    if(!src)
    {
        printf("Ooops! #1 cvQueryFrame Error");
        return -1;
    }

    // Now we can create our backup image with the right dimensions.
    IplImage* bck1 = cvCreateImage(cvGetSize(src), src->depth, src->nChannels);
    if(!bck1)
    {
        printf("Ooops! cvCreateImage Error");
        return -1;
    }

    while(1)
    {
        src = cvQueryFrame(capture);
        if(!src)
        {
            printf("Ooops! #2 cvQueryFrame Error");
            break;
        }
        cvResize(src, bck1, CV_INTER_LINEAR);
        cvShowImage("BCK", bck1);
        cvWaitKey(10);
    }

    cvReleaseImage( &bck1 ); // free manually allocated resource
    return 0;
}
These fixes should speed up your application considerably.
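If you want to verify the gain, it is worth timing the loop directly instead of eyeballing the playback. A minimal sketch using OpenCV's C-API tick counters (note that cvGetTickFrequency() in the C API returns ticks per microsecond, unlike the C++ cv::getTickFrequency()); untested, for illustration only:

    while(1)
    {
        double t0 = (double)cvGetTickCount();

        src = cvQueryFrame(capture);
        if(!src)
            break;
        cvResize(src, bck1, CV_INTER_LINEAR);
        cvShowImage("BCK", bck1);
        cvWaitKey(10);

        // ticks / (ticks-per-microsecond * 1000) = milliseconds
        double ms = ((double)cvGetTickCount() - t0) / (cvGetTickFrequency() * 1000.0);
        printf("frame time: %.1f ms (%.1f fps)\n", ms, 1000.0 / ms);
    }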

Related

Border/titlebar not properly displaying in SDL on OS X

I was just following lazyfoo's SDL tutorial and I ran the sample code as shown here:
#include <SDL2/SDL.h>
#include <stdio.h>

//Screen dimension constants
const int SCREEN_WIDTH = 640;
const int SCREEN_HEIGHT = 480;

int main( int argc, char* args[] )
{
    //The window we'll be rendering to
    SDL_Window* window = NULL;
    //The surface contained by the window
    SDL_Surface* screenSurface = NULL;
    //Initialize SDL
    if( SDL_Init( SDL_INIT_VIDEO ) < 0 )
    {
        printf( "Failed to initialise SDL! SDL_Error: %s\n", SDL_GetError() );
    }
    else
    {
        //Create window
        window = SDL_CreateWindow( "SDL Tutorial", SDL_WINDOWPOS_UNDEFINED, SDL_WINDOWPOS_UNDEFINED, SCREEN_WIDTH, SCREEN_HEIGHT, SDL_WINDOW_SHOWN );
        if( window == NULL )
        {
            printf( "Failed to create window! SDL_Error: %s\n", SDL_GetError() );
        }
        else
        {
            //Get window surface
            screenSurface = SDL_GetWindowSurface( window );
            //Fill the surface red (changed from the tutorial's white)
            SDL_FillRect( screenSurface, NULL, SDL_MapRGB( screenSurface->format, 255, 0, 0 ) );
            //Update the surface
            SDL_UpdateWindowSurface( window );
            //Wait two seconds
            SDL_Delay( 2000 );
        }
    }
    //Destroy window
    SDL_DestroyWindow( window );
    //Quit SDL subsystems
    SDL_Quit();
    return 0;
}
But for some reason no real border or title bar is shown; it just displays a white screen. I tried using SDL_SetWindowBordered but it did nothing. Next I set the background colour to red, and from that I can see there is a title bar, but it has no close or minimize button.
Does anyone know why this is happening? Is it just me, or is it a problem with Macs?
Since getting rid of SDL_Delay seemed to help, I will try to elaborate a little. If we look at the source of SDL_Delay, we can see that it basically does one of two things:
if nanosleep() can be utilized, it sleeps for the requested interval;
otherwise, it spins in a loop, checking at each iteration how much time has passed, and breaking out of the loop once enough time has elapsed.
Now, I must say that I have never personally coded for OS X, so I do not know exactly how it draws its windows. However, I can assume that for some reason SDL_Delay in your code gets called (and effectively blocks the thread it is called from) before the OS manages to draw the header of the window, and after the delay finishes you immediately destroy the window yourself, so the header is never properly drawn.
I know that the question is already answered, but for anyone who wants a simple solution.
Example:
SDL_Delay(4000);
would turn into
for(int i = 0; i < 4000; i++){
    SDL_PumpEvents();
    SDL_Delay(1);
}
Actually this has nothing to do with SDL_Delay() at all.
I tested it out, and it seems that on OSX the title-bar only updates each time the events are polled or pumped.
This means that SDL_Delay() blocks the title-bar rendering process if it prevents you from pumping events.
To fix this just call SDL_PumpEvents() every millisecond or so:
for(int i = 0; i < time_to_sleep; i++){
    SDL_PumpEvents();
    SDL_Delay(1);
}
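If you need this in more than one place, the pattern can be wrapped in a small helper; responsive_delay below is a hypothetical name of my own, while SDL_GetTicks, SDL_PumpEvents and SDL_Delay are standard SDL calls:

    // Sleep for roughly `ms` milliseconds while keeping the event queue
    // pumped so the OS can keep drawing the window decorations.
    // (responsive_delay is a hypothetical helper, not an SDL API.)
    static void responsive_delay(Uint32 ms)
    {
        Uint32 start = SDL_GetTicks();
        while (SDL_GetTicks() - start < ms)
        {
            SDL_PumpEvents();
            SDL_Delay(1);
        }
    }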

Why are glutPostRedisplay and sleep() not working in this code?

In this project I have tried to visualize data transfer between a USB device and the CPU. The transfer is shown as a small rectangle moving from one component of the computer to another.
In the code below, glutPostRedisplay does not work.
Also, can someone tell me whether my use of sleep() is correct? The functions called in display() do not run in sync: casing() is never executed, and after firstscreen() it jumps directly to opened(), while operate() does not work.
What is the error in this code?
void operate()
{
    URLTEXTX = 200;
    URLTEXTY = 950;
    displayString(READUSB,1);
    //southbridge to northbridge
    bottom(488.0,425.0,380.0);
    back(488.0,188.0,380.0);
    top(188.0,380.0,550.0);
    //northbridge to cpu
    front(230.0,350.0,595.0);
    top(345.0,600.0,650.0);
    //read from usb
    back(700.0,625.0,465.0);
    bottom(625.0,460.0,385.0);
    back(620.0,525.0,390.0);
    sleep(1);

    URLTEXTX = 200;
    URLTEXTY = 950;
    displayString(WRITEUSB,1);
    //cpu to northbridge
    bottom(350.0,650.0,595.0);
    back(350.0,230.0,600.0);
    //northbridge to southbridge
    bottom(188.0,550.0,380.0);
    front(188.0,488.0,380.0);
    top(483.0,380.0,425.0);
    //write to usb
    front(525.0,625.0,385.0);
    top(625.0,385.0,460.0);
    front(620.0,700.0,460.0);
    sleep(1);

    URLTEXTX = 200;
    URLTEXTY = 950;
    displayString(READDVD,1);
    //read from dvd
    back(600.0,560.0,810.0);
    bottom(570.0,810.0,600.0);
    back(560.0,525.0,610.0);
    //ram to northbridge
    back(450.0,230.0,580.0);
    //northbridge to cpu
    front(230.0,350.0,595.0);
    top(345.0,600.0,650.0);
    sleep(1);

    URLTEXTX = 200;
    URLTEXTY = 950;
    displayString(WRITEDVD,1);
    //cpu to northbridge
    bottom(350.0,650.0,595.0);
    back(350.0,230.0,600.0);
    //northbridge to ram
    front(230.0,450.0,580.0);
    //write to dvd
    front(525.0,570.0,600.0);
    top(570.0,600.0,800.0);
    front(560.0,600.0,800.0);
    sleep(1);

    URLTEXTX = 200;
    URLTEXTY = 950;
    displayString(READHD,1);
    //read from hard disc
    back(640.0,560.0,300.0);
    top(560.0,300.0,530.0);
    back(560.0,525.0,530.0);
    //ram to northbridge
    back(450.0,230.0,580.0);
    //northbridge to cpu
    front(230.0,350.0,595.0);
    top(345.0,600.0,650.0);
    sleep(1);

    URLTEXTX = 200;
    URLTEXTY = 950;
    displayString(WRITEHD,1);
    //cpu to northbridge
    bottom(350.0,650.0,595.0);
    back(350.0,230.0,600.0);
    //northbridge to ram
    front(230.0,450.0,580.0);
    //write to hard disc
    front(525.0,560.0,530.0);
    bottom(560.0,530.0,300.0);
    front(560.0,640.0,300.0);
    sleep(1);
}

void front(GLfloat x1,GLfloat x2,GLfloat y1)//to move in forward direction
{
    GLfloat i;
    for(i=x1;i<=x2;i++)
    {
        drawbit(i,x1+5,y1,y1-5);
        glutPostRedisplay();
    }
}

void back(GLfloat x1,GLfloat x2,GLfloat y1)//to move in backward direction
{
    GLfloat i;
    for(i=x1;i>=x2;i--)
    {
        drawbit(i,i-5,y1,y1-5);
        glutPostRedisplay();
    }
}

void top(GLfloat x1,GLfloat y1,GLfloat y2)//to move in upward direction
{
    GLfloat i;
    for(i=y1;i<=y2;i++)
    {
        drawbit(x1,x1+5,i,i+5);
        glutPostRedisplay();
    }
}

void bottom(GLfloat x1,GLfloat y1,GLfloat y2)//to move in downward direction
{
    GLfloat i;
    for(i=y1;i>=y2;i--)
    {
        drawbit(x1,x1-5,i,i-5);
        glutPostRedisplay();
    }
}

void drawbit(GLfloat x1,GLfloat x2,GLfloat y1,GLfloat y2)
{
    glBegin(GL_POLYGON);
    glColor3f(1.0,1.0,1.0);
    glVertex2f(x1,y1);
    glVertex2f(x2,y1);
    glVertex2f(x2,y2);
    glVertex2f(x1,y2);
    glEnd();
    glFlush();
}

void display()
{
    glClear(GL_COLOR_BUFFER_BIT);
    firstscreen(); //introduction to the project
    sleep(3);
    glClear(GL_COLOR_BUFFER_BIT);
    casing(); //cpu case
    sleep(2);
    glClear(GL_COLOR_BUFFER_BIT);
    opened(); //when cpu case is opened, shows internal components
    sleep(1);
    operate(); //data transfer between various components
}
The problem is similar to this: Pausing in OpenGL successively
glutPostRedisplay simply sets a flag in glut to call your display callback on the next loop. It doesn't actually draw anything.
The function I suspect you're after is glutSwapBuffers. Without double buffering, geometry is drawn directly to the screen (although "draw" commands to the GPU are buffered, which is why you'd want glFlush). This commonly causes flickering, because you see things that later get covered by closer geometry (thanks to the depth buffer). Double buffering solves this by rendering to an off-screen buffer and then displaying the result all at once. Make sure GLUT_DOUBLE is passed to glutInitDisplayMode so that you have a back buffer.
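For reference, a minimal double-buffered GLUT setup might look like the sketch below (window title and size are arbitrary; display and idle are the callbacks discussed here):

    int main(int argc, char** argv)
    {
        glutInit(&argc, argv);
        // GLUT_DOUBLE requests a back buffer; display() must then
        // end with glutSwapBuffers() instead of glFlush().
        glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);
        glutInitWindowSize(800, 600);
        glutCreateWindow("data transfer demo");
        glutDisplayFunc(display);
        glutIdleFunc(idle);   // polled between frames; see below
        glutMainLoop();
        return 0;
    }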
While you're sleep()ing, your application won't be able to capture and process events. Let's say you want to close the window: until sleep returns, the whole thing will be unresponsive. A sleep can still be important so you don't hog the CPU, but I'd separate these concepts:
Loop/poll with an idle function until your delay time has elapsed, then call glutPostRedisplay. Add glutSwapBuffers to display if you're double buffering.
Write a framerate limiter that calls sleep so you don't hog cycles; a sketch follows below.
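A possible limiter, assuming glutGet(GLUT_ELAPSED_TIME) (milliseconds since glutInit) and POSIX usleep; a sketch, not tested code:

    #include <unistd.h> // usleep

    // Cap the frame rate at roughly 60 fps by sleeping away the
    // remainder of each frame's time budget instead of spinning.
    void limitFramerate(void)
    {
        static int lastTime = 0;                     // ms since glutInit
        const int frameBudget = 1000 / 60;           // ~16 ms per frame
        int elapsed = glutGet(GLUT_ELAPSED_TIME) - lastTime;
        if (elapsed < frameBudget)
            usleep((frameBudget - elapsed) * 1000);  // usleep takes microseconds
        lastTime = glutGet(GLUT_ELAPSED_TIME);
    }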
A simple method to draw different things after set delays is to write a small state machine...
int state = STATE_INIT;
float timer = 0.0f;

void idle()
{
    //insert framerate limiter here

    //calculate time since last frame, perhaps using glutGet(GLUT_ELAPSED_TIME)
    float deltaTime = ...
    timer -= deltaTime;
    if (timer < 0.0f)
    {
        switch (state)
        {
        case STATE_INIT:
            state = STATE_DRAW_FIRST_THING;
            timer = 123.0f;
            ...
        }
        glutPostRedisplay();
    }
}

void display()
{
    ...
    if (state == STATE_DRAW_FIRST_THING)
    {
        ...
    }
    ...
    glutSwapBuffers();
}
As your app becomes bigger, I doubt this will remain maintainable and you'll want something more robust, but until then it's a good start.
Simply swapping a void (*currentView)(void); callback in idle would save some hard-coding in display. You might want to create an object-oriented state machine. Beyond boolean states, you might want to look into animation and keyframe interpolation. Rather than hard-coding everything, storing geometry, keyframes and state sequences in a file is a nice way to separate code and data; XML is very pleasant to work with for this, provided you use a library.
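To illustrate the callback idea (the scene-function names here are invented for the sketch): store the current scene's draw function in a pointer, let idle advance it, and display stays free of hard-coded branches:

    void drawFirstThing(void);  // scene functions you already have
    void drawSecondThing(void); // (hypothetical names)

    void (*currentView)(void) = drawFirstThing;

    void display()
    {
        glClear(GL_COLOR_BUFFER_BIT);
        currentView();          // whatever scene is current
        glutSwapBuffers();
    }

    void idle()
    {
        // ...timer logic as above; when the delay expires:
        currentView = drawSecondThing;
        glutPostRedisplay();
    }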

Can't set video mode for SDL screen on embedded device

I've been hacking away on an ARM-based device (Freescale i.MX27 ADS) with a built-in screen for the past few days. The device runs a modified, minimal GNU/Linux system with no window manager or graphical server. By default, the device is only supposed to run the one application that came with it.
I've never done any graphical programming before, so this is a learning experience for me. I tried writing a simple SDL program to run on the device, which would read a bitmap, and display the image on the embedded device's screen.
The problem I'm having is that no matter what resolution, depth, or flags I try, the video mode always fails to apply, and I get nothing.
I know my code isn't the problem, but I'm going to post it anyway.
#include "SDL/SDL.h"
#define SCREEN_WIDTH 640
#define SCREEN_HEIGHT 480
#define SCREEN_DEPTH 24
int main(int argc, char *argv[])
{
SDL_Surface *screen;
if(!SDL_Init(SDL_INIT_VIDEO))
{
printf("Unable to initialize SDL.\n");
return 1;
}
// It always fails right here
screen = SDL_SetVideoMode(SCREEN_WIDTH, SCREEN_HEIGHT, SCREEN_DEPTH, SDL_SWSURFACE);
if(screen == NULL)
{
printf("Unable to set video mode.\n");
return 1;
}
SDL_Surface* image;
SDL_Surface* temp;
temp = SDL_LoadBMP("hello.bmp");
if(temp == NULL)
{
printf("Unable to load bitmap.\n");
return 1;
}
image = SDL_DisplayFormat(temp);
SDL_FreeSurface(temp);
SDL_Rect src, dest;
src.x = 0;
src.y = 0;
src.w = image->w;
src.h = image->h;
dest.x = 100;
dest.y = 100;
dest.w = image->w;
dest.h = image->h;
SDL_BlitSurface(image, &src, screen, &dest);
printf("Program finished.\n\n");
return 0;
}
From what I can tell, the application that's supposed to run on this device uses Qtopia. Again, I'm new to graphics programming, so I have no idea how one should control graphical output in an embedded environment like this.
Any ideas?
It turns out my code was hiding the fact that the problem was with initializing SDL, not with setting the video mode: SDL_Init returns 0 on success and a negative value on error, so my !SDL_Init(...) check never reported the failure. SDL wasn't initializing because my embedded system has no X server and no mouse. After setting SDL_NOMOUSE, the problem was resolved.
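For anyone hitting the same wall, a sketch of the corrected initialization; SDL_NOMOUSE is the environment variable mentioned above (honored by SDL 1.2's framebuffer video drivers), and setting it from code before SDL_Init is one option:

    #include "SDL/SDL.h"
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char *argv[])
    {
        // Tell SDL 1.2 not to require a mouse device (useful on
        // framebuffer targets with no input hardware).
        putenv("SDL_NOMOUSE=1");

        // SDL_Init returns 0 on success and a negative value on error,
        // so test for failure with < 0, not with !SDL_Init(...).
        if (SDL_Init(SDL_INIT_VIDEO) < 0)
        {
            printf("Unable to initialize SDL: %s\n", SDL_GetError());
            return 1;
        }

        /* ... SDL_SetVideoMode and the rest as before ... */

        SDL_Quit();
        return 0;
    }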

OpenCV show both incoming video and modified video in separate windows

This should be easy. I have a video stream coming in from my webcam. I'm just playing with image transformation etc. I'd like to be able to view the original images (video input) in one window and the transformed video in another. Problem is, as soon as I start capturing video instead of just single images, the original video window displays transformed video. I don't understand why.
cvNamedWindow("in", CV_WINDOW_AUTOSIZE);
cvNamedWindow("out", CV_WINDOW_AUTOSIZE);
CvCapture *fc = cvCaptureFromCAM(0);
IplImage* frame = cvQueryFrame(fc);
if (!frame) {
return 0;
}
IplImage* greyscale = cvCreateImage(cvGetSize(frame), IPL_DEPTH_8U, 1);
IplImage* output = cvCreateImage(cvGetSize(frame),IPL_DEPTH_32F , 1);
while(1){
frame= cvQueryFrame(fc);
cvShowImage("in", frame);
// manually convert to greyscale
for (int y = 0; y < frame->height; y++) {
uchar* p = (uchar*) frame->imageData + y* frame->widthStep; // pointer to row
uchar* gp = (uchar*) greyscale->imageData + y*greyscale->widthStep;
for(int x = 0; x < frame->width; x++){
gp[x] = (p[3*x] + p[3*x+1] + p[3*x+2])/3; // average RGB values
}
}
cvShowImage("out", greyscale);
char c = cvWaitKey(33);
if (c == 27) {
return 0;
}
}
In this simple example, both video streams end up appearing greyscale... yet the pointer values and imageData for frame and greyscale are totally different. If I stop showing greyscale in the "out" window, then frame appears in color.
Also, if I continue and apply a Sobel operation on the greyscale image and display the result in "out", both "in" and "out" windows will show the Sobel image!
Any ideas?
Hmm, this was weird, but it seems CV_WINDOW_AUTOSIZE was the problem? Perhaps it's not supported in OpenCV 2.1 (which I'm pretty sure is what I'm running). Anyway, using 0 instead of CV_WINDOW_AUTOSIZE when creating the windows works fine.
I have tried your code with OpenCV 2.0 under Mandriva 2010, and it works fine with either CV_WINDOW_AUTOSIZE or 0.
You may try converting to grayscale with cvCvtColor(frame, grayscale, CV_RGB2GRAY) and see if the problem persists.
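For reference, the loop body with cvCvtColor in place of the manual conversion might look like this; note that OpenCV captures frames in BGR channel order, so CV_BGR2GRAY is arguably the more accurate constant (both flags exist):

    while (1) {
        frame = cvQueryFrame(fc);
        if (!frame)
            break;
        cvShowImage("in", frame);

        // one call replaces the per-pixel averaging loop;
        // CV_BGR2GRAY matches the BGR layout of captured frames
        cvCvtColor(frame, greyscale, CV_BGR2GRAY);
        cvShowImage("out", greyscale);

        char c = cvWaitKey(33);
        if (c == 27)
            break;
    }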

Are 2 simultaneous webcam windows possible with OpenCV?

I am applying common image transforms to my live webcam capture. I want to display the original webcam feed in one window and the transformed image in another. However, I am getting the same (filtered) image in both windows, and I am wondering whether I am limited by the OpenCV API or am missing something. My code snippet looks like this:
/* allocate resources */
cvNamedWindow("Original", CV_WINDOW_AUTOSIZE);
cvNamedWindow("Filtered", CV_WINDOW_AUTOSIZE);

CvCapture* capture = cvCaptureFromCAM(0);

do {
    IplImage* img = cvQueryFrame(capture);
    cvShowImage("Original", img);

    Filters* filters = new Filters(img);
    IplImage* dst = filters->doSobel();
    cvShowImage("Filtered", dst);

    cvWaitKey(10);
} while (1);

/* deallocate resources */
cvDestroyWindow("Original");
cvDestroyWindow("Filtered");
cvReleaseCapture(&capture);
It's possible! Try copying img to another IplImage before sending it to processing and see if that works first.
Yes, I know what you're going to say, but just try that first and see if it does what you want. The code below is just to illustrate what you should do; I don't know if it will work:
/* allocate resources */
cvNamedWindow("Original", CV_WINDOW_AUTOSIZE);
cvNamedWindow("Filtered", CV_WINDOW_AUTOSIZE);

CvCapture* capture = cvCaptureFromCAM(0);

do {
    IplImage* img = cvQueryFrame(capture);
    cvShowImage("Original", img);

    // cvCloneImage() allocates the copy itself, so there is no need
    // for a separate cvCreateImage() call (which would just leak).
    IplImage* img_cpy = cvCloneImage(img);

    Filters* filters = new Filters(img_cpy);
    IplImage* dst = filters->doSobel();
    cvShowImage("Filtered", dst);

    /* Be aware that if you release img_cpy here it might not display
     * the data on the window. On the other hand, not doing it now will
     * cause a memory leak.
     */
    //cvReleaseImage( &img_cpy );

    cvWaitKey(10);
} while (1);

/* deallocate resources */
cvDestroyWindow("Original");
cvDestroyWindow("Filtered");
cvReleaseCapture(&capture);
