"Cleaned up camera" - OpenCV on OS X (C)

Hi guys, I am running some sample programs on my MacBook using OpenCV, and this is my code:
#include "stdio.h"
#include "cv.h"
#include "highgui.h"
int main( int argc, char **argv )
{
CvCapture *capture = 0;
IplImage *frame = 0;
int key = 0;
/* initialize camera */
capture = cvCaptureFromCAM( 0 );
/* always check */
if ( !capture ) {
fprintf( stderr, "Cannot open initialize webcam!\n" );
return 1;
}
/* create a window for the video */
cvNamedWindow( "result", CV_WINDOW_AUTOSIZE );
while(1>0)
{
/* get a frame */
frame = cvQueryFrame( capture );
/* always check */
if(!frame ) break;
/* display current frame */
cvShowImage( "result", frame );
waitKey(10);
/* exit if user press 'Esc' */
key = cvWaitKey( 20 );
if((char)key==27 )
break;
}
/* free memory */
cvReleaseCapture( &capture );
cvDestroyWindow( "result" );
return 0;
}
The code was working fine on the MacBook Pro about a year ago (OS X Snow Leopard), but on the MacBook (Lion) I only get this in the console: "Cleaned up camera". No iSight, no image... nothing, only that message. Any advice?
PS: I changed the index in cvCaptureFromCAM to 300 (IEEE 1394 cameras) or 500 (QuickTime); then there is no message, but still no image.

Never mind guys, apparently it is an issue with the current OpenCV version (2.6.x). I uninstalled ffmpeg (brew uninstall ffmpeg) and OpenCV (brew uninstall opencv),
then I switched my OpenCV version: cd /usr/local/Library/Taps/homebrew-science, listed the available versions with brew versions opencv (iSight was working under 2.4.5), checked out the 2.4.5 formula with git checkout ae74fe9 opencv.rb, and finally installed OpenCV with brew install opencv. Done, iSight works great :).
PS: the iSight camera will work with cvCaptureFromCAM( 500 ), not 0, -1 or 300.
ps2: omg i am so happy :3
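For reference, a minimal sketch of opening the camera the same way through the C++ API; 500 corresponds to CV_CAP_QT in highgui, i.e. camera 0 via the QuickTime capture domain:

#include <cstdio>
#include <opencv2/highgui/highgui.hpp>

int main()
{
    // 500 = CV_CAP_QT: camera index 0 through the QuickTime capture domain
    cv::VideoCapture cap(500);
    if (!cap.isOpened()) {
        std::fprintf(stderr, "Still cannot open the iSight\n");
        return 1;
    }

    cv::Mat frame;
    std::printf("Got a frame: %s\n", cap.read(frame) ? "yes" : "no");
    return 0;
}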

Related

OpenCV fails to recognize webcam, but mplayer succeeds

As a first step in a larger project, I was trying to display the image from my webcam using OpenCV:
#include <stdlib.h>

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>

int main()
{
    cv::VideoCapture cap(-1);
    if (!cap.isOpened())
        exit(EXIT_FAILURE);

    cv::Mat frame;
    bool done = false;
    while (!done) {
        cap >> frame;
        cv::imshow("webcam", frame);
        done = (cv::waitKey(30) >= 0);
    }
    return EXIT_SUCCESS;
}
This returns an error code (!cap.isOpened() is true, confirmed with gdb). Initially I had 0 instead of -1; when searching this site, -1 was suggested, but to no avail. I also tried 1 through 3, as another user suggested.
I can display my webcam using mplayer, more specifically mplayer tv:// -tv driver=v4l2.
v4l2 is the "Video for Linux" driver. I noticed OpenCV can be built with support for it by compiling with -DWITH_V4L and -DWITH_LIBV4L (the v4l USE flag on Gentoo). After recompiling OpenCV with those flags, it successfully recognized the webcam. GTK support also seems to be needed to display the image.
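For anyone with the same symptom, a small probe against the same 2.4-era C++ API that loops over the first few device indices and reports which one opens; it will only succeed once OpenCV is actually built with a working capture backend (V4L/V4L2 here):

#include <cstdio>
#include <opencv2/highgui/highgui.hpp>

int main()
{
    // Try the "any backend" index (-1) and the first few explicit indices.
    for (int index = -1; index <= 3; ++index) {
        cv::VideoCapture cap(index);
        std::printf("index %2d: %s\n", index,
                    cap.isOpened() ? "opened" : "failed");
    }
    return 0;
}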

LabWindows fails to compile - says it is missing a DLL that is already in the project

I'm trying to use OpenCV with LabWindows 2012 SP1. I've got a simple project attempting to run a simple "Hello World" program in debug mode.
The code I'm trying to run is
#include <cv.h>
#include <highgui.h>

int main( void )
{
    // Create a window to show the image
    cvNamedWindow( "My Cool Window", CV_WINDOW_AUTOSIZE );
    IplImage *img = cvCreateImage( cvSize( 300, 100 ), IPL_DEPTH_8U, 3 );

    double hScale = 1.0;
    double vScale = 1.0;
    double shear = 0.0;
    int lineWidth = 2;

    // Initialize the font
    CvFont font;
    cvInitFont( &font, CV_FONT_HERSHEY_SCRIPT_COMPLEX, hScale, vScale, shear, lineWidth, 8 );

    // Write on the image ...
    CvScalar color = CV_RGB( 0, 51, 102 );
    cvPutText( img, "Hello World!", cvPoint( 60, 60 ), &font, color );

    // ... and show it to the world!
    cvShowImage( "My Cool Window", img );

    // Wait until the user wants to exit
    cvWaitKey( 0 );
    return 0;
}
and I have the following libraries added:
opencv_core247d.lib (32-bit)
opencv_highgui247d.lib (32-bit)
opencv_imgproc247d.lib (32-bit)
opencv_imgproc247d.dll
However, when I go to run the program in debug mode, I get an error telling me:
The program can't start because opencv_imgproc247d.dll is missing
from your computer. Try reinstalling the program to fix this problem.
I'm more than a little bit confused at this point, as I have the DLL in question added to the project.
Help?
You need to add the location of the OpenCV DLLs to the 'PATH' environment variable.
Please don't use the old C API (IplImage, cv* functions); it won't be supported in the near future. Use cv::Mat and the C++ API (namespace cv) instead.
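To illustrate the second point, a minimal sketch of the same "Hello World" written against the 2.4.x C++ API (this assumes a C++ compiler is available, which a plain LabWindows/CVI project may not offer):

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>

int main()
{
    // Draw the greeting on a black 3-channel image
    cv::Mat img(100, 300, CV_8UC3, cv::Scalar::all(0));
    cv::putText(img, "Hello World!", cv::Point(60, 60),
                cv::FONT_HERSHEY_SCRIPT_COMPLEX, 1.0,
                cv::Scalar(102, 51, 0), 2);   // BGR for RGB(0, 51, 102)

    // Show it and wait for a key press
    cv::imshow("My Cool Window", img);
    cv::waitKey(0);
    return 0;
}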

OpenCV - CvVideoWriter codec error in raspbian

I'm making something like a black box on a Raspberry Pi.
I set up OpenCV 2.4.3 and many video libraries.
(I referred to this site - Opencv cannot acces my webcam)
And I compiled this sample code.
#include <stdio.h>
#include "opencv/cv.h"
#include "opencv/highgui.h"
#include "opencv/cxcore.h"

int main(void){
    CvCapture* capture = cvCaptureFromCAM(0);
    cvNamedWindow("video", 1);

    double fps = cvGetCaptureProperty(capture, CV_CAP_PROP_FPS);
    CvSize frame_size = cvSize((int)cvGetCaptureProperty(capture, CV_CAP_PROP_FRAME_WIDTH),
                               (int)cvGetCaptureProperty(capture, CV_CAP_PROP_FRAME_HEIGHT));
    CvVideoWriter* writer = cvCreateVideoWriter("out.avi", -1, fps, frame_size, 1);

    IplImage* frame;
    while(1){
        frame = cvQueryFrame(capture);
        if(!frame){
            break;
        }
        cvWriteFrame(writer, frame);   /* record the frame */
        cvShowImage("video", frame);
        if(cvWaitKey(38) == 27){
            break;
        }
    }

    cvReleaseVideoWriter(&writer);
    cvReleaseCapture(&capture);
    cvDestroyWindow("video");
    return 0;
}
This code compiled successfully.
But when I run it, I get these errors:
OpenCV Error: Unsupported format or combination of formats (Gstreamer Opencv backend doesn't support this codec acutally.) in CvVideoWriter_GStreamer::open, file /home/pi/OpenCV-2.4.3/modules/highgui/src/cap_gstreamer.cpp, line 479
terminate called after throwing an instance of 'cv::Exception'
what(): /home/pi/OpenCV-2.4.3/modules/highgui/src/cap_gstreamer.cpp:479: error: (-210) Gstreamer Opencv backend doesn't support this codec acutally. in function CvVideoWriter_GStreamer::open
Aborted
So I changed the codec argument of cvCreateVideoWriter instead of passing -1.
I tried many codecs, like CV_FOURCC('M','J','P','G') and so on,
but I cannot fix the problem.
How can I solve this? Please help me.
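Two things are worth checking here before blaming the codec alone: V4L webcams often report 0 for CV_CAP_PROP_FPS, and a writer opened with fps = 0 fails no matter which FOURCC is requested; also, the codec-selection dialog behind -1 only exists on Windows builds. A sketch of those checks using the C++ API (the MJPG choice and the 25 fps fallback are assumptions, not a confirmed fix):

#include <cstdio>
#include <opencv2/highgui/highgui.hpp>

int main()
{
    cv::VideoCapture cap(0);
    if (!cap.isOpened())
        return 1;

    // Webcams frequently report 0 fps; guard against it before opening the writer.
    double fps = cap.get(CV_CAP_PROP_FPS);
    if (fps <= 0)
        fps = 25.0;   // assumed fallback

    cv::Size size((int)cap.get(CV_CAP_PROP_FRAME_WIDTH),
                  (int)cap.get(CV_CAP_PROP_FRAME_HEIGHT));

    // Explicit FOURCC instead of -1; check that the backend accepted it.
    cv::VideoWriter writer("out.avi", CV_FOURCC('M','J','P','G'), fps, size, true);
    if (!writer.isOpened()) {
        std::fprintf(stderr, "VideoWriter failed to open: codec not available in this build\n");
        return 1;
    }

    cv::Mat frame;
    while (cap.read(frame)) {
        writer.write(frame);
        cv::imshow("video", frame);
        if (cv::waitKey(38) == 27)
            break;
    }
    return 0;
}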

Read/write AVI video on Mac using OpenCV

I am trying to read an AVI video and write it out again as-is, without any change, using OpenCV 2.4.0 on Mac OS X 10.6.8.
My videos are grayscale, with frame rate = 25 and codec = 827737670, which is FFV1 (I guess).
The problem is:
when I read and write the video as-is, I see many changes in size and in color.
After 3 or 4 rounds of writing, the video starts to turn pink!
I am not sure what the problem is.
This is my code, for those who are interested.
Appreciate your help in advance :D
Seereen
Note: I have FFmpeg v0.11 on my computer (I do not know if this is important).
#include <stdio.h>
#include <string.h>
#include <cv.h>
#include <highgui.h>

int main (int argc, char * const argv[]) {
    char name[50];

    if (argc == 1)
    {
        printf("\nEnter the name of the video:");
        scanf("%s", name);
    }
    else if (argc == 2)
        strcpy(name, argv[1]);
    else
    {
        printf("To run this program you should enter the name of the program at least, or you can enter the name of the program followed by the file name");
        return 0;
    }

    cvNamedWindow( "Read the video", CV_WINDOW_AUTOSIZE );

    // GET video
    CvCapture* capture = cvCreateFileCapture( name );
    if ( !capture )
    {
        printf( "Unable to read input video." );
        return 0;
    }

    double fps = cvGetCaptureProperty( capture, CV_CAP_PROP_FPS );
    printf( "fps %f ", fps );
    int codec = (int)cvGetCaptureProperty( capture, CV_CAP_PROP_FOURCC );
    printf( "codec %d ", codec );

    // Read the first frame
    IplImage* frame = cvQueryFrame( capture );

    // INIT the video writer
    CvVideoWriter *writer = cvCreateVideoWriter( "x7.avi", codec, fps, cvGetSize(frame), 1 );

    while(1)
    {
        cvWriteFrame( writer, frame );
        cvShowImage( "Read the video", frame );

        // READ the next frame
        frame = cvQueryFrame( capture );
        if( !frame )
            break;

        char c = cvWaitKey(33);
        if( c == 27 )
            break;
    }

    // CLEAN everything (frames returned by cvQueryFrame are owned by the capture,
    // so they are not released separately)
    cvReleaseCapture( &capture );
    cvReleaseVideoWriter( &writer );
    cvDestroyWindow( "Read the video" );
    return 0;
}
Check this list of fourcc codes, and search for the uncompressed ones, like HFYU.
You also might find this article interesting: Truly lossless video recording with OpenCV.
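If a truly lossless round-trip is the goal, here is a minimal sketch of a writer forced to Huffman YUV (FOURCC 'HFYU'); whether that encoder is available depends on the FFmpeg build behind your OpenCV, so treat it as something to verify rather than a guaranteed fix:

#include <opencv2/highgui/highgui.hpp>

int main()
{
    cv::VideoCapture in("input.avi");              // hypothetical input file name
    if (!in.isOpened())
        return -1;

    // Request the lossless Huffman YUV codec instead of reusing the source FOURCC
    cv::VideoWriter out("lossless.avi",
                        CV_FOURCC('H','F','Y','U'),
                        in.get(CV_CAP_PROP_FPS),
                        cv::Size((int)in.get(CV_CAP_PROP_FRAME_WIDTH),
                                 (int)in.get(CV_CAP_PROP_FRAME_HEIGHT)));
    if (!out.isOpened())
        return -1;

    cv::Mat frame;
    while (in.read(frame))
        out.write(frame);

    return 0;
}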
EDIT:
I have a Mac OS X 10.7.5 at my disposal and since you gave us the video for testing I decided to share my findings.
I wrote the following source code for testing purposes: it loads your video file and writes it to a new file out.avi while preserving the codec information:
#include <cv.h>
#include <highgui.h>
#include <iostream>

int main(int argc, char* argv[])
{
    // Load input video
    cv::VideoCapture input_cap(argv[1]);
    if (!input_cap.isOpened())
    {
        std::cout << "!!! Input video could not be opened" << std::endl;
        return -1;
    }

    // Setup output video
    cv::VideoWriter output_cap("out.avi",
                               input_cap.get(CV_CAP_PROP_FOURCC),
                               input_cap.get(CV_CAP_PROP_FPS),
                               cv::Size(input_cap.get(CV_CAP_PROP_FRAME_WIDTH),
                                        input_cap.get(CV_CAP_PROP_FRAME_HEIGHT)));
    if (!output_cap.isOpened())
    {
        std::cout << "!!! Output video could not be opened" << std::endl;
        return -1;
    }

    // Loop to read from input and write to output
    cv::Mat frame;
    while (true)
    {
        if (!input_cap.read(frame))
            break;

        output_cap.write(frame);
    }

    input_cap.release();
    output_cap.release();
    return 0;
}
The output video presented the same characteristics as the input:
Codec: FFMpeg Video 1 (FFV1)
Resolution: 720x480
Frame rate: 25
Decoded format: Planar 4:2:0 YUV
and it looked fine when playing.
I'm using OpenCV 2.4.3.
I figured out the problem.
The original videos are written in the YUV420 pixel format (and they are gray);
OpenCV reads the video as BGR by default, so each time I read it, OpenCV converts the pixel values to BGR.
After a few rounds of reading and writing, the error grows (because of the conversion operations);
that is why the pixel values change and I see the video turn pink!
The solution is to read and write this kind of video with the FFmpeg project, which supports YUV420 and many other formats;
there is code that does exactly this in the FFmpeg tutorial.
I hope this can help others who face a similar problem.

ALSA equivalent to /dev/audio dump?

This will be my poorest question ever...
On an old netbook, I installed an even older version of Debian, and toyed around a bit. One of the rather pleasing results was a very basic MP3 player (using libmpg123), integrated for adding background music to a little application doing something completely different. I grew rather fond of this little solution.
In that program, I dumped the decoded audio (from mpg123_decode()) to /dev/audio via a simple fwrite().
This worked fine - on the netbook.
Now, I came to understand that /dev/audio was something done by OSS, and is no longer supported on newer (ALSA) machines. Sure enough, my laptop (running a current Linux Mint) does not have this device.
So apparently I have to use ALSA instead. Searching the web, I've found a couple of tutorials, and they pretty much blow my mind. Modes, parameters, capabilities, access type, sample format, sample rate, number of channels, number of periods, period size... I understand that ALSA is a powerful API for the ambitious, but that's not what I am looking for (or have the time to grok). All I am looking for is how to play the output of mpg123_decode (the format of which I don't even know, not being an audio geek by a long shot).
Can anybody give me some hints on what needs to be done?
tl;dr
How do I get ALSA to play raw audio data?
There's an OSS compatibility layer for ALSA in the alsa-oss package. Install it and run your program inside the "aoss" program. Or, modprobe the modules listed here:
http://wiki.debian.org/SoundFAQ/#line-105
Then, you'll need to change your program to use "/dev/dsp" or "/dev/dsp0" instead of "/dev/audio". It should work how you remembered... but you might want to cross your fingers just in case.
You could install sox and open a pipe to the play command with the correct samplerate and sample size arguments.
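A minimal sketch of that idea using popen: pipe the raw PCM into SoX's play command and let it deal with the audio backend. The format flags below (16-bit signed, 2 channels, 44.1 kHz) are assumptions; match them to whatever mpg123_getformat reports:

#include <cstdio>

int main()
{
    // `play` is part of SoX; "-t raw" plus the format flags describe the piped PCM,
    // and the trailing "-" makes it read from stdin.
    FILE *out = popen("play -q -t raw -e signed-integer -b 16 -c 2 -r 44100 -", "w");
    if (!out)
        return 1;

    // One second of stereo silence as placeholder data; in the real program this
    // would be the buffer filled by the MP3 decoder.
    static short pcm[44100 * 2] = { 0 };
    fwrite(pcm, sizeof(short), 44100 * 2, out);

    pclose(out);
    return 0;
}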
Using ALSA directly is overly complicated, so I hope a GStreamer solution is fine for you too. GStreamer gives a nice abstraction over ALSA/OSS/PulseAudio/you name it, and is ubiquitous in the Linux world.
I wrote a little library that opens a FILE object you can fwrite PCM data into:
Gstreamer file. The actual code is less than 100 lines.
Use it like this:
FILE *output = fopen_gst(rate, channels, bit_depth); // open audio output file
while (have_more_data) fwrite(data, amount, 1, output); // output audio data
fclose(output); // close the output file
I added an mpg123 example, too.
Here is the whole file (in case GitHub goes out of business ;-) ):
/**
 * gstreamer_file.c
 * Copyright 2012 René Kijewski <rene.SURNAME#fu-berlin.de>
 * License: LGPL 3.0 (http://www.gnu.org/licenses/lgpl-3.0)
 */

#include "gstreamer_file.h"

#include <stdbool.h>
#include <stdlib.h>
#include <unistd.h>

#include <glib.h>
#include <gst/gst.h>

#ifndef _GNU_SOURCE
#   error "You need to add -D_GNU_SOURCE to the GCC parameters!"
#endif

/**
 * Cookie passed to the callbacks.
 */
typedef struct {
    /** { file descriptor to read from, fd to write to } */
    int pipefd[2];
    /** Gstreamer pipeline */
    GstElement *pipeline;
} cookie_t;

static ssize_t write_gst(void *cookie_, const char *buf, size_t size) {
    cookie_t *cookie = cookie_;
    return write(cookie->pipefd[1], buf, size);
}

static int close_gst(void *cookie_) {
    cookie_t *cookie = cookie_;
    gst_element_set_state(cookie->pipeline, GST_STATE_NULL); /* we are finished */
    gst_object_unref(GST_OBJECT(cookie->pipeline)); /* we won't access the pipeline anymore */
    close(cookie->pipefd[0]); /* we won't read anymore */
    close(cookie->pipefd[1]); /* we won't write anymore */
    free(cookie); /* dispose of the cookie */
    return 0;
}

FILE *fopen_gst(long rate, int channels, int depth) {
    /* initialize Gstreamer */
    if (!gst_is_initialized()) {
        GError *error;
        if (!gst_init_check(NULL, NULL, &error)) {
            g_error_free(error);
            return NULL;
        }
    }

    /* get a cookie */
    cookie_t *cookie = malloc(sizeof(*cookie));
    if (!cookie) {
        return NULL;
    }

    /* open a pipe to be used between the caller and the Gstreamer pipeline */
    if (pipe(cookie->pipefd) != 0) {
        free(cookie);
        return NULL;
    }

    /* set up the pipeline */
    char description[256];
    snprintf(description, sizeof(description),
            "fdsrc fd=%d ! "                            /* read from a file descriptor */
            "audio/x-raw-int, rate=%ld, channels=%d, "  /* get PCM data */
            "endianness=1234, width=%d, depth=%d, signed=true ! "
            "audioconvert ! audioresample ! "           /* convert/resample if needed */
            "autoaudiosink",                            /* output to speakers (using ALSA, OSS, Pulseaudio ...) */
            cookie->pipefd[0], rate, channels, depth, depth);
    cookie->pipeline = gst_parse_launch_full(description, NULL,
            GST_PARSE_FLAG_FATAL_ERRORS, NULL);
    if (!cookie->pipeline) {
        close(cookie->pipefd[0]);
        close(cookie->pipefd[1]);
        free(cookie);
        return NULL;
    }

    /* open a FILE with specialized write and close functions */
    cookie_io_functions_t io_funcs = { NULL, write_gst, NULL, close_gst };
    FILE *result = fopencookie(cookie, "w", io_funcs);
    if (!result) {
        close_gst(cookie);
        return NULL;
    }

    /* start the pipeline (of course it will wait for some data first) */
    gst_element_set_state(cookie->pipeline, GST_STATE_PLAYING);
    return result;
}
And ten years later, the "actual" answer is found: That's the wrong way to do it in the first place.
libmpg123 comes with a companion library, libout123, which abstracts the underlying audio system for you. Based on libmpg123 example code:
#include <stdlib.h>

#include "mpg123.h"
#include "out123.h"

int main()
{
    mpg123_handle * _mpg_handle;
    out123_handle * _out_handle;

    long rate;
    int channels, encoding;
    size_t position, buffer_size;
    unsigned char * buffer;
    char filename[] = "Example.mp3";

    /* create the decoder and the audio-output handle */
    mpg123_init();
    _mpg_handle = mpg123_new( NULL, NULL );
    _out_handle = out123_new();

    mpg123_open( _mpg_handle, filename );
    mpg123_getformat( _mpg_handle, &rate, &channels, &encoding );

    /* open the default audio output and lock the decoder to the file's format */
    out123_open( _out_handle, NULL, NULL );
    mpg123_format_none( _mpg_handle );
    mpg123_format( _mpg_handle, rate, channels, encoding );
    out123_start( _out_handle, rate, channels, encoding );

    buffer_size = mpg123_outblock( _mpg_handle );
    buffer = malloc( buffer_size );

    /* decode one block at a time and hand it straight to libout123 */
    do
    {
        mpg123_read( _mpg_handle, buffer, buffer_size, &position );
        out123_play( _out_handle, buffer, position );
    } while ( position );

    free( buffer );
    out123_close( _out_handle );
    mpg123_close( _mpg_handle );
    out123_del( _out_handle );
    mpg123_delete( _mpg_handle );
    return 0;
}
