I am using C to run a phase retrieval algorithm on images.
I use ImageJ to convert the .png into a text image, which I then read into my code.
At the end, I print the output to a text array and have to read it back into ImageJ as a text image.
I was wondering: is there a way to get an image straight from C?
You could try OpenCV, an open-source library of programming functions for computer vision tasks.
It can read and write different image file formats using the imread and imwrite functions, as you can see in this example from OpenCV's documentation website:
#include <cv.h>
#include <highgui.h>

using namespace cv;

int main( int argc, char** argv )
{
    /* check the argument count before touching argv[1] */
    if( argc != 2 )
    {
        printf( " No image data \n " );
        return -1;
    }

    char* imageName = argv[1];

    Mat image = imread( imageName, 1 );  // 1 = load as a 3-channel colour image
    if( !image.data )
    {
        printf( " No image data \n " );
        return -1;
    }

    Mat gray_image;
    cvtColor( image, gray_image, CV_BGR2GRAY );
    imwrite( "../../images/Gray_Image.jpg", gray_image );

    namedWindow( imageName, CV_WINDOW_AUTOSIZE );
    namedWindow( "Gray image", CV_WINDOW_AUTOSIZE );
    imshow( imageName, image );
    imshow( "Gray image", gray_image );

    waitKey(0);
    return 0;
}
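If you'd rather stay in plain C (since the rest of your pipeline is C), the same library also exposes a legacy C API. A minimal sketch using cvLoadImage / cvSaveImage, with the filenames taken as placeholder command-line arguments:
#include <cv.h>
#include <highgui.h>

int main( int argc, char** argv )
{
    if( argc != 3 )
        return -1;

    /* load the input as a single-channel grayscale image */
    IplImage* img = cvLoadImage( argv[1], CV_LOAD_IMAGE_GRAYSCALE );
    if( !img )
        return -1;

    /* ... run the phase retrieval on img->imageData here ... */

    cvSaveImage( argv[2], img, 0 );  /* write the result straight out, e.g. as a .png;
                                        the third argument is optional encoding params */
    cvReleaseImage( &img );
    return 0;
}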
Hope it helps!
Hi guys, I am running some sample programs on my MacBook using OpenCV, and this is my code:
#include "stdio.h"
#include "cv.h"
#include "highgui.h"
int main( int argc, char **argv )
{
CvCapture *capture = 0;
IplImage *frame = 0;
int key = 0;
/* initialize camera */
capture = cvCaptureFromCAM( 0 );
/* always check */
if ( !capture ) {
fprintf( stderr, "Cannot open initialize webcam!\n" );
return 1;
}
/* create a window for the video */
cvNamedWindow( "result", CV_WINDOW_AUTOSIZE );
while(1>0)
{
/* get a frame */
frame = cvQueryFrame( capture );
/* always check */
if(!frame ) break;
/* display current frame */
cvShowImage( "result", frame );
waitKey(10);
/* exit if user press 'Esc' */
key = cvWaitKey( 20 );
if((char)key==27 )
break;
}
/* free memory */
cvReleaseCapture( &capture );
cvDestroyWindow( "result" );
return 0;
}
The code was working fine on the MacBook Pro about a year ago (OS X Snow Leopard), but on the MacBook (Lion) I only get this at the console: "Cleaned up camera." No iSight, no image... nothing, only that message. Any advice?
PS: I changed the index in cvCaptureFromCAM to 300 (IEEE 1394 cameras) or 500 (QuickTime); then I get no message, but still no image.
Never mind guys, apparently it is an issue with the current OpenCV version, 2.6.x. I uninstalled ffmpeg (brew uninstall ffmpeg) and OpenCV (brew uninstall opencv).
Then I changed my OpenCV version: cd /usr/local/Library/Taps/homebrew-science, looked for an older version with brew versions opencv (the iSight was working under 2.4.5), checked it out with git checkout ae74fe9 opencv.rb, and finally installed it with brew install opencv. Done, the iSight works great :).
PS: the iSight camera will work with cvCaptureFromCAM( 500 ), not 0, -1 or 300.
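For reference: those magic numbers are OpenCV's capture-domain constants from highgui, so the named constant reads more clearly:
capture = cvCaptureFromCAM( CV_CAP_QT );  /* CV_CAP_QT == 500, the QuickTime backend */
/* the other values tried above: CV_CAP_ANY == 0, CV_CAP_IEEE1394 == 300 */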
PS2: OMG, I am so happy :3
I'm making something like a black box on a Raspberry Pi.
I set up OpenCV 2.4.3 and many video libraries.
(I referred to this site: "Opencv cannot acces my webcam".)
And I compiled this sample code.
#include <stdio.h>
#include "opencv/cv.h"
#include "opencv/highgui.h"
#include "opencv/cxcore.h"

int main(void){
    CvCapture* capture = cvCaptureFromCAM(0);
    cvNamedWindow("video", 1);

    double fps = cvGetCaptureProperty(capture, CV_CAP_PROP_FPS);
    CvSize frame_size = cvSize(
            (int)cvGetCaptureProperty(capture, CV_CAP_PROP_FRAME_WIDTH),
            (int)cvGetCaptureProperty(capture, CV_CAP_PROP_FRAME_HEIGHT));

    /* -1 = let OpenCV pick a codec */
    CvVideoWriter* writer = cvCreateVideoWriter("out.avi", -1, fps, frame_size, 1);

    IplImage* frame;
    while(1){
        frame = cvQueryFrame(capture);
        if(!frame){
            break;
        }
        cvShowImage("video", frame);
        cvWriteFrame(writer, frame);  /* record the frame */
        if(cvWaitKey(38) == 27){
            break;
        }
    }

    cvReleaseVideoWriter(&writer);
    cvReleaseCapture(&capture);
    cvDestroyWindow("video");
    return 0;
}
This code compiled successfully, but when I run it, I get this error:
OpenCV Error: Unsupported format or combination of formats (Gstreamer Opencv backend doesn't support this codec acutally.) in CvVideoWriter_GStreamer::open, file /home/pi/OpenCV-2.4.3/modules/highgui/src/cap_gstreamer.cpp, line 479
terminate called after throwing an instance of 'cv::Exception'
what(): /home/pi/OpenCV-2.4.3/modules/highgui/src/cap_gstreamer.cpp:479: error: (-210) Gstreamer Opencv backend doesn't support this codec acutally. in function CvVideoWriter_GStreamer::open
Aborted
So, I changed the codec argument of cvCreateVideoWriter instead of passing -1.
I tried many codecs, like CV_FOURCC('M','J','P','G') and so on, but I cannot fix this problem.
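For illustration, one of the variants I tried (just swapping the -1 for an explicit FOURCC) looked like this:
CvVideoWriter* writer = cvCreateVideoWriter("out.avi",
        CV_FOURCC('M','J','P','G'), fps, frame_size, 1);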
How can I solve this problem? Please help me.
This will be my poorest question ever...
On an old netbook, I installed an even older version of Debian, and toyed around a bit. One of the rather pleasing results was a very basic MP3 player (using libmpg123), integrated for adding background music to a little application doing something completely different. I grew rather fond of this little solution.
In that program, I dumped the decoded audio (from mpg123_decode()) to /dev/audio via a simple fwrite().
This worked fine - on the netbook.
Now, I came to understand that /dev/audio was something done by OSS, and is no longer supported on newer (ALSA) machines. Sure enough, my laptop (running a current Linux Mint) does not have this device.
So apparently I have to use ALSA instead. Searching the web, I've found a couple of tutorials, and they pretty much blow my mind. Modes, parameters, capabilities, access type, sample format, sample rate, number of channels, number of periods, period size... I understand that ALSA is a powerful API for the ambitious, but that's not what I am looking for (or have the time to grok). All I am looking for is how to play the output of mpg123_decode (the format of which I don't even know, not being an audio geek by a long shot).
Can anybody give me some hints on what needs to be done?
tl;dr
How do I get ALSA to play raw audio data?
There's an OSS compatibility layer for ALSA in the alsa-oss package. Install it and run your program inside the aoss wrapper (e.g. aoss ./yourplayer, where yourplayer stands for your binary). Or, modprobe the modules listed here:
http://wiki.debian.org/SoundFAQ/#line-105
Then, you'll need to change your program to use "/dev/dsp" or "/dev/dsp0" instead of "/dev/audio". It should work how you remembered... but you might want to cross your fingers just in case.
You could install sox and open a pipe to the play command with the correct samplerate and sample size arguments.
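For example, a minimal sketch in C using popen; the rate, depth, and channel flags here are assumptions, so match them to what mpg123_getformat() reports:
#include <stdio.h>

int main(void)
{
    /* assumed format: 16-bit signed little-endian stereo at 44100 Hz */
    FILE *out = popen("play -q -t raw -r 44100 -e signed-integer -b 16 -c 2 -", "w");
    if (!out)
        return 1;

    /* stand-in payload: 0.1 s of stereo silence; write your decoded PCM here instead */
    short silence[4410 * 2] = { 0 };
    fwrite(silence, sizeof(short), sizeof(silence) / sizeof(short), out);

    pclose(out);  /* flushes the pipe and waits for `play` to exit */
    return 0;
}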
Using ALSA directly is overly complicated, so I hope a Gstreamer solution is fine for you too. Gstreamer gives a nice abstraction over ALSA/OSS/Pulseaudio/you name it -- and is ubiquitous in the Linux world.
I wrote a little library that will open a FILE object where you can fwrite PCM data into:
Gstreamer file. The actual code is less than 100 lines.
Use it like this:
FILE *output = fopen_gst(rate, channels, bit_depth); // open audio output file
while (have_more_data) fwrite(data, amount, 1, output); // output audio data
fclose(output); // close the output file
I added an mpg123 example, too.
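The pairing would look roughly like this (a sketch with error handling omitted, where mh is an already-opened mpg123 handle and 16-bit signed output is assumed):
unsigned char buf[4096];
size_t done;
long rate;
int channels, encoding;

mpg123_getformat(mh, &rate, &channels, &encoding);
FILE *out = fopen_gst(rate, channels, 16);  /* assuming MPG123_ENC_SIGNED_16 */

while (mpg123_read(mh, buf, sizeof(buf), &done) == MPG123_OK)
    fwrite(buf, 1, done, out);  /* feed the PCM into the Gstreamer pipeline */

fclose(out);  /* tears down the pipeline */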
Here is the whole file (in case Github gets out of business ;-)):
/**
 * gstreamer_file.c
 * Copyright 2012 René Kijewski <rene.SURNAME#fu-berlin.de>
 * License: LGPL 3.0 (http://www.gnu.org/licenses/lgpl-3.0)
 */

#include "gstreamer_file.h"

#include <stdbool.h>
#include <stdlib.h>
#include <unistd.h>

#include <glib.h>
#include <gst/gst.h>

#ifndef _GNU_SOURCE
# error "You need to add -D_GNU_SOURCE to the GCC parameters!"
#endif

/**
 * Cookie passed to the callbacks.
 */
typedef struct {
    /** { file descriptor to read from, fd to write to } */
    int pipefd[2];
    /** Gstreamer pipeline */
    GstElement *pipeline;
} cookie_t;

static ssize_t write_gst(void *cookie_, const char *buf, size_t size) {
    cookie_t *cookie = cookie_;
    return write(cookie->pipefd[1], buf, size);
}

static int close_gst(void *cookie_) {
    cookie_t *cookie = cookie_;
    gst_element_set_state(cookie->pipeline, GST_STATE_NULL); /* we are finished */
    gst_object_unref(GST_OBJECT(cookie->pipeline)); /* we won't access the pipeline anymore */
    close(cookie->pipefd[0]); /* we won't read anymore */
    close(cookie->pipefd[1]); /* we won't write anymore */
    free(cookie); /* dispose the cookie */
    return 0;
}

FILE *fopen_gst(long rate, int channels, int depth) {
    /* initialize Gstreamer */
    if (!gst_is_initialized()) {
        GError *error = NULL;
        if (!gst_init_check(NULL, NULL, &error)) {
            g_error_free(error);
            return NULL;
        }
    }

    /* get a cookie */
    cookie_t *cookie = malloc(sizeof(*cookie));
    if (!cookie) {
        return NULL;
    }

    /* open a pipe to be used between the caller and the Gstreamer pipeline */
    if (pipe(cookie->pipefd) != 0) {
        free(cookie);
        return NULL;
    }

    /* set up the pipeline */
    char description[256];
    snprintf(description, sizeof(description),
            "fdsrc fd=%d ! " /* read from a file descriptor */
            "audio/x-raw-int, rate=%ld, channels=%d, " /* get PCM data */
            "endianness=1234, width=%d, depth=%d, signed=true ! "
            "audioconvert ! audioresample ! " /* convert/resample if needed */
            "autoaudiosink", /* output to speakers (using ALSA, OSS, Pulseaudio ...) */
            cookie->pipefd[0], rate, channels, depth, depth);
    cookie->pipeline = gst_parse_launch_full(description, NULL,
            GST_PARSE_FLAG_FATAL_ERRORS, NULL);
    if (!cookie->pipeline) {
        close(cookie->pipefd[0]);
        close(cookie->pipefd[1]);
        free(cookie);
        return NULL;
    }

    /* open a FILE with specialized write and close functions */
    cookie_io_functions_t io_funcs = { NULL, write_gst, NULL, close_gst };
    FILE *result = fopencookie(cookie, "w", io_funcs);
    if (!result) {
        close_gst(cookie);
        return NULL;
    }

    /* start the pipeline (of course it will wait for some data first) */
    gst_element_set_state(cookie->pipeline, GST_STATE_PLAYING);
    return result;
}
And ten years later, the "actual" answer is found: That's the wrong way to do it in the first place.
libmpg123 comes with a companion library, libout123, which abstracts the underlying audio system for you. Based on libmpg123 example code:
#include <stdlib.h>
#include "mpg123.h"
#include "out123.h"

int main()
{
    mpg123_handle * _mpg_handle;
    out123_handle * _out_handle;
    long rate;
    int channels, encoding;
    size_t position, buffer_size;
    unsigned char * buffer;
    char filename[] = "Example.mp3";

    /* create the decoder and output handles first */
    mpg123_init();
    _mpg_handle = mpg123_new( NULL, NULL );
    _out_handle = out123_new();

    mpg123_open( _mpg_handle, filename );
    mpg123_getformat( _mpg_handle, &rate, &channels, &encoding );

    out123_open( _out_handle, NULL, NULL );

    /* lock the decoder to the format we just queried */
    mpg123_format_none( _mpg_handle );
    mpg123_format( _mpg_handle, rate, channels, encoding );
    out123_start( _out_handle, rate, channels, encoding );

    buffer_size = mpg123_outblock( _mpg_handle );
    buffer = malloc( buffer_size );

    do
    {
        mpg123_read( _mpg_handle, buffer, buffer_size, &position );
        out123_play( _out_handle, buffer, position );
    } while ( position );

    free( buffer );
    out123_close( _out_handle );
    out123_del( _out_handle );
    mpg123_close( _mpg_handle );
    mpg123_delete( _mpg_handle );
    mpg123_exit();
    return 0;
}
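Assuming the pkg-config files that mpg123 ships are installed, this should build with something like cc example.c $(pkg-config --cflags --libs libmpg123 libout123).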
How can I filter the video stream from a camera on Mac OS X? I wrote a QuickTime Sequence Grabber channel component, but it only works if the app uses the SG API. If the app uses QTKit Capture, the component does not work.
Does anybody know how I can implement this?
You could use OpenCV for video processing; it's a cross-platform image/video processing library: http://opencv.willowgarage.com
Your code would look something like this:
CvCapture* capture = NULL;

if ((capture = cvCaptureFromCAM(-1)) == NULL)
{
    std::cerr << "!!! ERROR: cvCaptureFromCAM: no camera found\n";
    return -1;
}

cvNamedWindow("webcam", CV_WINDOW_AUTOSIZE);
cvMoveWindow("webcam", 50, 50);

cvQueryFrame(capture);

IplImage* src = NULL;
for (;;)
{
    if ((src = cvQueryFrame(capture)) == NULL)
    {
        std::cerr << "!!! ERROR: cvQueryFrame\n";
        break;
    }

    // perform processing on src->imageData

    cvShowImage("webcam", src);

    char key_pressed = cvWaitKey(2);
    if (key_pressed == 27)
        break;
}

cvReleaseCapture(&capture);
I had success using OpenCV on Mac OS X by calling cvCaptureFromCAM(0) instead of passing it -1. On Linux, -1 seems to do OK.
It looks like there should be cvReleaseCapture(&capture); at the end.