I am using a camera with a 35mm EO Megapixel Fixed Focal Length lens from Edmund Optics, OpenCV 2.4.6, and Ubuntu 12.04 LTS as my development environment. I am also developing in C, not C++. The camera has an API (the uEye SDK) that I am following, and I believe I have set everything up correctly: I initialize the camera, set the memory locations, and freeze the video. I then use OpenCV to get the image from memory, but my image is nothing like what it should be (it may be seen below). Is my image data being pulled from a junk memory location? How can I access the image saved by is_FreezeVideo for image processing with OpenCV? The image that is printed out can be seen here: http://i.imgur.com/kW6aqB3.png
The code I am using is below.
#include "../Include/Camera.h"
#include <wchar.h>
#include <locale.h>
#include <stdlib.h>
#include <stdio.h>
#include <cv.h>
#include <highgui.h>
//#include <opencv2/opencv.hpp>
#include <ueye.h>
// uEye variables
HIDS m_hCam; // handle to camera
HWND m_hWndDisplay; // handle to display window
int m_Ret; // return value of uEye SDK functions
int m_nColorMode = 0; // Y8/RGB16/RGB24/REG32
int m_nBitsPerPixel=8; // number of bits needed to store one pixel
int m_nSizeX = 1280; // width of video
int m_nSizeY = 1024; // height of video
int m_lMemoryId; // grabber memory - buffer ID
char* m_pcImageMemory; // grabber memory - pointer to buffer
int m_nRenderMode = IS_RENDER_FIT_TO_WINDOW; //render mode
void getAzimuth(){
}
void getElevation(){
}
void initializeCamera(){
if (m_hCam !=0 ) {
//free old image mem.
is_FreeImageMem (m_hCam, m_pcImageMemory, m_lMemoryId);
is_ExitCamera (m_hCam);
}
// init camera
m_hCam = (HIDS) 0; // open next available camera
m_Ret = is_InitCamera (&m_hCam, NULL); // init camera
if (m_Ret == IS_SUCCESS) {
// retrieve original image size
SENSORINFO sInfo;
is_GetSensorInfo (m_hCam, &sInfo);
m_nSizeX = sInfo.nMaxWidth;
m_nSizeY = sInfo.nMaxHeight;
printf("Width: %d Height: ", m_nSizeX, m_nSizeY);
// setup the color depth to the current windows setting
is_GetColorDepth (m_hCam, &m_nBitsPerPixel, &m_nColorMode);
is_SetColorMode (m_hCam, m_nColorMode);
//printf ("m_nBitsPerPixel=%i m_nColorMode=%i \n", m_nBitsPerPixel, IS_CM_BAYER_RG8);
// memory initialization
is_AllocImageMem (m_hCam, m_nSizeX, m_nSizeY, m_nBitsPerPixel, &m_pcImageMemory, &m_lMemoryId);
//set memory active
is_SetImageMem (m_hCam, m_pcImageMemory, m_lMemoryId);
// display initialization
is_SetImageSize (m_hCam, m_nSizeX, m_nSizeY);
is_SetImagePos(m_hCam, 0, 0);
is_SetDisplayMode (m_hCam, IS_SET_DM_DIB);
} else {
printf("No Camera Initialized! %c", 10);
}
if (m_hCam !=0) {
INT dummy;
char *pMem, *pLast;
double fps = 0.0;
if (is_FreezeVideo (m_hCam, IS_WAIT) == IS_SUCCESS) {
m_Ret = is_GetActiveImageMem(m_hCam, &pLast, &dummy);
m_Ret = is_GetImageMem(m_hCam, (void**)&pLast);
}
IplImage* tmpImg = cvCreateImageHeader (cvSize (m_nSizeX, m_nSizeY), IPL_DEPTH_8U, 1);
tmpImg->imageData = m_pcImageMemory;
cvNamedWindow("src",1);
cvShowImage("src",tmpImg);
cvWaitKey(0);
}
}
Thanks
You need to use the is_ImageFile function to save the image to a file. You can see the sample usage in the is_ImageFile documentation. You can save it in the format (BMP, PNG, JPEG) you need.
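For reference, a minimal save-to-file sketch along those lines (an assumption-laden sketch: it presumes a uEye SDK version that provides is_ImageFile and IMAGE_FILE_PARAMS; older SDKs use is_SaveImageEx instead, so check your ueye.h):
IMAGE_FILE_PARAMS ImageFileParams = {0}; // zero everything first
ImageFileParams.pwchFileName = L"snapshot.png"; // hypothetical output name
ImageFileParams.nFileType = IS_IMG_PNG; // or IS_IMG_BMP / IS_IMG_JPG
ImageFileParams.ppcImageMem = NULL; // NULL: save the active image memory
ImageFileParams.pnImageID = NULL;
if (is_ImageFile(m_hCam, IS_IMAGE_FILE_CMD_SAVE, (void*)&ImageFileParams, sizeof(ImageFileParams)) != IS_SUCCESS) {
    printf("Saving the image failed\n");
}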
regards,
Sreenivas
The problem was the camera properties. After setting the brightness and other properties, we now get an actual image.
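Something along these lines, for example (a rough sketch only; these particular calls are illustrative, not necessarily the exact ones used, and the right values depend on the camera model):
double enable = 1.0;
// let the camera pick gain and exposure automatically
is_SetAutoParameter(m_hCam, IS_SET_ENABLE_AUTO_GAIN, &enable, NULL);
is_SetAutoParameter(m_hCam, IS_SET_ENABLE_AUTO_SHUTTER, &enable, NULL);
// or set the master gain (0-100) by hand
is_SetHardwareGain(m_hCam, 50, IS_IGNORE_PARAMETER, IS_IGNORE_PARAMETER, IS_IGNORE_PARAMETER);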
I am trying to read a raw RGBA image using Imlib2 (https://docs.enlightenment.org/api/imlib2/html/ -> according to this page, it seems they accept RGBA data for images)
#include <Imlib2.h>
#include <stdlib.h>
#include <string.h>
#include <stdio.h>
int main(int argc, char **argv)
{
/* an image handle */
Imlib_Image image;
/* load the image */
Imlib_Load_Error error;
image = imlib_load_image_with_error_return("rgba.raw", &error);
printf("load error:%d", error);
if (image)
{
imlib_context_set_image(image);
imlib_image_set_format("png");
/* save the image */
imlib_save_image("working.png");
}
else
{
printf("not loaded\n");
}
}
Loading other image formats like PNG and JPEG works properly, but when trying to load a raw RGBA image I get the error "IMLIB_LOAD_ERROR_NO_LOADER_FOR_FILE_FORMAT". Could someone tell me if I am missing something? Should I add some header to the RGBA image, or call some more functions before opening an RGBA image?
If Imlib2 doesn't support reading RGBA images, is there an alternative C library that can read raw RGB images and do scaling and similar operations?
So, this is for anyone facing the same issue.
Thanks to @mark-setchell for contributing!
The MagickCore API is an alternative C library that can be used to perform these functions on raw RGB images.
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <magick/ImageMagick.h>
int main(int argc, char **argv)
{
ExceptionInfo
exception;
Image
*images,
*resize_image;
ImageInfo
*image_info;
/*
Initialize the image info structure and read an image.
*/
InitializeMagick(NULL);
GetExceptionInfo(&exception);
image_info = CloneImageInfo((ImageInfo *)NULL);
(void) CloneString(&image_info->size, "1920x1080");
image_info->depth = 8;
(void)strcpy(image_info->filename, "image.rgba");
images = ReadImage(image_info, &exception);
if (images == (Image *)NULL)
exit(1);
resize_image = MinifyImage(images, &exception);
if (resize_image == (Image *)NULL)
printf("error\n");
else
DestroyImage(resize_image);
DestroyImage(images);
DestroyImageInfo(image_info);
DestroyMagick();
return (0);
}
For reading raw images, the depth and the WxH size have to be specified for the image. The above is a very small example that reduces the image to half its size (https://imagemagick.org/script/magick-core.php).
I'm using libVLC to process and record video from an IP camera but can't get the overlay to work while recording.
Meaning, if I comment out the code that duplicates the stream for saving it to a file, the overlay works.
But if I leave the code in, the video is recorded but no overlay appears on the rendered video, either on the screen or in the file.
I am using libVLC 2.0.6 on Windows 7 (x64), but the problem is unchanged with the 32-bit version.
Source for Console Project in Visual Studio:
// Vlc_ConsoleApp.cpp : Defines the entry point for the console application.
//
#include "stdafx.h"
#include <stdio.h>
#include <stdlib.h>
#include <vlc/vlc.h>
#include <windows.h>
int _tmain(int argc, _TCHAR* argv[])
{
libvlc_instance_t * inst;
libvlc_media_player_t *mp;
libvlc_media_t *m;
char* arguments[] = { "-I",
"dummy",
"--ignore-config",
"--no-video-title",
"--sub-filter=marq",
"--plugin-path=C:/Software_Development/Software_Libraries/VLC/vlc-2.0.6_x64/plugins"};
char* duplicateStreamOption = ":sout=#stream_out_duplicate{dst=display,dst=std{access=file,sub-filter=marq,mux=ts,dst=c:/temp/test_go.mpg}}";
/* Load the VLC engine */
inst = libvlc_new (6, arguments);
/* Create a new item */
m = libvlc_media_new_location (inst, "rtsp://#192.168.2.168");
/* add option to record duplicate stream to file */
/* if I comment this out - then marquee works */
//libvlc_media_add_option(m, duplicateStreamOption);
/* Create a media player playing environment */
mp = libvlc_media_player_new_from_media (m);
/* No need to keep the media now */
libvlc_media_release (m);
/* play the media_player */
libvlc_media_player_play (mp);
Sleep (10000); /* Let it play for 10 seconds */
/* throw up a marquee */
libvlc_video_set_marquee_int(mp, libvlc_marquee_Enable, 1);
libvlc_video_set_marquee_string(mp, libvlc_marquee_Text, "Hello- Marquee");
libvlc_video_set_marquee_int(mp, libvlc_marquee_Opacity, 50);
libvlc_video_set_marquee_int(mp, libvlc_marquee_X, 10);
libvlc_video_set_marquee_int(mp, libvlc_marquee_Y, 10);
libvlc_video_set_marquee_int(mp, libvlc_marquee_Timeout, 4000); // 4 secs
libvlc_video_set_marquee_int(mp, libvlc_marquee_Size, 40);
libvlc_video_set_marquee_int(mp, libvlc_marquee_Color, 0xFF0000);
Sleep (10000); /* play some more */
/* Stop playing */
libvlc_media_player_stop (mp);
/* Free the media_player */
libvlc_media_player_release (mp);
libvlc_release (inst);
return 0;
}
Try "--sub-source=marq" in your options instead of "--sub-filter=marq"
Why are you using the mux=ts option when the file extension is .mpg?
In this link https://wiki.videolan.org/Documentation:Streaming_HowTo/Receive_and_Save_a_Stream/ you can see some of the muxer options.
For your issue I would recommend creating two different media players. Create one media player with just the RTSP link and the overlay, and then create another media player, giving it the RTSP link again but adding the save-to-file option. Then, with the second media player, you don't have to duplicate the stream. Use the example in the link.
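A minimal sketch of that two-player approach (untested; it reuses inst, the RTSP URL, and the output path from the question):
libvlc_media_t *m_display = libvlc_media_new_location(inst, "rtsp://#192.168.2.168");
libvlc_media_t *m_record = libvlc_media_new_location(inst, "rtsp://#192.168.2.168");
/* recording player: write straight to file, no duplicate needed */
libvlc_media_add_option(m_record, ":sout=#std{access=file,mux=ts,dst=c:/temp/test_go.mpg}");
libvlc_media_player_t *mp_display = libvlc_media_player_new_from_media(m_display);
libvlc_media_player_t *mp_record = libvlc_media_player_new_from_media(m_record);
libvlc_media_release(m_display);
libvlc_media_release(m_record);
libvlc_media_player_play(mp_display); /* set the marquee on this player only */
libvlc_media_player_play(mp_record);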
I am using MagickCore from ImageMagick Q8, and I can't set a specific pixel. This is my code:
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <magick/MagickCore.h>
#include <string.h>
int main(int argc,char **argv)
{
Image *imagen;
ImageInfo *imagen_info;
ExceptionInfo *exception;
PixelPacket *q;
MagickCoreGenesis(*argv,MagickTrue);
exception=AcquireExceptionInfo();
imagen_info = AcquireImageInfo();
(void) CopyMagickString(imagen_info->filename,argv[1],MaxTextExtent);
ReadImage(imagen_info, exception);
q = GetAuthenticPixels(imagen,0,0,1,1,exception);
q->red = 255;
q->green = 123;
q->blue = 220;
SyncAuthenticPixels(imagen,exception);
/* Write the image then destroy it. */
WriteImage(imagen_info, imagen);
DestroyImage(imagen);
DestroyExceptionInfo(exception);
MagickCoreTerminus();
return 0;
}
I am trying to read an image from a file and then edit a pixel and then save image to disk.
What am I doing wrong?
From the example provided, your imagen variable is never assigned and remains an invalid pointer. It should be assigned the return value of ReadImage.
imagen = ReadImage(imagen_info, exception);
The only other issue I see would be the assignment of color values on the PixelPacket. Assuming you're working with 0-255 RGB values, you need to convert them to the Quantum range.
q->red = ScaleCharToQuantum(255);
q->green = ScaleCharToQuantum(123);
q->blue = ScaleCharToQuantum(220);
Note: ScaleCharToQuantum handles that conversion; see the docs for working with colors.
I am trying to get a screenshot of the screen or a window. I tried using functions from X11,
and it works fine. The problem is that getting the pixels from the XImage takes a lot of time.
Then I tried to look for some answers on how to do it using OpenGL. Here's what I've got:
#include <stdlib.h>
#include <stdio.h>
#include <cstdio>
#include <GL/glut.h>
#include <GL/gl.h>
#include <GL/glx.h>
#include <X11/Xlib.h>
int main(int argc, char **argv)
{
int width=1200;
int height=800;
//_____________________________----
Display *dpy;
Window root;
GLint att[] = { GLX_RGBA, GLX_DEPTH_SIZE, 24, GLX_DOUBLEBUFFER, None };
XVisualInfo *vi;
GLXContext glc;
dpy = XOpenDisplay(NULL);
if ( !dpy ) {
printf("\n\tcannot connect to X server\n\n");
exit(0);
}
root = DefaultRootWindow(dpy);
vi = glXChooseVisual(dpy, 0, att);
if (!vi) {
printf("\n\tno appropriate visual found\n\n");
exit(0);
}
glXMakeCurrent(dpy, root, glc);
glc = glXCreateContext(dpy, vi, NULL, GL_TRUE);
printf("vendor: %s\n", (const char*)glGetString(GL_VENDOR));
//____________________________________________
glXMakeCurrent(dpy, root, glc);
glEnable(GL_DEPTH_TEST);
GLubyte* pixelBuffer = new GLubyte[sizeof(GLubyte)*width*height*3*3];
glReadBuffer(GL_FRONT);
GLint ReadBuffer;
glGetIntegerv(GL_READ_BUFFER,&ReadBuffer);
glPixelStorei(GL_READ_BUFFER,GL_RGB);
GLint PackAlignment;
glGetIntegerv(GL_PACK_ALIGNMENT,&PackAlignment);
glPixelStorei(GL_PACK_ALIGNMENT,1);
glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_INT, pixelBuffer);
int i;
for (i=0;i<100;i++) printf("%u\n",((unsigned int *)pixelBuffer)[i]);
return 0;
}
When I run the program, it returns an error:
X Error of failed request: BadAccess (attempt to access private resource denied)
Major opcode of failed request: 199 ()
Minor opcode of failed request: 26
Serial number of failed request: 20
Current serial number in output stream: 20
If I comment out the line glXMakeCurrent(dpy, root, glc); before glc = glXCreateContext(dpy, vi, NULL, GL_TRUE);, it returns no errors, but all the pixels are 0.
How should I go about this problem? I am new to OpenGL and maybe I am missing something important here. Is there perhaps another way of getting the pixels of the screen or of a specific window?
I don't think what you are trying to do is possible. You can't use OpenGL to read pixels from a window you don't own, and which probably doesn't even use OpenGL. You need to stick to X11.
If you have an XImage, you can get the raw pixels from ximage->data. Just make sure you are reading them in the correct format.
http://tronche.com/gui/x/xlib/graphics/images.html
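For completeness, a minimal XGetImage sketch (slow, but simple). The fixed 0x00RRGGBB shifts below are an assumption that holds for common 24/32-bit TrueColor visuals; check img->red_mask etc. in the general case:
/* gcc shot.c -o shot -lX11 && ./shot */
#include <stdio.h>
#include <X11/Xlib.h>
#include <X11/Xutil.h>
int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    Window root = DefaultRootWindow(dpy);
    XWindowAttributes attr;
    XGetWindowAttributes(dpy, root, &attr);
    /* grab the whole root window in ZPixmap format */
    XImage *img = XGetImage(dpy, root, 0, 0, attr.width, attr.height, AllPlanes, ZPixmap);
    /* assuming a 0x00RRGGBB pixel layout (check img->red_mask in general) */
    unsigned long p = XGetPixel(img, 0, 0);
    printf("pixel(0,0): r=%lu g=%lu b=%lu\n", (p >> 16) & 0xff, (p >> 8) & 0xff, p & 0xff);
    XDestroyImage(img);
    XCloseDisplay(dpy);
    return 0;
}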
You can use XShmGetImage, but you have to query the extensions of the X11 server first to make sure the MIT-SHM extension is available. You also need to know how to set up and use a shared memory segment for this.
Querying the Extension:
http://git.videolan.org/?p=ffmpeg.git;a=blob;f=libavdevice/x11grab.c#l224
Getting the image:
http://git.videolan.org/?p=ffmpeg.git;a=blob;f=libavdevice/x11grab.c#l537
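Condensed, the shared-memory path looks roughly like this (a sketch with error handling omitted; it assumes XShmQueryExtension() has already reported that MIT-SHM is available; link with -lX11 -lXext):
#include <sys/ipc.h>
#include <sys/shm.h>
#include <X11/Xlib.h>
#include <X11/extensions/XShm.h>
XImage *shm_screenshot(Display *dpy, XShmSegmentInfo *shminfo, int w, int h)
{
    int scr = DefaultScreen(dpy);
    XImage *img = XShmCreateImage(dpy, DefaultVisual(dpy, scr), DefaultDepth(dpy, scr),
                                  ZPixmap, NULL, shminfo, w, h);
    /* create a shared memory segment big enough for the image and attach it */
    shminfo->shmid = shmget(IPC_PRIVATE, img->bytes_per_line * img->height, IPC_CREAT | 0777);
    shminfo->shmaddr = img->data = shmat(shminfo->shmid, NULL, 0);
    shminfo->readOnly = False;
    XShmAttach(dpy, shminfo);
    /* grab the screen contents straight into the shared segment */
    XShmGetImage(dpy, DefaultRootWindow(dpy), img, 0, 0, AllPlanes);
    /* pixels are now in img->data; when done:
       XShmDetach(dpy, shminfo); shmdt(shminfo->shmaddr);
       shmctl(shminfo->shmid, IPC_RMID, NULL); XDestroyImage(img); */
    return img;
}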
The following runs at about 140 fps on my platform. xcb_image_get() (called with XCB_IMAGE_FORMAT_Z_PIXMAP) will store all pixels in ximg->data, pixel by pixel. On my platform, each pixel is 32 bits, each channel is 8 bits, and there are 3 channels (so 8 bits per pixel are unused).
/*
gcc ss.c -o ss -lxcb -lxcb-image && ./ss
*/
#include <stdio.h>
#include <xcb/xcb_image.h>
xcb_screen_t* xcb_get_screen(xcb_connection_t* connection){
const xcb_setup_t* setup = xcb_get_setup(connection); // I think we don't need to free/destroy this!
return xcb_setup_roots_iterator(setup).data;
}
void xcb_image_print(xcb_image_t* ximg){
printf("xcb_image_print() Printing a (%u,%u) `xcb_image_t` of %u bytes\n\n", ximg->height, ximg->width, ximg->size);
for(int i=0; i < ximg->size; i += 4){
printf(" ");
printf("%02x", ximg->data[i+3]);
printf("%02x", ximg->data[i+2]);
printf("%02x", ximg->data[i+1]);
printf("%02x", ximg->data[i+0]);
}
puts("\n");
}
int main(){
// Connect to the X server
xcb_connection_t* connection = xcb_connect(NULL, NULL);
xcb_screen_t* screen = xcb_get_screen(connection);
// Get pixels!
xcb_image_t* ximg = xcb_image_get(connection, screen->root, 0, 0, screen->width_in_pixels, screen->height_in_pixels, 0xffffffff, XCB_IMAGE_FORMAT_Z_PIXMAP);
// ... Now all pixels are in ximg->data!
xcb_image_print(ximg);
// Clean-up
xcb_image_destroy(ximg);
xcb_disconnect(connection);
}
I'm working on a texture management and animation solution for a small side project of mine. Although the project uses Allegro for rendering and input, my question mostly revolves around C and memory management. I wanted to post it here to get thoughts and insight into the approach, as I'm terrible when it comes to pointers.
Essentially what I'm trying to do is load all of my texture resources into a central manager (textureManager) - which is essentially an array of structs containing ALLEGRO_BITMAP objects. The textures stored within the textureManager are mostly full sprite sheets.
From there, I have an anim(ation) struct, which contains animation-specific information (along with a pointer to the corresponding texture within the textureManager).
To give you an idea, here's how I set up and play the player's 'walk' animation:
createAnimation(&player.animations[0], "media/characters/player/walk.png", player.w, player.h);
playAnimation(&player.animations[0], 10);
Rendering the animations current frame is just a case of blitting a specific region of the sprite sheet stored in textureManager.
For reference, here's the code for anim.h and anim.c. I'm sure what I'm doing here is probably a terrible approach for a number of reasons. I'd like to hear about them! Am I opening myself to any pitfalls? Will this work as I'm hoping?
anim.h
#ifndef ANIM_H
#define ANIM_H
#include <stdbool.h>
#include <allegro5/allegro.h>
#define ANIM_MAX_FRAMES 10
#define MAX_TEXTURES 50
struct texture {
bool active;
ALLEGRO_BITMAP *bmp;
};
struct texture textureManager[MAX_TEXTURES];
typedef struct tAnim {
ALLEGRO_BITMAP **sprite;
int w, h;
int curFrame, numFrames, frameCount;
float delay;
} anim;
void setupTextureManager(void);
int addTextureToManager(char *filename);
int createAnimation(anim *a, char *filename, int w, int h);
void playAnimation(anim *a, float delay);
void updateAnimation(anim *a);
#endif
anim.c
#include <stdio.h>
#include "anim.h"
void setupTextureManager() {
int i = 0;
for(i = 0; i < MAX_TEXTURES; i++) {
textureManager[i].active = false;
}
}
int addTextureToManager(char *filename) {
int i = 0;
for(i = 0; i < MAX_TEXTURES; i++) {
if(!textureManager[i].active) {
textureManager[i].bmp = al_load_bitmap(filename);
textureManager[i].active = true;
if(!textureManager[i].bmp) {
printf("Error loading texture: %s", filename);
return -1;
}
return i;
}
}
return -1;
}
int createAnimation(anim *a, char *filename, int w, int h) {
int textureId = addTextureToManager(filename);
if(textureId > -1) {
a->sprite = textureManager[textureId].bmp;
a->w = w;
a->h = h;
a->numFrames = al_get_bitmap_width(a->sprite) / w;
printf("Animation loaded with %i frames, given resource id: %i\n", a->numFrames, textureId);
} else {
printf("Texture manager full\n");
return 1;
}
return 0;
}
void playAnimation(anim *a, float delay) {
a->curFrame = 0;
a->frameCount = 0;
a->delay = delay;
}
void updateAnimation(anim *a) {
a->frameCount ++;
if(a->frameCount >= a->delay) {
a->frameCount = 0;
a->curFrame ++;
if(a->curFrame >= a->numFrames) {
a->curFrame = 0;
}
}
}
You may want to consider a more flexible Animation structure that contains an array of Frame structures, as sketched below. Each Frame structure could contain the frame delay, an x/y hotspot offset, etc. This way, different frames of the same animation could have different sizes and delays. But if you don't need those features, then what you're doing is fine.
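Something like this, for instance (purely illustrative field names, reusing ANIM_MAX_FRAMES from your header):
struct frame {
    ALLEGRO_BITMAP *bmp; /* or an index/region into a shared sprite sheet */
    int x_offset, y_offset; /* per-frame hotspot offset */
    float delay; /* per-frame delay */
};
typedef struct tAnim {
    struct frame frames[ANIM_MAX_FRAMES];
    int curFrame, numFrames;
    float frameCount;
} anim;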
I assume you'll be running the logic at a fixed frame rate (constant # of logical frames per second)? If so, then the delay parameters should work out well.
A quick comment regarding your code:
textureManager[i].active = true;
You probably shouldn't mark it as active until after you've checked if the bitmap loaded.
Also note that Allegro 4.9/5.0 is fully backed by OpenGL or D3D textures and, as such, large bitmaps will fail to load on some video cards! This could be a problem if you are generating large sprite sheets. As of the current version, you have to work around it yourself.
You could do something like:
al_set_new_bitmap_flags(ALLEGRO_MEMORY_BITMAP);
ALLEGRO_BITMAP *sprite_sheet = al_load_bitmap("sprites.png");
al_set_new_bitmap_flags(0);
if (!sprite_sheet) return -1; // error
// loop over sprite sheet, creating new video bitmaps for each frame
for (i = 0; i < num_sprites; ++i)
{
animation.frame[i].bmp = al_create_bitmap( ... );
al_set_target_bitmap(animation.frame[i].bmp);
al_draw_bitmap_region( sprite_sheet, ... );
}
al_destroy_bitmap(sprite_sheet);
al_set_target_bitmap(al_get_backbuffer());
To be clear: this is a video card limitation. So a large sprite sheet may work on your computer but fail to load on another. The above approach loads the sprite sheet into a memory bitmap (essentially guaranteed to succeed) and then creates a new, smaller hardware accelerated video bitmap per frame.
Are you sure you need a pointer to a pointer for ALLEGRO_BITMAP **sprite; in anim?
IIRC, Allegro bitmap handles are pointers already, so there is no need to double-reference them, since you seem to only want to store one bitmap per animation.
You ought to use ALLEGRO_BITMAP *sprite; in anim.
I do not see any other problems with your code.