Improper TIFF image generated from RGB888 data - C

I'm trying to convert RGB888 image data to a TIFF image, but the code generates an improper image.
I'm reading the RGB data from a text file. Here is the output image:
the black region shouldn't be there; it seems the code is forcing low RGB values to zero.
I'm creating the TIFF image without an alpha channel. Please help me understand the issue.
TIFF *out = TIFFOpen("new.tif", "w");
int sampleperpixel = 3;
uint32 width = 320;
uint32 height = 240;
unsigned char image[width * height * sampleperpixel];
int pixval;
int count = 0;
FILE *ptr = fopen("data.txt", "r");
if (ptr != NULL)
{
    while (count < width * height)
    {
        fscanf(ptr, "%d", &pixval);
        *(image + count) = (unsigned char)pixval;
        count++;
    }
}
printf("%d\n", count);
TIFFSetField(out, TIFFTAG_IMAGEWIDTH, width);                // set the width of the image
TIFFSetField(out, TIFFTAG_IMAGELENGTH, height);              // set the height of the image
TIFFSetField(out, TIFFTAG_SAMPLESPERPIXEL, sampleperpixel);  // set number of channels per pixel
TIFFSetField(out, TIFFTAG_BITSPERSAMPLE, 8);                 // set the size of the channels
TIFFSetField(out, TIFFTAG_ORIENTATION, ORIENTATION_TOPLEFT); // set the origin of the image
// Some other essential fields to set that you do not have to understand for now.
TIFFSetField(out, TIFFTAG_PLANARCONFIG, PLANARCONFIG_CONTIG);
TIFFSetField(out, TIFFTAG_PHOTOMETRIC, PHOTOMETRIC_RGB);
tsize_t linebytes = sampleperpixel * width;
unsigned char *buf = NULL;
//if (TIFFScanlineSize(out) < linebytes)
//    buf = (unsigned char *)_TIFFmalloc(linebytes);
//else
buf = (unsigned char *)_TIFFmalloc(TIFFScanlineSize(out));
TIFFSetField(out, TIFFTAG_ROWSPERSTRIP, TIFFDefaultStripSize(out, width * sampleperpixel));
uint32 row;
// Now writing image to the file one strip at a time
for (row = 0; row < height; row++)
{
    //memcpy(buf, &image[(height-row-1)*linebytes], linebytes); // check the index here, and figure out why not using h*linebytes
    memcpy(buf, &image[row * linebytes], linebytes);
    if (TIFFWriteScanline(out, buf, row, 0) < 0)
        break;
}
TIFFClose(out);
if (buf)
    _TIFFfree(buf);

I found your example in this tutorial. Since you are not the original author, you should always post a link, both to help others and to avoid plagiarism.
The example works great except for the part you added. To read the file into a byte array, use this instead (after you read what fread does):
fread(image, 1, sizeof image, ptr);
In your original code, you omitted sampleperpixel from the loop bound:
while (count < width * height * sampleperpixel) ...
so you read only one third of the image.

Related

How to view/save AVFrame have format AV_PIX_FMT_YUVJ420P to file

I have an AVFrame and I want to save it to a file. If I only store frame->data[0] to the file, the image is a grey image. How do I view it in full color? I'm using the C language.
Do you have any suggestions on what I should read to understand how to do these things by myself?
A relatively simple way to save and view the image is to write the Y, U and V (planar) data to a binary file, and then use the FFmpeg CLI to convert the binary file to RGB.
Some background:
yuvj420p in FFmpeg (libav) terminology is the YUV420 "full range" format.
I suppose the j in yuvj comes from JPEG - JPEG images use the "full range" YUV420 format.
Most video files use the "limited range" (or TV range) YUV format.
In "limited range", the Y range is [16, 235], and the U and V ranges are [16, 240].
In "full range", Y, U and V all range over [0, 255].
yuvj420p is deprecated, and is supposed to be expressed as yuv420p combined with dst_range 1 (or src_range 1) in the FFmpeg CLI. I never looked for a way to define "full range" in C.
yuvj420p in FFmpeg (libav) is a "planar" format:
there are separate planes for the Y channel, the U channel and the V channel.
The Y plane is stored at full resolution, while U and V are down-scaled by a factor of 2 in each axis.
Illustration:
Y - data[0]: YYYYYYYYYYYY
             YYYYYYYYYYYY
             YYYYYYYYYYYY
             YYYYYYYYYYYY

U - data[1]: UUUUUU
             UUUUUU
             UUUUUU

V - data[2]: VVVVVV
             VVVVVV
             VVVVVV
In C, each "plane" is stored in a separate buffer in memory.
When writing the data to a binary file, we may simply write the buffers to the file one after the other.
For demonstration, I am reusing one of my earlier answers: I copied and pasted the complete answer, replacing YUV420 with YUVJ420.
In the example, the input format is NV12 (and I kept it).
The input format is irrelevant (you may ignore it) - only the output format matters for your question.
I have created a "self contained" code sample that demonstrates the conversion from NV12 to YUV420 (yuvj420p) using sws_scale.
Start by building synthetic input frame using FFmpeg (command line tool).
The command creates 320x240 video frame in raw NV12 format:
ffmpeg -y -f lavfi -i testsrc=size=320x240:rate=1 -vcodec rawvideo -pix_fmt nv12 -frames 1 -f rawvideo nv12_image.bin
The next code sample applies the following stages:
Allocate memory for the source frame (in NV12 format).
Read NV12 data from binary file (for testing).
Allocate memory for the destination frame (in YUV420 / yuvj420 format).
Apply color space conversion (using sws_scale).
Write the converted YUV420 (yuvj420) data to binary file (for testing).
Here is the complete code:
//Use extern "C", because the code is built as C++ (cpp file) and not C.
extern "C"
{
#include <libswscale/swscale.h>
#include <libavformat/avformat.h>
#include <libswresample/swresample.h>
#include <libavutil/pixdesc.h>
#include <libavutil/imgutils.h>
}

int main()
{
    int width = 320;
    int height = 240;   //The code sample assumes height is even.
    int align = 0;
    AVPixelFormat srcPxlFormat = AV_PIX_FMT_NV12;
    AVPixelFormat dstPxlFormat = AV_PIX_FMT_YUVJ420P;
    int sts;

    //Source frame allocation
    ////////////////////////////////////////////////////////////////////////////
    AVFrame* pNV12Frame = av_frame_alloc();
    pNV12Frame->format = srcPxlFormat;
    pNV12Frame->width = width;
    pNV12Frame->height = height;
    sts = av_frame_get_buffer(pNV12Frame, align);
    if (sts < 0)
    {
        return -1;  //Error!
    }
    ////////////////////////////////////////////////////////////////////////////

    //Read NV12 data from binary file (for testing)
    ////////////////////////////////////////////////////////////////////////////
    //Use FFmpeg for building raw NV12 image (used as input).
    //ffmpeg -y -f lavfi -i testsrc=size=320x240:rate=1 -vcodec rawvideo -pix_fmt nv12 -frames 1 -f rawvideo nv12_image.bin
    FILE* f = fopen("nv12_image.bin", "rb");
    if (f == NULL)
    {
        return -1;  //Error!
    }

    //Read Y channel from nv12_image.bin (Y channel size is width x height).
    //Reading row by row is required in rare cases when pNV12Frame->linesize[0] != width
    uint8_t* Y = pNV12Frame->data[0];   //Pointer to Y color channel of the NV12 frame.
    for (int row = 0; row < height; row++)
    {
        fread(Y + (uintptr_t)row * pNV12Frame->linesize[0], 1, width, f);   //Read row (width pixels) to Y0.
    }

    //Read UV channel from nv12_image.bin (UV channel size is width x height/2).
    uint8_t* UV = pNV12Frame->data[1];  //Pointer to UV color channels of the NV12 frame (ordered as UVUVUVUV...).
    for (int row = 0; row < height / 2; row++)
    {
        fread(UV + (uintptr_t)row * pNV12Frame->linesize[1], 1, width, f);  //Read row (width pixels) to UV0.
    }
    fclose(f);
    ////////////////////////////////////////////////////////////////////////////

    //Destination frame allocation
    ////////////////////////////////////////////////////////////////////////////
    AVFrame* pYUV420Frame = av_frame_alloc();
    pYUV420Frame->format = dstPxlFormat;
    pYUV420Frame->width = width;
    pYUV420Frame->height = height;
    sts = av_frame_get_buffer(pYUV420Frame, align);
    if (sts < 0)
    {
        return -1;  //Error!
    }
    ////////////////////////////////////////////////////////////////////////////

    //Color space conversion
    ////////////////////////////////////////////////////////////////////////////
    SwsContext* sws_context = sws_getContext(width,
                                             height,
                                             srcPxlFormat,
                                             width,
                                             height,
                                             dstPxlFormat,
                                             SWS_FAST_BILINEAR,
                                             NULL,
                                             NULL,
                                             NULL);
    if (sws_context == NULL)
    {
        return -1;  //Error!
    }

    sts = sws_scale(sws_context,             //struct SwsContext* c,
                    pNV12Frame->data,        //const uint8_t* const srcSlice[],
                    pNV12Frame->linesize,    //const int srcStride[],
                    0,                       //int srcSliceY,
                    pNV12Frame->height,      //int srcSliceH,
                    pYUV420Frame->data,      //uint8_t* const dst[],
                    pYUV420Frame->linesize); //const int dstStride[]
    if (sts != pYUV420Frame->height)
    {
        return -1;  //Error!
    }
    ////////////////////////////////////////////////////////////////////////////

    //Write YUV420 (yuvj420p) data to binary file (for testing)
    ////////////////////////////////////////////////////////////////////////////
    //Use FFmpeg for converting the binary image to PNG after saving the data.
    //ffmpeg -y -f rawvideo -video_size 320x240 -pixel_format yuvj420p -i yuvj420_image.bin -pix_fmt rgb24 rgb_image.png
    f = fopen("yuvj420_image.bin", "wb");
    if (f == NULL)
    {
        return -1;  //Error!
    }

    //Write Y channel to yuvj420_image.bin (Y channel size is width x height).
    //Writing row by row is required in rare cases when pYUV420Frame->linesize[0] != width
    Y = pYUV420Frame->data[0];  //Pointer to Y color channel of the YUV420 frame.
    for (int row = 0; row < height; row++)
    {
        fwrite(Y + (uintptr_t)row * pYUV420Frame->linesize[0], 1, width, f);        //Write row (width pixels) to file.
    }

    //Write U channel to yuvj420_image.bin (U channel size is width/2 x height/2).
    uint8_t* U = pYUV420Frame->data[1]; //Pointer to U color channel of the YUV420 frame.
    for (int row = 0; row < height / 2; row++)
    {
        fwrite(U + (uintptr_t)row * pYUV420Frame->linesize[1], 1, width / 2, f);    //Write row (width/2 pixels) to file.
    }

    //Write V channel to yuvj420_image.bin (V channel size is width/2 x height/2).
    uint8_t* V = pYUV420Frame->data[2]; //Pointer to V color channel of the YUV420 frame.
    for (int row = 0; row < height / 2; row++)
    {
        fwrite(V + (uintptr_t)row * pYUV420Frame->linesize[2], 1, width / 2, f);    //Write row (width/2 pixels) to file.
    }
    fclose(f);
    ////////////////////////////////////////////////////////////////////////////

    //Cleanup
    ////////////////////////////////////////////////////////////////////////////
    sws_freeContext(sws_context);
    av_frame_free(&pYUV420Frame);
    av_frame_free(&pNV12Frame);
    ////////////////////////////////////////////////////////////////////////////

    return 0;
}
The execution shows a warning message (that we may ignore):
[swscaler @ 000002a19227e640] deprecated pixel format used, make sure you did set range correctly
For viewing the output as colored image:
After executing the code, execute FFmpeg (command line tool).
The following command converts the raw binary frame (in YUV420 / yuvj420p format) to PNG (in RGB format).
ffmpeg -y -f rawvideo -video_size 320x240 -pixel_format yuvj420p -i yuvj420_image.bin -pix_fmt rgb24 rgb_image.png
Sample output (after converting from yuvj420p to PNG image file format):
AV_PIX_FMT_YUVJ420P is a planar format.
data[0] is just the Y plane (grayscale); for the full image with color, you also need to take data[1] and data[2] into consideration - the U and V parts of the frame, respectively.
Also note that this format (AV_PIX_FMT_YUVJ420P) is deprecated in favor of the more common AV_PIX_FMT_YUV420P; use that one if it's up to you.
You must convert to AV_PIX_FMT_BGRA:
AVFrame* frameYUV;  //frame YUVJ420P
AVFrame* frameGRB = av_frame_alloc();
frameGRB->width = frameYUV->width;
frameGRB->height = frameYUV->height;
frameGRB->format = AV_PIX_FMT_BGRA;
av_frame_get_buffer(frameGRB, 0);

SwsContext *sws_context = sws_getContext(frameYUV->width, frameYUV->height, AV_PIX_FMT_YUVJ420P,
                                         frameGRB->width, frameGRB->height, AV_PIX_FMT_BGRA,
                                         SWS_BICUBIC, NULL, NULL, NULL);
if (sws_context != NULL) {
    sws_scale(sws_context, frameYUV->data, frameYUV->linesize, 0, frameYUV->height,
              frameGRB->data, frameGRB->linesize);
}
The pixel array is:
void* imageBuff = (void*)frameGRB->data[0];
Save the image to a file:
HANDLE hFileBmp = CreateFile(szFilePath, GENERIC_READ | GENERIC_WRITE, FILE_SHARE_READ, NULL, CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
if (hFileBmp != INVALID_HANDLE_VALUE) {
    int iWidth = frameGRB->width, iHeight = frameGRB->height;

    BITMAPINFOHEADER BitmapInfoHeader;
    ZeroMemory(&BitmapInfoHeader, sizeof(BitmapInfoHeader));
    BitmapInfoHeader.biSize = sizeof(BITMAPINFOHEADER);
    BitmapInfoHeader.biWidth = iWidth;
    BitmapInfoHeader.biHeight = -iHeight;   // negative height: rows stored top-down
    BitmapInfoHeader.biPlanes = 1;
    BitmapInfoHeader.biBitCount = 32;
    BitmapInfoHeader.biCompression = BI_RGB;

    BITMAPFILEHEADER BitmapFileHeader;
    ZeroMemory(&BitmapFileHeader, sizeof(BitmapFileHeader));
    BitmapFileHeader.bfOffBits = sizeof(BITMAPFILEHEADER) + sizeof(BITMAPINFOHEADER);
    DWORD dwBitmapSize = iWidth * 4 * iHeight;
    BitmapFileHeader.bfSize = dwBitmapSize + BitmapFileHeader.bfOffBits;
    BitmapFileHeader.bfType = 0x4D42;   //"BM"

    DWORD dwBytesWritten = 0;
    if (WriteFile(hFileBmp, &BitmapFileHeader, sizeof(BitmapFileHeader), &dwBytesWritten, NULL) == TRUE) {
        if (WriteFile(hFileBmp, &BitmapInfoHeader, sizeof(BitmapInfoHeader), &dwBytesWritten, NULL) == TRUE) {
            WriteFile(hFileBmp, imageBuff, dwBitmapSize, &dwBytesWritten, NULL);
        }
    }
    CloseHandle(hFileBmp);
}

Calculating the width of a UTF-8 string rendered by FreeType

I rendered a UTF-8 presentation-form-B string (e.g. "\xef\xbb\x9d\xef\xba\x8e\xef\xaf\xbe\xef\xba\xad\x20") using FreeType, and for alignment calculations I need to retrieve the width of the rendered string's glyphs.
This is my code, and the obtained width value seems invalid. What is the problem, and how can I calculate the exact width of the rendered string without using HarfBuzz or any other higher-level library?
size_t len = utf8strlen(text);
FT_Library library;
FT_Face face;
FT_GlyphSlot slot;
FT_Error error;
int row = 0;
int wdt = 0;
int pch = 0;

error = FT_Init_FreeType(&library);                  /* initialize library */
error = FT_New_Face(library, "font.ttf", 0, &face);  /* create face object */
error = FT_Set_Pixel_Sizes(face, 52, 24);

for (size_t cntr = 0; cntr < len; cntr++)
{
    /* load glyph image into the slot (erase previous one) */
    error = FT_Load_Char(face, text[cntr], FT_LOAD_RENDER);
    if (error)
        continue;   /* ignore errors */

    // draw glyph image anti-aliased
    FT_Render_Glyph(face->glyph, FT_RENDER_MODE_NORMAL);

    // get dimensions of bitmap
    row += face->glyph->bitmap.rows;
    wdt += face->glyph->bitmap.width;
    pch += face->glyph->bitmap.pitch;
}

Pixel manipulation using SDL

I am trying to manipulate pixels using SDL and have managed to read them so far. Below is my sample code. When I print with printf("\npixelvalue is is : %d",MyPixel); I get values like
11275780
11275776
etc.
I know these are not in hex form, but how do I manipulate them - say I want to filter just the blue color out? Secondly, after manipulation, how do I generate the new image?
#include "SDL.h"

int main(int argc, char* argv[])
{
    SDL_Surface *screen, *image;
    SDL_Event event;
    Uint8 *keys;
    int done = 0;

    if (SDL_Init(SDL_INIT_VIDEO) == -1)
    {
        printf("Can't init SDL: %s\n", SDL_GetError());
        exit(1);
    }
    atexit(SDL_Quit);
    SDL_WM_SetCaption("sample1", "app.ico");

    /* obtain the SDL surface of the video card */
    screen = SDL_SetVideoMode(640, 480, 24, SDL_HWSURFACE);
    if (screen == NULL)
    {
        printf("Can't set video mode: %s\n", SDL_GetError());
        exit(1);
    }

    printf("Loading here");
    /* load BMP file */
    image = SDL_LoadBMP("testa.bmp");
    Uint32* pixels = (Uint32*)image->pixels;
    int width = image->w;
    int height = image->h;
    printf("Width is : %d", image->w);
    for (int iH = 1; iH <= height; iH++)
        for (int iW = 1; iW <= width; iW++)
        {
            printf("\nIh is : %d", iH);
            printf("\nIw is : %d", iW);
            Uint32* MyPixel = pixels + ((iH - 1) + image->w) + iW;
            printf("\npixelvalue is is : %d", MyPixel);
        }
    if (image == NULL) {
        printf("Can't load image of tux: %s\n", SDL_GetError());
        exit(1);
    }

    /* Blit image to the video surface */
    SDL_BlitSurface(image, NULL, screen, NULL);
    SDL_UpdateRect(screen, 0, 0, screen->w, screen->h);

    /* free the image if it is no longer needed */
    SDL_FreeSurface(image);

    /* process the keyboard event */
    while (!done)
    {
        // Poll input queue, run keyboard loop
        while (SDL_PollEvent(&event))
        {
            if (event.type == SDL_QUIT)
            {
                done = 1;
                break;
            }
        }
        keys = SDL_GetKeyState(NULL);
        if (keys[SDLK_q])
        {
            done = 1;
        }
        // Release CPU for others
        SDL_Delay(100);
    }

    // Release memory and quit SDL
    SDL_FreeSurface(screen);
    SDL_Quit();
    return 0;
}
Use SDL_MapRGB and SDL_MapRGBA to sort the colors out; SDL will filter them for you, based on the surface format. Just like this:
Uint32 rawpixel = getpixel(surface, x, y);
Uint8 red, green, blue;
SDL_GetRGB(rawpixel, surface->format, &red, &green, &blue);

You are printing the value of the pointer MyPixel. To get the pixel value you have to dereference the pointer, like this: *MyPixel. The printf would then look like this:
printf("\npixelvalue is : %d and the address of that pixel is: %p\n", *MyPixel, MyPixel);
Other errors:
Your for loops are incorrect. You should loop from 0 to less than width or height, or else you will read uninitialized memory.
You didn't lock the surface. Although you are only reading the pixels and nothing should go wrong, it is still not correct.
The test of whether the image pointer is valid comes after you are already using the pointer. Put the test right after the initialization.

If I recall correctly, I used SDL_gfx for pixel manipulation. It also contains functions for drawing circles, ovals, etc.

How to extract PPM Image Properties from an ImageMagick Wand using C?

In order to convert almost any type of image into a PPM, I'm using ImageMagick's MagickWand API.
From the wand, how do I extract the PPM properties: width, height, modval and the raw RGB data?
Here is some skeleton code.
Many thanks in advance for reading the question.
/* Read an image. */
MagickWandGenesis();
magick_wand = NewMagickWand();
status = MagickReadImage(magick_wand, argv[1]);
if (status == MagickFalse)
ThrowWandException(magick_wand);
/* TODO convert to P6 PPM */
/* TODO get PPM properties */
ppm->width = ...
ppm->height = ...
ppm->modval = 3 * ppm->width;
ppm->data = malloc(ppm->width * ppm->height * 3);
/* TODO fill ppm->data */
From the ImageMagick forum:
width = MagickGetImageWidth(magick_wand);
height = MagickGetImageHeight(magick_wand);
ppm->width = width;
ppm->height = height;
ppm->modval = 3 * width;
ppm->data = malloc(3 * width * height);
status = MagickExportImagePixels(magick_wand, 0, 0, width, height, "RGB",
                                 CharPixel, ppm->data);

Writing AVI files in OpenCV

There are examples on the net, and code given in Learning OpenCV (O'Reilly).
After many attempts, the out.avi file is written with 0 bytes.
I wonder where I went wrong.
The following is the code I used:
int main(int argc, char* argv[])
{
    CvCapture* input = cvCaptureFromFile(argv[1]);
    IplImage* image = cvRetrieveFrame(input);
    if (!image) {
        printf("Unable to read input");
        return 0;
    }

    CvSize imgSize;
    imgSize.width = image->width;
    imgSize.height = image->height;

    double fps = cvGetCaptureProperty(input, CV_CAP_PROP_FPS);

    CvVideoWriter *writer = cvCreateVideoWriter(
        "out.avi",
        CV_FOURCC('M', 'J', 'P', 'G'),
        fps,
        imgSize);

    IplImage* colourImage;
    //Keep processing frames...
    for (;;) {
        //Get a frame from the input video.
        colourImage = cvQueryFrame(input);
        cvWriteFrame(writer, colourImage);
    }
    cvReleaseVideoWriter(&writer);
    cvReleaseCapture(&input);
}
My bet is that cvCreateVideoWriter returns NULL. Just step through it to see if that's true. In that case, the problem is probably with CV_FOURCC(...), which doesn't find the codec and forces a return 0.
You can try using -1 instead of CV_FOURCC. There will then be a prompt at runtime for you to choose the appropriate codec.
When I googled this problem I found this answer: "OpenCV on Mac OS X doesn't support AVI writing until it is compiled with ffmpeg."
This solution seems to work for me: http://article.gmane.org/gmane.comp.lib.opencv/16005
You need to provide the full path to the movie file in cvCreateVideoWriter. I don't know whether it's only a Mac OS X port issue, but it might be, since QTNewDataReferenceFromFullPathCFString from the QT backend is used.
Hey, this code works in Dev-C++; try it:
#include <cv.h>
#include <highgui.h>
#include <cvaux.h>
#include <cvcam.h>
#include <cxcore.h>

int main()
{
    CvVideoWriter *writer = 0;
    int isColor = 1;
    int fps = 5;        // or 30
    int frameW = 1600;  // 640; 744 for firewire cameras
    int frameH = 1200;  // 480; 480 for firewire cameras
    //writer = cvCreateVideoWriter("out.avi", CV_FOURCC('P','I','M','1'),
    //                             fps, cvSize(frameW, frameH), isColor);
    writer = cvCreateVideoWriter("out.avi", -1,
                                 fps, cvSize(frameW, frameH), isColor);
    IplImage* img = 0;
    img = cvLoadImage("CapturedFrame_0.jpg");
    cvWriteFrame(writer, img);  // add the frame to the file
    img = cvLoadImage("CapturedFrame_1.jpg");
    cvWriteFrame(writer, img);
    img = cvLoadImage("CapturedFrame_2.jpg");
    cvWriteFrame(writer, img);
    img = cvLoadImage("CapturedFrame_3.jpg");
    cvWriteFrame(writer, img);
    img = cvLoadImage("CapturedFrame_4.jpg");
    cvWriteFrame(writer, img);
    img = cvLoadImage("CapturedFrame_5.jpg");
    cvWriteFrame(writer, img);
    cvReleaseVideoWriter(&writer);
    return 0;
}
I compiled and ran it; it works fine.
(I did not see above whether you got your answer or not, but I worked very hard on this particular thing earlier, and suddenly got it working from some code snippets.)
It's a codec issue. Try out all the possible codecs (option -1 in cvCreateVideoWriter). In my case, Microsoft Video 1 worked well.
Maybe you could try inserting printf("Frame found\n") inside the for(;;) loop to see if it is actually capturing frames. Or even better:
if (colourImage == NULL) {
    printf("Warning - got NULL colourImage\n");
    continue;
}
cvNamedWindow("test", 1);
cvShowImage("test", colourImage);
cvWaitKey(0);
cvDestroyWindow("test");
Then see if you get any windows, and whether they contain the right contents.
This code worked fine:
#include <cv.h>
#include <highgui.h>
#include <cvaux.h>
#include <cvcam.h>
#include <cxcore.h>

int main()
{
    CvVideoWriter *writer = 0;
    int isColor = 1;
    int fps = 5;  // or 30
    IplImage* img = 0;
    img = cvLoadImage("animTest_1.bmp");
    int frameW = img->width;   // 640; 744 for firewire cameras
    int frameH = img->height;  // 480; 480 for firewire cameras
    writer = cvCreateVideoWriter("out.avi", -1,
                                 fps, cvSize(frameW, frameH), 1);
    cvWriteFrame(writer, img);  // add the first frame to the file

    char FirstFile[32];
    for (int fileNo = 2; fileNo < 100; fileNo++)  // frame 1 was written above
    {
        sprintf(FirstFile, "animTest_%d.bmp", fileNo);  // build the next filename
        printf(" \n%s .", FirstFile);
        img = cvLoadImage(FirstFile);
        cvWriteFrame(writer, img);
    }
    cvReleaseVideoWriter(&writer);
    return 0;
}
I think the problem you're encountering is that your for loop never ends; therefore, cvReleaseVideoWriter(&writer) and cvReleaseCapture(&input) never get called. Try something like for (int i = 0; i < 200; i++) and see if you end up with a working video.
Video is often written to a temporary file before being finalized on disk. If your file isn't finalized, there won't be anything to see.
Hope that helps.
