How to extract PPM Image Properties from an ImageMagick Wand using C?

In order to convert almost any type of image into a PPM, I'm using ImageMagick's wand API.
From the wand, how do I extract the PPM properties of width, height, modval and the raw RGB data?
Here is some skeleton code.
Many thanks in advance for reading the question.
/* Read an image. */
MagickWandGenesis();
magick_wand = NewMagickWand();
status = MagickReadImage(magick_wand, argv[1]);
if (status == MagickFalse)
    ThrowWandException(magick_wand);

/* TODO convert to P6 PPM */
/* TODO get PPM properties */
ppm->width = ...
ppm->height = ...
ppm->modval = 3 * ppm->width;
ppm->data = malloc(ppm->width * ppm->height * 3);
/* TODO fill ppm->data */

From the ImageMagick forum:
width = MagickGetImageWidth(magick_wand);
height = MagickGetImageHeight(magick_wand);
ppm->width = width;
ppm->height = height;
ppm->modval = 3 * width;
ppm->data = malloc(3 * width * height);
status = MagickExportImagePixels(magick_wand, 0, 0, width, height, "RGB",
                                 CharPixel, ppm->data);
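With the properties filled in, producing an actual P6 file is just a text header followed by the raw bytes. A minimal sketch, assuming ppm->data was filled by MagickExportImagePixels as above (write_ppm_p6 is a hypothetical helper name, not part of the wand API):

#include <stdio.h>

/* Write width*height RGB triplets as a binary P6 PPM. */
static int write_ppm_p6(const char *path, size_t width, size_t height,
                        const unsigned char *data)
{
    FILE *f = fopen(path, "wb");
    if (f == NULL)
        return -1;
    /* P6 header: magic number, dimensions, maximum sample value. */
    fprintf(f, "P6\n%zu %zu\n255\n", width, height);
    /* Raw RGB bytes follow immediately after the header. */
    fwrite(data, 1, width * height * 3, f);
    return fclose(f);
}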

Related

How to view/save an AVFrame with format AV_PIX_FMT_YUVJ420P to a file

I have an AVFrame and I want to save it to a file. If I only store frame->data[0] to a file, the result is a grayscale image; how do I view it in full color? I'm using C.
Do you have any suggestions on what I should read to understand and do these things by myself?
A relatively simple way to save and view the image is writing the Y, U and V (planar) data to a binary file, and using the FFmpeg CLI to convert the binary file to RGB.
Some background:
yuvj420p in FFmpeg (libav) terminology refers to the YUV420 "full range" format.
I suppose the j in yuvj comes from JPEG - JPEG images use the "full range" YUV420 format.
Most video files use the "limited range" (or TV range) YUV format.
In "limited range", the Y range is [16, 235] and the U and V ranges are [16, 240].
In "full range", the Y, U and V ranges are all [0, 255].
yuvj420p is deprecated, and is supposed to be expressed as yuv420p combined with dst_range 1 (or src_range 1) in the FFmpeg CLI. I never looked for a way to define "full range" in C.
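For completeness, the libav C API does expose range controls; a sketch, assuming an already allocated frame and an existing SwsContext (both hypothetical here):

#include <libavutil/frame.h>
#include <libswscale/swscale.h>

/* Sketch: mark a frame and an existing SwsContext as full range
   (the yuv420p + full-range replacement for the deprecated yuvj420p). */
static void set_full_range(AVFrame *frame, struct SwsContext *ctx)
{
    frame->color_range = AVCOL_RANGE_JPEG;  /* full range */

    /* srcRange/dstRange: 0 = limited (MPEG), 1 = full (JPEG).
       brightness/contrast/saturation are 16.16 fixed point defaults. */
    const int *coeffs = sws_getCoefficients(SWS_CS_ITU601);
    sws_setColorspaceDetails(ctx, coeffs, 1, coeffs, 1,
                             0, 1 << 16, 1 << 16);
}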
yuvj420p in FFmpeg (libav) is a "planar" format:
there are separate planes for the Y channel, the U channel and the V channel.
The Y plane is given at full resolution, and U and V are downscaled by a factor of 2 in each axis.
Illustration:
Y - data[0]: YYYYYYYYYYYY
             YYYYYYYYYYYY
             YYYYYYYYYYYY
             YYYYYYYYYYYY

U - data[1]: UUUUUU
             UUUUUU
             UUUUUU

V - data[2]: VVVVVV
             VVVVVV
             VVVVVV
In C, each "plane" is stored in a separate buffer in memory.
When writing the data to a binary file, we may simply write the buffers to the file one after the other.
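For example, a 320x240 yuvj420p frame occupies 320*240 + 2*(160*120) = 115200 bytes in the file (width*height*3/2 in general).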
For demonstration, I am reusing one of my previous answers: I copied and pasted the complete answer and replaced YUV420 with YUVJ420.
In the example, the input format is NV12 (and I kept it).
The input format is irrelevant (you may ignore it); only the output format is relevant for your question.
I have created a "self contained" code sample that demonstrates the conversion from NV12 to YUV420 (yuvj420p) using sws_scale.
Start by building a synthetic input frame using FFmpeg (the command line tool).
The command creates a 320x240 video frame in raw NV12 format:
ffmpeg -y -f lavfi -i testsrc=size=320x240:rate=1 -vcodec rawvideo -pix_fmt nv12 -frames 1 -f rawvideo nv12_image.bin
The next code sample performs the following stages:
Allocate memory for the source frame (in NV12 format).
Read NV12 data from binary file (for testing).
Allocate memory for the destination frame (in YUV420 / yuvj420 format).
Apply color space conversion (using sws_scale).
Write the converted YUV420 (yuvj420) data to binary file (for testing).
Here is the complete code:
//Use extern "C", because the code is built as C++ (cpp file) and not C.
extern "C"
{
#include <libswscale/swscale.h>
#include <libavformat/avformat.h>
#include <libswresample/swresample.h>
#include <libavutil/pixdesc.h>
#include <libavutil/imgutils.h>
}
int main()
{
    int width = 320;
    int height = 240;   //The code sample assumes height is even.
    int align = 0;

    AVPixelFormat srcPxlFormat = AV_PIX_FMT_NV12;
    AVPixelFormat dstPxlFormat = AV_PIX_FMT_YUVJ420P;

    int sts;

    //Source frame allocation
    ////////////////////////////////////////////////////////////////////////////
    AVFrame* pNV12Frame = av_frame_alloc();
    pNV12Frame->format = srcPxlFormat;
    pNV12Frame->width = width;
    pNV12Frame->height = height;

    sts = av_frame_get_buffer(pNV12Frame, align);
    if (sts < 0)
    {
        return -1;  //Error!
    }
    ////////////////////////////////////////////////////////////////////////////

    //Read NV12 data from binary file (for testing)
    ////////////////////////////////////////////////////////////////////////////
    //Use FFmpeg for building a raw NV12 image (used as input):
    //ffmpeg -y -f lavfi -i testsrc=size=320x240:rate=1 -vcodec rawvideo -pix_fmt nv12 -frames 1 -f rawvideo nv12_image.bin
    FILE* f = fopen("nv12_image.bin", "rb");
    if (f == NULL)
    {
        return -1;  //Error!
    }

    //Read Y channel from nv12_image.bin (Y channel size is width x height).
    //Reading row by row is required in the rare case when pNV12Frame->linesize[0] != width.
    uint8_t* Y = pNV12Frame->data[0];   //Pointer to Y color channel of the NV12 frame.
    for (int row = 0; row < height; row++)
    {
        fread(Y + (uintptr_t)row * pNV12Frame->linesize[0], 1, width, f);   //Read one row (width pixels).
    }

    //Read UV channel from nv12_image.bin (UV channel size is width x height/2).
    uint8_t* UV = pNV12Frame->data[1];  //Pointer to UV color channels of the NV12 frame (ordered as UVUVUVUV...).
    for (int row = 0; row < height / 2; row++)
    {
        fread(UV + (uintptr_t)row * pNV12Frame->linesize[1], 1, width, f);  //Read one row (width bytes of interleaved UV).
    }

    fclose(f);
    ////////////////////////////////////////////////////////////////////////////

    //Destination frame allocation
    ////////////////////////////////////////////////////////////////////////////
    AVFrame* pYUV420Frame = av_frame_alloc();
    pYUV420Frame->format = dstPxlFormat;
    pYUV420Frame->width = width;
    pYUV420Frame->height = height;

    sts = av_frame_get_buffer(pYUV420Frame, align);
    if (sts < 0)
    {
        return -1;  //Error!
    }
    ////////////////////////////////////////////////////////////////////////////

    //Color space conversion
    ////////////////////////////////////////////////////////////////////////////
    SwsContext* sws_context = sws_getContext(width,
                                             height,
                                             srcPxlFormat,
                                             width,
                                             height,
                                             dstPxlFormat,
                                             SWS_FAST_BILINEAR,
                                             NULL,
                                             NULL,
                                             NULL);
    if (sws_context == NULL)
    {
        return -1;  //Error!
    }

    sts = sws_scale(sws_context,              //struct SwsContext* c,
                    pNV12Frame->data,         //const uint8_t* const srcSlice[],
                    pNV12Frame->linesize,     //const int srcStride[],
                    0,                        //int srcSliceY,
                    pNV12Frame->height,       //int srcSliceH,
                    pYUV420Frame->data,       //uint8_t* const dst[],
                    pYUV420Frame->linesize);  //const int dstStride[]);
    if (sts != pYUV420Frame->height)
    {
        return -1;  //Error!
    }
    ////////////////////////////////////////////////////////////////////////////

    //Write YUV420 (yuvj420p) data to binary file (for testing)
    ////////////////////////////////////////////////////////////////////////////
    //Use FFmpeg for converting the binary image to PNG after saving the data:
    //ffmpeg -y -f rawvideo -video_size 320x240 -pixel_format yuvj420p -i yuvj420_image.bin -pix_fmt rgb24 rgb_image.png
    f = fopen("yuvj420_image.bin", "wb");
    if (f == NULL)
    {
        return -1;  //Error!
    }

    //Write Y channel to yuvj420_image.bin (Y channel size is width x height).
    //Writing row by row is required in the rare case when pYUV420Frame->linesize[0] != width.
    Y = pYUV420Frame->data[0];  //Pointer to Y color channel of the YUV420 frame.
    for (int row = 0; row < height; row++)
    {
        fwrite(Y + (uintptr_t)row * pYUV420Frame->linesize[0], 1, width, f);    //Write one row (width pixels).
    }

    //Write U channel to yuvj420_image.bin (U channel size is width/2 x height/2).
    uint8_t* U = pYUV420Frame->data[1]; //Pointer to U color channel of the YUV420 frame.
    for (int row = 0; row < height / 2; row++)
    {
        fwrite(U + (uintptr_t)row * pYUV420Frame->linesize[1], 1, width / 2, f);    //Write one row (width/2 pixels).
    }

    //Write V channel to yuvj420_image.bin (V channel size is width/2 x height/2).
    uint8_t* V = pYUV420Frame->data[2]; //Pointer to V color channel of the YUV420 frame.
    for (int row = 0; row < height / 2; row++)
    {
        fwrite(V + (uintptr_t)row * pYUV420Frame->linesize[2], 1, width / 2, f);    //Write one row (width/2 pixels).
    }

    fclose(f);
    ////////////////////////////////////////////////////////////////////////////

    //Cleanup
    ////////////////////////////////////////////////////////////////////////////
    sws_freeContext(sws_context);
    av_frame_free(&pYUV420Frame);
    av_frame_free(&pNV12Frame);
    ////////////////////////////////////////////////////////////////////////////
    return 0;
}
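For reference, the sample builds as C++. Assuming FFmpeg's pkg-config files are installed (main.cpp is a placeholder file name), something along these lines should work:
g++ main.cpp -o nv12_to_yuvj420 $(pkg-config --cflags --libs libswscale libavformat libavutil)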
The execution shows a warning message (that we may ignore):
[swscaler @ 000002a19227e640] deprecated pixel format used, make sure you did set range correctly
For viewing the output as colored image:
After executing the code, execute FFmpeg (command line tool).
The following command converts the raw binary frame (in YUV420 / yuvj420p format) to PNG (in RGB format).
ffmpeg -y -f rawvideo -video_size 320x240 -pixel_format yuvj420p -i yuvj420_image.bin -pix_fmt rgb24 rgb_image.png
Sample output (after converting from yuvj420p to the PNG image file format; image omitted).
AV_PIX_FMT_YUVJ420P is a planar format.
data[0] is just the Y plane (grayscale); for the full color image you also need to take into consideration data[1] and data[2], the U and V parts of the frame respectively.
This format (AV_PIX_FMT_YUVJ420P) is deprecated in favor of the more common AV_PIX_FMT_YUV420P (with the full color range signaled separately); use that if it's up to you.
You must convert it to AV_PIX_FMT_BGRA:
AVFrame* frameYUV;  //Frame in YUVJ420P format (assumed to already hold a decoded picture).
AVFrame* frameGRB = av_frame_alloc();
frameGRB->width = frameYUV->width;
frameGRB->height = frameYUV->height;
frameGRB->format = AV_PIX_FMT_BGRA;
av_frame_get_buffer(frameGRB, 0);

SwsContext *sws_context = sws_getContext(frameYUV->width, frameYUV->height, AV_PIX_FMT_YUVJ420P,
                                         frameGRB->width, frameGRB->height, AV_PIX_FMT_BGRA,
                                         SWS_BICUBIC, NULL, NULL, NULL);
if (sws_context != NULL) {
    sws_scale(sws_context, frameYUV->data, frameYUV->linesize, 0, frameYUV->height,
              frameGRB->data, frameGRB->linesize);
}
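(In real code, frameYUV must point at an actual decoded frame before the call, and sws_freeContext(sws_context) plus av_frame_free(&frameGRB) should be called when you are done.)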
The pixel array is:
void* imageBuff = (void*)frameGRB->data[0];
Saving the image to a file (Win32):
HANDLE hFileBmp = CreateFile(szFilePath, GENERIC_READ | GENERIC_WRITE, FILE_SHARE_READ, NULL,
                             CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
if (hFileBmp != INVALID_HANDLE_VALUE) {
    int iWidth = frameGRB->width, iHeight = frameGRB->height;

    BITMAPINFOHEADER BitmapInfoHeader;
    ZeroMemory(&BitmapInfoHeader, sizeof(BitmapInfoHeader));
    BitmapInfoHeader.biSize = sizeof(BITMAPINFOHEADER);
    BitmapInfoHeader.biWidth = iWidth;
    BitmapInfoHeader.biHeight = -iHeight;   //Negative height: top-down bitmap.
    BitmapInfoHeader.biPlanes = 1;
    BitmapInfoHeader.biBitCount = 32;
    BitmapInfoHeader.biCompression = BI_RGB;

    BITMAPFILEHEADER BitmapFileHeader;
    ZeroMemory(&BitmapFileHeader, sizeof(BitmapFileHeader));
    BitmapFileHeader.bfOffBits = sizeof(BITMAPFILEHEADER) + sizeof(BITMAPINFOHEADER);
    DWORD dwBitmapSize = iWidth * 4 * iHeight;
    BitmapFileHeader.bfSize = dwBitmapSize + BitmapFileHeader.bfOffBits;
    BitmapFileHeader.bfType = 0x4D42;   //'BM'

    DWORD dwBytesWritten = 0;
    if (WriteFile(hFileBmp, &BitmapFileHeader, sizeof(BitmapFileHeader), &dwBytesWritten, NULL) == TRUE) {
        if (WriteFile(hFileBmp, &BitmapInfoHeader, sizeof(BitmapInfoHeader), &dwBytesWritten, NULL) == TRUE) {
            WriteFile(hFileBmp, imageBuff, dwBitmapSize, &dwBytesWritten, NULL);
        }
    }
    CloseHandle(hFileBmp);
}
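One caveat: the dump above assumes frameGRB->linesize[0] == width * 4. av_frame_get_buffer may pad rows, in which case packing row by row first is safer; a sketch:

/* Pack possibly padded BGRA rows into a contiguous buffer
   before writing the BMP pixel payload. */
uint8_t* packed = (uint8_t*)malloc((size_t)iWidth * 4 * iHeight);
if (packed != NULL) {
    for (int y = 0; y < iHeight; y++) {
        memcpy(packed + (size_t)y * iWidth * 4,
               frameGRB->data[0] + (size_t)y * frameGRB->linesize[0],
               (size_t)iWidth * 4);
    }
    //...write packed instead of imageBuff, then free(packed)...
}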

How to pass Parameters?

I have been converting convert rose.png -sparse-color barycentric '0,0 black 69,0 white' roseModified.png into the MagickWand C API.
double arguments[6];
arguments[0] = 0.0;
arguments[1] = 0.0;
// arguments[2] = "black";
arguments[2] = 69.0;
arguments[3] = 0.0;
// arguments[5] = "white";
MagickSparseColorImage(wand0, BarycentricColorInterpolate, 4, arguments);
MagickWriteImage(wand0,"rose_cylinder_22.png");
I don't know how to pass the color arguments as doubles; see the definition of MagickSparseColorImage in the MagickWand documentation.
UPDATE:
After I executed convert rose.png -sparse-color barycentric '0,0 black 69,0 white' roseModified.png on the source rose image, I got the expected gradient output (source and result images omitted).
I haven't gotten that output from my C program; there might be something wrong with how white and black are passed.
For sparse colors, you need to convert each color to doubles, one per channel. Depending on how dynamically you need to generate sparse-color points, you may want to build basic stack-management helpers.
Here's an example. (Mind that this is a quick example and can be improved on greatly.)
#include <stdlib.h>
#include <MagickWand/MagickWand.h>

// Let's create a structure to keep track of arguments.
struct arguments {
    size_t count;
    double * values;
};

// Set up the structure, and allocate enough memory for all colors.
void allocate_arguments(struct arguments * stack, size_t size)
{
    stack->count = 0;
    // (2 coords + 3 color channels) * number of colors
    stack->values = malloc(sizeof(double) * (size * 5));
}

// Append a double value to the structure.
void push_double(struct arguments * stack, double value)
{
    stack->values[stack->count++] = value;
}

// Append all parts of a color to the structure.
void push_color(struct arguments * stack, PixelWand * color)
{
    push_double(stack, PixelGetRed(color));
    push_double(stack, PixelGetGreen(color));
    push_double(stack, PixelGetBlue(color));
}

#define NUMBER_OF_COLORS 2

int main(int argc, const char * argv[]) {
    MagickWandGenesis();

    MagickWand * wand;
    PixelWand ** colors;
    struct arguments A;

    allocate_arguments(&A, NUMBER_OF_COLORS);

    colors = NewPixelWands(NUMBER_OF_COLORS);
    PixelSetColor(colors[0], "black");
    PixelSetColor(colors[1], "white");

    // 0,0 black
    push_double(&A, 0);
    push_double(&A, 0);
    push_color(&A, colors[0]);

    // 69,0 white
    push_double(&A, 69);
    push_double(&A, 0);
    push_color(&A, colors[1]);

    // convert rose:
    wand = NewMagickWand();
    MagickReadImage(wand, "rose:");

    // -sparse-color barycentric '0,0 black 69,0 white'
    MagickSparseColorImage(wand, BarycentricColorInterpolate, A.count, A.values);

    MagickWriteImage(wand, "/tmp/output.png");

    MagickWandTerminus();
    return 0;
}
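For reference, the example should compile with something along these lines, assuming ImageMagick 7's pkg-config metadata is installed (sparse_color.c is a placeholder file name):
cc sparse_color.c -o sparse_color $(pkg-config --cflags --libs MagickWand)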

FFmpeg libswscale XBGR32 to NV12 almost working but colors are wrong

I am getting a Linux DMA-BUF from the GPU in XBGR32 format, and I need to use FFmpeg's libswscale on the ARM to convert it to NV12.
I've been able to almost get it working, based on various SO posts and the documentation.
Here is the LCD screen showing the actual output of the GPU, and here is the encoded data from the NV12 frame created by libswscale; it should show the same colors as the screen (photos omitted).
Please note that the pictures were not taken at exactly the same time.
What am I doing wrong?
Here is the relevant part of my code:
/* Prepare to mmap the GBM BO */
union gbm_bo_handle handleUnion = gbm_bo_get_handle(bo);
struct drm_omap_gem_info req;
req.handle = handleUnion.s32;
ret = drmCommandWriteRead(m_DRMController.fd, DRM_OMAP_GEM_INFO, &req, sizeof(req));
if (ret) {
    qDebug() << "drmCommandWriteRead(): Cannot set write/read";
}

// Perform actual memory mapping of GPU output
gpuMmapFrame = (char *)mmap(0, req.size, PROT_READ | PROT_WRITE, MAP_SHARED, m_DRMController.fd, req.offset);
assert(gpuMmapFrame != MAP_FAILED);

// Use ffmpeg to do the SW BGR32 to NV12 conversion
static SwsContext *swsCtx = NULL;
int width = RNDTO2(convertWidth);
int height = RNDTO2(convertHeight);
int ystride = RNDTO32(width);
int uvstride = RNDTO32(width / 2);
int uvsize = uvstride * (height / 2);
void *plane[] = { y, u, u + uvsize, 0 };
int stride[] = { ystride, uvstride, uvstride, 0 };

// Output of GPU is XBGR32
// #define V4L2_PIX_FMT_XBGR32 v4l2_fourcc('X', 'R', '2', '4') /* 32 BGRX-8-8-8-8 */
swsCtx = sws_getCachedContext(swsCtx, convertWidth, convertHeight, AV_PIX_FMT_BGR32,
                              width, height, AV_PIX_FMT_NV12,
                              SWS_FAST_BILINEAR, NULL, NULL, NULL);

int linesize[1] = { convertWidth * 4 };
const uint8_t *inData[1] = { (uint8_t*)gpuMmapFrame };
if (gpuMmapFrame) {
    sws_scale(swsCtx, (const uint8_t *const *)inData, linesize, 0, convertHeight,
              (uint8_t *const *)plane, stride);
}

// Unmap it
munmap(gpuMmapFrame, req.size);

// Now send the frame to the encoder
dce.submitFrameToEncode(swEncodeBufferNV12);

Improper TIFF image generated from RGB888 data

I'm trying to convert RGB888 image data to a TIFF image, but the code generates an improper image.
I'm reading the RGB data from a text file. In the output image (omitted), there is a black region that shouldn't be there; it seems the code is forcing low RGB values to zero.
I'm creating the TIFF image without an alpha channel. Please help me understand the issue.
TIFF *out = TIFFOpen("new.tif", "w");
int sampleperpixel = 3;
uint32 width = 320;
uint32 height = 240;
unsigned char image[width*height*sampleperpixel];
int pixval;
int count = 0;
FILE *ptr = fopen("data.txt", "r");
if (ptr != NULL)
{
    while (count < width*height)
    {
        fscanf(ptr, "%d", &pixval);
        *(image + count) = (unsigned char)pixval;
        count++;
    }
}
printf("%d\n", count);

TIFFSetField(out, TIFFTAG_IMAGEWIDTH, width);                 // set the width of the image
TIFFSetField(out, TIFFTAG_IMAGELENGTH, height);               // set the height of the image
TIFFSetField(out, TIFFTAG_SAMPLESPERPIXEL, sampleperpixel);   // set number of channels per pixel
TIFFSetField(out, TIFFTAG_BITSPERSAMPLE, 8);                  // set the size of the channels
TIFFSetField(out, TIFFTAG_ORIENTATION, ORIENTATION_TOPLEFT);  // set the origin of the image
// Some other essential fields to set that you do not have to understand for now.
TIFFSetField(out, TIFFTAG_PLANARCONFIG, PLANARCONFIG_CONTIG);
TIFFSetField(out, TIFFTAG_PHOTOMETRIC, PHOTOMETRIC_RGB);

tsize_t linebytes = sampleperpixel * width;
unsigned char *buf = NULL;
//if (TIFFScanlineSize(out) < linebytes)
//    buf = (unsigned char *)_TIFFmalloc(linebytes);
//else
buf = (unsigned char *)_TIFFmalloc(TIFFScanlineSize(out));

TIFFSetField(out, TIFFTAG_ROWSPERSTRIP, TIFFDefaultStripSize(out, width*sampleperpixel));

uint32 row;
// Now writing image to the file one strip at a time
for (row = 0; row < height; row++)
{
    //memcpy(buf, &image[(height-row-1)*linebytes], linebytes);  // check the index here, and figure out why not using h*linebytes
    memcpy(buf, &image[row*linebytes], linebytes);
    if (TIFFWriteScanline(out, buf, row, 0) < 0)
        break;
}

TIFFClose(out);
if (buf)
    _TIFFfree(buf);
I found your example in this tutorial. Since you are not the original author, you should always post a link, both to help others and to avoid plagiarism.
The example works great except for the part you added. To read a binary file into a byte array, use this instead (after you read up on what fread does):
fread(image, 1, sizeof image, ptr);
In your original code, you omitted sampleperpixel:
while (count < width*height*sampleperpixel) ...
so you read only one third of the image.
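Since data.txt apparently holds whitespace-separated decimal values rather than raw bytes, a corrected fscanf loop is the more direct fix; a sketch:

/* Read one decimal value per sample, so all three channels
   of every pixel are filled, not just the first third. */
while (count < width * height * sampleperpixel &&
       fscanf(ptr, "%d", &pixval) == 1)
{
    image[count++] = (unsigned char)pixval;
}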

C - Convert GIF to JPG

I need to convert a GIF image to a JPEG image using the C programming language. I searched the web, but I didn't find an example which could help me. Any suggestions are appreciated!
EDIT: I want to do this using a cross-platform open-source library, like SDL.
Try the GD or ImageMagick libraries.
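With ImageMagick's MagickWand API, a minimal conversion is just a read and a write; a sketch, assuming ImageMagick 7 header paths and placeholder file names:

#include <stdio.h>
#include <MagickWand/MagickWand.h>

int main(void)
{
    MagickWandGenesis();
    MagickWand *wand = NewMagickWand();

    /* Read the GIF and write it back out; the JPEG encoder is
       chosen from the output file extension. */
    if (MagickReadImage(wand, "input.gif") == MagickFalse ||
        MagickWriteImage(wand, "output.jpg") == MagickFalse)
        fprintf(stderr, "conversion failed\n");

    wand = DestroyMagickWand(wand);
    MagickWandTerminus();
    return 0;
}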
I found libafterimage to be incredibly simple to use.
In this snippet I also scale the image to at most max_width or at most max_height, while preserving the aspect ratio:
#include <libAfterImage/afterimage.h>

int convert_image_to_jpeg_of_size(const char* infile, const char* outfile,
                                  const double max_width, const double max_height)
{
    ASImage* im;
    ASVisual* asv;
    ASImage* scaled_im;
    double height;
    double width;
    double pixelzoom;
    double proportion;

    im = file2ASImage(infile, 0xFFFFFFFF, SCREEN_GAMMA, 0, ".", NULL);
    if (!im) {
        return 1;
    }

    proportion = (double)im->width / (double)im->height;
    asv = create_asvisual(NULL, 0, 0, NULL);

    if (proportion > 1) {
        /* Oblong. */
        width = max_width;
        pixelzoom = max_width / im->width;
        height = (double)im->height * pixelzoom;
    } else {
        height = max_height;
        pixelzoom = max_height / im->height;
        width = (double)im->width * pixelzoom;
    }

    scaled_im = scale_asimage(asv, im, width, height, ASA_ASImage, 0, ASIMAGE_QUALITY_DEFAULT);

    /* Write the result into the file. */
    ASImage2file(scaled_im, NULL, outfile, ASIT_Jpeg, NULL);

    destroy_asimage(&scaled_im);
    destroy_asimage(&im);
    return 0;
}
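Usage would be along these lines (file names are placeholders):
convert_image_to_jpeg_of_size("photo.gif", "photo.jpg", 640.0, 480.0);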
Not the easiest to use, but the fastest way is almost surely using libavcodec/libavformat from ffmpeg.
