C: Convert array to RGB image

In C, I have a 1D array of unsigned chars (i.e., values between 0 and 255) of length 3*DIM*DIM which represents a DIM*DIM pixel image, where the first 3 entries are the RGB levels of the first pixel, the next 3 entries are the RGB levels of the second pixel, and so on. I would like to save it as a PNG image. What is the easiest, most lightweight method of doing this conversion?
Obviously OpenGL can read and display images of this form (GLUT_RGB), but DIM is larger than the dimensions of my monitor screen, so simply displaying the image and taking a screenshot is not an option.
At the moment, I have been doing the conversion by simply saving the array to a CSV file, loading it in Mathematica, and then exporting it as a PNG, but this is very slow (~8 minutes for a single 7000*7000 pixel image).

There are many excellent third party libraries you can use to convert an array of pixel-data to a picture.
libpng is the long-standing reference library for PNG images.
LodePNG also seems like a good candidate.
Finally, ImageMagick is a great library that supports many different image formats.
All these support C, and are relatively easy to use.
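For instance, LodePNG reduces the whole task to a single call. A minimal sketch, assuming lodepng.c and lodepng.h have been added to the project and that image is the interleaved 3*DIM*DIM buffer described above:

#include <stdio.h>
#include "lodepng.h"

/* image: interleaved 8-bit RGB, 3*dim*dim bytes, rows stored top to bottom */
int save_png(const char *filename, const unsigned char *image, unsigned dim)
{
    unsigned err = lodepng_encode24_file(filename, image, dim, dim);
    if (err) {
        fprintf(stderr, "PNG encode failed: %s\n", lodepng_error_text(err));
        return -1;
    }
    return 0;
}

This should be far faster than the CSV/Mathematica round trip.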

Related

way to know yuv details (dimensions, formats and types)

I have an input.yuv image which I want to use in my code as an input.
I want to know whether it is in 4:2:2, 4:2:0, or 4:4:4 format, whether it is planar or packed, and what its width, height, and stride are.
When I view this image using the https://rawpixels.net/ tool, I can see a perfect grayscale image with dimensions 1152x512. But when I decode it as yuv420p or other options, the colour and luma components are not at the correct offsets, so I get a mixture of colour and grayscale at different offsets (two images on the same screen).
Is there any way to write C code to find the above-mentioned YUV details (dimensions, format, and type)?
Not really. Files with a .yuv extension just contain raw pixel data normally in a planar format.
That would typically be width * height luma pixels followed by either width/2 * height/2 (4:2:0) or width * height/2 (4:2:2) Cb and Cr samples.
They can be 8 or 10 bits per pixel, with 10-bit samples usually stored in 2 bytes. It's really just a case of trial and error to find out what it is.
Occasionally you find all sorts of arrangements of Y, Cb, Cr in files with a .yuv extension. Planar is most common though.
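If you want to automate some of that trial and error, one thing C code can do is compare the file size against what each common layout would occupy for a guessed resolution. A rough sketch of that idea; the candidate list and the resolution you pass in are purely illustrative:

#include <stdio.h>
#include <stdlib.h>

/* Report which common raw-YUV layouts would exactly fill file_size bytes
   for a guessed width and height. A trial-and-error helper, not a detector. */
static void check_layouts(long file_size, long w, long h)
{
    long luma = w * h;
    struct { const char *name; long bytes; } candidates[] = {
        { "4:2:0, 8-bit",                  luma * 3 / 2 },
        { "4:2:2, 8-bit",                  luma * 2     },
        { "4:4:4, 8-bit",                  luma * 3     },
        { "4:2:0, 10-bit in 2-byte words", luma * 3     },
        { "4:2:2, 10-bit in 2-byte words", luma * 4     },
    };
    for (size_t i = 0; i < sizeof candidates / sizeof candidates[0]; i++)
        if (candidates[i].bytes == file_size)
            printf("size matches %s at %ldx%ld\n", candidates[i].name, w, h);
}

int main(int argc, char **argv)
{
    if (argc != 4) {
        fprintf(stderr, "usage: %s file_size width height\n", argv[0]);
        return 1;
    }
    check_layouts(atol(argv[1]), atol(argv[2]), atol(argv[3]));
    return 0;
}

Note that the matches are not unique (for example, 10-bit 4:2:0 and 8-bit 4:4:4 occupy the same number of bytes), so you still have to eyeball the decoded result.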

RGBA png alpha processing

I have an RGBA PNG file that is (I think) the capture of a signature from a digitizing tablet. Extracting the image data, ALL RGB triplets are 0,0,0 and the alpha channel values are non-zero wherever the pixel is to carry a tone in the final image. I get all of that.
This PNG only has a IHDR, IDAT, and IEND chunks.
My first question is: are my RGB pixels considered the foreground or the background? What is the proper terminology to describe this file/image?
Second, what equation do I use to apply the alpha to the RGB? Looking at the alpha values, I can see how to come up with a number, but what general equation would be used to generate the appropriate RGB value, avoiding divide-by-zero or overflow errors if my RGBs had started out with non-zero values?
I have been through the PNG spec and there's something I just don't get.
BTW, I am ultimately producing, in C, a PCL file intended for printing directly to a PCL LaserJet.
The image you display last is the foreground image; there is no foreground and background within a single image.
This link shows how to blend an image with alpha onto another image:
http://en.wikipedia.org/wiki/Alpha_compositing#Alpha_blending
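In integer form, compositing a pixel that has straight (non-premultiplied) alpha over an opaque background boils down to out = (alpha*src + (255 - alpha)*bg) / 255, which can never divide by zero and never overflows 32-bit arithmetic. A sketch assuming 8-bit channels and a white page as the background, which is the natural choice when the output is a printed PCL page:

/* out = (a*src + (255 - a)*bg) / 255, rounded to nearest */
static unsigned char blend_over(unsigned char src, unsigned char a, unsigned char bg)
{
    unsigned v = (unsigned)a * src + (unsigned)(255 - a) * bg;
    return (unsigned char)((v + 127) / 255);
}

For the signature described above, src is always 0 and bg is 255, so the printed gray level reduces to 255 - alpha.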

Can I read a specific image row using libjpeg?

Using libjpeg, if possible, I would like to read a row from the middle of a JPEG image without reading all the preceding rows. Can this be done?
The answer is almost certainly "yes you can, but it will take more effort than you want".
A JPEG image is a stream of markers that contain either information global to the whole compressed image, or information related to specific portions of the image. The compression works by breaking the image into color planes, possibly changing color spaces to one where the color information can be down-sampled, and within each plane operating on 8x8 pixel blocks.
For instance, it is possible to rotate a compressed image by 90 degrees, if it is sized such that it is made up of only whole blocks, simply by transposing the basic blocks and the coefficients inside each block; i.e., without uncompressing, rotating the real image, and recompressing.
Given that, your approach would be to parse the marker stream on the way into the library, passing all the markers that are global to the image, modifying any related to image size, and dropping markers containing coefficients that lie outside your cropping rectangle.
You will likely need to further crop the result if the restriction of cropping to complete basic blocks is too coarse.
What isn't clear to me is whether there is any real win over the alternative, which is to crop the result as it comes out of the library. The library is highly configurable, so you can provide an uncompressed-data consumer function that discards all pixels outside your cropping rectangle and only saves the pixels you want to keep.
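For reference, the discard-as-you-go alternative looks roughly like this with the standard libjpeg decompression loop; error handling via setjmp and a few checks are omitted to keep the sketch short:

#include <stdio.h>
#include <stdlib.h>
#include <jpeglib.h>

/* Decode a JPEG file and return one row of output samples (malloc'd),
   simply reading and discarding every scanline before the requested one. */
unsigned char *read_row(const char *path, unsigned target_row, unsigned *row_bytes)
{
    struct jpeg_decompress_struct cinfo;
    struct jpeg_error_mgr jerr;
    FILE *f = fopen(path, "rb");
    if (!f) return NULL;

    cinfo.err = jpeg_std_error(&jerr);      /* default handler exits on error */
    jpeg_create_decompress(&cinfo);
    jpeg_stdio_src(&cinfo, f);
    jpeg_read_header(&cinfo, TRUE);
    jpeg_start_decompress(&cinfo);

    *row_bytes = cinfo.output_width * cinfo.output_components;
    unsigned char *row = malloc(*row_bytes);
    JSAMPROW ptr = row;

    /* libjpeg always hands rows back top to bottom; keep overwriting the
       same buffer until the target row has been produced. */
    while (cinfo.output_scanline <= target_row &&
           cinfo.output_scanline < cinfo.output_height)
        jpeg_read_scanlines(&cinfo, &ptr, 1);

    jpeg_abort_decompress(&cinfo);          /* we did not read to the end */
    jpeg_destroy_decompress(&cinfo);
    fclose(f);
    return row;
}

The rows before the target are still entropy-decoded, so the cost is linear in how far down the image the row sits, but there is no marker-level surgery involved.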

conversion from rgb to yuv 4:2:0

How do I make a 160*70 bitmap image move over a 640*280 bitmap image and reflect off its edges, after converting both bitmap images into YUV 4:4:4, and write the result into a single YUV file? And how do I convert the same into YUV 4:2:0? Could you please help me out as to how to code this in C?
Converting to YUV 4:4:4 - This is purely an affine transformation on each RGB vector. Just use the proper formula for whichever YUV variant you need. You'll probably want to separate the image into planes at this point too.
Converting to YUV 4:2:0 - This is purely a resampling problem. You need to resample the U and V planes to half width and half height. Do NOT just skip samples ("nearest-neighbor sampling"); this will result in very ugly aliasing. You could simply average the corresponding 2x2 squares or use a more advanced filter; for downsampling, an area average is already pretty close to ideal, and a Gaussian may give mildly better results (see the sketch after this answer).
If you don't mind using library code, libswscale from ffmpeg can do both of these steps for you, and will do it very fast.
Finally, moving the small image across the big one: Is it purely a rectangular image or does it use an alpha channel? Either way you'll simply need to loop over the coordinates you want it to appear at and output an image for each point. If it's rectangular you just then copy pixels, whereas if it has an alpha channel you need to use that for alpha blending (interpolating between the pixel values according to the alpha value).
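A sketch of the first two steps for 8-bit data, using the JPEG/JFIF (full-range BT.601) constants; if your target expects a different YUV variant, swap in its coefficients. The 2x2 averaging assumes even width and height:

#include <stdint.h>

/* One RGB pixel -> full-range BT.601 (JFIF-style) Y, Cb, Cr */
static void rgb_to_yuv444(uint8_t r, uint8_t g, uint8_t b,
                          uint8_t *y, uint8_t *u, uint8_t *v)
{
    *y = (uint8_t)( 0.299    * r + 0.587    * g + 0.114    * b);
    *u = (uint8_t)(-0.168736 * r - 0.331264 * g + 0.5      * b + 128);
    *v = (uint8_t)( 0.5      * r - 0.418688 * g - 0.081312 * b + 128);
}

/* Downsample a full-resolution chroma plane (w x h) to half width and half
   height by averaging each 2x2 block; this is the area-average filter. */
static void halve_plane(const uint8_t *src, uint8_t *dst, int w, int h)
{
    for (int y = 0; y < h; y += 2)
        for (int x = 0; x < w; x += 2) {
            int sum = src[y * w + x]       + src[y * w + x + 1] +
                      src[(y + 1) * w + x] + src[(y + 1) * w + x + 1];
            dst[(y / 2) * (w / 2) + x / 2] = (uint8_t)((sum + 2) / 4);
        }
}

Write the planes out in Y, U, V order for each frame and you have a planar 4:2:0 stream that raw-YUV viewers can read back, given the width and height.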
This Wikipedia article has RGB -> YUV444.
And RGB -> YUV420 is described in the same article in this section.
I did not understand:
"how do I make a 160*70 bitmap image move over a 640*280 bitmap image and reflect off its edges"

Font graphics routines

How do you do your own fonts? I don't want a heavyweight algorithm (freetype, truetype, adobe, etc) and would be fine with pre-rendered bitmap fonts.
I do want anti-aliasing, and would like proportional fonts if possible.
I've heard I can use Gimp to do the rendering (with some post processing?)
I'm developing for an embedded device with an LCD. It's got a 32 bit processor, but I don't want to run Linux (overkill - too much code/data space for too little functionality that I would use)
C. C++ if necessary, but C is preferred. Algorithms and ideas/concepts are fine in any language...
-Adam
In my old demo-scene days I often drew all the characters of a font in one big bitmap image. In the code, I stored the (X,Y) coordinates of each character in the font, as well as the width of each character. The height was usually constant throughout the font. If space isn't an issue, you can put all characters in a grid, that is, keep a constant distance between the top-left corners of the characters.
Rendering the text then becomes a matter of copying one letter at a time to the destination position. At that time, I usually reserved one color as being the "transparent" color, but you could definitely use an alpha-channel for this today.
A simpler approach, which can be used for small black-and-white fonts, is to define the characters directly in code:
LetterA db 01111100b
db 11000110b
db 11000110b
db 11111110b
db 11000110b
db 11000110b
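In C, the same glyph could be expressed as a small byte array and drawn bit by bit; put_pixel() here stands in for whatever drawing primitive your LCD driver exposes:

#include <stdint.h>

extern void put_pixel(int x, int y);   /* supplied by the LCD driver */

/* 'A', 8 columns wide, 6 rows high, one bit per pixel, MSB = leftmost */
static const uint8_t letter_A[6] = { 0x7C, 0xC6, 0xC6, 0xFE, 0xC6, 0xC6 };

static void draw_glyph(const uint8_t *glyph, int rows, int x0, int y0)
{
    for (int row = 0; row < rows; row++)
        for (int col = 0; col < 8; col++)
            if (glyph[row] & (0x80 >> col))
                put_pixel(x0 + col, y0 + row);
}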
The XPM file format is actually a file format with C syntax that can be used as a hybrid solution for storing the characters.
Pre-rendered bitmap fonts are probably the way to go. Render your font using whatever, arrange the characters in a grid, and save the image in a simple uncompressed format like PPM, BMP or TGA. If you want antialiasing, make sure to use a format that supports transparency (BMP and TGA do; PPM does not).
In order to support proportional widths, you'll need to extract the width of each character from the grid. There's no simple way to do this; it depends on how you generate the grid. You could write a short program to analyze each character and find its minimal bounding box. Once you have the width data, put it in an auxiliary file containing the coordinates and sizes of each character.
Finally, to render a string, you look up each character and bitblit its rectangle from the font bitmap onto your frame buffer, advancing the raster position by the width of the character.
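Putting those pieces together, the inner loop might look like the sketch below; it assumes an 8-bit coverage (alpha) atlas, a grayscale frame buffer, and per-glyph metrics read from the auxiliary file, with all of the names being illustrative:

#include <stdint.h>

/* Per-character metrics taken from the auxiliary file described above */
struct glyph { int x, y, w, h; };

/* Blend one glyph from an 8-bit coverage atlas into a grayscale frame
   buffer, then return the new raster x position. */
int blit_glyph(uint8_t *fb, int fb_w,
               const uint8_t *atlas, int atlas_w,
               const struct glyph *g, int dst_x, int dst_y, uint8_t color)
{
    for (int row = 0; row < g->h; row++)
        for (int col = 0; col < g->w; col++) {
            uint8_t a  = atlas[(g->y + row) * atlas_w + (g->x + col)];
            uint8_t *p = &fb[(dst_y + row) * fb_w + (dst_x + col)];
            *p = (uint8_t)((a * color + (255 - a) * *p) / 255); /* alpha blend */
        }
    return dst_x + g->w;   /* advance the raster position by the glyph width */
}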
We have successfully used the SRGP package for fonts. We only used fixed-pitch fonts, though, so I'm not sure whether it can handle proportional fonts.
We're using bitmap fonts generated by AngelCode's bitmap font generator:
http://www.angelcode.com/products/bmfont/
This is very usable as it has XML output which will be easy to convert to any data format you need.
AngelCode's bmfont also adds kerning and better packing compared to the old alternative, MudFont.
