RGBA png alpha processing - c

I have an RGBA PNG file that is (I think) the capture of a signature from a digitizing tablet. Extracting the image data, all RGB triplets are 0,0,0 and the alpha channel values are non-zero wherever the pixel is meant to carry a tone in the final image. I get all of that.
This PNG has only IHDR, IDAT, and IEND chunks.
My first question is, are my RGB pixels considered the foreground or
the background? What might be the proper terminology to describe this
file/image?
What equation do I use to apply the alpha to the RGB?
Looking at the alpha values, I can see how to come up with a number, but what general equation would be used to generate the appropriate RGB value, avoiding divide-by-zero or overflow errors if my RGB values had started out non-zero?
I have been through the PNG spec and there's something I just don't get.
BTW, I am ultimately producing, in C, a PCL file intended for printing directly to a PCL LaserJet.

Your image data is the foreground. There is no foreground and background within a single image; those roles only exist once you composite one image onto another.
This link shows how to blend an image with alpha onto another image:
http://en.wikipedia.org/wiki/Alpha_compositing#Alpha_blending
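In your signature case (RGB all zero, alpha carrying the tone), the usual "over" composite against a background colour of your choice is all you need. A minimal C sketch, assuming 8-bit channels and an opaque background such as white paper; the helper names and the +127 rounding term are my own choices:

#include <stdint.h>

/* Composite one 8-bit RGBA foreground pixel over an opaque background
   colour using integer math only. */
static uint8_t blend_channel(uint8_t fg, uint8_t bg, uint8_t alpha)
{
    /* out = fg*a/255 + bg*(255-a)/255, with rounding; intermediates fit
       easily in 32 bits, so there is no overflow */
    return (uint8_t)((fg * alpha + bg * (255 - alpha) + 127) / 255);
}

void composite_over(uint8_t r, uint8_t g, uint8_t b, uint8_t a,
                    uint8_t bg_r, uint8_t bg_g, uint8_t bg_b,
                    uint8_t out[3])
{
    out[0] = blend_channel(r, bg_r, a);
    out[1] = blend_channel(g, bg_g, a);
    out[2] = blend_channel(b, bg_b, a);
}

With r = g = b = 0 and a white background this collapses to out = 255 - a, i.e. the alpha channel is effectively an inverted grey level, which is the tone you want to send to the printer. There is no division by an alpha value anywhere, so the divide-by-zero worry does not arise, and the formula works unchanged for non-zero RGB inputs.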

Related

way to know yuv details (dimensions, formats and types)

I have an input.yuv image which I want to use in my code as an input.
But I want to know whether it is in 4:2:2, 4:2:0 or 4:4:4 format, and I also want to know whether it is planar or packed and what its width, height and stride are.
When I view this image with the https://rawpixels.net/ tool, I can see a perfect grayscale image with dimensions 1152x512. But when I try yuv420p or other options, the color and luma components are not at the correct offsets, so I get a mixture of color and grayscale data at different offsets (two images on the same screen).
Is there any way to write C code to find the above-mentioned YUV details (dimensions, formats and types)?
Not really. Files with a .yuv extension just contain raw pixel data normally in a planar format.
That would typically be width * height of luma pixels followed by either width/2 * height/2 (420) or width * height/2 (422) Cb and Cr components.
They can be 8 or 10 bits per pixel, with 10 bits per pixel usually stored in 2 bytes. It's really just a case of trial and error to find out what it is.
Occasionally you find all sorts of arrangements of Y, Cb, Cr in files with a .yuv extension. Planar is most common though.
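If you can at least guess the width and height (rawpixels.net suggests 1152x512 here), a quick file-size check against the common planar layouts narrows things down. A rough C sketch, assuming a purely planar file with no header or padding:

#include <stdio.h>

/* Compare a raw .yuv file size against the common planar layouts for a
   guessed width and height. */
void check_size(long file_size, long w, long h)
{
    long luma = w * h;
    long s420 = luma + 2 * (w / 2) * (h / 2);
    long s422 = luma + 2 * (w / 2) * h;
    long s444 = luma * 3;

    if (file_size == s420)     printf("looks like 4:2:0 planar, 8 bits/sample\n");
    if (file_size == s422)     printf("looks like 4:2:2 planar, 8 bits/sample\n");
    if (file_size == s444)     printf("looks like 4:4:4 planar, 8 bits/sample\n");
    if (file_size == 2 * s420) printf("looks like 4:2:0 planar, 2 bytes/sample\n");
    if (file_size == 2 * s422) printf("looks like 4:2:2 planar, 2 bytes/sample\n");
    if (file_size == 2 * s444) printf("looks like 4:4:4 planar, 2 bytes/sample\n");
}

For 1152x512, an 884736-byte file would match 4:2:0 at 8 bits per sample, 1179648 bytes would match 4:2:2, and so on. Packed or more exotic layouts still come down to trial and error in a viewer.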

Blending text, rendered by FreeType in color and alpha

I am using FreeType to render some texts.
The surface where I want to draw the text is a bitmap image with format ARGB, pre-multiplied alpha.
The needed color of the text is also ARGB.
The rendered FT_Bitmap has format FT_PIXEL_MODE_LCD - it is as if the text were rendered in white on a black background, with sub-pixel antialiasing.
So, for every pixel I have three sets of values:
Da, Dr, Dg, Db - destination pixel ARGB (the background image).
Fr, Fg, Fb - FreeType rendered pixel (FT_Bitmap rendered with FT_RENDER_MODE_LCD)
Ca, Cr, Cg, Cb - The color of the text I want to use.
So, the question: how do I properly combine these three sets of values to get the resulting bitmap pixel?
The theoretical answers are OK and even better than code samples.
Interpret the FreeType data not as actual RGB colors (these 'raw' values are to draw text in black) but as intensities of the destination text color.
So the full intensity of each F color component is F*C/255. However, since your C also includes an alpha component, the intensity is scaled by it:
s' = F*C*A/(255 * 255)
assuming, of course, that F, C, and A are inside the usual range of 0..255. A is a fraction A/255, and the second division is to bring F*C back into the target range. s' is now the derived source color.
On to plotting it. Per color component, the new color gets added to D, and D in turn gets diminished by the source's alpha, 255-A (scaled).
That leads to the full sum
D' = D*(255-A)/255 + F*C*A/(255 * 255)
equal to (moving one value to the right)
D' = (D*(255-A) + F*C*A/255)/255
for each separate channel r, g, b of D, F and C. The alpha also needs a separate calculation per channel, because the LCD output of FreeType gives a separate coverage value F for each of r, g and b; the effective per-channel alpha is F*A/255.
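A small C sketch of that per-channel blend for one destination pixel, treating the destination channels as plain values and leaving its alpha alone, as discussed below; the variable names are mine:

#include <stdint.h>

/* D = destination channel, C/A = text colour channel and its alpha,
   F = FreeType LCD coverage for that subpixel.  All values 0..255. */
static uint8_t blend_lcd_channel(uint8_t D, uint8_t C, uint8_t A, uint8_t F)
{
    unsigned a = (F * A + 127) / 255;              /* per-channel alpha F*A/255 */
    return (uint8_t)((D * (255 - a) + C * a + 127) / 255);
}

void blend_text_pixel(uint8_t dst[3],              /* Dr, Dg, Db (in/out) */
                      const uint8_t ft[3],         /* Fr, Fg, Fb          */
                      uint8_t Cr, uint8_t Cg, uint8_t Cb, uint8_t Ca)
{
    dst[0] = blend_lcd_channel(dst[0], Cr, Ca, ft[0]);
    dst[1] = blend_lcd_channel(dst[1], Cg, Ca, ft[1]);
    dst[2] = blend_lcd_channel(dst[2], Cb, Ca, ft[2]);
}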
If the calculation is too slow, you could compare the visual result with not-LCD-optimized grayscale output from FreeType. I suspect that especially on 'busy' (not entirely monochrome) backgrounds the extra calculations are simply not worth it.
The numerical advantage of a pure grayscale input is that you only have to calculate A and 1-A once for each triplet of RGB colors.
The "background" also has an alpha channel but to draw text "on" it you can regard this as 'unused'. Drawing a transparent item onto another transparent item does not, in general, change its intrinsic transparency.
After some discovery, I found the right answer. It is disappointing.
It is impossible to draw subpixel rendered graphics (including fonts) on a transparent image with RGBA format.
In order to properly render such graphics, a format that supports separate alpha channels for every color is mandatory.
For example 48 bits per pixel: RrGgBb, where r, g and b are the alpha channels for the red, green and blue color channels respectively.

comparing bmps for brightness

I have two bmp files of the same scene and I would like to determine if one is brighter than the other.
Similarly I have a set of bmps with different contrasts and another set of bmps with different saturation.
How do I compare these images for brightness, contrast and saturation? These test images are saved by a tool provided by the sensor manufacturer.
I am using gcc 4.5.
To compare the brightness of two images you need to compare the grey value of the pixels (yes, one by one). In the RGB colour space the brightness (grey value) is the mean of R,G and B, so you have brightness = (R+G+B) / 3
Comparing the contrast and especially the saturation will prove to be not that easy; for a start you could have a look at HSL and HSV, but in general I'd suggest getting a good book on the image processing topic.
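A minimal C sketch of that brightness comparison over a whole decoded image, assuming a packed 24-bit RGB buffer (the BMP decoding itself is not shown):

#include <stdint.h>
#include <stddef.h>

/* Average (R+G+B)/3 grey value over a packed 24-bit RGB buffer. */
double average_brightness(const uint8_t *rgb, size_t num_pixels)
{
    double sum = 0.0;
    size_t i;
    for (i = 0; i < num_pixels; i++) {
        const uint8_t *p = rgb + 3 * i;
        sum += (p[0] + p[1] + p[2]) / 3.0;
    }
    return num_pixels ? sum / num_pixels : 0.0;
}

Run it on both decoded images; the one with the larger average is the brighter of the two.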
The answer of (R+G+B)/3 is really not even a good approximation of brightness (at least from what we know today)!
[BRIGHTNESS]
What you really SHOULD do is convert to another color scale and compare the brightness using that channel of a color scale that incorporates brightness into it. Look here!!!
Formula to determine brightness of RGB color
There are a great couple of answers there that talk about conversion of RGB into luminance, etc...
[CONTRAST]
Contrast is a function of the spread of the pixel values across the full range of possible pixel values. One understands the contrast by putting together a histogram of all the pixels (where the x axis represents a pixel value and the y axis represents how many pixels have that value), and analyzing the histogram to see whether or not there is a good distribution throughout the entire range. Comparing contrast can be done in many ways, but a good starting point would be to find the pixel-value center point (the average of the histogram data) of each image, plus some histogram width parameter (say, a width about the center point large enough to incorporate 90% of all pixels), and compare the center and width parameters of both images. This is ONLY a starting point, as shown in the sketch below.
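As one concrete starting point in C (a simplification of the histogram idea above: the standard deviation of the grey values is used as the spread measure instead of a 90% histogram width), again assuming a packed 24-bit RGB buffer:

#include <stdint.h>
#include <stddef.h>
#include <math.h>

/* Rough contrast measure: standard deviation of the grey values. */
double grey_stddev(const uint8_t *rgb, size_t num_pixels)
{
    double sum = 0.0, sum_sq = 0.0, mean;
    size_t i;
    if (num_pixels == 0)
        return 0.0;
    for (i = 0; i < num_pixels; i++) {
        const uint8_t *p = rgb + 3 * i;
        double grey = (p[0] + p[1] + p[2]) / 3.0;
        sum += grey;
        sum_sq += grey * grey;
    }
    mean = sum / num_pixels;
    return sqrt(sum_sq / num_pixels - mean * mean);
}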
[SATURATION]
To compare saturation, one might convert the image to the HSL colour space. The S in HSL stands for Saturation. Comparing saturation within this colour space becomes exactly like comparing brightness as outlined above!!!
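A sketch of the per-pixel HSL saturation in C; average it over the image exactly like the brightness above to get a single number to compare:

#include <stdint.h>

/* HSL saturation of one RGB pixel, returned in the range 0.0..1.0. */
double hsl_saturation(uint8_t r, uint8_t g, uint8_t b)
{
    double R = r / 255.0, G = g / 255.0, B = b / 255.0;
    double max = R > G ? (R > B ? R : B) : (G > B ? G : B);
    double min = R < G ? (R < B ? R : B) : (G < B ? G : B);
    double L = (max + min) / 2.0;

    if (max == min)
        return 0.0;                               /* pure grey */
    return (L < 0.5) ? (max - min) / (max + min)
                     : (max - min) / (2.0 - max - min);
}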

transparency implementation in YUV422 using only Y

Let's say we have 2 images in YUV 4:2:2 format, and assume that in the second image a Y value of 0x10 is treated as transparent; the image is merged onto the first one with Cb and Cr overwritten.
The product of such a merge has ugly borders (a divided pixel line effect) around solid shapes. Is there a way to produce a combination of values at the borders, so the transition is smooth?
This problem is not specific to YUV 4:2:2, but occurs whenever binary transparency is used. The best solution is to use a four-channel image and include an alpha channel. Essentially, an alpha channel represents the "degree of opaque-ness" of each pixel. When two images with alpha channels overlap, alpha blending produces a result that looks much better.
If you're stuck with YUV 4:2:2 or can't add an alpha channel, you could try smoothing the transition between the two images with a low-pass filter. This will hurt the definition of your edges, but might look better than doing nothing.
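A sketch of the alpha-channel approach applied just to the Y plane: carry an 8-bit mask alongside the overlay instead of keying on the 0x10 value, feather the mask at the shape edges (the feathering is what removes the hard borders), and blend:

#include <stdint.h>
#include <stddef.h>

/* Blend an overlay Y plane onto a background Y plane using an 8-bit
   per-pixel mask (255 = fully overlay, 0 = fully background). */
void blend_y_plane(uint8_t *bg, const uint8_t *overlay,
                   const uint8_t *mask, size_t count)
{
    size_t i;
    for (i = 0; i < count; i++) {
        unsigned a = mask[i];
        bg[i] = (uint8_t)((overlay[i] * a + bg[i] * (255 - a) + 127) / 255);
    }
}

The subsampled Cb and Cr planes can be blended the same way with a half-resolution copy of the mask.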

conversion from rgb to yuv 4:2:0

How do I make a 160*70 bitmap image move over a 640*280 bitmap image and reflect off its edge, after converting both bitmap images to YUV 4:4:4, and write the result into a single YUV file? And how do I convert the same to YUV 4:2:0? Could you please help me with how to code this in C?
Converting to YUV 4:4:4 - This is purely an affine transformation on each RGB vector. Just use the proper formula for whichever YUV variant you need. You'll probably want to separate the image into planes at this point too.
Converting to YUV 4:2:0 - This is purely a resampling problem. You need to resample the U and V planes to half width and half height. Do NOT just skip samples ("nearest-neighbor sampling"); this will result in very ugly aliasing. You could simply average the corresponding 2x2 squares or use a more advanced filter. For downsampling, area-average is pretty close to ideal anyway; gaussian may give mildly better results.
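A C sketch of both steps, using BT.601-style full-range coefficients (check which variant the consumer of your .yuv file expects) and a plain 2x2 average for the chroma downsample; width and height are assumed even:

#include <stdint.h>

/* Packed 24-bit RGB -> planar YUV 4:4:4, fixed-point BT.601 full range. */
void rgb_to_yuv444(const uint8_t *rgb, uint8_t *Y, uint8_t *U, uint8_t *V,
                   int w, int h)
{
    int i;
    for (i = 0; i < w * h; i++) {
        int r = rgb[3*i], g = rgb[3*i + 1], b = rgb[3*i + 2];
        Y[i] = (uint8_t)(( 77 * r + 150 * g +  29 * b) >> 8);
        U[i] = (uint8_t)((-43 * r -  85 * g + 128 * b + 32768) >> 8); /* +128 offset folded in */
        V[i] = (uint8_t)((128 * r - 107 * g -  21 * b + 32768) >> 8);
    }
}

/* Full-resolution chroma plane -> quarter-size plane, averaging 2x2 blocks. */
void downsample_chroma_420(const uint8_t *src, uint8_t *dst, int w, int h)
{
    int x, y;
    for (y = 0; y < h; y += 2)
        for (x = 0; x < w; x += 2)
            dst[(y / 2) * (w / 2) + x / 2] = (uint8_t)
                ((src[y*w + x] + src[y*w + x + 1] +
                  src[(y+1)*w + x] + src[(y+1)*w + x + 1] + 2) / 4);
}

Writing the planes out in Y, U, V order (Y at full size, U and V at quarter size) gives you a standard I420-style frame that most raw YUV viewers expect.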
If you don't mind using library code, libswscale from ffmpeg can do both of these steps for you, and will do it very fast.
Finally, moving the small image across the big one: Is it purely a rectangular image or does it use an alpha channel? Either way you'll simply need to loop over the coordinates you want it to appear at and output an image for each point. If it's rectangular you just then copy pixels, whereas if it has an alpha channel you need to use that for alpha blending (interpolating between the pixel values according to the alpha value).
This wikipedia article has RGB -> YUV444.
And RGB -> YUV420 is described in the same article in this section.
I did not understand:
how do i make a 160*70 bitmap image
move over a 640*280 bitmap image and
reflect off its edge
