Get blurry result when writing bmp file in C [duplicate]

I am trying to make a program that outputs an 8-bit grayscale image to a BMP file. I have found out that 8-bit BMP files are indexed. Is it possible to omit the color table and store the pixel values directly, instead of indexes into a color table? Or does the BMP format not allow this?

You still need the LUT, but it's very simple to create: it's just 256 entries where the red, green and blue components are all equal to the grayscale intensity.
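For illustration, a minimal sketch of writing that palette in C (this assumes the common 4-byte blue/green/red/reserved entry layout that follows a BITMAPINFOHEADER; the function name is mine):

#include <stdio.h>

/* Write the 256-entry grayscale color table of an 8-bit BMP.
   Each entry is 4 bytes in blue, green, red, reserved order. */
static void write_gray_palette(FILE *f)
{
    for (int i = 0; i < 256; i++) {
        unsigned char entry[4] = { i, i, i, 0 }; /* B, G, R, reserved */
        fwrite(entry, 1, sizeof entry, f);
    }
}

Pixel data written after this table then contains the gray values themselves, since index i maps to gray level i.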

Related

way to know yuv details (dimensions, formats and types)

I have an input.yuv image which I want to use in my code as input.
But I want to know whether it is in 4:2:2, 4:2:0 or 4:4:4 format, whether it is planar or packed, and what its width, height and stride are.
When I view this image using the https://rawpixels.net/ tool, I can see a perfect grayscale image with dimensions 1152x512. But when I choose yuv420p or other options, the color and luma components are not at the correct offsets, so it shows a mixture of color and grayscale at different offsets (2 images on the same screen).
Is there any way to write C code to find the above mentioned YUV details (dimensions, formats and types)?
Not really. Files with a .yuv extension just contain raw pixel data, normally in a planar format.
That would typically be width * height luma pixels followed by either width/2 * height/2 (4:2:0) or width * height/2 (4:2:2) Cb and Cr components.
They can be 8 or 10 bits per pixel, with 10 bits per pixel usually stored in 2 bytes. It's really just a case of trial and error to find out what it is.
Occasionally you find all sorts of arrangements of Y, Cb, Cr in files with a .yuv extension. Planar is the most common, though.
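The closest you can get in C is to test candidate dimensions against the file size. A rough sketch (guess_yuv_format is a hypothetical helper, not a library call; it assumes 8 bits per sample and a single frame in the file):

#include <stdio.h>

static void guess_yuv_format(const char *path, long w, long h)
{
    FILE *f = fopen(path, "rb");
    if (!f) { perror(path); return; }
    fseek(f, 0, SEEK_END);
    long size = ftell(f);
    fclose(f);

    if (size == w * h)
        printf("%ldx%ld: luma only (grayscale)\n", w, h);
    else if (size == w * h * 3 / 2)
        printf("%ldx%ld: matches 4:2:0 planar\n", w, h);
    else if (size == w * h * 2)
        printf("%ldx%ld: matches 4:2:2 planar\n", w, h);
    else if (size == w * h * 3)
        printf("%ldx%ld: matches 4:4:4 planar\n", w, h);
    else
        printf("%ldx%ld: no common 8-bit layout matches\n", w, h);
}

In your case, since rawpixels.net shows a clean gray image at 1152x512, comparing the file size against 1152*512, 1152*512*3/2 and so on will tell you how much chroma, if any, follows the luma plane.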

RGBA png alpha processing

I have an RGBA PNG file that is (I think) the capture of a signature from a digitizing tablet. Extracting the image, ALL RGB triplets are 0,0,0 and the alpha channel values are non-zero if the pixel is to carry a tone in the final image. I get all of that.
This PNG only has IHDR, IDAT, and IEND chunks.
My first question is: are my RGB pixels considered the foreground or the background? What might be the proper terminology to describe this file/image?
What equation do I use to apply the alpha to the RGB?
Looking at the alpha values, I can see how to come up with a number, but what general equation would be used to generate the appropriate RGB value, avoiding divide-by-zero or overflow errors if my RGBs had started out with non-zero values?
I have been through the PNG spec and there's something I just don't get.
BTW, I am ultimately producing, in C, a PCL file intended for printing directly to a PCL LaserJet.
The image you display last is the foreground image; within a single image there is no inherent foreground and background.
This link shows how to blend an image with alpha onto another image:
http://en.wikipedia.org/wiki/Alpha_compositing#Alpha_blending
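A sketch of the "over" blend from that article in 8-bit integer arithmetic; with a constant divisor of 255 there is no divide-by-zero, and the intermediates all fit in an unsigned int:

/* Blend a source channel over a destination channel using
   straight (non-premultiplied) 8-bit alpha. */
static unsigned char blend_channel(unsigned char src,
                                   unsigned char dst,
                                   unsigned char alpha)
{
    unsigned int a = alpha;
    /* +127 rounds to nearest instead of truncating */
    return (unsigned char)((src * a + dst * (255u - a) + 127u) / 255u);
}

For the signature file above, src (the ink) is always 0 and the destination is white paper (255), so the result reduces to 255 - alpha per channel.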

C: Convert array to RGB image

In C, I have a 1D array of unsigned chars (i.e., values between 0 and 255) of length 3*DIM*DIM which represents a DIM*DIM pixel image, where the first 3 values are the RGB levels of the first pixel, the next 3 values are the RGB levels of the second pixel, etc. I would like to save it as a PNG image. What is the easiest, most lightweight method of doing this conversion?
Obviously OpenGL can read and display images of this form (GLUT_RGB), but DIM is larger than the dimensions of my monitor screen, so simply displaying the image and taking a screenshot is not an option.
At the moment, I have been doing the conversion by simply saving the array to a CSV file, loading it in Mathematica, and then exporting it as a PNG, but this is very slow (~8 minutes for a single 7000*7000 pixel image).
There are many excellent third-party libraries you can use to convert an array of pixel data to a picture.
libpng is the long-standing reference library for PNG images.
LodePNG also seems like a good candidate.
Finally, ImageMagick is a great library that supports many different image formats.
All these support C, and are relatively easy to use.
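With LodePNG, for example, the whole job is one call; a minimal sketch for the exact layout you describe (interleaved RGB, row after row):

#include <stdio.h>
#include "lodepng.h" /* single-file library: add lodepng.c to the build */

/* buf holds 3*dim*dim bytes: R, G, B of pixel 0, then pixel 1, ... */
int save_png(const char *path, const unsigned char *buf, unsigned dim)
{
    unsigned err = lodepng_encode24_file(path, buf, dim, dim);
    if (err)
        fprintf(stderr, "PNG write failed: %s\n", lodepng_error_text(err));
    return err ? -1 : 0;
}

A 7000*7000 image should encode in seconds rather than minutes this way.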

Cannot convert Gray to BGR in OpenCV

Following is a snippet of a simple program to convert a grayscale image to RGB using the cvCvtColor function in OpenCV.
IplImage *input = cvLoadImage("test.jpg", CV_LOAD_IMAGE_GRAYSCALE);
IplImage *output = cvCreateImage(cvSize(input->width, input->height), IPL_DEPTH_8U, 3);
cvCvtColor(input, output, CV_GRAY2BGR);
cvSaveImage("output.jpg", output);
Where test.jpg is a grayscale image.
But it doesn't seem to be working properly, because output.jpg, i.e. the final output, is also grayscale, the same as the input itself. Why so?
Any kind of help would be highly appreciated. Thanks in advance!
I think you misunderstand cvCvtColor. cvCvtColor(input, output, CV_GRAY2BGR); will change a single-channel image into a 3-channel image. But if you look at the image, it will still look like a gray image because, for example, a gray pixel of 154 has been converted to RGB(154,154,154).
When you convert a color image to a gray image, all color information is gone and not recoverable. Therefore you can't really turn a gray image into a visibly colored image without additional information and corresponding operations.
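Conceptually, CV_GRAY2BGR is nothing more than this loop (a plain-C sketch of the idea, not OpenCV's actual implementation):

/* Replicate each gray value into all three channels: the image
   gains channels, but no color information. */
void gray_to_bgr(const unsigned char *gray, unsigned char *bgr,
                 int width, int height)
{
    for (int i = 0; i < width * height; i++) {
        bgr[3 * i + 0] = gray[i]; /* B */
        bgr[3 * i + 1] = gray[i]; /* G */
        bgr[3 * i + 2] = gray[i]; /* R */
    }
}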

conversion from rgb to yuv 4:2:0

How do I make a 160*70 bitmap image move over a 640*280 bitmap image and reflect off its edges after converting both bitmap images into YUV 4:4:4, and write it into a single YUV file? And how do I convert the same into YUV 4:2:0? Could you please help me out as to how I would code this in C?
Converting to YUV 4:4:4 - This is purely an affine transformation on each RGB vector. Just use the proper formula for whichever YUV variant you need. You'll probably want to separate the image into planes at this point too.
Converting to YUV 4:2:0 - This is purely a resampling problem. You need to resample the U and V planes to half width and half height. Do NOT just skip samples ("nearest-neighbor sampling"); this will result in very ugly aliasing. You could simply average the corresponding 2x2 squares or use a more advanced filter. For downsampling, area-average is pretty close to ideal anyway; Gaussian may give mildly better results.
If you don't mind using library code, libswscale from ffmpeg can do both of these steps for you, and will do it very fast.
Finally, moving the small image across the big one: is it purely a rectangular image or does it use an alpha channel? Either way, you'll simply need to loop over the coordinates you want it to appear at and output an image for each position. If it's rectangular, you then just copy pixels; if it has an alpha channel, you need to use it for alpha blending (interpolating between the pixel values according to the alpha value).
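If you want to do the two conversion steps by hand, here is a rough sketch. The coefficients are one common BT.601 full-range variant; the Wikipedia links in the other answer give the exact matrices for other variants:

/* RGB -> planar YUV 4:4:4 (BT.601 full-range coefficients). */
void rgb_to_yuv444(const unsigned char *rgb,
                   unsigned char *y, unsigned char *u, unsigned char *v,
                   int w, int h)
{
    for (int i = 0; i < w * h; i++) {
        double r = rgb[3*i], g = rgb[3*i+1], b = rgb[3*i+2];
        double yy =  0.299   * r + 0.587   * g + 0.114   * b;
        double uu = -0.14713 * r - 0.28886 * g + 0.436   * b + 128.0;
        double vv =  0.615   * r - 0.51499 * g - 0.10001 * b + 128.0;
        y[i] = (unsigned char)(yy < 0 ? 0 : yy > 255 ? 255 : yy);
        u[i] = (unsigned char)(uu < 0 ? 0 : uu > 255 ? 255 : uu);
        v[i] = (unsigned char)(vv < 0 ? 0 : vv > 255 ? 255 : vv);
    }
}

/* 4:4:4 -> 4:2:0: area-average each 2x2 block of a chroma plane,
   as described above. w and h must be even. */
void downsample_420(const unsigned char *src, unsigned char *dst,
                    int w, int h)
{
    for (int row = 0; row < h; row += 2)
        for (int col = 0; col < w; col += 2)
            dst[(row/2) * (w/2) + (col/2)] =
                (unsigned char)((src[row*w + col] + src[row*w + col + 1] +
                                 src[(row+1)*w + col] + src[(row+1)*w + col + 1] + 2) / 4);
}

Run downsample_420 once on the U plane and once on the V plane; the Y plane keeps its full resolution.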
This Wikipedia article has the RGB -> YUV 4:4:4 conversion.
And RGB -> YUV 4:2:0 is described in the same article in this section.
I did not understand:
how do i make a 160*70 bitmap image move over a 640*280 bitmap image and reflect off its edge
