libpng: converting a PNG file from grayscale to RGB (writing to just one of the channels)

I have a piece of source code that reads and writes images to PNG files. However, it only writes the images as grayscale, i.e. just black and white.
How do I modify it so that it writes to just one of the RGB channels (R, G, or B is fine)?
Is there a short tutorial on that kind of pixel manipulation with libpng?

It isn't a small or nicely written tutorial, but it's something:
http://www.libpng.org/pub/png/libpng-1.2.5-manual.html
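For the pixel-manipulation part, here is a minimal sketch (not taken from your code, so every name below is a placeholder) of what changes when you switch the output from grayscale to RGB with libpng: declare the image as PNG_COLOR_TYPE_RGB in png_set_IHDR, and expand each gray byte into a 3-byte pixel where only the channel you want carries the value:

/* Sketch only: assumes an existing libpng write path with png_ptr/info_ptr
 * already created, and an 8-bit grayscale buffer `gray` of width*height
 * bytes. All names here are placeholders, not from the original code. */
#include <png.h>
#include <stdlib.h>

void write_gray_as_red(png_structp png_ptr, png_infop info_ptr,
                       const unsigned char *gray, int width, int height)
{
    /* Declare the image as RGB instead of grayscale. */
    png_set_IHDR(png_ptr, info_ptr, width, height, 8,
                 PNG_COLOR_TYPE_RGB, PNG_INTERLACE_NONE,
                 PNG_COMPRESSION_TYPE_DEFAULT, PNG_FILTER_TYPE_DEFAULT);
    png_write_info(png_ptr, info_ptr);

    /* Each output row is 3 bytes per pixel; put the gray value in the
     * red channel and leave green/blue at zero (use index 1 or 2 for
     * green or blue instead). */
    png_bytep row = malloc((size_t)width * 3);
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            row[3 * x + 0] = gray[y * width + x]; /* R */
            row[3 * x + 1] = 0;                   /* G */
            row[3 * x + 2] = 0;                   /* B */
        }
        png_write_row(png_ptr, row);
    }
    free(row);
    png_write_end(png_ptr, info_ptr);
}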

Related

PNG IDAT chunk data calculation

For learning purposes, I'm trying to write my own PNG file in C.
I have already read the PNG file format specification and I am now able to write a basic PNG (signature, IHDR chunk, 1px IDAT chunk and IEND chunk).
Now I want to write an array of pixels into the PNG. All the chunks are coming out OK, but I don't understand how to generate the IDAT chunk data.
The libpng documentation explains that you deflate() the scanlines, each preceded by the filter byte.
My understanding of this problem:
A scanline is a row of horizontal pixels preceded by the filter method byte (0), so it has size (3*screenWidth + 1).
For each line (screenHeight times), I compress() the scanline with zlib and store it in the file.
But after generating the PNG, it is always black and every value is 0 in a hex editor.
What am I doing wrong? Or is this the right way to generate the IDAT data?
You did not put any code in your question, so we have no way of knowing what you're actually doing. "I compress() the scanline with zlib and store it in a file." sounds wrong. You need to assemble all of the scan lines as described, and then compress the entire thing with zlib, once.
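Since the question shows no code, here is only an illustrative sketch of that idea for an 8-bit RGB image: concatenate every scanline (each prefixed with filter byte 0) into one buffer, then call zlib's compress() once on the whole buffer; the result is what goes inside the IDAT chunk(s):

/* Illustrative sketch (the question shows no code): build the raw IDAT
 * payload by concatenating every scanline, each prefixed with filter
 * byte 0, then deflate the whole buffer once with zlib. */
#include <stdlib.h>
#include <string.h>
#include <zlib.h>

/* rgb: width*height*3 bytes of pixel data. On success, *out/*out_len hold
 * the zlib-compressed stream that goes inside the IDAT chunk(s). */
int build_idat_data(const unsigned char *rgb, int width, int height,
                    unsigned char **out, unsigned long *out_len)
{
    unsigned long raw_len = (unsigned long)height * (3UL * width + 1);
    unsigned char *raw = malloc(raw_len);
    if (!raw) return -1;

    for (int y = 0; y < height; y++) {
        unsigned char *line = raw + (unsigned long)y * (3UL * width + 1);
        line[0] = 0;                               /* filter type 0 (None) */
        memcpy(line + 1, rgb + (size_t)y * width * 3, (size_t)width * 3);
    }

    *out_len = compressBound(raw_len);
    *out = malloc(*out_len);
    if (!*out) { free(raw); return -1; }

    /* One compress() call over the whole image, not one per scanline. */
    int rc = compress(*out, out_len, raw, raw_len);
    free(raw);
    return rc == Z_OK ? 0 : -1;
}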

How to discover an image that was converted from one format to another?

Is it possible to trace an image back from one format to another, say from a PNG to a JPG? For example, an image x.jpg that was converted to x.png: is there a way of telling that x.png is essentially x.jpg, with the only difference being the format?
If I convert img.jpg to img.png, is it possible for me to get back img.jpg?
I intend to check this in C.
As to your first question: the meta-information stored in a PNG can record what the original format or file was, but there is no requirement to store this meta-information in the file.
As to your second question: PNG is a lossless format. So if you decompress a JPEG image into a bitmap and then encode that bitmap as PNG, you can at least get back from the PNG to the bitmap of the JPEG.
Getting back to the JPEG essentially means re-encoding (compressing) the bitmap, but arriving at a bitwise-identical JPEG file means using the same compressor settings that were used to create the original JPEG. As you probably don't know those settings (and the result may depend on the compressor code too), I would say "No, you can't get back to the original JPEG."
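If you want to poke at that meta-information in C, one place to look is the PNG's text chunks; keep in mind, as said above, that most converters write nothing useful there, so this is only a best-effort check. The sketch below assumes a libpng read context that has already been set up:

/* Sketch: list the tEXt/zTXt key/value pairs of an already-opened PNG
 * (png_ptr/info_ptr set up and png_read_info() already called).
 * Keys like "Software" or "Source" *may* hint at the original file,
 * but nothing requires an encoder to write them. */
#include <png.h>
#include <stdio.h>

void dump_png_text(png_structp png_ptr, png_infop info_ptr)
{
    png_textp text = NULL;
    int count = 0;

    if (png_get_text(png_ptr, info_ptr, &text, &count) > 0) {
        for (int i = 0; i < count; i++)
            printf("%s: %s\n", text[i].key,
                   text[i].text ? text[i].text : "");
    } else {
        printf("no text metadata present\n");
    }
}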

Convert BMP into grayscale issue

I started messing around with image processing and wanted to know if there is a way to make a grayscale image without using image processing libraries. I know the first three bytes hold information about the image, its format and size; from that point on I want to turn the image into grayscale. This is the code:
#include <stdio.h>

int main() {
    FILE *binary = fopen("C:/Arabidopsis3.bmp", "rb");
    FILE *bCopy  = fopen("C:/copy.bmp", "wb");
    int get_char;
    int i = 3;

    while ((get_char = fgetc(binary)) != EOF)
    {
        if (i > 3)
        {
            /* doing the grayscale process */
        }
        fputc(get_char, bCopy);
        i++;
    }

    fclose(binary);
    fclose(bCopy);
    return 0;
}
As you can see, I'm copying the BMP into copy.bmp, but copy.bmp should be grayscale.
Any suggestions?
To start, the BMP file format begins with a file header, followed by a second header that describes the image; that second region has a fixed size, but there are 7 different formats it can take: http://en.wikipedia.org/wiki/BMP_file_format
You'll have to determine the format, then determine how many bytes are needed for each pixel. Then you'll need to know the pixel description format.
All of this is pretty easy if you just do a little research. I don't know that much about the file format, but you should be able to do it pretty easily once you understand its structure.
Edit:
Well, if you know where each pixel value is located, you should be able to extract the pixels just by walking along the pixel array with a pointer. For instance, if you know there are 8 pixels in an image and each pixel is defined by 8 bytes, you can place a pointer at the beginning of the pixel array section and advance it 8 bytes at a time, modifying each pixel as you go. I don't know exactly how to make something grayscale; I assume you just get rid of the color values. So if the first byte describes the gray level and the rest is color data, you would take the color data and make sure those bits are set to 0.
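For the common case of an uncompressed 24-bit BMP, the per-pixel step could look like the following sketch; it assumes you have already located the pixel array via the header's offset field, and it uses the usual Rec. 601 luma weights (none of this is from the question's code):

/* Sketch of the per-pixel step only, assuming an uncompressed 24-bit BMP:
 * `pixels` points at the start of the pixel array (found via the
 * bfOffBits field of the file header), rows are padded to 4 bytes, and
 * BMP stores channels in B,G,R order. */
#include <stdint.h>
#include <stddef.h>

void gray_bgr24_in_place(uint8_t *pixels, int width, int height)
{
    int row_size = ((width * 3 + 3) / 4) * 4;   /* each row padded to 4 bytes */

    for (int y = 0; y < height; y++) {
        uint8_t *p = pixels + (size_t)y * row_size;
        for (int x = 0; x < width; x++, p += 3) {
            /* Standard Rec. 601 luma weighting. */
            uint8_t g = (uint8_t)(0.114 * p[0] + 0.587 * p[1] + 0.299 * p[2]);
            p[0] = p[1] = p[2] = g;             /* write gray back to B,G,R */
        }
    }
}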
If this is a Windows bitmap, your unstructured approach is ill-advised. I'd suggest you read up on the Windows SDK for information on manipulating bmp files. Some keywords to point you in the right direction: BITMAPINFOHEADER, GetDIBits/SetDIBits. Also, GDI+ has some very convenient wrappers for loading and saving images.

How can I read image files?

I need to get the RGB values by reading an image. How can I do it in C?
The image format can be PNG, JPG, BMP or another common format.
The values have to be saved in a text file.
A very easy-to-use image library that can cover the reading and writing of all these formats would be FreeImage. It is primarily a C library, but there are also wrappers for C++, etc.
When you say "saved in a text file", that is pretty atypical for images, because binary formats are much more compact than storing raw string values for the pixel intensities. Additionally, many formats use compression, which means there isn't really a stored "value" per pixel; instead the data must be decompressed before you can assign a value to each individual pixel. There are some image formats, such as PPM, that can be stored as ASCII data, but again, that's not necessarily the most efficient way to store a large image.
So for your workflow, you would use a library like FreeImage to read the values out of the image file, and then write back the uncompressed pixel values to a PPM file, or a custom-formatted text file.
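As a sketch of that workflow (the file names are just examples and error checks are omitted), FreeImage can load an image of any supported format and a plain-text PPM can hold the per-pixel values:

/* Sketch using FreeImage: load an image, normalise it to 8-bit RGB, and
 * dump the pixels as a plain-text (ASCII) PPM. Error handling omitted. */
#include <stdio.h>
#include <FreeImage.h>

int main(void)
{
    FreeImage_Initialise(FALSE);

    FREE_IMAGE_FORMAT fif = FreeImage_GetFileType("input.png", 0);
    FIBITMAP *img = FreeImage_Load(fif, "input.png", 0);
    FIBITMAP *rgb = FreeImage_ConvertTo24Bits(img);   /* normalise to 8-bit RGB */

    unsigned w = FreeImage_GetWidth(rgb);
    unsigned h = FreeImage_GetHeight(rgb);

    FILE *out = fopen("pixels.ppm", "w");
    fprintf(out, "P3\n%u %u\n255\n", w, h);

    /* FreeImage's origin is the bottom-left corner, so walk rows top-down. */
    for (unsigned y = 0; y < h; y++) {
        for (unsigned x = 0; x < w; x++) {
            RGBQUAD px;
            FreeImage_GetPixelColor(rgb, x, h - 1 - y, &px);
            fprintf(out, "%d %d %d\n", px.rgbRed, px.rgbGreen, px.rgbBlue);
        }
    }

    fclose(out);
    FreeImage_Unload(rgb);
    FreeImage_Unload(img);
    FreeImage_DeInitialise();
    return 0;
}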

Reading YUV images in C

How do I read any YUV image?
How can the dimensions of a YUV image be passed for reading it into a buffer?
Usually, when people talk about YUV they talk about YUV 4:2:0. Your reference to any YUV image is misleading, because there are a number of different formats, and each is handled differently. For example, raw YUV 4:2:0 (by convention, files with an extension .yuv) doesn't contain any dimension data; whereas y4m files typically do. So you really need to know what sort of image you're dealing with before you start reading it. For YUV 4:2:0, you just open the file and read the luminance (Y') and chrominance (CbCr) components in that order, keeping in mind the chrominance components have been decimated (quarter size of the luminance components). After that you typically convert Y'CbCr to RGB so you can display the image.
From what I've mentioned above, you simply can't determine the dimensions of any YUV image. Often, the dimensions are simply not in the file. You have to know them up front -- this is why most sites listing YUV files for download also list their frame dimensions (and for video, the number of frames). Most YUV files are either CIF (352x288) or QCIF (176x144), although there are some files digitized from analog video that are in the slightly different SIF format. Read about the different formats here.
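As a sketch (the frame dimensions are an assumption you must supply yourself, since the file does not contain them), reading one raw I420 frame is just three sequential reads:

/* Sketch: read one raw I420 (YUV 4:2:0 planar) frame from a .yuv file.
 * The dimensions cannot be discovered from the file; they must be known
 * up front (e.g. 352x288 for CIF). Assumes even width and height. */
#include <stdio.h>
#include <stdlib.h>

int read_i420_frame(FILE *f, int width, int height,
                    unsigned char **y, unsigned char **cb, unsigned char **cr)
{
    size_t y_size = (size_t)width * height;
    size_t c_size = y_size / 4;          /* each chroma plane is quarter size */

    *y  = malloc(y_size);
    *cb = malloc(c_size);
    *cr = malloc(c_size);
    if (!*y || !*cb || !*cr) return -1;

    /* Planes are stored back to back: Y, then Cb, then Cr. */
    if (fread(*y,  1, y_size, f) != y_size) return -1;
    if (fread(*cb, 1, c_size, f) != c_size) return -1;
    if (fread(*cr, 1, c_size, f) != c_size) return -1;
    return 0;
}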
