I am trying to convert an image from RGB to black and white (0 or 1). I first converted it to grayscale, but now I'm stuck (the idea was to find the middle value, which I did using MATLAB, and now I need to normalize everything).
Maybe someone knows how to do it?
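A minimal sketch of the binarization step being asked about, assuming the grayscale data sits in an 8-bit buffer and using the "middle" value found in MATLAB as the threshold (both assumptions for illustration):

#include <stddef.h>

/* Map every 8-bit gray value to 0 or 1 against a fixed threshold. */
void binarize(unsigned char *gray, size_t n, unsigned char threshold)
{
    for (size_t i = 0; i < n; i++)
        gray[i] = (gray[i] >= threshold) ? 1 : 0;
}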
I am trying to make a program that outputs an 8-bit grayscale image to a BMP file. I have found out that 8-bit BMP files are indexed. Is it possible to omit the color table and just put gray values instead of palette indexes in the pixel data? Or does the BMP format not allow this?
You still need the LUT, but it's very simple to create: just 256 entries where the red, green, and blue components are all equal to the grayscale intensity.
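As a sketch, building that table is a single loop; BMP palette entries are 4 bytes each, in blue, green, red, reserved order:

unsigned char palette[256 * 4];
for (int i = 0; i < 256; i++) {
    palette[i * 4 + 0] = (unsigned char)i; /* blue */
    palette[i * 4 + 1] = (unsigned char)i; /* green */
    palette[i * 4 + 2] = (unsigned char)i; /* red */
    palette[i * 4 + 3] = 0;                /* reserved */
}

With this palette, each pixel byte doubles as both the index and the gray intensity.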
I have an RGBA PNG file that is (I think) the capture of a signature from a digitizing tablet. Extracting the image, ALL RGB triplets are 0,0,0 and the alpha channel values are nonzero if the pixel is to carry a tone in the final image. I get all of that.
This PNG only has IHDR, IDAT, and IEND chunks.
My first question is: are my RGB pixels considered the foreground or the background? What might be the proper terminology to describe this file/image?
What equation do I use to apply the alpha to the RGB? Looking at the alpha values, I can see how to come up with a number, but what general equation would be used to generate the appropriate RGB value, avoiding divide-by-zero or overflow errors if my RGBs had started out with nonzero values?
I have been through the PNG spec and there's something I just don't get.
BTW, I am ultimately producing, in C, a PCL file intended for printing directly to a PCL LaserJet.
The image you display last is the foreground image; a single image has no separate foreground and background.
This link shows how to blend an image with alpha onto another image:
http://en.wikipedia.org/wiki/Alpha_compositing#Alpha_blending
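In short, the per-channel equation for straight (non-premultiplied) alpha over an opaque background is out = (fg*alpha + bg*(255 - alpha)) / 255. A sketch in C: the divisor is the constant 255, so there is no divide-by-zero risk, and the arithmetic happens in int, so nothing overflows:

unsigned char blend(unsigned char fg, unsigned char bg, unsigned char alpha)
{
    /* Integer promotion makes this int arithmetic, so no overflow;
       adding 127 rounds to nearest instead of truncating. */
    return (unsigned char)((fg * alpha + bg * (255 - alpha) + 127) / 255);
}

For this signature image (all RGB components 0) composited over white paper, that reduces to out = 255 - alpha.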
In C, I have a 1D array of unsigned chars (i.e., values between 0 and 255) of length 3*DIM*DIM which represents a DIM*DIM-pixel image, where the first 3 values are the RGB levels of the first pixel, the next 3 values are the RGB levels of the second pixel, and so on. I would like to save it as a PNG image. What is the easiest, most lightweight way of doing this conversion?
Obviously OpenGL can read and display images of this form (GLUT_RGB), but DIM is larger than the dimensions of my monitor screen, so simply displaying the image and taking a screenshot is not an option.
At the moment, I have been doing the conversion by simply saving the array to a CSV file, loading it in Mathematica, and then exporting it as a PNG, but this is very slow (~8 minutes for a single 7000*7000 pixel image).
There are many excellent third-party libraries you can use to convert an array of pixel data to a picture.
libPNG is the long-standing standard library for PNG images.
LodePNG also seems like a good candidate.
Finally, ImageMagick is a great library that supports many different image formats.
All these support C, and are relatively easy to use.
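For example, LodePNG is just one .c/.h pair with no dependencies, and the buffer layout you describe maps directly onto its 24-bit encoder. A minimal sketch (save_png is a hypothetical wrapper name):

#include "lodepng.h"
#include <stdio.h>

int save_png(const char *filename, const unsigned char *rgb, unsigned dim)
{
    /* rgb is the 3*DIM*DIM buffer: R, G, B for each pixel, row by row. */
    unsigned err = lodepng_encode24_file(filename, rgb, dim, dim);
    if (err)
        fprintf(stderr, "lodepng error %u: %s\n", err, lodepng_error_text(err));
    return err ? 1 : 0;
}

This should be orders of magnitude faster than the CSV/Mathematica round trip.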
The following is a simple snippet that converts a grayscale image to RGB using the cvCvtColor function in OpenCV.
#include <opencv/cv.h>
#include <opencv/highgui.h>

IplImage *input = cvLoadImage("test.jpg", CV_LOAD_IMAGE_GRAYSCALE);
IplImage *output = cvCreateImage(cvSize(input->width, input->height), IPL_DEPTH_8U, 3);
cvCvtColor(input, output, CV_GRAY2BGR);
cvSaveImage("output.jpg", output);
Here test.jpg is a grayscale image.
But it doesn't seem to be working properly, because output.jpg, i.e. the final output, is also grayscale, the same as the input itself. Why is that?
Any kind of help would be highly appreciated. Thanks in advance!
I think you misunderstand cvCvtColor. cvCvtColor(input, output, CV_GRAY2BGR); will convert a single-channel image to a 3-channel image. But if you look at the result, it will still look like a gray image because, for example, a gray pixel of 154 has been converted to RGB(154,154,154).
When you convert a color image to a gray image, all the color information is gone and not recoverable. Therefore you can't really turn a gray image into a visibly colored image without additional information and corresponding operations.
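As a sketch, the replication that CV_GRAY2BGR performs amounts to this per pixel (ignoring row alignment and padding):

void gray_to_bgr(const unsigned char *gray, unsigned char *bgr, int npixels)
{
    for (int i = 0; i < npixels; i++) {
        bgr[3 * i + 0] = gray[i]; /* blue  */
        bgr[3 * i + 1] = gray[i]; /* green */
        bgr[3 * i + 2] = gray[i]; /* red   */
    }
}

All three channels end up equal for every pixel, so the 3-channel output is visually identical to the single-channel input.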
Now I'm having a very difficult problem: converting a font to hex code and resizing it. My workflow is as follows:
1. I use LCD font maker software to create the character 'A' in Arial, size 18.
2. Then I convert it to hex code.
(see picture to understand)
How do I resize the image when the input is hex code or binary and the result must also be hex code or binary?
Please advise, or point me to documentation related to this issue.
I don't know exactly what you mean by resizing the image:
1. In the LCD font maker software you have created an image.
2. You have then encoded that image in digital form and made an array of hex code.
Now, I think you want to pass this array of hex code to some other program and recreate/decode that image back? If so, you can write a program using the same logic in reverse, which decodes that data and recreates the original image.
And if you want to resize the image, then you need to understand and use the concept of pixel sampling and related techniques; see the sketch below.
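A minimal sketch of nearest-neighbor resampling for a 1-bit glyph, assuming rows are packed MSB-first and padded to whole bytes (a common LCD font layout, but check what your tool actually emits):

#include <string.h>

#define ROW_BYTES(w) (((w) + 7) / 8)

static int get_px(const unsigned char *bmp, int w, int x, int y)
{
    return (bmp[y * ROW_BYTES(w) + x / 8] >> (7 - x % 8)) & 1;
}

static void set_px(unsigned char *bmp, int w, int x, int y)
{
    bmp[y * ROW_BYTES(w) + x / 8] |= (unsigned char)(1 << (7 - x % 8));
}

/* Scale a sw-by-sh glyph to dw-by-dh by picking the nearest source pixel. */
void scale_glyph(const unsigned char *src, int sw, int sh,
                 unsigned char *dst, int dw, int dh)
{
    memset(dst, 0, (size_t)ROW_BYTES(dw) * dh);
    for (int y = 0; y < dh; y++)
        for (int x = 0; x < dw; x++)
            if (get_px(src, sw, x * sw / dw, y * sh / dh))
                set_px(dst, dw, x, y);
}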
Wiki page "Image scaling" could be good stating point. There is list of different scaling methods.
Result of scaling may look ugly, especially if you have 2 color display.
So consider use of two or more presized fonts.