Making a color completely transparent in OpenCV - c

I have a basic png file with two colors in it, green and magenta. What I'm looking to do is to take all the magenta pixels and make them transparent so that I can merge the image into another image.
An example would be if I have an image file of a 2D character on a magenta background. I would remove all the magenta in the background so that it's transparent. From there I would just take the image of the character and add it as a layer in another image so it looks like the character has been placed in an environment.
Thanks in advance.

Here is the code I would use.
First, load your image:
IplImage *myImage;
myImage = cvLoadImage("/path/of/your/image.jpg");
Then use a mask like this to select the color (you should refer to the documentation). In the following I select blue: don't forget that OpenCV stores images in BGR order, so (125, 0, 0) is a blue (the lower bound) and (255, 127, 127) is blue with a certain tolerance (the upper bound).
I chose the lower and upper bounds with a tolerance so as to catch all the blue in your image, but you can select whatever you want...
IplImage *mask = cvCreateImage(cvGetSize(myImage), 8, 1); // single-channel mask, same size as the image
cvInRangeS(myImage,
           cvScalar(125.0, 0.0, 0.0),
           cvScalar(255.0, 127.0, 127.0),
           mask);
Now that we have the mask, let's invert it (we don't want to keep the masked color, we want to remove it):
cvNot(mask, mask);
Then copy your image using the mask:
IplImage *myImageWithTransparency = cvCreateImage(cvGetSize(myImage), 8, 3); // must be allocated before the copy
cvZero(myImageWithTransparency); // start from an empty (black) canvas
cvCopy(myImage, myImageWithTransparency, mask);
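Putting the pieces together, here is a minimal sketch (untested) of the whole flow for the magenta case, assuming the same legacy C API as above (OpenCV 2.x headers) and that the character image and the background are the same size; the file names and the exact magenta bounds are placeholders to tune:

#include <opencv/cv.h>
#include <opencv/highgui.h>

int main(void)
{
    IplImage *character  = cvLoadImage("character.png", CV_LOAD_IMAGE_COLOR);
    IplImage *background = cvLoadImage("environment.png", CV_LOAD_IMAGE_COLOR);

    /* Magenta is roughly full blue + full red with little green (BGR order). */
    IplImage *mask = cvCreateImage(cvGetSize(character), 8, 1);
    cvInRangeS(character,
               cvScalar(200.0, 0.0, 200.0, 0.0),    /* lower bound */
               cvScalar(255.0, 100.0, 255.0, 0.0),  /* upper bound */
               mask);
    cvNot(mask, mask);  /* keep everything that is NOT magenta */

    /* Copy only the non-magenta pixels of the character onto the background.
       (If the images differ in size, set an ROI on the background first.) */
    cvCopy(character, background, mask);
    cvSaveImage("composited.png", background);

    cvReleaseImage(&character);
    cvReleaseImage(&background);
    cvReleaseImage(&mask);
    return 0;
}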
Hope it helps. Please refer to the OpenCV documentation for further information.
Julien,

Related

How to extract color using OpenCV

Hello there~
I'm working on an app.
This app needs a feature that detects Rubik's cube colors in real time.
I'm using OpenCV to implement the feature.
I set up an ROI and detect the color in the ROI.
I know how to detect a specific color:
I used the inRange function on an HSV image.
It works well.
But now I don't know how to check the color of a specific region.
For example,
rubik's cube color array
(00)Red/(01)Blue/(02)Blue
(10)Green/(11)White/(12)Orange
(20)Yellow/(21)Blue/(22)White.
I want to know (0,0)'s color. It's red.
I use the inRange function like this: inRange((0,0)_image, lower_color, upper_color, color_mask).
Now, how do I check what color (0,0)_image is?
How do I know that it is red?
Thank you for your attention.
Please let me know.
You should convert your image into HSV color space, where H stands for hue -- that's the color. Hue 0 is red, 0.3 is about green, 0.7 is like blue, you'll figure it out quite easily.
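As a concrete illustration, here is a rough sketch (untested) of that idea for one facelet: convert to HSV, average the hue inside the ROI, and bucket the mean hue into a color name. Note that for 8-bit images OpenCV stores H in the range 0-179. The ROI rectangle, the image path and the hue cut-offs are illustrative guesses you would calibrate for your own camera and lighting:

#include <opencv/cv.h>
#include <opencv/highgui.h>
#include <stdio.h>

const char* classify_hue(double h, double s)
{
    if (s < 60)            return "white";   /* low saturation reads as white */
    if (h < 8 || h > 170)  return "red";     /* hue wraps around at 0/180     */
    if (h < 22)            return "orange";
    if (h < 38)            return "yellow";
    if (h < 85)            return "green";
    if (h < 130)           return "blue";
    return "unknown";
}

int main(void)
{
    IplImage *frame = cvLoadImage("cube.jpg", CV_LOAD_IMAGE_COLOR);
    IplImage *hsv   = cvCreateImage(cvGetSize(frame), 8, 3);
    cvCvtColor(frame, hsv, CV_BGR2HSV);           /* H in [0,179] for 8-bit  */

    cvSetImageROI(hsv, cvRect(10, 10, 40, 40));   /* the (0,0) facelet, say  */
    CvScalar mean = cvAvg(hsv, NULL);             /* mean H, S, V inside ROI */
    cvResetImageROI(hsv);

    printf("(0,0) looks %s (mean hue %.0f)\n",
           classify_hue(mean.val[0], mean.val[1]), mean.val[0]);

    cvReleaseImage(&hsv);
    cvReleaseImage(&frame);
    return 0;
}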

Blending text, rendered by FreeType in color and alpha

I am using FreeType to render some texts.
The surface where I want to draw the text is a bitmap image with format ARGB, pre-multiplied alpha.
The needed color of the text is also ARGB.
The rendered FT_Bitmap has format FT_PIXEL_MODE_LCD - it is as the text is rendered with white color on black background, with sub-pixel antialiasing.
So, for every pixel I have three sets of numbers:
Da, Dr, Dg, Db - destination pixel ARGB (the background image).
Fr, Fg, Fb - FreeType rendered pixel (FT_Bitmap rendered with FT_RENDER_MODE_LCD)
Ca, Cr, Cg, Cb - The color of the text I want to use.
So, the question: how do I properly combine these three sets of values in order to get the resulting bitmap pixel?
The theoretical answers are OK and even better than code samples.
Interpret the FreeType data not as actual RGB colors (these 'raw' values are to draw text in black) but as intensities of the destination text color.
So the full intensity of each F color component is F*C/255. However, since your C also includes an alpha component, the intensity is scaled by it:
s' = F*C*A/(255 * 255)
assuming, of course, that F, C, and A are inside the usual range of 0..255. A is a fraction A/255, and the second division is to bring F*C back into the target range. s' is now the derived source color.
On to plotting it. Per color component, the new color gets added to D, and D in turn gets diminished by the source's alpha, 255-A (scaled).
That leads to the full sum
D' = D*(255-A)/255 + F*C*A/(255 * 255)
which is equal to (pulling one division by 255 outside)
D' = (D*(255-A) + F*C*A/255)/255
for each separate channel r, g, b of D, F, C and A. The last one, alpha, also needs a separate calculation for each channel, because your FreeType output data comes in this per-channel format.
If the calculation is too slow, you could compare the visual result with non-LCD-optimized grayscale output from FreeType. I suspect that, especially on 'busy' (not entirely monochrome) backgrounds, the extra calculations are simply not worth it.
The numerical advantage of a pure grayscale input is that you only have to calculate A and 1-A once for each triplet of RGB colors.
The "background" also has an alpha channel but to draw text "on" it you can regard this as 'unused'. Drawing a transparent item onto another transparent item does not, in general, change its intrinsic transparency.
After some discovery, I found the right answer. It is disappointing.
It is impossible to draw subpixel rendered graphics (including fonts) on a transparent image with RGBA format.
In order to properly render such graphics, a format that supports separate alpha channels for every color is mandatory.
For example, 48 bits per pixel: RrGgBb, where r, g and b are the alpha channels for the red, green and blue color channels respectively.

RGBA png alpha processing

I have an RGBA PNG file that is (I think) the capture of a signature from a digitizing tablet. Extracting the image, ALL RGB triplets are 0,0,0 and the alpha channel values are non-zero if the pixel is to carry a tone in the final image. I get all of that.
This PNG only has IHDR, IDAT, and IEND chunks.
My first question is: are my RGB pixels considered the foreground or the background? What might be the proper terminology to describe this file/image?
What equation do I use to apply the alpha to the RGB?
Looking at the alpha values, I can see how to come up with a number, but what general equation would be used to generate the appropriate RGB value, avoiding divide-by-zero or overflow errors if my RGBs had started out with non-zero values?
I have been through the PNG spec and there's something I just don't get.
BTW, I am ultimately producing, in C, a PCL file intended for printing directly to a PCL LaserJet.
The image you display last is the foreground image. There is no foreground and background in a single image.
This link shows how to blend an image with alpha onto another image:
http://en.wikipedia.org/wiki/Alpha_compositing#Alpha_blending
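For reference, the linked "over" equation boils down to a one-liner per channel; here is a hedged, untested sketch in C. Because the blend only ever multiplies by alpha and divides by the constant 255, it works unchanged for non-zero RGB inks and never divides by zero:

#include <stdint.h>

/* out = (src*alpha + dst*(255 - alpha)) / 255, applied to each of R, G, B.
   src  = the PNG's ink color for this channel (all zeros in your signature),
   dst  = the paper/background color, alpha = the PNG's alpha sample. */
static uint8_t blend(uint8_t src, uint8_t dst, uint8_t alpha)
{
    return (uint8_t)(((unsigned)src * alpha + (unsigned)dst * (255u - alpha)) / 255u);
}

/* Example: black ink (src = 0) over white paper (dst = 255) at alpha 128
   gives a mid gray: blend(0, 255, 128) == 127. */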

Detect Eye using HSV value in OpenCV

I want to detect an eye. I have some code with which I can detect a blue-colored object, so if I make changes (how can I?) it might be possible for me to detect an eye. Since each color has its own specific range of values, if I specify the eye color's HSV range, can I detect an eye with this method?
In the code below I detect a BLUE-colored object. Please tell me where to make changes in my code so that I could detect an eye using OpenCV.
IplImage* GetThresholdedImage(IplImage* img)
{
    // Convert the image into an HSV image
    IplImage* imgHSV = cvCreateImage(cvGetSize(img), 8, 3);
    cvCvtColor(img, imgHSV, CV_BGR2HSV);
    IplImage* imgThreshed = cvCreateImage(cvGetSize(img), 8, 1);
    // For detecting BLUE color I have this HSV value,
    cvInRangeS(imgHSV, cvScalar(112, 100, 100), cvScalar(124, 255, 255), imgThreshed); // this will not recognize the yellow color
    cvReleaseImage(&imgHSV);
    return imgThreshed;
}
Eye detection is much easier with a Haar classifier.
link here
Such a simple method may work for extracting a blue object using thresholding, but even if it could be adapted, to which colour: black? blue? green? Everyone has different eye colours. I don't see a non-hacky method based on an HSV threshold and blob extraction like this working for you. This method works well on large blocks of the same colour, i.e. removing a blue background.
Look more at shape: everyone has differently coloured eyes, but the shape is roughly circular/elliptical. There are variants of the Hough transform for detecting circles, as sketched after the quote below.
...the Hough transform has been extended to identifying positions of
arbitrary shapes, most commonly circles or ellipses.
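To make that concrete, here is an illustrative, untested sketch of cvHoughCircles with the legacy C API used in the question; the radius limits and the Canny/accumulator thresholds are guesses you would have to tune for your images:

#include <opencv/cv.h>
#include <opencv/highgui.h>
#include <stdio.h>

int main(void)
{
    IplImage *img = cvLoadImage("face.jpg", CV_LOAD_IMAGE_GRAYSCALE);
    CvMemStorage *storage = cvCreateMemStorage(0);

    cvSmooth(img, img, CV_GAUSSIAN, 9, 9, 0, 0);    /* reduce noise before the transform */

    CvSeq *circles = cvHoughCircles(img, storage, CV_HOUGH_GRADIENT,
                                    1,              /* accumulator resolution           */
                                    img->height/8,  /* minimum distance between centres */
                                    100, 30,        /* Canny / accumulator thresholds   */
                                    5, 40);         /* min / max radius in pixels       */

    for (int i = 0; i < circles->total; i++) {
        float *c = (float*)cvGetSeqElem(circles, i);  /* c[0]=x, c[1]=y, c[2]=radius */
        printf("candidate circle at (%.0f, %.0f), r=%.0f\n", c[0], c[1], c[2]);
    }

    cvReleaseMemStorage(&storage);
    cvReleaseImage(&img);
    return 0;
}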

Cannot convert Gray to BGR in OpenCV

The following is a snippet of simple code to convert a grayscale image to RGB using the cvCvtColor function in OpenCV.
IplImage* input = cvLoadImage("test.jpg", CV_LOAD_IMAGE_GRAYSCALE);
IplImage* output = cvCreateImage(cvSize(input->width, input->height), 8, 3);
cvCvtColor(input, output, CV_GRAY2BGR);
cvSaveImage("output.jpg", output);
Where test.jpg is a grayscale image.
But it doesn't seem to be working properly, because output.jpg, i.e. the final output, is also grayscale, the same as the input itself. Why is that?
Any kind of help would be highly appreciated. Thanks in advance!
I think you misunderstand cvCvtColor. cvCvtColor(input, output, CV_GRAY2BGR); will change a single-channel image into a 3-channel image. But if you look at the image, it will still look gray because, for example, a gray pixel of 154 has been converted to RGB(154,154,154).
When you convert a color image to a gray image, all the color information is gone and not recoverable. Therefore you can't really turn a gray image into a visibly colored image without additional information and corresponding operations.
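A quick untested sketch that makes the point concrete: after CV_GRAY2BGR every pixel ends up with B == G == R, so the saved file still looks gray even though it now has three channels:

#include <opencv/cv.h>
#include <opencv/highgui.h>
#include <stdio.h>

int main(void)
{
    IplImage* input  = cvLoadImage("test.jpg", CV_LOAD_IMAGE_GRAYSCALE);
    IplImage* output = cvCreateImage(cvGetSize(input), 8, 3);
    cvCvtColor(input, output, CV_GRAY2BGR);

    CvScalar p = cvGet2D(output, 0, 0);   /* first pixel, as (B, G, R, 0) */
    printf("B=%.0f G=%.0f R=%.0f\n", p.val[0], p.val[1], p.val[2]);  /* all three identical */

    cvReleaseImage(&input);
    cvReleaseImage(&output);
    return 0;
}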
