I'm writing a program that changes all of an image's pixels to grayscale except for the red ones. At first I thought it would be easier than it turned out to be, and I'm having trouble finding the best way to determine whether a pixel is red or not.
The first method I tried was a formula: Green < Red/2 && Blue < Red/1.5
Results:
[Michael Jordan image]
[Goldhill image]
Michael Jordan's image shows some non-red pixels that pass the formula, like #7F3222 and #B15432. So I tried a different method, hue >= 345 || hue <= 9, trying to limit matches to just the red part of the color wheel.
Results:
[Michael Jordan image 2]
[Goldhill image 2]
Michael Jordan's image now has fewer non-red pixels, and Goldhill's image has more red pixels than before, but it's still not what I want.
Are my methods incorrect, or are they just missing some adjustments? If they're incorrect, how can I solve this problem?
Your question, "How to identify 'real' red pixels", raises the question of what a red pixel actually is, especially if it has to be 'real'.
The RGB (red, green, blue) color model is not well suited to answer that question, therefore you should use the HSV (hue, saturation, value) model.
Hue defines the color in degrees (0 - 360 degrees)
Saturation defines the intensity of the color (0 - 100 %)
Value or Brightness defines the luminosity (0 - 100 %)
Steps:
convert RGB to HSV
if the hue (H) is not within your red range (say 0 +/- 30 degrees; you'll have to define a threshold range for what you consider red, 'real' red being 0 degrees):
set S to 0 (zero); by doing so we remove the saturation of the color, which results in a gray shade
leave the brightness (V) as it is (or play around with it and see how it affects the results)
convert HSV to RGB
Convert from RGB to HSV and vice versa:
RGB to HSV
HSV to RGB
More info on HSV:
https://en.wikipedia.org/wiki/HSL_and_HSV
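A minimal sketch of those steps in C++, assuming 8-bit RGB pixels and a red range of 0 +/- 30 degrees (both the range and leaving V untouched are choices you would tune for your own images):

#include <algorithm>
#include <cmath>

struct RGB { unsigned char r, g, b; };
struct HSV { double h, s, v; };   // h in degrees 0..360, s and v in 0..1

HSV toHSV(RGB c) {
    double r = c.r / 255.0, g = c.g / 255.0, b = c.b / 255.0;
    double mx = std::max({r, g, b}), mn = std::min({r, g, b}), d = mx - mn;
    double h = 0.0;
    if (d > 0.0) {
        if (mx == r)      h = 60.0 * std::fmod((g - b) / d, 6.0);
        else if (mx == g) h = 60.0 * ((b - r) / d + 2.0);
        else              h = 60.0 * ((r - g) / d + 4.0);
        if (h < 0.0) h += 360.0;
    }
    return { h, mx > 0.0 ? d / mx : 0.0, mx };
}

RGB toRGB(HSV c) {
    double C = c.v * c.s;
    double X = C * (1.0 - std::fabs(std::fmod(c.h / 60.0, 2.0) - 1.0));
    double m = c.v - C, r = 0, g = 0, b = 0;
    if      (c.h <  60) { r = C; g = X; }
    else if (c.h < 120) { r = X; g = C; }
    else if (c.h < 180) { g = C; b = X; }
    else if (c.h < 240) { g = X; b = C; }
    else if (c.h < 300) { r = X; b = C; }
    else                { r = C; b = X; }
    auto to8 = [m](double v) { return (unsigned char)std::lround((v + m) * 255.0); };
    return { to8(r), to8(g), to8(b) };
}

// Keep pixels whose hue falls in the assumed red range, desaturate the rest.
RGB keepOnlyRed(RGB in) {
    HSV hsv = toHSV(in);
    bool isRed = (hsv.h <= 30.0 || hsv.h >= 330.0);   // assumed threshold range
    if (!isRed) hsv.s = 0.0;                          // zero saturation -> gray shade
    return toRGB(hsv);
}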
"All cats are gray in the dark"
Implement a dynamic color range: adjust the 'red' range based on the brightness and/or saturation of the current pixel. Give the saturation and brightness values a weight (how much they affect the range, in %) to determine your range ... play around to achieve the best results.
You used an RGB method and an HSV method, which is good; both can work.
The problem is defining red. Hue (or R) alone is not enough: it covers many other colours (in the broader sense): browns are dark/unsaturated reds (or oranges), and pink is a tint of red (red + white, so unsaturated).
So in your first method I would add a condition: R > 127 (you must find a good threshold yourself). You could also tighten the other conditions with a higher ratio of R to G and to B, and possibly add a ratio of R to (G+B) as well. The new first condition is about brightness (rejecting dark reds/browns), your two original conditions are about "hue" (hue is defined by the top two values), and the last condition I added is about saturation.
You can do something similar for HSV: filter H (as you did), but you must also filter V (you want only bright reds) and require a high saturation, so you end up filtering all three channels.
You should test the saturation levels yourself. The problem is that eyes adapt quickly to colours, so an image with a lot of reddish colours looks normal (less reddish) to humans, but more red to the calculation above, etc. That is why such tools usually have sliders to adjust; you can try to automate it, but then you need to find the overall hue and brightness of the image, and possibly use more complex methods (see CIECAM).
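A rough sketch of those combined RGB conditions; every threshold here (the 127, the ratios against G and B, the ratio against G+B) is an assumption you would have to tune against your own images:

// Returns true when a pixel is "red enough": bright, red-dominated hue,
// and saturated enough that R clearly outweighs G+B together.
bool looksRed(unsigned char R, unsigned char G, unsigned char B) {
    if (R <= 127) return false;            // brightness: reject dark reds / browns
    if (G >= R / 2.0) return false;        // hue: green must stay well below red
    if (B >= R / 1.5) return false;        // hue: blue must stay well below red
    if (R <= 0.8 * (G + B)) return false;  // saturation: red must dominate G+B
    return true;
}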
I am using FreeType to render some texts.
The surface where I want to draw the text is a bitmap image with format ARGB, pre-multiplied alpha.
The needed color of the text is also ARGB.
The rendered FT_Bitmap has format FT_PIXEL_MODE_LCD - it is as if the text were rendered in white on a black background, with sub-pixel antialiasing.
So, for every pixel I have 3 sets of numbers:
Da, Dr, Dg, Db - destination pixel ARGB (the background image).
Fr, Fg, Fb - FreeType rendered pixel (FT_Bitmap rendered with FT_RENDER_MODE_LCD)
Ca, Cr, Cg, Cb - The color of the text I want to use.
So, the question: how do I properly combine these 3 sets of numbers to get the resulting bitmap pixel?
The theoretical answers are OK and even better than code samples.
Interpret the FreeType data not as actual RGB colors (those 'raw' values are for drawing the text in black) but as intensities of the destination text color.
So the full intensity of each F color component is F*C/255. However, since your C also includes an alpha component, the intensity is scaled by it:
s' = F*C*A/(255 * 255)
assuming, of course, that F, C, and A are inside the usual range of 0..255. A is a fraction A/255, and the second division is to bring F*C back into the target range. s' is now the derived source color.
On to plotting it. Per color component, the new color gets added to D, and D in turn gets diminished by the source's alpha, 255-A (scaled).
That leads to the full sum
D' = D*(255-A)/255 + F*C*A/(255 * 255)
equal to (moving one value to the right)
D' = (D*(255-A) + F*C*A/255)/255
for each separate channel r,g,b of D, F, C and A. The last one, alpha, also needs a separate calculation for each channel because your FreeType output data returns this format.
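A sketch of that per-channel formula, reading the per-channel alpha as F*Ca/255 (my interpretation of "alpha needs a separate calculation for each channel"); the function names and the A,R,G,B byte order are assumptions, not FreeType API:

// Blend one channel: D' = (D*(255 - a) + C*a) / 255, with a = F*Ca/255.
static unsigned char blendChannel(unsigned d, unsigned f, unsigned c, unsigned ca) {
    unsigned a = f * ca / 255;              // effective per-channel alpha, 0..255
    return (unsigned char)((d * (255 - a) + c * a) / 255);
}

// dst = Da,Dr,Dg,Db (premultiplied), ft = Fr,Fg,Fb coverage, col = Ca,Cr,Cg,Cb.
void blendPixel(unsigned char* dst, const unsigned char* ft, const unsigned char* col) {
    dst[1] = blendChannel(dst[1], ft[0], col[1], col[0]);   // red
    dst[2] = blendChannel(dst[2], ft[1], col[2], col[0]);   // green
    dst[3] = blendChannel(dst[3], ft[2], col[3], col[0]);   // blue
    // dst[0], the destination alpha, is left untouched (see the note below).
}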
If the calculation is too slow, you could compare the visual result with not-LCD-optimized grayscale output from FreeType. I suspect that especially on 'busy' (not entirely monochrome) backgrounds the extra calculations are simply not worth it.
The numerical advantage of a pure grayscale input is that you only have to calculate A and 1-A once for each triplet of RGB colors.
The "background" also has an alpha channel but to draw text "on" it you can regard this as 'unused'. Drawing a transparent item onto another transparent item does not, in general, change its intrinsic transparency.
After some discovery, I found the right answer. It is disappointing.
It is impossible to draw subpixel rendered graphics (including fonts) on a transparent image with RGBA format.
In order to properly render such graphics, a format that supports separate alpha channels for every color is mandatory.
For example 48 bits per pixel: RrGgBb, where r, g and b are the alpha channels for the red, green and blue color channels respectively.
I heard that simple
R*=f;
G*=f;
B*=f;
where f is a scalar value from 0 to 1.0 (or more), is not really the proper way of changing the brightness of a color, but I cannot find a code snippet that does it better (without too much studying of color theory).
Could someone maybe give me such a snippet here? Thanks!
Convert the colour to HSL or HSV, then adjust the lightness (L) or value (V). If needed, convert back to RGB.
Because 0,0,0 is black and 255,255,255 is white (with shades of gray in between), your formula is indeed a very good approximation for changing the brightness of a given color value.
It is not exact in terms of perceived brightness but works well enough for most applications.
A simple conversion from RGB to Lightness is:
L = 1/3 * (R+G+B)
As you may see from this formula, f*L and your approach are identical.
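If f can go above 1.0, the only thing really worth adding is clamping, so that channels do not wrap around past 255. A minimal sketch, assuming 8-bit channels (C++17 for std::clamp):

#include <algorithm>

// Scale the brightness of an 8-bit RGB pixel by f, clamping the result to 0..255.
void scaleBrightness(unsigned char& R, unsigned char& G, unsigned char& B, double f) {
    auto scale = [f](unsigned char c) {
        return (unsigned char)std::clamp(c * f, 0.0, 255.0);
    };
    R = scale(R);
    G = scale(G);
    B = scale(B);
}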
I have a two bmp files of the same scene and I would like determine if one is more bright than the other.
Similarly I have a set of bmps with different contrasts and another set of bmps with different saturation.
How do I compare these images for brightness, contrast and saturation? These test images are saved by a tool provided by the sensor manufacturer.
I am using gcc 4.5.
To compare the brightness of two images you need to compare the grey value of the pixels (yes, one by one). In the RGB colour space the brightness (grey value) is the mean of R,G and B, so you have brightness = (R+G+B) / 3
Comparing the contrast and especially the saturation will prove to be not that easy, for a start you could have a look at HSL and HSV but in general I'd suggest to get a good book on the image processing topic.
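A minimal sketch of that comparison, assuming you already have the decoded pixel data of each BMP; the Image struct here is just a stand-in for your own bitmap type, not a real BMP loader:

#include <cstddef>
#include <cstdint>
#include <vector>

struct Image {                        // stand-in for your decoded bitmap
    int width = 0, height = 0;
    std::vector<uint8_t> rgb;         // width * height * 3 bytes, R,G,B interleaved
};

// Mean grey value of the whole image, using brightness = (R+G+B)/3 per pixel.
double meanBrightness(const Image& img) {
    std::size_t pixels = img.rgb.size() / 3;
    if (pixels == 0) return 0.0;
    double sum = 0.0;
    for (std::size_t i = 0; i + 2 < img.rgb.size(); i += 3)
        sum += (img.rgb[i] + img.rgb[i + 1] + img.rgb[i + 2]) / 3.0;
    return sum / pixels;
}

// Image A is brighter than image B when meanBrightness(a) > meanBrightness(b).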
The answer of (R+G+B)/3 is really not even a good approximation of brightness (at least from what we know today)!
[BRIGHTNESS]
What you really SHOULD do is convert to another color scale and compare the brightness using that channel of a color scale that incorporates brightness into it. Look here!!!
Formula to determine brightness of RGB color
there are a good couple of answers there that talk about converting RGB to luminance, etc.
[CONTRAST]
Contrast is a function of the spread of the pixel values throughout the full range of possible pixel values. One understands the contrast by building a histogram of all the pixels (where the x axis represents a pixel value and the y axis represents how many pixels have that value) and analyzing the histogram to see whether there is a good distribution throughout the entire range or not. Comparing contrast can be done in many ways, but a good starting point would be to find the pixel-value center point (the mean of the histogram data) of each image, and perhaps some histogram width parameter (say, a width around that center point large enough to contain 90% of all pixels), and then compare the center and width parameters of both images. This is ONLY a starting point.
[SATURATION]
To compare saturation, one might convert the image to the HSL colour space. The S in HSL stands for Saturation. Comparing saturation within this colour space becomes exactly like comparing brightness as outlined above!!!
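A rough sketch of both comparisons, assuming interleaved 8-bit RGB data; contrast is approximated here by the standard deviation of the grey values (a simpler stand-in for the histogram-width idea above) and saturation by the mean HSL saturation:

#include <algorithm>
#include <cmath>
#include <cstddef>
#include <cstdint>
#include <vector>

// Contrast proxy: spread (standard deviation) of the per-pixel grey values.
double contrastProxy(const std::vector<uint8_t>& rgb) {
    std::size_t n = rgb.size() / 3;
    if (n == 0) return 0.0;
    double sum = 0.0, sumSq = 0.0;
    for (std::size_t i = 0; i + 2 < rgb.size(); i += 3) {
        double grey = (rgb[i] + rgb[i + 1] + rgb[i + 2]) / 3.0;
        sum += grey;
        sumSq += grey * grey;
    }
    double mean = sum / n;
    return std::sqrt(std::max(0.0, sumSq / n - mean * mean));
}

// Saturation proxy: mean HSL saturation over all pixels.
double meanSaturation(const std::vector<uint8_t>& rgb) {
    std::size_t n = rgb.size() / 3;
    if (n == 0) return 0.0;
    double total = 0.0;
    for (std::size_t i = 0; i + 2 < rgb.size(); i += 3) {
        double r = rgb[i] / 255.0, g = rgb[i + 1] / 255.0, b = rgb[i + 2] / 255.0;
        double mx = std::max({r, g, b}), mn = std::min({r, g, b});
        double l = (mx + mn) / 2.0, d = mx - mn;
        total += (d == 0.0) ? 0.0 : d / (1.0 - std::fabs(2.0 * l - 1.0));
    }
    return total / n;
}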
I have a project that classifies the color of a pixel: whether it is red, violet, orange or any other color on the color wheel. I know that there are over 16 million color combinations for pixels, but I read a web page that says it's possible to do my project using the wavelengths of colors. Please give me the formula to compute the wavelength from RGB values. Thanks!
A pure color has a wavelength (any single color LED will have a specific wavelength).
Red, green and blue each have a range of wavelength. However, when you make an RGB color, you add these wavelengths together, which will NOT give you a new wavelength. The eye can't distinguish a yellow composed of one wavelength from that of adding red and green (just how the eye works).
I'd recommend reading up on color theory
http://en.wikipedia.org/wiki/RGB_color_model
Well, RGB for a monitor maps to 3 independent levels of red, green and blue light, so there are (mostly) 3 distinct wavelengths present for any one perceived colour.
BUT if you can convert your RGB colour value to its equivalent HSL, the H part (hue) is the dominant colour as far as wavelength goes, if you are prepared to ignore the saturation (think of it as whiteness).
Based on that, you could approximate the dominant wavelength of a colour from its H value.
Red light is roughly 630–740nm wavelength, Violet is roughly 380–450nm.
Working out wavelength is a bit tricky, and as Goblin mentioned, not always possible (another example is the colour obtained by mixing equal amounts of red and blue light. That purple has no single wavelength).
But if all you want to do is identify the colour by name, then the HSV model would be a good one to use. HSV is Hue (where the colour is around the colour wheel), Saturation (how much colour there is as opposed to being a shade of black/grey/white) and Value (how bright or dark the pixel is). In this case Hue is probably exactly what you want.
If you are using a .NET language, then you're in luck. See the Color.GetHue Method which does all the work for you.
Otherwise, see HSV at Wikipedia for more details.
In essence, if you have R, G and B as floats ranging from 0.0 to 1.0 (instead of ints from 0 to 255 for example), then:
M = max(R, G, B)
m = min(R, G, B)
C = M-m
if M = m then H' is undefined (The pixel is some shade of grey)
if M = R then H' = (G-B)/C mod 6
if M = G then H' = (B-R)/C + 2
if M = B then H' = (R-G)/C + 4
When converting RGB to HSV you then multiply H' by 60 degrees, but for your purposes H' is probably fine. It will be a float ranging from 0 to 6 (almost). 0 is Red (as is 6). 1 is Yellow, with values between 0 and 1 being shaded between Red and Yellow. So 0.5 would be Orange. The important landmarks are:
0 - Red
1 - Yellow
2 - Green
3 - Cyan
4 - Blue
5 - Purple
6 - Red (again)
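For completeness, a minimal sketch of the H' computation and the landmark table above, assuming 8-bit RGB input; rounding to the nearest landmark is my own (arbitrary) way of drawing the category boundaries:

#include <algorithm>
#include <cmath>
#include <string>

// Compute H' (0..6) as described above and map it to the nearest landmark colour.
// Returns "grey" when max == min, i.e. the hue is undefined.
std::string classifyHue(unsigned char R, unsigned char G, unsigned char B) {
    double r = R / 255.0, g = G / 255.0, b = B / 255.0;
    double M = std::max({r, g, b}), m = std::min({r, g, b}), C = M - m;
    if (C == 0.0) return "grey";          // some shade of grey, no hue

    double h;
    if (M == r)      h = std::fmod((g - b) / C, 6.0);
    else if (M == g) h = (b - r) / C + 2.0;
    else             h = (r - g) / C + 4.0;
    if (h < 0.0) h += 6.0;

    // Landmarks: 0 red, 1 yellow, 2 green, 3 cyan, 4 blue, 5 purple, 6 red again.
    static const char* names[] = { "red", "yellow", "green", "cyan", "blue", "purple" };
    return names[(int)std::lround(h) % 6];
}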
Hope that helps.
http://en.wikipedia.org/wiki/Visible_spectrum
It is possible. See above. A gray background apparently makes it easier. You might get something like that on your own, and even improve on it. But to do it accurately will cost major dollars. You will need a colorimetry expert, a calibrated monitor and a calibrated viewing environment (since the dominant wavelength of your pixel just means the monochromatic wavelength it approximates on your calibrated monitor in your calibrated viewing environment). All this will be a few thousand dollars. The work done at the above link, shown on Wikipedia, does not seem that accurate, but it is probably what you want.
Just convert the RGB to HSV, take the hue (H) in degrees, and use this approximation:
wavelength ≈ 650 - (250 / 270) * D
where D is the hue in degrees.
Considering...
Violet has a 380–450 nm wavelength, &
Blue has a 450–495 nm wavelength, &
Green has a 495–570 nm wavelength, &
Yellow has a 570–590 nm wavelength, &
Orange has a 590–620 nm wavelength, &
Red has a 620–750 nm wavelength,
then you just need to check if it is in those ranges, then you can classify it.
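A small sketch of that idea, assuming the hue is already in degrees (0 = red) and using the 650 - (250 / 270) * D approximation above; keep in mind that hues past roughly 270 degrees (the magentas/purples between blue and red) have no single wavelength, as noted earlier:

#include <string>

// Approximate dominant wavelength in nm from a hue in degrees (0 = red).
double hueToWavelength(double hueDegrees) {
    return 650.0 - (250.0 / 270.0) * hueDegrees;
}

// Classify the wavelength against the ranges listed above.
std::string classifyWavelength(double nm) {
    if (nm >= 620.0) return "red";      // 620-750 nm
    if (nm >= 590.0) return "orange";   // 590-620 nm
    if (nm >= 570.0) return "yellow";   // 570-590 nm
    if (nm >= 495.0) return "green";    // 495-570 nm
    if (nm >= 450.0) return "blue";     // 450-495 nm
    return "violet";                    // 380-450 nm
}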
Hope this helps!
For a database app I'm trying to determine the average shade of a section of photo, against a colour scale.
Being a novice I'm finding this very difficult to explain so I've created a simple diagram showing exactly what I'm trying to achieve.
http://www.knockyoursocksoff.com/shade/
If anybody has the time to give me some ideas I'd be very grateful.
Best wishes,
Warren.
If you are using color photos, you should first convert the selected area from RGB (red, green, blue) to HSL/HSV (article).
HSL stands for "hue, saturation, lightness".[1] The number you are interested in is the lightness.
In the most general terms, the lightness refers to how you perceive the brightness of a colored surface. It's hard to use the red/green/blue components to say whether a patch of red is brighter/darker than, say, a patch of blue. Converting to HSL takes care of that problem.
Once you have done the conversion, you can simply average the lightness values of your selected area.
Quick note on lightness values: Technically, you can't simply average the lightness values because the perception of lightness is not linear (article). But, unless you are writing a deeply scientific application, simply averaging the lightness will give you an "accurate enough" value.
[1] In Adobe Photoshop, they call it HSB (hue, saturation, brightness)
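A rough sketch of that averaging, assuming the selected area is a rectangle of an image stored as interleaved 8-bit RGB (the function signature is mine, not from any particular library), using the HSL lightness L = (max + min) / 2 per pixel:

#include <algorithm>
#include <cstdint>

// Average HSL lightness (0..1) over the rectangle [startX..endX] x [startY..endY]
// of an image stored as interleaved 8-bit RGB rows, imageWidth pixels per row.
double averageLightness(const uint8_t* rgb, int imageWidth,
                        int startX, int startY, int endX, int endY) {
    double total = 0.0;
    int count = 0;
    for (int y = startY; y <= endY; ++y) {
        for (int x = startX; x <= endX; ++x) {
            const uint8_t* p = rgb + 3 * (y * imageWidth + x);
            double mx = std::max({ p[0], p[1], p[2] }) / 255.0;
            double mn = std::min({ p[0], p[1], p[2] }) / 255.0;
            total += (mx + mn) / 2.0;        // HSL lightness of this pixel
            ++count;
        }
    }
    return count > 0 ? total / count : 0.0;
}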
I think I would start by just averaging the pixel values:
for x = start_x to end_x
    for y = start_y to end_y
        total += getPixel(x, y)
shade = total / (xlen * ylen)
It's going to be more complicated if you're doing it in color.