How to get the wavelength of a pixel using RGB?

I have a project that classifies the color of a pixel: whether it is red, violet, orange, or any other color on the color wheel. I know that there are over 16 million possible color combinations for a pixel, but I read a web page that says it's possible to do this using the wavelengths of colors. Please give me a formula to compute the wavelength from RGB values. Thanks!

A pure color has a wavelength (any single-color LED will have a specific wavelength).
Red, green and blue each cover a range of wavelengths. However, when you make an RGB color, you add these wavelengths together, which will NOT give you a single new wavelength. The eye can't distinguish a yellow composed of one wavelength from one made by adding red and green light (that's just how the eye works).
I'd recommend reading up on color theory:
http://en.wikipedia.org/wiki/RGB_color_model

Well, RGB for a monitor maps to three independent levels of red, green and blue light, so there are (mostly) three distinct wavelengths present for any one perceived colour.
BUT, if you can convert your RGB colour value to its equivalent HSL, the H part (hue) is the dominant colour as far as wavelength goes, provided you are prepared to ignore the saturation (think of it as whiteness).
Based on that you could approximate the dominant wavelength of a colour from its H value.
Red light is roughly 630–740 nm in wavelength; violet is roughly 380–450 nm.

Working out wavelength is a bit tricky, and as Goblin mentioned, not always possible (another example is the colour obtained by mixing equal amounts of red and blue light. That purple has no single wavelength).
But if all you want to do is identify the colour by name, then the HSV model would be a good one to use. HSV is Hue (where the colour is around the colour wheel), Saturation (how much colour there is as opposed to being a shade of black/grey/white) and Value (how bright or dark the pixel is). In this case Hue is probably exactly what you want.
If you are using a .NET language, then you're in luck. See the Color.GetHue Method which does all the work for you.
Otherwise, see HSV at Wikipedia for more details.
In essence, if you have R, G and B as floats ranging from 0.0 to 1.0 (instead of ints from 0 to 255 for example), then:
M = max(R, G, B)
m = min(R, G, B)
C = M-m
if M = m then H' is undefined (The pixel is some shade of grey)
if M = R then H' = (G-B)/C mod 6
if M = G then H' = (B-R)/C + 2
if M = B then H' = (R-G)/C + 4
When converting RGB to HSV you would then multiply H' by 60 to get degrees, but for your purposes H' itself is probably fine. It will be a float ranging from 0 to 6 (almost). 0 is Red (as is 6). 1 is Yellow, with values between 0 and 1 shading between Red and Yellow, so 0.5 would be Orange. The important landmarks are:
0 - Red
1 - Yellow
2 - Green
3 - Cyan
4 - Blue
5 - Purple
6 - Red (again)
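Here is a minimal Python sketch of the calculation above (pure Python, no libraries; R, G and B are assumed to be floats in 0.0..1.0):

def hue_prime(r, g, b):
    M = max(r, g, b)
    m = min(r, g, b)
    c = M - m
    if c == 0:
        return None              # grey: hue is undefined
    if M == r:
        h = ((g - b) / c) % 6
    elif M == g:
        h = (b - r) / c + 2
    else:                        # M == b
        h = (r - g) / c + 4
    return h                     # 0..6; multiply by 60 for degrees

print(hue_prime(1.0, 0.5, 0.0))  # ~0.5, i.e. orange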
Hope that helps.

http://en.wikipedia.org/wiki/Visible_spectrum
It is possible; see the link above. A gray background apparently makes it easier. You might get something like that on your own, and even improve on it. But to do it accurately will cost major dollars: you will need a colorimetry expert, a calibrated monitor and a calibrated viewing environment (since the dominant wavelength of your pixel simply means the monochromatic wavelength it approximates on your calibrated monitor in your calibrated viewing environment). All this will run a few thousand dollars. The work done at the above link, shown on Wikipedia, does not seem that accurate, but it is probably what you want.

Just convert the RGB to HSV, take the hue (H) in degrees, and use this as the answer:
wavelength ≈ 650 - (250 / 270) * D
where D is the hue in degrees.
Considering...
Violet has a 380–450 nm wavelength, &
Blue has a 450–495 nm wavelength, &
Green has a 495–570 nm wavelength, &
Yellow has a 570–590 nm wavelength, &
Orange has a 590–620 nm wavelength, &
Red has a 620–750 nm wavelength,
then you just need to check which of those ranges the wavelength falls in, and you can classify the colour.
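As a rough Python sketch of that mapping (using the formula and the approximate range boundaries above; this is an approximation, not real colorimetry, and hues past 270 degrees have no single wavelength):

def wavelength_from_hue(degrees):
    # linear map: hue 0 (red) -> ~650 nm, hue 270 (violet) -> ~400 nm
    if degrees > 270:
        return None               # magenta/purple: no single wavelength
    return 650 - (250.0 / 270.0) * degrees

def classify(nm):
    if nm is None:          return "purple (non-spectral)"
    if 380 <= nm < 450:     return "violet"
    if 450 <= nm < 495:     return "blue"
    if 495 <= nm < 570:     return "green"
    if 570 <= nm < 590:     return "yellow"
    if 590 <= nm < 620:     return "orange"
    if 620 <= nm <= 750:    return "red"
    return "out of visible range"

print(classify(wavelength_from_hue(120)))  # a green hue -> "green"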
Hope this helps!

Related

How to identify real red pixels?

I'm writing a program that changes all of an image's pixels to grayscale except for the red ones. At first I thought it would be easier, but I'm having trouble finding the best way to determine whether a pixel is red or not.
The first method I tried was a formula: Green < Red/2 && Blue < Red/1.5
results: [michael jordan result image], [goldhill result image]
Michael Jordan's image shows some non-red pixels that pass the formula, like #7F3222 and #B15432. So I tried a different method, hue >= 345 || hue <= 9, to limit matches to the red part of the color wheel only.
results: [michael jordan result image 2], [goldhill result image 2]
Michael Jordan's image now has fewer non-red pixels, and Goldhill's image has more red pixels than before, but it is still not what I want.
Are my methods incorrect, or are they just missing some adjustments? If they're incorrect, how can I solve this problem?
Your question, "How to identify 'real' red pixels", raises the question of what a red pixel actually is, especially if it has to be 'real'.
The RGB (red, green, blue) color model is not well suited to answer that question, therefore you should use the HSV (hue, saturation, value) model.
Hue defines the color in degrees (0 - 360 degrees)
Saturation defines the intensity of the color (0 - 100 %)
Value or Brightness defines the luminosity (0 - 100 %)
Steps:
convert RGB to HSV
if the H value is not red (you'll have to define a threshold range of what you consider to be red, e.g. ±30 degrees; 'real' red would be exactly 0 degrees)
set S to 0 (zero), by doing so we remove the saturation of the color, which results in a gray shade
leave the brightness (V) as it is (or play around with it and see how it effects the results)
convert HSV to RGB
Convert from RGB to HSV and vice versa:
RGB to HSV
HSV to RGB
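A minimal Python sketch of those steps, using the standard-library colorsys module (Pillow is assumed for image I/O, and the ±30 degree window is just the illustrative threshold mentioned above):

import colorsys
from PIL import Image    # assumed for loading/saving the image

def keep_only_red(path_in, path_out, window_deg=30):
    img = Image.open(path_in).convert("RGB")
    px = img.load()
    for y in range(img.height):
        for x in range(img.width):
            r, g, b = px[x, y]
            h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
            hue_deg = h * 360.0
            dist = min(hue_deg, 360.0 - hue_deg)   # angular distance from 0 degrees (red)
            if dist > window_deg:
                s = 0.0                            # drop saturation -> grey shade
            r2, g2, b2 = colorsys.hsv_to_rgb(h, s, v)
            px[x, y] = (int(r2 * 255), int(g2 * 255), int(b2 * 255))
    img.save(path_out)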
More info on HSV:
https://en.wikipedia.org/wiki/HSL_and_HSV
"All cats are gray in the dark"
Implement a dynamic color range: adjust the 'red' range based on the brightness and/or saturation of the current pixel. Put a weighting (how much, in %, each affects the range) on the saturation and brightness values to determine your range, and play around to achieve the best results.
You used an RGB method and an HSV method; both approaches are fine.
The problem is about defining red. Hue (or R) alone is not enough: it covers many other colours in the broader sense: browns are dark/unsaturated reds (or oranges), and pink is a tint of red (red + white, so unsaturated).
So in your first method I would add a condition: R > 127 (you must find a good threshold yourself). You could also tighten the other conditions to require a higher ratio of R to G and to B, and possibly add a condition on the ratio of R to (G+B). The new condition is about brightness (reds, not dark reds/browns); your two original conditions are about hue (hue is determined by the top two channel values); and the last condition I mention is about saturation.
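For example, the combined test could look roughly like this (the 127 threshold and the 1.2 factor are illustrative starting points, not tuned values):

def looks_red(r, g, b):
    bright_enough = r > 127                       # rejects dark reds / browns
    hue_like_red  = g < r / 2 and b < r / 1.5     # the original hue-style test
    saturated     = (g + b) < r * 1.2             # rough R-to-(G+B) saturation test
    return bright_enough and hue_like_red and saturated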
You can do something similar for HSV: filter H (as you did), but you must also filter V (you want only bright reds) and require a high saturation, so you end up filtering all three channels.
You should test the saturation levels yourself. The problem is that eyes adapt quickly to colours, so some images with a lot of reddish colours look normal (less reddish) to humans but come out as 'more red' in the above calculation. That is why such tools usually have sliders to tweak; you can try to automate this, but then you need to find the overall hue and brightness of the image, and possibly use more complex methods (see CIECAM).

Blending text, rendered by FreeType in color and alpha

I am using FreeType to render some texts.
The surface where I want to draw the text is a bitmap image with format ARGB, pre-multiplied alpha.
The needed color of the text is also ARGB.
The rendered FT_Bitmap has format FT_PIXEL_MODE_LCD - it is as if the text were rendered in white on a black background, with sub-pixel antialiasing.
So, for every pixel I have three sets of numbers:
Da, Dr, Dg, Db - destination pixel ARGB (the background image).
Fr, Fg, Fb - FreeType rendered pixel (FT_Bitmap rendered with FT_RENDER_MODE_LCD)
Ca, Cr, Cg, Cb - The color of the text I want to use.
So, the question: how do I properly combine these three sets of numbers to get the resulting bitmap pixel?
The theoretical answers are OK and even better than code samples.
Interpret the FreeType data not as actual RGB colors (these 'raw' values would draw the text in black) but as intensities of the destination text color.
So the full intensity of each F color component is F*C/255. However, since your C also includes an alpha component, the intensity is scaled by it:
s' = F*C*A/(255 * 255)
assuming, of course, that F, C, and A are inside the usual range of 0..255. A is applied as the fraction A/255, and the second division brings F*C back into the target range. s' is now the derived source color.
On to plotting it. Per color component, the new color gets added to D, and D in turn gets diminished by the source's alpha, 255-A (scaled).
That leads to the full sum
D' = D*(255-A)/255 + F*C*A/(255 * 255)
equal to (factoring out the common division by 255)
D' = (D*(255-A) + F*C*A/255)/255
for each separate channel r, g, b of D, F, C and A. The alpha, too, needs a separate calculation per channel, because that is the format your FreeType output data comes in.
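As a sketch, one colour channel of that formula could look like this in Python (0..255 integer inputs; this simply transcribes the equation above, with integer division as an approximation):

def blend_channel(D, F, C, A):
    # D' = (D*(255 - A) + F*C*A/255) / 255
    src = (F * C * A) // 255            # derived source contribution
    return (D * (255 - A) + src) // 255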
If the calculation is too slow, you could compare the visual result with not-LCD-optimized grayscale output from FreeType. I suspect that especially on 'busy' (not entirely monochrome) backgrounds the extra calculations are simply not worth it.
The numerical advantage of a pure grayscale input is that you only have to calculate A and 1-A once for each triplet of RGB colors.
The "background" also has an alpha channel but to draw text "on" it you can regard this as 'unused'. Drawing a transparent item onto another transparent item does not, in general, change its intrinsic transparency.
After some discovery, I found the right answer. It is disappointing.
It is impossible to draw subpixel-rendered graphics (including fonts) onto a transparent image in RGBA format.
In order to properly render such graphics, a format that supports a separate alpha channel for every color component is mandatory.
For example, 48 bits per pixel: RrGgBb, where r, g and b are the alpha channels for the red, green and blue color channels respectively.

Why is there no brown or grey in the CIE XY color space?

Why is there no brown or grey in the CIE XY color space?
The xy chromaticity graph isn't a color space; it's a two-dimensional projection of a color space, designed to separate hue and saturation from luminosity. To represent gray and brown you need that third dimension, since grey is basically dark white and brown is dark orange. A three-dimensional color space like xyY, where Y is a third dimension representing luminosity, has no trouble with grey and brown. In this case gray values would extend down from the white point and browns would extend below the orange.
One has to be really careful with the terminology: XY, upper-case, doesn't exist in colour science; the closest term would be CIE XYZ tristimulus values.
If you were referring to the CIE 1931 Chromaticity Diagram used to represent xy chromaticity coordinates, then it should be written xy, lower-case.
The CIE 1931 Chromaticity Diagram and all other chromaticity diagrams are usually drawn with colours at their maximum luminance value. That being said, nothing prevents one from computing the chromaticity diagram colours with a luminance value different from 1 (100%), which would allow you to have greyish / brownish colours.
It is important to note that you can't actually display the visible-spectrum colours properly, as they are outside the sRGB colourspace gamut. Thus any colour outside the sRGB triangle is incorrectly represented, and sometimes you will find chromaticity diagrams where the colours have been altered (essentially by lowering saturation) to fit within the sRGB colourspace gamut.
I'm adding an animated GIF so that you can see that the diagram is a 2d projection of the CIE xyY colourspace seen from the top.

How to find out straight lines in a image

I have an image file (jpg or png).
It has only 4 colors and a few black lines (600px × 600px image size).
There can be 2, 4 or 6 black lines.
I need to get the (x1, y1) and (x2, y2) endpoints of each black line.
It can be implemented in Perl, C or MATLAB.
Try applying the Hough Transform. It is especially effective at detecting lines.
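If OpenCV is an option, the probabilistic Hough transform gives you the (x1, y1) and (x2, y2) endpoints fairly directly. A rough sketch (the file name and the parameter values are placeholders to tune):

import cv2
import numpy as np

img = cv2.imread("lines.png")                      # hypothetical input file
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150)                   # edge map fed to the Hough step
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, 80,
                        minLineLength=100, maxLineGap=10)
if lines is not None:
    for line in lines:
        x1, y1, x2, y2 = line[0]
        print((x1, y1), (x2, y2))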
One simple possibility for detecting lines in an image is to calculate the image gradient.
To do that, calculate the gradient in either the x or the y direction (depending on the orientation of the lines) and then threshold the gradient to find out where a black line is present.
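A rough NumPy sketch of that idea for horizontal lines (the threshold of 50 and the file name are arbitrary placeholders):

import numpy as np
from PIL import Image    # assumed for loading the image

gray = np.asarray(Image.open("lines.png").convert("L"), dtype=float)
gy = np.abs(np.diff(gray, axis=0))                 # gradient along y: large across horizontal lines
line_rows = np.where(gy.max(axis=1) > 50)[0]       # rows where the jump exceeds the threshold
print(line_rows)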

From an image, how do I determine the shade?

For a database app I'm trying to determine the average shade of a section of photo, against a colour scale.
Being a novice I'm finding this very difficult to explain so I've created a simple diagram showing exactly what I'm trying to achieve.
http://www.knockyoursocksoff.com/shade/
If anybody has the time to give me some ideas I'd be very grateful.
Best wishes,
Warren.
If you are using color photos, you should first convert the selected area from RGB (red, green, blue) to HSL/HSV (article).
HSL stands for "hue, saturation, lightness".[1] The number you are interested in is the lightness.
In the most general terms, the lightness refers to how you perceive the brightness of a colored surface. It's hard to use the red/green/blue components to say whether a patch of red is brighter/darker than, say, a patch of blue. Converting to HSL takes care of that problem.
Once you have done the conversion, you can simply average the lightness values of your selected area.
Quick note on lightness values: Technically, you can't simply average the lightness values because the perception of lightness is not linear (article). But, unless you are writing a deeply scientific application, simply averaging the lightness will give you an "accurate enough" value.
[1] In Adobe Photoshop, they call it HSB (hue, saturation, brightness).
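A minimal Python sketch of that approach, using the standard-library colorsys module (Pillow assumed for reading the photo; the crop box is a placeholder for your selected section):

import colorsys
from PIL import Image    # assumed for reading the photo

def average_lightness(path, box):
    # box = (left, top, right, bottom), the selected section of the photo
    region = Image.open(path).convert("RGB").crop(box)
    total, count = 0.0, 0
    for r, g, b in region.getdata():
        _, l, _ = colorsys.rgb_to_hls(r / 255.0, g / 255.0, b / 255.0)
        total += l
        count += 1
    return total / count     # 0.0 (black) .. 1.0 (white)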
I think I would start by just averaging the pixel values:
for x = start_x to end_x
    for y = start_y to end_y
        total += getPixel(x, y)
shade = total / (xlen * ylen)
It's going to be more complicated if you're doing it in color.
