
Why is there no brown or grey in the CIE XY color space?

The xy chromaticity diagram isn't a color space; it's a two-dimensional projection of a color space, designed to separate hue and saturation from luminosity. To represent grey and brown you need this third dimension, since grey is basically dark white and brown is dark orange. A three-dimensional color space like xyY, where Y is a third axis representing luminosity, has no trouble with grey and brown. In that space, grey values extend down from the white point and browns extend below the orange region.
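To make the third dimension concrete, here is a minimal Python sketch of the standard xyY-to-XYZ conversion, X = x*Y/y and Z = (1 - x - y)*Y/y. The D65 chromaticity below is real; the luminance values are illustrative:

    def xyY_to_XYZ(x, y, Y):
        """CIE xyY -> XYZ: X = x*Y/y, Z = (1 - x - y)*Y/y."""
        return (x * Y / y, Y, (1 - x - y) * Y / y)

    # Same chromaticity, different luminance: grey is dim white.
    d65 = (0.3127, 0.3290)           # D65 white point chromaticity
    print(xyY_to_XYZ(*d65, 1.0))     # white
    print(xyY_to_XYZ(*d65, 0.2))     # grey: same (x, y), lower Y

A brown works the same way: keep the (x, y) of an orange and lower Y.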

One has to be really careful with the terminology: upper-case XY doesn't exist in colour science; the closest term would be CIE XYZ tristimulus values.
If you were referring to the CIE 1931 Chromaticity Diagram used to represent xy chromaticity coordinates, then it should be written with lower-case xy.
The CIE 1931 Chromaticity Diagram and all other chromaticity diagrams are usually drawn with colours at their maximum Luminance value. That being said, nothing prevents one from computing the chromaticity diagram colours with a Luminance value different from 1 (100%), which would allow you to have greyish / brownish colours.
It is important to note that you can't actually display the visible spectrum colours properly, as they are outside the sRGB colourspace gamut. Thus any colour outside the sRGB triangle is incorrectly represented, and sometimes you find chromaticity diagrams where the colours have been altered (essentially, saturation lowered) to fit within the sRGB gamut.
I'm adding an animated GIF so that you can see that the diagram is a 2D projection of the CIE xyY colourspace seen from the top.

Related

How to identify real red pixels?

I'm writing a program that changes all the image pixels to grayscale except for the red ones. At first I thought it would be easier, but I'm having trouble finding the best way to determine whether a pixel is red or not.
The first method I tried was a formula: Green < Red/2 && Blue < Red/1.5
results:
[images: michael jordan, goldhill]
Michael Jordan's image shows some non-red pixels that pass the formula, like #7F3222 and #B15432. So I tried a different method, hue >= 345 || hue <= 9, trying to limit it to only the red part of the color wheel.
results:
[images: michael jordan 2, goldhill 2]
Michael Jordan's image now has fewer non-red pixels, and goldhill's image has more red pixels than before, but it's still not what I want.
Are my methods incorrect, or are they just missing some adjustments? If they're incorrect, how can I solve this problem?
Your question "How to identify 'real' red pixels?" raises the question of what a red pixel actually is, especially if it has to be 'real'.
The RGB (red, green, blue) color model is not well suited to answer that question, therefore you should use the HSV (hue, saturation, value) model.
Hue defines the color in degrees (0 - 360 degrees)
Saturation defines the intensity of the color (0 - 100 %)
Value or Brightness defines the luminosity (0 - 100 %)
Steps:
convert RGB to HSV
if the H value is not red (within +/- 30 degrees of 0; you'll have to define the threshold range of what you consider to be red, 'real' red being 0 degrees):
set S to 0 (zero), by doing so we remove the saturation of the color, which results in a gray shade
leave the brightness (V) as it is (or play around with it and see how it effects the results)
convert HSV to RGB
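A minimal sketch of these steps in Python, using the standard-library colorsys module (the 30-degree tolerance is just an illustrative threshold):

    import colorsys

    def keep_red_only(pixels, hue_tolerance=30):
        """Desaturate every pixel whose hue is not within
        hue_tolerance degrees of pure red (0 degrees)."""
        out = []
        for r, g, b in pixels:                       # 0..255 ints
            h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
            hue_deg = h * 360
            if min(hue_deg, 360 - hue_deg) > hue_tolerance:  # wrap at 360
                s = 0                                # grey shade
            r2, g2, b2 = colorsys.hsv_to_rgb(h, s, v)
            out.append((round(r2 * 255), round(g2 * 255), round(b2 * 255)))
        return out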
Convert from RGB to HSV and vice versa:
RGB to HSV
HSV to RGB
More info on HSV:
https://en.wikipedia.org/wiki/HSL_and_HSV
"All cats are gray in the dark"
Implement a dynamic color range: adjust the 'red' range based on the brightness and/or saturation of the current pixel. Put a weight (how much each affects the range, in %) on the saturation and brightness values to determine your range, and play around to achieve the best results.
You used an RGB method and an HSV method, which is good; both are OK.
The problem is defining red. Hue (or R) alone is not enough: it contains many other colours in the broader sense. Browns are dark/unsaturated reds (or oranges), and pink is a tint of red (red + white, so unsaturated).
So in your first method, I would add a condition: R > 127 (you must find a good threshold yourself). And possibly tighten the other conditions with a higher ratio of R to G and B, and possibly also add a condition on the ratio of R to (G+B). The new first condition distinguishes reds from dark reds/browns, i.e. it is about brightness. Your two original conditions are about "hue" (hue is defined by the top two channel values), and the last condition I added is about saturation.
You can do similarly for HSV: filter H (as you did), but you must also filter V (you want just bright reds) and require a high saturation, so you must filter on all channels.
You should test the saturation levels yourself. The problem is that eyes adapt quickly to colours, so some images with a lot of reddish colours are seen as normal (less reddish) by humans, but come out as more red in the above calculation. So usually for such work there are sliders to adjust; you can try to automate this, but you would need to find the overall hue and brightness of the image, and possibly use complex methods (see CIECAM).
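A sketch of that augmented RGB test; all thresholds here are illustrative and need tuning on your own images:

    def is_red(r, g, b):
        """Original hue conditions plus brightness and saturation checks."""
        return (r > 127              # brightness: rejects dark reds / browns
                and g < r / 2        # original "hue" conditions
                and b < r / 1.5
                and g + b < r)       # crude saturation check on R vs (G+B)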

Blending text, rendered by FreeType in color and alpha

I am using FreeType to render some texts.
The surface where I want to draw the text is a bitmap image with format ARGB, pre-multiplied alpha.
The needed color of the text is also ARGB.
The rendered FT_Bitmap has format FT_PIXEL_MODE_LCD - it is as if the text were rendered in white on a black background, with sub-pixel antialiasing.
So, for every pixel I have three sets of values:
Da, Dr, Dg, Db - destination pixel ARGB (the background image).
Fr, Fg, Fb - FreeType rendered pixel (FT_Bitmap rendered with FT_RENDER_MODE_LCD)
Ca, Cr, Cg, Cb - The color of the text I want to use.
So, the question: how do I properly combine these three sets of values to get the resulting bitmap pixel?
The theoretical answers are OK and even better than code samples.
Interpret the FreeType data not as actual RGB colors (these 'raw' values are for drawing text in black) but as intensities of the destination text color.
So the full intensity of each F color component is F*C/255. However, since your C also includes an alpha component, the intensity is scaled by it:
s' = F*C*A/(255 * 255)
assuming, of course, that F, C, and A are inside the usual range of 0..255. Alpha acts as a fraction, A/255, and the second division brings F*C back into the target range. s' is now the derived source color.
On to plotting it. Per color component, the new color gets added to D, and D in turn gets diminished by the source's alpha, 255-A (scaled).
That leads to the full sum
D' = D*(255-A)/255 + F*C*A/(255 * 255)
equal to (factoring out one division by 255)
D' = (D*(255-A) + F*C*A/255)/255
for each separate channel r, g, b of D, F and C. The alpha, too, effectively needs a separate calculation for each channel, because the FreeType output provides a separate coverage value per channel.
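A sketch of that per-channel blend in Python, under the reading that the FreeType coverage F folds into the alpha (so a = F*Ca/255 per channel, then D' = (D*(255-a) + C*a)/255):

    def blend_lcd_pixel(dest, cov, color):
        """dest: (Dr, Dg, Db), cov: (Fr, Fg, Fb), color: (Ca, Cr, Cg, Cb),
        all values 0..255. Returns the new destination (r, g, b)."""
        ca = color[0]
        out = []
        for d, f, c in zip(dest, cov, color[1:]):
            a = f * ca // 255                    # effective per-channel alpha
            out.append((d * (255 - a) + c * a) // 255)
        return tuple(out)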
If the calculation is too slow, you could compare the visual result with the non-LCD-optimized grayscale output from FreeType. I suspect that, especially on 'busy' (not entirely monochrome) backgrounds, the extra calculations are simply not worth it.
The numerical advantage of a pure grayscale input is that you only have to calculate A and 1-A once for each triplet of RGB colors.
The "background" also has an alpha channel but to draw text "on" it you can regard this as 'unused'. Drawing a transparent item onto another transparent item does not, in general, change its intrinsic transparency.
After some investigation, I found the right answer. It is disappointing.
It is impossible to properly draw subpixel-rendered graphics (including fonts) onto a transparent image in RGBA format.
In order to render such graphics properly, a format that supports a separate alpha channel for every color channel is mandatory.
For example, 48 bits per pixel: RrGgBb, where r, g and b are the alpha channels for the red, green and blue color channels respectively.

I have a dot bouncing around an image. Need to calculate angles of reflection off of groups of pixels (surface of objects)

Suppose we have an image (pixel buffer) that is in black and white, so each pixel is either black or white (not gray scale).
Now, somewhere in the middle of that image, place a green dot. It may have a radius of n for rendering purposes, but it is really just a point. Give the dot a randomly selected direction and speed, and start it moving. If the image is all white pixels, the dot will bounce off the edges of the image, wandering around the picture indefinitely. This is quite easy... just reverse either the rise or the run of the dot's vector.
Next, suppose the image has some globs of black pixels. As the dot encounters these globs, the angle of reflection needs to be calculated. This is also quite easy if the black pixels have a fixed slope, as in my sketch (the blue Xs represent black pixels). You can find the slope of the blue Xs and easily calculate the new vector.
But how about the case where the black pixels form really unfriendly surfaces? What are some approaches to figuring out this angle?
This is the subject that I am interested in.
There must be some algorithms that exist for this kind of purpose, but I never ran across any in school. I am not asking how to code this, rather approaches to writing the algorithm to do this. I have a few ideas that I'll try, but if there are some standard ways to do this that exist, I'd like to learn about them.
Obviously I'd like to start with Black and White then move into RGBA.
I am looking for any reference material on just this sort of subject. Websites, books, or other references are very very welcome.
Also, if there are different StackOverflow tags that could be good, let me know.
Thanks much!
Edit: more pics and information
Maybe I wasn't clear about what I meant by "unfriendly surfaces". In the previous picture, our blue Xs happened to form a line. Picture a case where it is not a line, but rather a weird shape.
We start with our green pixel traveling at a slope of 2. Suppose its vector is 12 pixels per frame. It would have a projected path like this:
But suppose instead of a nice friendly line, we have this:
In my mind I can kind of see what is likely to happen if this were a ball and some walls.
Look for edge detection algorithms used in image processing. Some edge detectors also approximate the direction of edges.
You can think of the pixel neighborhood of the green dot, maybe somewhere between 3x3 and 7x7, as a small edge direction detection problem. One approach would take two passes at the pixels:
In the first pass, smooth the sharp black/white pixels using a Gaussian filter.
In the second pass, apply an edge detection operator, such as Sobel, Prewitt or Roberts to produce the X and Y derivatives of the pixels' intensity. You can then approximate the direction as:
angle = arctan(dx/dy)
The motivation for the smoothing pass is to give the edge detection operator information from farther-away pixels.
The Wikipedia page on the Canny edge detector has a good discussion on obtaining the direction (the "gradient") of an edge, including an example of a particular Gaussian filter you can use for smoothing.
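A sketch of that neighborhood approach in Python, using plain 3x3 Sobel kernels. Note that atan2 is used instead of a raw arctan to avoid division by zero; it returns the gradient direction, which is perpendicular to the edge itself:

    import math

    SOBEL_X = ((-1, 0, 1), (-2, 0, 2), (-1, 0, 1))
    SOBEL_Y = ((-1, -2, -1), (0, 0, 0), (1, 2, 1))

    def gradient_angle(img, x, y):
        """img is a 2D list of intensities (0 black, 255 white).
        Returns the gradient angle in radians at (x, y)."""
        dx = dy = 0
        for j in range(3):
            for i in range(3):
                p = img[y + j - 1][x + i - 1]
                dx += SOBEL_X[j][i] * p
                dy += SOBEL_Y[j][i] * p
        return math.atan2(dy, dx)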
I am doing something similar with a ball and randomly generated backgrounds.
The filtering and edge detection are highly technical, but all the other approaches using a 5x5 or 3x3 grid seem similarly difficult.
However, I think I may have a cheap way around this. Assuming a ball travelling in any direction, scan all the leading edges of the ball - a semicircle. The further toward the edge of the ball the collision occurs, the closer to vertical the collision is. Again, I think this should allow you to easily infer the background normal, and from there the answer is fairly simple.
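Once you have the surface normal, the bounce itself is the standard reflection formula v' = v - 2(v.n)n; a minimal sketch:

    import math

    def reflect(vx, vy, normal_angle):
        """Reflect velocity (vx, vy) off a surface whose unit normal
        points at normal_angle radians: v' = v - 2(v.n)n."""
        nx, ny = math.cos(normal_angle), math.sin(normal_angle)
        dot = vx * nx + vy * ny
        return vx - 2 * dot * nx, vy - 2 * dot * ny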

comparing bmps for brightness

I have two bmp files of the same scene and I would like to determine if one is brighter than the other.
Similarly I have a set of bmps with different contrasts and another set of bmps with different saturation.
How do I compare these images for brightness, contrast and saturation? These test images are saved by a tool provided by the sensor manufacturer.
I am using gcc 4.5.
To compare the brightness of two images you need to compare the grey values of the pixels (yes, one by one). In the RGB colour space the brightness (grey value) is the mean of R, G and B, so you have brightness = (R+G+B) / 3.
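A minimal sketch of that pixel-by-pixel comparison, assuming each image has already been decoded into a flat list of (R, G, B) tuples:

    def mean_brightness(pixels):
        # brightness = (R + G + B) / 3, averaged over all pixels
        return sum(r + g + b for r, g, b in pixels) / (3 * len(pixels))

    # image_a, image_b: flat lists of (R, G, B) tuples
    # "a" is brighter if mean_brightness(image_a) > mean_brightness(image_b)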
Comparing the contrast and especially the saturation will prove to be not that easy; for a start you could have a look at HSL and HSV, but in general I'd suggest getting a good book on the image processing topic.
The answer of (R+G+B)/3 is really not even a good approximation of brightness (at least given what we know today)!
[BRIGHTNESS]
What you really SHOULD do is convert to another color space that incorporates brightness as one of its channels, and compare using that channel. Look here:
Formula to determine brightness of RGB color
there are a great couple of answers there that talk about converting RGB into luminance, etc...
[CONTRAST]
Contrast is a function of the spread of the pixel values throughout the full range of possible pixel values. One understands the contrast by building a histogram of all the pixels (where the x axis represents a pixel value, and the y axis represents how many pixels have that value), and analyzing the histogram to see whether there is good distribution throughout the entire range or not. Comparing contrast can be done many ways, but a good starting point would be to find the pixel-value center point (the mean of the histogram data) of each image, plus some histogram width parameter (say, a width about the center point large enough to incorporate 90% of all pixels), and compare the center and width parameters of both images. This is ONLY a starting point.
[SATURATION]
To compare saturation, one might convert the image to the HSL colour space. The S in HSL stands for Saturation. Comparing saturation within this colour space becomes exactly like comparing brightness as outlined above!!!
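A sketch combining the three measurements: Rec. 601 luma weights for brightness, the 90% histogram width for contrast, and mean HSL saturation via Python's colorsys. The 5%/95% cut points are illustrative:

    import colorsys

    def image_stats(pixels):
        """pixels: flat list of (R, G, B) tuples, 0..255.
        Returns (mean luma, 90% luma spread, mean HSL saturation)."""
        lumas, sats = [], []
        for r, g, b in pixels:
            lumas.append(0.299 * r + 0.587 * g + 0.114 * b)
            _, _, s = colorsys.rgb_to_hls(r / 255, g / 255, b / 255)
            sats.append(s)
        lumas.sort()
        lo = lumas[int(0.05 * len(lumas))]
        hi = lumas[min(len(lumas) - 1, int(0.95 * len(lumas)))]
        return (sum(lumas) / len(lumas), hi - lo, sum(sats) / len(sats))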

How to get the wavelength of a pixel using RGB?

I have a project that classifies the color of a pixel: whether it is red, violet, orange, or any other color on the color wheel. I know that there are over 16 million color combinations for pixels, but I read a web page that says it's possible to do my project using the wavelengths of colors. Please give me the formula to compute the wavelength from RGB values. Thanks!
A pure color has a wavelength (any single color LED will have a specific wavelength).
Red, green and blue each have a range of wavelengths. However, when you make an RGB color, you add these wavelengths together, which will NOT give you a new wavelength. The eye can't distinguish a yellow composed of one wavelength from the yellow produced by adding red and green (that's just how the eye works).
I'd recommend reading up on color theory
http://en.wikipedia.org/wiki/RGB_color_model
Well, RGB on a monitor maps to three independent levels of red, green and blue light, so there are (mostly) three distinct wavelengths present in any one perceived colour.
BUT if you can convert your RGB colour value to its equivalent HSL, the H part (Hue) is the dominant colour as far as wavelength goes, if you are prepared to ignore the saturation (think of it as whiteness).
Based on that, you could approximate the dominant wavelength of a colour from its H value.
Red light is roughly 630–740nm wavelength, Violet is roughly 380–450nm.
Working out the wavelength is a bit tricky and, as Goblin mentioned, not always possible (another example is the colour obtained by mixing equal amounts of red and blue light; that purple has no single wavelength).
But if all you want to do is identify the colour by name, then the HSV model would be a good one to use. HSV is Hue (where the colour is around the colour wheel), Saturation (how much colour there is as opposed to being a shade of black/grey/white) and Value (how bright or dark the pixel is). In this case Hue is probably exactly what you want.
If you are using a .NET language, then you're in luck. See the Color.GetHue Method which does all the work for you.
Otherwise, see HSV at Wikipedia for more details.
In essence, if you have R, G and B as floats ranging from 0.0 to 1.0 (instead of ints from 0 to 255 for example), then:
M = max(R, G, B)
m = min(R, G, B)
C = M-m
if M = m then H' is undefined (The pixel is some shade of grey)
if M = R then H' = (G-B)/C mod 6
if M = G then H' = (B-R)/C + 2
if M = B then H' = (R-G)/C + 4
When converting RGB to HSV you then multiply H' by 60 degrees, but for your purposes H' is probably fine. It will be a float ranging from 0 up to (but not including) 6. 0 is Red (as is 6). 1 is Yellow, with values between 0 and 1 shading between Red and Yellow, so 0.5 would be Orange. The important landmarks are:
0 - Red
1 - Yellow
2 - Green
3 - Cyan
4 - Blue
5 - Purple
6 - Red (again)
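A direct Python transcription of those rules (returns None for greys, otherwise a float in 0..6 matching the landmarks above):

    def hue_prime(r, g, b):
        """r, g, b are floats in 0.0..1.0."""
        M, m = max(r, g, b), min(r, g, b)
        c = M - m
        if c == 0:
            return None              # grey: hue undefined
        if M == r:
            return ((g - b) / c) % 6
        if M == g:
            return (b - r) / c + 2
        return (r - g) / c + 4       # M == b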
Hope that helps.
http://en.wikipedia.org/wiki/Visible_spectrum
It is possible. See above. A grey background apparently makes it easier. You might get something like that on your own, and even improve on it. But to do it accurately will cost major dollars. You will need a colorimetry expert, a calibrated monitor and a calibrated viewing environment (since the dominant wavelength of your pixel just means the monochromatic wavelength it approximates on your calibrated monitor in your calibrated viewing environment). All this will be a few thousand dollars. The work done at the above link, shown on Wikipedia, does not seem that accurate, but it is probably what you want.
Just convert the RGB to HSV, take the hue in degrees, and the answer is:
wavelength ≈ 650 - (250 / 270) * D
where D is the hue in degrees.
Considering...
Violet has a 380–450 nm wavelength, &
Blue has a 450–495 nm wavelength, &
Green has a 495–570 nm wavelength, &
Yellow has a 570–590 nm wavelength, &
Orange has a 590–620 nm wavelength, &
Red has a 620–750 nm wavelength,
then you just need to check if it is in those ranges, then you can classify it.
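A sketch of that classification, combining the hue-to-wavelength approximation above with the listed ranges (the linear mapping is the rough approximation from this answer, not exact colorimetry):

    RANGES = [(380, 450, "violet"), (450, 495, "blue"), (495, 570, "green"),
              (570, 590, "yellow"), (590, 620, "orange"), (620, 750, "red")]

    def classify(hue_degrees):
        wavelength = 650 - (250 / 270) * hue_degrees   # rough linear map
        for lo, hi, name in RANGES:
            if lo <= wavelength < hi:
                return name
        return "outside the approximated visible range"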
Hope this helps!
