Measuring amount of light using OpenCV?

My idea is to use a vision sensor (camera) to measure the amount of light. I would
like to know whether it is possible to acquire information about light intensity
using OpenCV.
Is there some OpenCV function or property which can get such information from the
scene?
Many thanks in advance.

Light intensity is a tough one because most color systems can't tell the difference between lightness of a color and light intensity. You can get some idea of the light intensity in a scene by measuring the "lightness" of the scene overall. The saturation might also give you a decent clue.
To do this I would convert the color space to HSL and then aggregate the L channel to get a very rough measure of Lightness. The S channel is the Saturation.
OpenCV has this conversion natively.* Even if it didn't, it's not a difficult operation; the Wikipedia page has the formulas.
http://en.wikipedia.org/wiki/HSL_and_HSV
*Thanks jlengrand and Seçkin Savaşçı
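As a rough sketch of that suggestion using OpenCV's Python bindings (the file name is a placeholder, and the 0-255 channel averages are only a crude proxy for scene brightness):

    import cv2

    img = cv2.imread("scene.jpg")                # OpenCV loads images as BGR
    hls = cv2.cvtColor(img, cv2.COLOR_BGR2HLS)   # channel order is H, L, S
    h, l, s = cv2.split(hls)

    print("mean lightness (0-255):", l.mean())   # rough overall scene lightness
    print("mean saturation (0-255):", s.mean())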

Related

how to extract photometric data from images

Hello, I have some confusion about extracting data from images, and I know lots of image-processing experts are here. I would appreciate it if someone could help me understand some concepts. How can we get information, like the intensity of the light source, from images? I know we can extract RGB values, but those values are associated with the surfaces and not with the light source's spectrum (I am talking about a white light source with different spectra, not a monochromatic wavelength). Is there a way to extract some information about the light source from images with MATLAB? Should we convert color images to greyscale images? If yes, can you explain how greyscale gives us intensity (or other photometric data)? I know about HDRI, so feel free to refer to them.
In almost every language you can get the red (= r), green (= g), blue (= b) and alpha bytes of each pixel of an image. The internet gives you many formulas to calculate the different possible values based on the amounts of red, green and blue.
For example, this link shows how to calculate the HSV value from r, g and b.
It is more or less a question of HOW (language, libraries) you want to do it.
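For illustration, here is a sketch of the standard RGB-to-HSV formulas that pages like the one linked above describe, for a single pixel with r, g, b in the 0-255 range:

    def rgb_to_hsv(r, g, b):
        # Normalize to [0, 1] before applying the standard formulas.
        r, g, b = r / 255.0, g / 255.0, b / 255.0
        mx, mn = max(r, g, b), min(r, g, b)
        d = mx - mn
        if d == 0:
            h = 0.0                              # grey: hue is undefined, use 0
        elif mx == r:
            h = (60 * ((g - b) / d)) % 360
        elif mx == g:
            h = 60 * ((b - r) / d) + 120
        else:
            h = 60 * ((r - g) / d) + 240
        s = 0.0 if mx == 0 else d / mx
        v = mx
        return h, s, v                           # hue in degrees, s and v in [0, 1]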

clarifying gamma topic (in general)

I understand the gamma topic, but maybe not 100%, and I
want to ask somebody to answer and clarify my
doubts.
As I understand it,
there is a natural linear color space where a color
with value 100 is exactly four times brighter than a color
with value 25, and so on (it would be good to hold and
process color images in that format/space).
When you copy such a linear-light image onto a
device you get wrong colors, because generally
some middle values appear too dark, so you
generally need to raise those middle values by something
like x <- power(x, 1/2.2) (or similar).
This is reasonably clear, but now:
are normal everyday bitmap or JPEG images
gamma-precorrected? If they are, when I do
things like blending, should I apply an
inverse gamma to both of them to convert them to
linear, then add them, and then gamma-correct the
result?
When out-of-range values appear, is it better to clip the
colors, rescale them linearly into the wider range, or
something else?
Thanks for answers.
There's nothing "natural" about linear color values. Your own eyes respond to brightness logarithmically, and the phosphors in televisions and monitors respond exponentially.
GIF and JPEG files do not specify a gamma value (JPEG can with extra EXIF data), so it's anybody's guess what their color values really mean. PNG does, but most people ignore it or get it wrong, so those can't be trusted either. If you're displaying images from a file, you'll just have to guess or experiment. A reasonable guess is 2.2, unless you know the file came from an older Apple system, in which case 1.8 may be closer (classic Mac OS used a display gamma of 1.8).
If you need to store images accurately, use JPEG from a camera with good EXIF data embedded for photos, and maybe something like TIFF for uncompressed images. And use high-quality software that understands these things. Unfortunately, that's pretty rare and/or expensive.
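To make the blending question above concrete, here is a minimal sketch that assumes 8-bit inputs encoded with a simple power-law gamma of 2.2 (an approximation; real sRGB also has a small linear segment near black):

    import numpy as np

    GAMMA = 2.2

    def to_linear(img8):
        # Undo the encoding gamma: normalize to [0, 1], then raise to 2.2.
        return (img8.astype(np.float64) / 255.0) ** GAMMA

    def to_encoded(linear):
        # Re-apply the encoding gamma and clip out-of-range values.
        return np.clip((linear ** (1.0 / GAMMA)) * 255.0, 0, 255).astype(np.uint8)

    def blend(a8, b8, alpha=0.5):
        # Average in linear light, not in the encoded (gamma) space.
        return to_encoded(alpha * to_linear(a8) + (1.0 - alpha) * to_linear(b8))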

Image processing: background subtraction

I have a sequence of images taken from a camera. The images consist of a hand and its surroundings. I need to remove everything except the hand.
I am new to image processing. Would anyone help me with the above question? I am comfortable using C and MATLAB.
A really simple approach if you have a stationary background and a moving hand (and quite a few images!) is simply to subtract the average of the set of images from each image. If nothing else, it's a gentle introduction to MATLAB.
The name of the problem you are trying to solve is "Image Segmentation". The Wikipedia page on image segmentation is a good start.
If lighting consistency isn't a problem for you, I'd suggest starting with simple RGB thresholding and see how far that gets you before trying anything more complicated.
Have a look at OpenCV, a FOSS library for computer vision applications. Specifically, see the Video Surveillance module. For a walk through of background subtraction in MATLAB, see this EETimes article.
Can you specify what kind of images you have? Is the background moving or static? For a static background it is fairly straightforward: you simply subtract the incoming image from the background image, and you can use some morphological operations to make the result look better. It all depends on the quality of the images you have. If you have a moving background, I would suggest you go for color-based segmentation: convert the image to YCbCr, then threshold appropriately. I know there are some papers available on this (however, I don't have time to locate them); I suggest reading them first. Here is one link which might help you; read the skin segmentation part.
http://www.stanford.edu/class/ee368/Project_03/Project/reports/ee368group08.pdf
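As a hedged sketch of that YCbCr thresholding idea with OpenCV: the Cb/Cr bounds below are commonly cited starting values from the skin-segmentation literature, not something to rely on without tuning (note OpenCV's conversion gives channels in Y, Cr, Cb order):

    import cv2
    import numpy as np

    def skin_mask_ycbcr(frame_bgr):
        ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
        lower = np.array([0, 133, 77], dtype=np.uint8)    # Cr >= 133, Cb >= 77
        upper = np.array([255, 173, 127], dtype=np.uint8) # Cr <= 173, Cb <= 127
        return cv2.inRange(ycrcb, lower, upper)           # 255 where skin-like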
Background subtraction is simple to implement (estimate the background as the average of all frames, then subtract each frame from the background and threshold the resulting absolute difference), but unfortunately it only works well if:
1. the camera has manual gain and exposure,
2. lighting conditions do not change,
3. the background is stationary, and
4. the background is visible for much longer than the foreground.
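As a sketch of that baseline under those assumptions, where frames is a list of grayscale frames as NumPy arrays and the threshold of 30 is an arbitrary starting value to tune:

    import numpy as np

    def segment_foreground(frames, thresh=30):
        background = np.mean(frames, axis=0)        # average all frames
        masks = []
        for f in frames:
            diff = np.abs(f.astype(np.float64) - background)
            masks.append((diff > thresh).astype(np.uint8) * 255)
        return masks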
Given your description I assume these are not the case, so what you can use, as already pointed out, is colour as a means of segmenting foreground from background. As it's a hand you are trying to isolate, your best bet is to learn the hand colour. OpenCV provides some means of doing this. If you want to do it yourself, take the colour of some of the hand pixels (you would need to specify these manually for at least one frame) and convert them to hue (which encapsulates the colour in a brightness-independent way; skin colour has a very constant hue), then build a hue histogram. Compare the rest of the pixels against it and decide whether each hue is similar enough.
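A rough sketch of that hue idea with OpenCV; the hue and saturation bounds here are illustrative guesses rather than tuned skin values (note that OpenCV stores 8-bit hue as 0-179):

    import cv2
    import numpy as np

    def hue_mask(frame_bgr, hue_lo=0, hue_hi=20, sat_lo=40, val_lo=40):
        hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
        lower = np.array([hue_lo, sat_lo, val_lo], dtype=np.uint8)
        upper = np.array([hue_hi, 255, 255], dtype=np.uint8)
        return cv2.inRange(hsv, lower, upper)   # 255 where the hue looks skin-like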

ImageProcessing in WPF (Fant BitmapScalingMode)

My application presents an image that can be scaled to a certain size. I'm using the WPF Image control with the scaling mode set to Fant.
However, there is no documentation on how this scaling algorithm works.
Can anyone point me to a link describing this algorithm?
Nir
Avery Lee of VirtualDub states that it's a box filter for downscaling and linear for upscaling. If I'm not mistaken, "box filter" here means basically that each output pixel is a "flat" average of several input pixels.
In practice, it's a lot more blurry for downscaling than GDI's cubic downscaling, so the theory about averaging sounds about right.
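To illustrate what that "flat average" means, here is a toy box-filter downscale for a 2-D grayscale array with an integer scale factor k; this is only a sketch of the concept, not WPF's actual implementation:

    import numpy as np

    def box_downscale(gray, k):
        # Crop so both dimensions are multiples of k, then average each
        # k x k block of input pixels into one output pixel.
        h = gray.shape[0] - gray.shape[0] % k
        w = gray.shape[1] - gray.shape[1] % k
        blocks = gray[:h, :w].reshape(h // k, k, w // k, k)
        return blocks.mean(axis=(1, 3))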
I know what it is, but I couldn't find much on Google either :(
http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=4056711 is the appropriate paper I think; behind a pay-wall.
You don't need to understand the algorithm to use it. Each time you create a bitmap control that is scaled, you should explicitly choose whether you want high-quality or low-quality scaling.

From an image, how do I determine the shade?

For a database app I'm trying to determine the average shade of a section of a photo, against a colour scale.
Being a novice I'm finding this very difficult to explain so I've created a simple diagram showing exactly what I'm trying to achieve.
http://www.knockyoursocksoff.com/shade/
If anybody has the time to give me some ideas I'd be very grateful.
Best wishes,
Warren.
If you are using color photos, you should first convert the selected area from RGB (red, green, blue) to HSL/HSV (article).
HSL stands for "hue, saturation, lightness".[1] The number you are interested in is the lightness.
In the most general terms, the lightness refers to how you perceive the brightness of a colored surface. It's hard to use the red/green/blue components to say whether a patch of red is brighter/darker than, say, a patch of blue. Converting to HSL takes care of that problem.
Once you have done the conversion, you can simply average the lightness values of your selected area.
Quick note on lightness values: Technically, you can't simply average the lightness values because the perception of lightness is not linear (article). But, unless you are writing a deeply scientific application, simply averaging the lightness will give you an "accurate enough" value.
[1] In Adobe Photoshop, they call it HSB (hue, saturation, brightness).
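As a quick sketch of that approach with OpenCV, where the file name and rectangle coordinates are placeholders for your selected area:

    import cv2

    img = cv2.imread("photo.jpg")
    hls = cv2.cvtColor(img, cv2.COLOR_BGR2HLS)
    x0, y0, x1, y1 = 100, 100, 200, 200        # placeholder selection rectangle
    roi_lightness = hls[y0:y1, x0:x1, 1]       # channel 1 is L in OpenCV's HLS
    print("average shade (0-255):", roi_lightness.mean())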
I think I would start by just averaging the pixel values:
    total = 0
    for x = start_x to end_x
        for y = start_y to end_y
            total += getPixel(x, y)
    shade = total / ((end_x - start_x + 1) * (end_y - start_y + 1))
It's going to be more complicated if you're doing it in color.
