how to extract photometric data from images - rgb

Hello, I have some confusion about extracting data from images, and I know there are lots of image processing experts here. I would appreciate it if someone could help me understand a few concepts. How can we get information, like the intensity of the light source, from images? I know we can extract RGB values, but these values are associated with the surfaces and not with the light source spectrum (I am talking about a white light source with different spectra, not a monochromatic wavelength). Is there a way to extract some information about the light source from images with MATLAB? Should we convert color images to grayscale images? If so, can you explain how grayscale gives us intensity (or other photometric data)? I know about HDRI, so feel free to refer to it.

In just about every language you can get the red (R), green (G), blue (B), and alpha bytes of each pixel of an image. The internet offers many formulas for calculating the various derived values from the amounts of red, green, and blue.
For example, this link shows how to calculate the HSV value from R, G, and B.
It is more or less a question of HOW (language, libraries) you want to do it.
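For a concrete starting point, here is a minimal sketch in Python, assuming Pillow is installed and using "photo.jpg" as a placeholder filename; it reads one pixel's RGB bytes, computes the common Rec. 601 luma as a rough intensity value, and converts to HSV with the standard-library colorsys module:

    # Minimal sketch: per-pixel RGB, luma, and HSV. Assumes Pillow;
    # "photo.jpg" is a placeholder filename.
    import colorsys
    from PIL import Image

    img = Image.open("photo.jpg").convert("RGB")
    r, g, b = img.getpixel((10, 20))          # raw 8-bit channel values at (x, y)

    # Rec. 601 luma: a weighted sum that approximates perceived intensity.
    luma = 0.299 * r + 0.587 * g + 0.114 * b

    # HSV from the same pixel; colorsys expects channels scaled to [0, 1].
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    print(f"luma={luma:.1f}, hue={h:.3f}, sat={s:.3f}, val={v:.3f}")

Keep in mind that these are display-referred pixel values, not calibrated photometric measurements of the light source; without knowledge of the camera response (which is where HDRI techniques come in) they are only relative.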


LABVIEW matrix to graylevel picture

I'm using the Mightex BTE-B050-U camera. Using the example the company provided for LabVIEW, I created a 2D array that I want to turn into a grayscale image. I have no idea how to do it, since I don't know LabVIEW, and the MATLAB version of the code produces many errors.
I'd appreciate it if someone could let me know how I can take this 2D array and display it as a grayscale image (a beam profiler, if you wish).
P.S.
I have no idea why they ignored 28 "words", so I just tried to follow that logic and adapted it to the pixel count of my camera.
I would encourage you to read about the Intensity Graph, and to use the linked article, which explains how to change the color scale on the graph:
Changing the color on an Intensity Graph
Here is an example that converts a 2-D array into an image: Convert Array to Image.
In short, use Flatten Pixmap.vi, Draw Flattened Pixmap.vi, and Picture to Pixmap.vi from the Picture Functions palette (Programming » Graphics & Sound).
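If LabVIEW is not a hard requirement, the same conversion is only a few lines elsewhere. A minimal sketch in Python, assuming NumPy and Pillow are available, with a random placeholder array standing in for the camera data:

    # Sketch: render a 2-D array of sensor counts as an 8-bit grayscale image.
    # Assumes NumPy + Pillow; `counts` stands in for the array read from the camera.
    import numpy as np
    from PIL import Image

    counts = np.random.randint(0, 4096, size=(480, 640))   # placeholder 12-bit data

    # Scale the full data range into 0..255 so the image uses the whole gray scale.
    lo, hi = counts.min(), counts.max()
    gray8 = ((counts - lo) * 255.0 / max(hi - lo, 1)).astype(np.uint8)

    Image.fromarray(gray8, mode="L").save("beam_profile.png")

The min/max rescaling is the same idea as adjusting the color scale on an Intensity Graph: it maps whatever range the beam profile occupies onto the displayable gray range.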

How can I determine the colorspace (RGB) profile of my data?

I have a standard JPEG image, which I use within some commercial software to colorize other data (by mapping the image's colors onto the data). I then export the colored data from this software to an XYRGB ASCII file, i.e. each row stores the data coordinates in the first two columns and the three RGB components in the last three columns.
Since I need to convert the colors to CIELab or CIELuv, it seems I need to know exactly which colorspace (RGB, sRGB, gamma, whitepoint - you name it) my RGB values are in. But the question is: how can I find out? Or could I just assume that a certain profile is a good approximation?
(Remark: the company behind the commercial software was not able to tell me any specifics...)
If you don't know the provenance of the image, there's nothing you can do to determine the color space from the RGB data alone. It's a little like having a blueprint without a scale. You could guess and check with an application like Photoshop that can assign a profile to an image, but even then it's not always obvious which is correct unless the image contains colors you can recognize as correct.
For many images sRGB is a good guess: most images on the web are sRGB, and many non-color-managed apps assume sRGB. But understand that it is still a guess. If color accuracy is critical, you need the profile.
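If you do settle on sRGB as the working assumption, the conversion to CIELab is short. A sketch in Python using scikit-image, whose rgb2lab assumes sRGB primaries and a D65 white point; "data.txt" stands in for the exported XYRGB file:

    # Sketch: convert XYRGB rows to CIELab under the assumption that the RGB
    # values are 8-bit sRGB. "data.txt" is a placeholder for the exported file.
    import numpy as np
    from skimage.color import rgb2lab

    rows = np.loadtxt("data.txt")            # columns: x, y, R, G, B
    rgb = rows[:, 2:5] / 255.0               # scale to [0, 1] as rgb2lab expects

    # rgb2lab assumes sRGB gamma/primaries and a D65 white point.
    lab = rgb2lab(rgb.reshape(-1, 1, 3)).reshape(-1, 3)
    print(lab[:5])                           # L*, a*, b* for the first few points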

Get DCT values from image

I am pretty new to everything related to image processing, so please bear that in mind. I have been trying to understand how to get the DCT DC coefficients of the luminance blocks of an image. I have read some material, e.g. https://en.wikipedia.org/wiki/Discrete_cosine_transform#DCT-I and other sources, but I am not sure I got it. So if we have an RGB image and can read all the red, green, and blue values from it, how can we use them to get DCT(m,n), where m,n is the position of each pixel?
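No answer was posted here, but for reference the usual JPEG-style procedure is: convert RGB to YCbCr, tile the Y (luminance) channel into 8x8 blocks, level-shift by 128, and take a 2-D DCT of each block; coefficient (0,0) of each block is its DC value, so m,n index frequencies within a block rather than individual pixels. A sketch in Python, assuming Pillow and SciPy are available and using "photo.jpg" as a placeholder filename:

    # Sketch: DC coefficients of 8x8 luminance blocks, the way JPEG does it.
    # Assumes Pillow + SciPy; "photo.jpg" is a placeholder.
    import numpy as np
    from PIL import Image
    from scipy.fft import dctn

    y = np.asarray(Image.open("photo.jpg").convert("YCbCr"))[:, :, 0].astype(float)
    h, w = y.shape[0] // 8 * 8, y.shape[1] // 8 * 8   # crop to whole 8x8 blocks

    dc = np.empty((h // 8, w // 8))
    for i in range(0, h, 8):
        for j in range(0, w, 8):
            block = dctn(y[i:i+8, j:j+8] - 128.0, norm="ortho")  # 2-D DCT-II
            dc[i // 8, j // 8] = block[0, 0]                     # DC coefficient
    print(dc.shape, dc[0, 0])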

clarifying gamma topic (in general)

I understand the gamma topic, but maybe not 100%, so I want to ask somebody to answer and clarify my doubts.
As I understand it, there is a natural linear color space where a color with value 100 is exactly four times brighter than a color with value 25, and so on (it would be good to hold and process color images in that format/space).
When you copy such a linear-light image onto a device you get wrong colors, because generally some middle values appear too dark, so you need to raise those middle values by something like x <- power(x, 1/2.2) (or something like that).
This is reasonably clear, but now: are normal everyday bitmap or JPEG images gamma-precorrected? If they are, when I do things like blending, should I apply inverse gamma to both of them to convert them to linear, then add them, and then gamma-correct the result?
And when out-of-range values appear, is it better to clip the colors, or to rescale them linearly into the wider range, or something else?
Thanks for any answers.
There's nothing "natural" about linear color values. Your own eyes respond to brightness logarithmically, and the phosphors in televisions and monitors respond exponentially.
GIF and JPEG files do not specify a gamma value (JPEG can with extra EXIF data), so it's anybody's guess what their color values really mean. PNG does, but most people ignore it or get it wrong, so it can't be trusted either. If you're displaying images from a file, you'll just have to guess or experiment. A reasonable guess is 2.2, unless you know the file came from an older Apple system, in which case 1.8 (classic Mac OS used a display gamma of 1.8).
If you need to store images accurately, use JPEG from a camera with good EXIF data embedded for photos, and something like TIFF for uncompressed images. And use high-quality software that understands these things. Unfortunately, that's pretty rare and/or expensive.
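On the blending sub-question: yes, if the sources are gamma-encoded, the usual recipe is exactly the one described in the question: decode to linear, blend, re-encode. A minimal sketch in Python with NumPy, assuming a plain 2.2 power curve (real sRGB uses a slightly different piecewise function):

    # Sketch: 50/50 blend of two gamma-encoded images done in linear light.
    # Assumes NumPy and a plain 2.2 gamma; sRGB's actual curve is piecewise.
    import numpy as np

    def blend_linear(a, b, gamma=2.2):
        """a, b: uint8 arrays of the same shape, gamma-encoded."""
        lin_a = (a / 255.0) ** gamma          # decode to linear light
        lin_b = (b / 255.0) ** gamma
        mixed = 0.5 * lin_a + 0.5 * lin_b     # average in linear space
        out = np.clip(mixed, 0.0, 1.0) ** (1.0 / gamma)   # re-encode, clip range
        return (out * 255.0 + 0.5).astype(np.uint8)

Clipping to the valid range before re-encoding, as above, is the common answer to the out-of-range question; rescaling the whole image instead changes its overall contrast and is usually reserved for HDR-style tone mapping.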

Measuring amount of light using OpenCV?

My idea is to use a vision sensor (camera) to measure the amount of light. I would like to know whether it is possible to acquire information about light intensity using OpenCV.
Is there some OpenCV function or property which can get such information from the scene?
Many thanks in advance.
Light intensity is a tough one, because most color systems can't tell the difference between the lightness of a color and the intensity of the light. You can get some idea of the light intensity in a scene by measuring the "lightness" of the scene overall. The saturation might also give you a decent clue.
To do this I would convert the color space to HSL and then aggregate the L channel to get a very rough measure of lightness. The S channel is the saturation.
OpenCV has this conversion natively*, and even if it didn't, it's not a difficult operation: the Wikipedia page has the formulas.
http://en.wikipedia.org/wiki/HSL_and_HSV
*Thanks jlengrand and Seçkin Savaşçı
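For reference, a minimal sketch of that approach with the OpenCV Python bindings (note that OpenCV loads images as BGR and orders the HLS channels H, L, S); "scene.jpg" is a placeholder filename:

    # Sketch: rough scene-lightness estimate via the L channel of HLS.
    # Assumes the OpenCV Python bindings; "scene.jpg" is a placeholder.
    import cv2

    bgr = cv2.imread("scene.jpg")                 # OpenCV loads images as BGR
    hls = cv2.cvtColor(bgr, cv2.COLOR_BGR2HLS)    # channels come out as H, L, S

    lightness = hls[:, :, 1].mean()               # aggregate the L channel
    saturation = hls[:, :, 2].mean()
    print(f"mean lightness: {lightness:.1f}/255, mean saturation: {saturation:.1f}/255")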
