Get DCT values from image

I am pretty new to everything related to image processing, so please bear that in mind. I have been trying to understand how to get the DC coefficients of the DCT of the luminance blocks of an image. I have read some stuff here https://en.wikipedia.org/wiki/Discrete_cosine_transform#DCT-I and in other sources, but I am not sure I got it. So if we have an RGB image and we can get all the red, green and blue values from it, how can we use them to get DCT(m,n), where m,n is the position of each pixel?
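For concreteness, here is a minimal sketch of the usual two-step route, assuming 8-bit RGB, BT.601 luma weights, and the 8x8 DCT-II with the JPEG normalization (the function names are mine): first convert each pixel to a luminance value, then apply the 2D DCT to each 8x8 block of luma samples. The DC coefficient of a block is X(0,0), which under this normalization is just the sum of the 64 samples divided by 8.

#include <math.h>
#include <stdint.h>

static const double PI = 3.14159265358979323846;

/* BT.601 luma from 8-bit RGB: Y = 0.299 R + 0.587 G + 0.114 B. */
static double luma(uint8_t r, uint8_t g, uint8_t b)
{
    return 0.299 * r + 0.587 * g + 0.114 * b;
}

/* One coefficient X(u,v) of the 2D DCT-II of an 8x8 luma block,
 * with the JPEG normalization:
 *   X(u,v) = 1/4 C(u) C(v) sum_{x,y} f(x,y)
 *            cos((2x+1)u pi/16) cos((2y+1)v pi/16),  C(0) = 1/sqrt(2).
 * The DC coefficient is X(0,0) = (sum of all 64 samples) / 8. */
static double dct_coeff(const double block[8][8], int u, int v)
{
    double cu = (u == 0) ? 1.0 / sqrt(2.0) : 1.0;
    double cv = (v == 0) ? 1.0 / sqrt(2.0) : 1.0;
    double sum = 0.0;
    for (int x = 0; x < 8; x++)
        for (int y = 0; y < 8; y++)
            sum += block[x][y]
                 * cos((2 * x + 1) * u * PI / 16.0)
                 * cos((2 * y + 1) * v * PI / 16.0);
    return 0.25 * cu * cv * sum;
}

So for the DC value of a block you would fill block[][] with luma() values of the corresponding 8x8 pixel region and call dct_coeff(block, 0, 0).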

Related

Finding max-min pixel luminance on screen/in texture without GLSL support

In my 2D map application, I have 16-bit heightmap textures containing altitudes in meters associated with points on the map.
When I draw these textures on the screen, I would like to display an analysis such that the pixel referring to the highest altitude on the screen is white, the pixel referring to the lowest altitude on the screen is black, and the values in between are interpolated between those two.
I'm using an older OpenGL version and thus do not have access to modern pipeline functionality like GLSL or PBOs (which, as I've heard, can somehow make getting color buffer contents to the CPU side much more efficient than glReadPixels).
I have access to the ATI_fragment_shader extension, which makes it possible to use a basic fragment shader to merge the R and G channels in these textures and get a single float grayscale luminance value.
Then I would be able to re-color these pixels inside the shader (map them to the 0-1 range) based on the maximum and minimum pixel luminance values, but I don't know what those values are.
My question is, between the pixels currently on the screen, how do I find the pixels with maximum and minimum luminance values? Or as an alternative, how do I find these values inside a texture? (Because I could make a glCopyTexImage2D call after drawing the texture with grayscale luminance values on the screen and retrieve the data as a texture).
Stuff I've tried or read about so far:
-If I could somehow get the current pixel RGB values in the color buffer to the CPU side, I could find what I need manually and then use them. However, reading color buffer contents with glReadPixels is unacceptably slow; it's no use even if I set it up so that one read operation completes over multiple frames.
-Downsampling the texture to 1x1 size until the last remaining pixel holds either the minimum or the maximum value, and then using this 1x1 texture inside the shader. I have no idea how to achieve this without GLSL and texel fetching support, since I would have to look up the pixels to the right, above, and above-right of the current one and find a min/max value among them. (A CPU version of this reduction is sketched below.)
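For reference, one 2x2 reduction pass looks like this on the CPU. This is only a sketch, assuming a square float luminance buffer whose side length is a power of two; on the GPU each pass would instead be a draw into a half-sized render target:

/* One 2x2 min (or max) reduction pass over a square float luminance
 * buffer whose side length is a power of two. Repeating the pass,
 * halving size each time, leaves the global min/max in one texel. */
static void reduce_pass(const float *src, float *dst, int size, int want_max)
{
    int half = size / 2;
    for (int y = 0; y < half; y++) {
        for (int x = 0; x < half; x++) {
            float a = src[(2 * y)     * size + (2 * x)];
            float b = src[(2 * y)     * size + (2 * x + 1)];
            float c = src[(2 * y + 1) * size + (2 * x)];
            float d = src[(2 * y + 1) * size + (2 * x + 1)];
            float m = a;
            if (want_max) {
                if (b > m) m = b;
                if (c > m) m = c;
                if (d > m) m = d;
            } else {
                if (b < m) m = b;
                if (c < m) m = c;
                if (d < m) m = d;
            }
            dst[y * half + x] = m;
        }
    }
}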

How to extract photometric data from images

Hello, I have some confusion about extracting data from images, and I know lots of image processing experts are here. I would appreciate it if someone could help me understand some concepts. How can we get some information, like the intensity of the light source, from images? I know we can extract RGB values, but these values are associated with the surfaces and not with the light source spectrum (I am talking about a white light source with different spectra, not a monochromatic wavelength). Is there a way to extract some information about the light source from images with MATLAB? Should we convert color images to grayscale images? If yes, can you explain how grayscale gives us intensity (or other photometric data)? I know about HDRI, so feel free to refer to that.
In every language you can get the red (= r), green (= g), blue (= b) and alpha bytes of each pixel of an image. The internet gives you many formulas to calculate the different possible values based on the amounts of red, green and blue.
For example, this link shows how to calculate the HSV value from r, g and b.
It is more or less a question of HOW (language, libraries) you want to do it.
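As a minimal illustration in C (the formulas are standard; the function names are my own), here are two such per-pixel values, the Rec. 601 grayscale luma and the V channel of HSV:

#include <stdint.h>

/* Rec. 601 grayscale luma from 8-bit R, G, B (integer approximation). */
static uint8_t gray601(uint8_t r, uint8_t g, uint8_t b)
{
    return (uint8_t)((299 * r + 587 * g + 114 * b) / 1000);
}

/* The V (value) channel of HSV is simply max(R, G, B). */
static uint8_t hsv_value(uint8_t r, uint8_t g, uint8_t b)
{
    uint8_t m = r;
    if (g > m) m = g;
    if (b > m) m = b;
    return m;
}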

RGBA PNG alpha processing

I have an RGBA PNG file that is (I think) the capture of a signature from a digitizing tablet. Extracting the image, ALL RGB triplets are 0,0,0 and the alpha channel values are non-zero if the pixel is to carry a tone in the final image. I get all of that.
This PNG has only IHDR, IDAT, and IEND chunks.
My first question is: are my RGB pixels considered the foreground or the background? What might be the proper terminology to describe this file/image?
What equation do I use to apply the alpha to the RGB?
Looking at the alpha values, I can see how to come up with a number, but what general equation would be used to generate the appropriate RGB value, avoiding divide-by-zero or overflow errors if my RGBs had started out with non-zero values?
I have been through the PNG spec and there's something I just don't get.
BTW, I am ultimately producing, in C, a PCL file intended for printing directly to a PCL LaserJet.
The image you display last is the foreground image. There is no inherent foreground or background in a single image.
This link shows how to blend an image with alpha onto another image:
http://en.wikipedia.org/wiki/Alpha_compositing#Alpha_blending
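As a sketch of the non-premultiplied "over" formula from that article, in 8-bit integer arithmetic (the rounding constant and the function name are my choices):

#include <stdint.h>

/* Non-premultiplied "over" blending, per channel, all values 0..255:
 *   out = (fg * alpha + bg * (255 - alpha)) / 255
 * The +127 rounds to nearest; everything fits in 32-bit ints, so
 * there is no overflow and no division by zero for any input. */
static uint8_t blend(uint8_t fg, uint8_t bg, uint8_t alpha)
{
    return (uint8_t)((fg * alpha + bg * (255 - alpha) + 127) / 255);
}

In your signature case, where every RGB triplet is 0 and the target is white paper, this reduces to out = 255 - alpha per channel.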

Making a color completely transparent in OpenCV

I have a basic png file with two colors in it, green and magenta. What I'm looking to do is to take all the magenta pixels and make them transparent so that I can merge the image into another image.
An example would be if I have an image file of a 2D character on a magenta background. I would remove all the magenta in the background so that it's transparent. From there I would just take the image of the character and add it as a layer in another image so it looks like the character has been placed in an environment.
Thanks in advance.
Here's the code I would use.
First, load your image:
IplImage *myImage = cvLoadImage("/path/of/your/image.jpg"); /* loaded as 3-channel BGR */
Then use a mask like this to select the color; you should refer to the documentation. In the following I want to select a blue. Don't forget that in OpenCV images are in BGR format, so 125,0,0 is a blue (it corresponds to the lower bound) and 255,127,127 is blue with a certain tolerance (the upper bound).
I chose the lower and upper bounds with a tolerance to take all the blue of your image, but you can select whatever you want...
IplImage *mask = cvCreateImage(cvGetSize(myImage), IPL_DEPTH_8U, 1); /* single-channel mask */
cvInRangeS(myImage,
           cvScalar(125.0, 0.0, 0.0, 0.0),
           cvScalar(255.0, 127.0, 127.0, 0.0),
           mask);
Now that we have the mask, let's invert it (as we don't want to keep the masked color, but to remove it):
cvNot(mask, mask);
And then copy your image with the mask:
IplImage *myImageWithTransparency = cvCreateImage(cvGetSize(myImage), IPL_DEPTH_8U, 3);
cvZero(myImageWithTransparency); /* pixels outside the mask stay black */
cvCopy(myImage, myImageWithTransparency, mask);
Hope this helps. Please refer to the OpenCV documentation for further information.
Julien

Conversion from RGB to YUV 4:2:0

How do I make a 160*70 bitmap image move over a 640*280 bitmap image and reflect off its edges, after converting both bitmap images into YUV 4:4:4, and write the result into a single YUV file? And how do I convert the same into YUV 4:2:0? Could you please help me with how to code this in C?
Converting to YUV 4:4:4 - This is purely an affine transformation on each RGB vector. Just use the proper formula for whichever YUV variant you need. You'll probably want to separate the image into planes at this point too.
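For example, here is a sketch of one common variant, the full-range BT.601 (JPEG-style) matrix; broadcast/TV variants use different scales and offsets, so check which one your consumer expects:

#include <stdint.h>

static uint8_t clamp8(double x)
{
    if (x < 0.0)   return 0;
    if (x > 255.0) return 255;
    return (uint8_t)(x + 0.5);
}

/* Full-range BT.601 (JPEG-style) RGB -> YCbCr for one pixel. */
static void rgb_to_yuv(uint8_t r, uint8_t g, uint8_t b,
                       uint8_t *y, uint8_t *u, uint8_t *v)
{
    *y = clamp8( 0.299    * r + 0.587    * g + 0.114    * b);
    *u = clamp8(-0.168736 * r - 0.331264 * g + 0.5      * b + 128.0);
    *v = clamp8( 0.5      * r - 0.418688 * g - 0.081312 * b + 128.0);
}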
Converting to YUV 4:2:0 - This is purely a resampling problem. You need to resample the U and V planes to half width and half height. Do NOT just skip samples ("nearest-neighbor sampling"); this will result in very ugly aliasing. You could simply average the corresponding 2x2 squares or use a more advanced filter. For downsampling, area-average is pretty close to ideal anyway; Gaussian may give mildly better results.
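A sketch of the 2x2 area-average for one chroma plane (assuming even width and height):

#include <stdint.h>

/* 4:4:4 -> 4:2:0 for one chroma plane (U or V): average each 2x2
 * square. w and h are the full-resolution dimensions, assumed even. */
static void downsample_chroma(const uint8_t *src, int w, int h, uint8_t *dst)
{
    for (int y = 0; y < h / 2; y++) {
        for (int x = 0; x < w / 2; x++) {
            int sum = src[(2 * y)     * w + (2 * x)]
                    + src[(2 * y)     * w + (2 * x + 1)]
                    + src[(2 * y + 1) * w + (2 * x)]
                    + src[(2 * y + 1) * w + (2 * x + 1)];
            dst[y * (w / 2) + x] = (uint8_t)((sum + 2) / 4); /* rounded mean */
        }
    }
}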
If you don't mind using library code, libswscale from ffmpeg can do both of these steps for you, and will do it very fast.
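If you go that route, the call sequence might look like this; a sketch assuming a packed 24-bit RGB source (check the pixel format constant against your actual data layout):

#include <stdint.h>
#include <libswscale/swscale.h>

/* Packed 24-bit RGB -> planar YUV 4:2:0 in one sws_scale call.
 * SWS_AREA picks the area-averaging filter recommended above. */
void rgb24_to_yuv420p(const uint8_t *rgb, int w, int h,
                      uint8_t *yp, uint8_t *up, uint8_t *vp)
{
    struct SwsContext *ctx = sws_getContext(w, h, AV_PIX_FMT_RGB24,
                                            w, h, AV_PIX_FMT_YUV420P,
                                            SWS_AREA, NULL, NULL, NULL);
    const uint8_t *src[1] = { rgb };
    int src_stride[1]     = { 3 * w };
    uint8_t *dst[3]       = { yp, up, vp };
    int dst_stride[3]     = { w, w / 2, w / 2 };
    sws_scale(ctx, src, src_stride, 0, h, dst, dst_stride);
    sws_freeContext(ctx);
}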
Finally, moving the small image across the big one: Is it purely a rectangular image or does it use an alpha channel? Either way you'll simply need to loop over the coordinates you want it to appear at and output an image for each point. If it's rectangular you just then copy pixels, whereas if it has an alpha channel you need to use that for alpha blending (interpolating between the pixel values according to the alpha value).
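As a sketch of the rectangular (no alpha) case, here is the loop for the Y plane only, using the question's sizes; write_frame stands in for whatever appends one plane to your output file, and the chroma planes would be handled the same way at half resolution after subsampling:

#include <stdint.h>
#include <string.h>

#define BIG_W   640
#define BIG_H   280
#define SMALL_W 160
#define SMALL_H 70

/* Copy the small image's Y plane into the big frame at (px, py). */
static void blit(uint8_t *frame, const uint8_t *small, int px, int py)
{
    for (int y = 0; y < SMALL_H; y++)
        memcpy(frame + (py + y) * BIG_W + px, small + y * SMALL_W, SMALL_W);
}

/* Emit `frames` frames, bouncing the small image off the big one's edges. */
void animate(uint8_t *frame, const uint8_t *big, const uint8_t *small,
             int frames, void (*write_frame)(const uint8_t *))
{
    int px = 0, py = 0, dx = 4, dy = 2;
    for (int i = 0; i < frames; i++) {
        memcpy(frame, big, BIG_W * BIG_H); /* restore the background */
        blit(frame, small, px, py);
        write_frame(frame);
        px += dx; py += dy;
        if (px < 0 || px + SMALL_W > BIG_W) { dx = -dx; px += 2 * dx; }
        if (py < 0 || py + SMALL_H > BIG_H) { dy = -dy; py += 2 * dy; }
    }
}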
This Wikipedia article has RGB -> YUV 4:4:4.
And RGB -> YUV420 is described in the same article in this section.
I did not understand: "how do I make a 160*70 bitmap image move over a 640*280 bitmap image and reflect off its edge".
