How do I make a 160*70 bitmap image move over a 640*280 bitmap image and reflect off its edges, after converting both bitmap images into YUV 4:4:4, and write it into a single YUV file? And how do I convert the same into YUV 4:2:0? Could you please help me out as to how to code this in C?
Converting to YUV 4:4:4 - This is purely an affine transformation on each RGB vector. Just use the proper formula for whichever YUV variant you need. You'll probably want to separate the image into planes at this point too.
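For reference, a minimal sketch of that per-pixel transform, assuming BT.601 full-range (JPEG) coefficients and interleaved 8-bit RGB input; other YUV variants just use different constants:

    #include <stdint.h>

    /* Convert interleaved 8-bit RGB to planar YUV 4:4:4 (BT.601 full range).
       rgb holds w*h*3 bytes; y, u and v each hold w*h bytes. */
    static void rgb_to_yuv444(const uint8_t *rgb, uint8_t *y, uint8_t *u,
                              uint8_t *v, int w, int h)
    {
        for (int i = 0; i < w * h; i++) {
            int r = rgb[3*i], g = rgb[3*i + 1], b = rgb[3*i + 2];
            y[i] = (uint8_t)( 0.299 * r + 0.587 * g + 0.114 * b);
            u[i] = (uint8_t)(-0.169 * r - 0.331 * g + 0.500 * b + 128);
            v[i] = (uint8_t)( 0.500 * r - 0.419 * g - 0.081 * b + 128);
        }
    }

Writing the single output file is then just the three planes of every frame appended in order with fwrite.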
Converting to YUV 4:2:0 - This is purely a resampling problem. You need to resample the U and V planes to half width and half height. Do NOT just skip samples ("nearest-neighbor sampling"); this will result in very ugly aliasing. You could simply average the corresponding 2x2 squares or use a more advanced filter. For downsampling, area-average is pretty close to ideal anyway; Gaussian may give mildly better results.
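A sketch of that 2x2 averaging for one chroma plane (even width and height assumed):

    /* Downsample one chroma plane by 2x2 box averaging (4:4:4 -> 4:2:0).
       src is w*h bytes; dst is (w/2)*(h/2) bytes. */
    static void downsample_chroma(const uint8_t *src, uint8_t *dst, int w, int h)
    {
        for (int y = 0; y < h / 2; y++) {
            for (int x = 0; x < w / 2; x++) {
                int sum = src[(2*y)     * w + 2*x] + src[(2*y)     * w + 2*x + 1]
                        + src[(2*y + 1) * w + 2*x] + src[(2*y + 1) * w + 2*x + 1];
                dst[y * (w / 2) + x] = (uint8_t)((sum + 2) / 4); /* rounded */
            }
        }
    }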
If you don't mind using library code, libswscale from ffmpeg can do both of these steps for you, and will do it very fast.
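If you go that route, the core of it is roughly the following (a sketch, not a drop-in implementation; plane allocation and error checks omitted):

    #include <libswscale/swscale.h>
    #include <libavutil/pixfmt.h>

    /* One-shot RGB24 -> planar YUV 4:2:0 conversion via libswscale. */
    static void rgb_to_yuv420p(const uint8_t *rgb, int w, int h,
                               uint8_t *y, uint8_t *u, uint8_t *v)
    {
        struct SwsContext *sws = sws_getContext(w, h, AV_PIX_FMT_RGB24,
                                                w, h, AV_PIX_FMT_YUV420P,
                                                SWS_AREA, NULL, NULL, NULL);
        const uint8_t *src[1] = { rgb };
        int src_stride[1]     = { 3 * w };
        uint8_t *dst[3]       = { y, u, v };
        int dst_stride[3]     = { w, w / 2, w / 2 };
        sws_scale(sws, src, src_stride, 0, h, dst, dst_stride);
        sws_freeContext(sws);
    }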
Finally, moving the small image across the big one: is it purely a rectangular image, or does it use an alpha channel? Either way, you'll simply need to loop over the coordinates you want it to appear at and output an image for each position. If it's rectangular you just copy pixels, whereas if it has an alpha channel you need to use it for alpha blending (interpolating between the pixel values according to the alpha value).
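For the bouncing motion itself, the usual trick is to keep a velocity per axis and negate it whenever the small image would cross a border. A sketch (NUM_FRAMES, SMALL_W/SMALL_H and BIG_W/BIG_H are placeholder constants for your 160x70 and 640x280 sizes):

    /* Move the small image across the big one, reflecting off the edges. */
    int x = 0, y = 0, dx = 4, dy = 2;
    for (int frame = 0; frame < NUM_FRAMES; frame++) {
        x += dx;
        y += dy;
        if (x < 0 || x + SMALL_W > BIG_W) { dx = -dx; x += 2 * dx; }
        if (y < 0 || y + SMALL_H > BIG_H) { dy = -dy; y += 2 * dy; }
        /* composite the small image at (x, y) onto a copy of the big image,
           convert to YUV and append the frame to the output file */
    }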
This Wikipedia article has RGB -> YUV444.
And RGB -> YUV420 is described in the same article in this section.
I did not understand:
"how do i make a 160*70 bitmap image move over a 640*280 bitmap image and reflect off its edge"
I have an input.yuv image which I want to use in my code as an input.
But I want to know whether it is in 4:2:2, 4:2:0 or 4:4:4 format, and also whether it is planar or packed, and what its width, height and stride are.
When I view this image using the https://rawpixels.net/ tool, I can see the image perfectly in grayscale with dimensions 1152x512. But when I select yuv420p or other options, the color and luma components are not at the correct offsets, so it shows a mixture of color and grayscale images at different offsets (2 images on the same screen).
Is there any way to write C code to find the above-mentioned YUV details (dimensions, format and type)?
Not really. Files with a .yuv extension just contain raw pixel data, normally in a planar format.
That would typically be width * height of luma pixels followed by either width/2 * height/2 (4:2:0) or width * height/2 (4:2:2) Cb and Cr components.
They can be 8 or 10 bits per pixel, with 10 bits per pixel usually stored in 2 bytes. It's really just a case of trial and error to find out what it is.
Occasionally you find all sorts of arrangements of Y, Cb, Cr in files with a .yuv extension. Planar is most common, though.
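One check you can automate, though, is the file size: for a guessed width and height, each 8-bit planar layout implies an exact byte count. A sketch using the dimensions from the question:

    #include <stdio.h>

    /* Guess the chroma layout of a raw 8-bit planar .yuv file from its
       size, given a candidate width and height. */
    static const char *guess_format(long file_size, long w, long h)
    {
        if (file_size == w * h * 3)     return "4:4:4";
        if (file_size == w * h * 2)     return "4:2:2";
        if (file_size == w * h * 3 / 2) return "4:2:0";
        return "unknown (try other dimensions, or 10-bit in 2 bytes)";
    }

    int main(void)
    {
        FILE *f = fopen("input.yuv", "rb");
        if (!f) return 1;
        fseek(f, 0, SEEK_END);
        long size = ftell(f);
        fclose(f);
        printf("%s\n", guess_format(size, 1152, 512));
        return 0;
    }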
In my 2D map application, I have 16-bit heightmap textures containing altitudes in meters associated to a point on the map.
When I draw these textures on the screen, I would like to display an analysis such that the pixel referring to the highest altitude on the screen is white, the pixel referring to the lowest altitude on the screen is black, and the values in between are interpolated between those two.
I'm using an older OpenGL version and thus do not have access to modern pipeline functionality like GLSL or PBOs (which, as I've heard, can make getting color buffer contents to the CPU side much more efficient than glReadPixels).
I have access to ATI_fragment_shader extension which makes possible to use a basic fragment shader to merge R and G channels in these textures and get a single float grayscale luminance value.
Then I would be able to re-color these pixels inside the shader (mapping them to the 0-1 range) based on the maximum and minimum pixel luminance values, but I don't know what those values are.
My question is, between the pixels currently on the screen, how do I find the pixels with maximum and minimum luminance values? Or as an alternative, how do I find these values inside a texture? (Because I could make a glCopyTexImage2D call after drawing the texture with grayscale luminance values on the screen and retrieve the data as a texture).
Stuff I've tried or read about so far:
- If I could somehow get the current pixel RGB values in the color buffer to the CPU side, I could find what I need manually and then use it. However, reading color buffer contents with glReadPixels is unacceptably slow. It's no use even if I set it up so that it completes one read operation over multiple frames.
- Downsampling the texture to 1x1 size repeatedly until the last remaining pixel is either the minimum or the maximum value, and then using this 1x1 texture inside the shader. I have no idea how to achieve this without GLSL and texel fetching support, since I would have to look up the pixels to the right, above and up-right of the current one and take the min/max among them.
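For what it's worth, the arithmetic of one such reduction pass is simple; the hard part in the question is expressing the neighbor lookups without GLSL. A CPU-side sketch of a single max pass over a hypothetical float luminance buffer (even dimensions assumed; repeat until 1x1):

    /* One reduction pass: each output pixel is the max of a 2x2 input block. */
    static void reduce_max_pass(const float *src, float *dst, int w, int h)
    {
        for (int y = 0; y < h / 2; y++) {
            for (int x = 0; x < w / 2; x++) {
                float a = src[(2*y)     * w + 2*x];
                float b = src[(2*y)     * w + 2*x + 1];
                float c = src[(2*y + 1) * w + 2*x];
                float d = src[(2*y + 1) * w + 2*x + 1];
                float m = a > b ? a : b;
                if (c > m) m = c;
                if (d > m) m = d;
                dst[y * (w / 2) + x] = m;
            }
        }
    }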
I have an RGBA PNG file that is (I think) the capture of a signature from a digitizing tablet. Extracting the image, ALL the RGB triplets are 0,0,0 and the alpha channel values are non-zero if the pixel is to carry a tone in the final image. I get all of that.
This PNG only has IHDR, IDAT, and IEND chunks.
My first question is, are my RGB pixels considered the foreground or the background? What might be the proper terminology to describe this file/image?
What equation do I use to apply the alpha to the RGB?
Looking at the alpha values, I can see how to come up with a number, but what general equation would be used to generate the appropriate RGB value, avoiding divide-by-zero or overflow errors if my RGBs had started out with non-zero values?
I have been through the PNG spec and there's something I just don't get.
BTW, I am ultimately producing, in C, a PCL file intended for printing directly to a PCL LaserJet.
The image you display last is the foreground image; a single image by itself has no foreground or background.
This link shows how to blend an image with alpha onto another image:
http://en.wikipedia.org/wiki/Alpha_compositing#Alpha_blending
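As a concrete sketch of that formula for this case (straight 8-bit alpha, black ink composited over opaque white paper; names are illustrative):

    #include <stdint.h>

    /* out = alpha * fg + (1 - alpha) * bg, per channel, in integer math. */
    static uint8_t blend_channel(uint8_t fg, uint8_t bg, uint8_t alpha)
    {
        return (uint8_t)((fg * alpha + bg * (255 - alpha) + 127) / 255);
    }

With fg = 0 (your all-black RGB) and bg = 255 (white paper) this collapses to out = 255 - alpha, which is likely all you need for the PCL output. Note that straight alpha blending as above never divides by the alpha value; the divide-by-zero worry only arises when un-premultiplying premultiplied alpha.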
In C, I have a 1D array of unsigned chars (ie, between 0 to 255) of length 3*DIM*DIM which represents a DIM*DIM pixel image, where the first 3 pixels are the RGB levels of the first pixel, the second 3 pixels are the RGB levels of the second pixel, etc. I would like to save it as a PNG image. What is the easiest, most lightweight method of doing this conversion?
Obviously OpenGL can read and display images of this form (GLUT_RGB), but DIM is larger than the dimensions of my monitor screen, so simply displaying the image and taking a screenshot is not an option.
At the moment, I have been doing the conversion by simply saving the array to a CSV file, loading it in Mathematica, and then exporting it as a PNG, but this is very slow (~8 minutes for a single 7000*7000 pixel image).
There are many excellent third party libraries you can use to convert an array of pixel-data to a picture.
libPNG is a long standing standard library for png images.
LodePNG also seems like a good candidate.
Finally, ImageMagick is a great library that supports many different image formats.
All these support C, and are relatively easy to use.
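For instance, with LodePNG the whole task is one call. A sketch, assuming lodepng.c and lodepng.h are added to the project (the filename is illustrative):

    #include "lodepng.h"
    #include <stdio.h>

    /* buffer holds 3*DIM*DIM bytes of interleaved RGB, as described above. */
    void save_png(const unsigned char *buffer, unsigned dim)
    {
        unsigned error = lodepng_encode24_file("out.png", buffer, dim, dim);
        if (error)
            printf("encoder error %u: %s\n", error, lodepng_error_text(error));
    }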
I have two bmp files of the same scene and I would like to determine whether one is brighter than the other.
Similarly, I have a set of bmps with different contrasts and another set of bmps with different saturations.
How do I compare these images for brightness, contrast and saturation? These test images are saved by a tool provided by the sensor manufacturer.
I am using gcc 4.5.
To compare the brightness of two images you need to compare the grey value of the pixels (yes, one by one). In the RGB colour space the brightness (grey value) is the mean of R,G and B, so you have brightness = (R+G+B) / 3
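A sketch of that comparison for an interleaved 8-bit RGB buffer (the buffer layout is an assumption; compute this for both images and compare the results):

    /* Mean brightness of an interleaved 8-bit RGB image,
       using (R+G+B)/3 as the per-pixel grey value. */
    static double mean_brightness(const unsigned char *rgb, int w, int h)
    {
        double sum = 0.0;
        for (int i = 0; i < w * h; i++)
            sum += (rgb[3*i] + rgb[3*i + 1] + rgb[3*i + 2]) / 3.0;
        return sum / (w * h);
    }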
Comparing the contrast and especially the saturation will prove to be not that easy; for a start you could have a look at HSL and HSV, but in general I'd suggest getting a good book on the image processing topic.
The answer of (R+G+B)/3 is really not even a good approximation of brightness (at least from what we know today)!
[BRIGHTNESS]
What you really SHOULD do is convert to another color space that incorporates brightness as one of its channels, and compare the brightness using that channel. Look here:
Formula to determine brightness of RGB color
There are a couple of great answers there that talk about converting RGB into luminance, etc...
[CONTRAST]
Contrast is a function of the spread of the pixel values throughout the full range of possible pixel values. One understands the contrast by putting together a histogram of all the pixels (where the x axis represents a pixel value, and the y axis represents how many pixels are of that value), and analyzing the histogram to understand whether there is good distribution throughout the entire range or not. Comparing contrast can be done in many ways, but a good starting point would be to find the pixel-value center point (the average of the histogram data) of each image, plus some histogram width parameter (say, a width about the center point large enough to incorporate 90% of all pixels), and compare the center and width parameters of both images. This is ONLY a starting point.
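A sketch of that starting point for an 8-bit grey image ("width" here being the smallest symmetric range around the mean that covers 90% of the pixels, as described above):

    /* Histogram-based contrast descriptors for an 8-bit grey image. */
    static void contrast_stats(const unsigned char *grey, int n,
                               double *center, int *width)
    {
        long hist[256] = {0};
        double sum = 0.0;
        for (int i = 0; i < n; i++) { hist[grey[i]]++; sum += grey[i]; }
        *center = sum / n;

        int c = (int)(*center + 0.5);   /* start at the mean bin */
        long covered = hist[c];
        int half = 0;
        while (covered < 0.9 * n && half < 255) {
            half++;
            if (c - half >= 0)  covered += hist[c - half];
            if (c + half < 256) covered += hist[c + half];
        }
        *width = 2 * half;
    }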
[SATURATION]
To compare saturation, one might convert the image to the HSL colour space. The S in HSL stands for Saturation. Comparing saturation within this colour space becomes exactly like comparing brightness as outlined above!!!
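For reference, a sketch of the standard HSL saturation of a single 8-bit RGB pixel (average it over the image and compare, just as with brightness):

    /* HSL saturation of one 8-bit RGB pixel, in [0, 1]. */
    static double hsl_saturation(unsigned char r, unsigned char g, unsigned char b)
    {
        double rf = r / 255.0, gf = g / 255.0, bf = b / 255.0;
        double max = rf > gf ? (rf > bf ? rf : bf) : (gf > bf ? gf : bf);
        double min = rf < gf ? (rf < bf ? rf : bf) : (gf < bf ? gf : bf);
        if (max == min) return 0.0;                      /* achromatic */
        double l = (max + min) / 2.0;
        return (l < 0.5) ? (max - min) / (max + min)
                         : (max - min) / (2.0 - max - min);
    }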