transparency implementation in YUV422 using only Y - c

Let's say we have two images in YUV422 format. Assume that in the second image a Y value of 0x10 is treated as transparent, and the second image is merged onto the first with Cb and Cr overwritten.
The result of such a merge has ugly borders (a divided pixel-line effect) around solid shapes. Is there a way to produce a combination of values on the borders, so the transition is smooth?

This problem is not specific to YUV4:2:2; it occurs whenever binary transparency is used. The best solution is to use a four-channel image and include an alpha channel. Essentially, an alpha channel represents the degree of opacity of each pixel. When two images with alpha channels overlap, alpha blending produces a result that looks much better.
If you're stuck with YUV4:2:2 or can't add an alpha channel, you could try smoothing the transition between the two images with a low-pass filter. This will hurt the definition of your edges, but might look better than doing nothing.
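A minimal C sketch of that idea for the Y plane (blend_y is just an illustrative name; the alpha mask is assumed to be your binary transparency mask after it has been run through a low-pass filter, so border pixels carry intermediate values):
#include <stdint.h>
#include <stddef.h>

/* Blend the Y planes of two images using a per-pixel alpha mask
 * (255 = overlay fully opaque, 0 = overlay transparent, i.e. Y == 0x10). */
void blend_y(const uint8_t *base_y, const uint8_t *over_y,
             const uint8_t *alpha, uint8_t *out_y, size_t count)
{
    for (size_t i = 0; i < count; ++i) {
        unsigned a = alpha[i];   /* 0..255 */
        /* classic alpha blend: out = a*over + (1 - a)*base */
        out_y[i] = (uint8_t)((a * over_y[i] + (255 - a) * base_y[i] + 127) / 255);
    }
}
The Cb and Cr planes would need the same treatment, using the mask downsampled to their horizontal resolution.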

Graphs the same size in gnuplot multiplot when each takes up most of its canvas and one has labels

I've re-edited this question a few times: I've made some good progress!
So, as I understand it, multiplot splits the whole canvas up into equal-sized parts as needed. This is a little weird when your different plots have different dimensions, as in my case, but it works. The problem might come in when the graphs are supposed to be very close together (e.g. each takes up most of its canvas), but one of them has labels. In that case, it seems the plot with labels must resize to be smaller so that everything can fit. That's where I am now.
I see a few options.
make all the plots farther apart-- but I don't want to do that.
somehow make the label not part of the multiplot-- I would totally do this, but I don't know how. It's possible even just the axis tics themselves would be too big, but I can probably deal with that or compromise just that amount on the spacing.
So my question is, how can I put words in a gnuplot graph, completely separately from a plot?
(The picture is also giant, which is unfortunate; it was the only way I could make the formatting work.)
Two things:
Multiplot has a convenience mode layout <rows>, <columns> that, as you say, splits the page into equal rectangles. But you do not have to use this convenience mode; you can assign each sub-plot to any arbitrary rectangle on the page, even one that overlaps or is interior to another rectangle. Here is an example from the online demo set that is close to what you show:
Demo of multiple plots with explicit alignment of borders
Placing text anywhere on the page: The set label command allows you to position the text using screen coordinates rather than plot coordinates. For example, to place a single large label centered at the top of a page that contains multiple plots:
set label 1 "This label is positioned independent of all plots"
set label 1 at screen 0.5, screen 0.95 center
set label 1 font "Times,20"
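Putting those two points together, a minimal multiplot sketch might look like the following (the sin/cos plots and the 0.6/0.3 split are placeholders, not taken from your script):
set multiplot
set label 1 "This label is positioned independent of all plots" \
    at screen 0.5, screen 0.95 center font "Times,20"
# lower plot: explicit rectangle covering the bottom 60% of the canvas
set origin 0.0, 0.0
set size 1.0, 0.6
plot sin(x)
# upper plot: directly above, no gap, in the next 30% of the canvas
set origin 0.0, 0.6
set size 1.0, 0.3
set format x ""        # suppress x tic labels so the plots can touch
plot cos(x)
unset multiplot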

How to display the tiny triangles or recognize them quickly?

What I am doing is a pick program. There are many triangles and I want to select the front, visible ones with a rectangular region. The main method is described below.
There are a lot of triangles and each triangle has its own color.
Draw all the triangles to a frame buffer.
Read the color of each pixel in the frame buffer; based on the color, we know which triangles are selected.
The problem is that some tiny triangles cannot be displayed in the final frame buffer, like the green triangle in the picture. I think the triangle is too tiny and is ignored by the graphics card.
My question is: how can I display the tiny triangles in the final frame buffer, or how can I know which triangles were ignored by the graphics card?
Triangles are not skipped based on their size alone, but if no pixel center falls inside a triangle or lies on its top or left edge (this is referred to as coverage testing), the triangle does not generate any fragments during rasterization.
That does mean that certain really small triangles are never rasterized, but it is not entirely because of their size; rather, their position is such that they do not satisfy pixel coverage.
Take a moment to examine the following diagram from the DirectX API documentation. Because of the size and position of the triangle I have circled in red, it does not satisfy coverage for any pixel (I have illustrated its left edge in green) and thus never shows up on screen despite having a tangible surface area.
If the triangle highlighted were moved about a half-pixel in any direction it would cover at least one pixel. You still would not know it was a triangle, because it would show up as a single pixel, but it would at least be pickable.
Solving this problem will require you to ditch color picking altogether. Multisample rasterization can fix the coverage issue for small triangles, but it will compute pixel colors as the average of all samples and that will break color picking.
Your only viable solution is to do point inside triangle testing instead of relying on rasterization. In fact, the typical alternative to color picking is to cast a ray from your eye position through the far clipping plane and test for intersection against all objects in the scene.
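For the 2D case (testing a pick point against triangles that have already been projected to screen space), a point-in-triangle test built from signed edge functions is enough; the names below (point_in_triangle, edge) are illustrative, not from any particular library:
#include <stdbool.h>

/* Signed area term: tells which side of the edge a->b the point c lies on. */
static float edge(float ax, float ay, float bx, float by, float cx, float cy)
{
    return (bx - ax) * (cy - ay) - (by - ay) * (cx - ax);
}

/* True if (px, py) lies inside or on the triangle (x0,y0)-(x1,y1)-(x2,y2),
 * regardless of winding order. */
bool point_in_triangle(float px, float py,
                       float x0, float y0,
                       float x1, float y1,
                       float x2, float y2)
{
    float d0 = edge(x0, y0, x1, y1, px, py);
    float d1 = edge(x1, y1, x2, y2, px, py);
    float d2 = edge(x2, y2, x0, y0, px, py);
    bool has_neg = (d0 < 0) || (d1 < 0) || (d2 < 0);
    bool has_pos = (d0 > 0) || (d1 > 0) || (d2 > 0);
    return !(has_neg && has_pos);   /* all same sign (or zero) => inside */
}
For a rectangular pick region you would test each projected triangle against the rectangle rather than a single point, but the same edge functions are the building blocks.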
The usability aspect of what you seem to be doing is somewhat questionable to me. I doubt most users would expect a triangle to be pickable if it is so small that they cannot even see it. The most obvious solution is to let the user zoom in if they really need to pick such small details.
On the part that can actually be answered on a technical level: To find out if triangles produced any visible pixels/fragments/samples, you can use queries. If you want to count the pixels for n "objects" (which can be triangles), you would first generate the necessary query object names:
GLuint queryIds[n]; // probably dynamically allocated in real code
glGenQueries(n, queryIds);
Then bracket the rendering of each object with glBeginQuery()/glEndQuery():
for (int i = 0; i < n; ++i) {
    glBeginQuery(GL_SAMPLES_PASSED, queryIds[i]);
    // draw object i
    glEndQuery(GL_SAMPLES_PASSED);
}
Then at the end, you can get all the results:
for (int i = 0; i < n; ++i) {
    GLint pixelCount = 0;
    glGetQueryObjectiv(queryIds[i], GL_QUERY_RESULT, &pixelCount);
    if (pixelCount > 0) {
        // object i produced visible pixels
    }
}
A couple more points to be aware of:
If you only want to know if any pixels were rendered, but don't care how many, you can use GL_ANY_SAMPLES_PASSED instead of GL_SAMPLES_PASSED.
The query counts samples that pass the depth test, as the rendering happens. So there is an order dependency. A triangle could have visible samples when it is rendered, but they could later be hidden by another triangle that is drawn in front of it. If you only want to count the pixels that are actually visible at the end of the rendering, you'll need a two-pass approach.

RGBA png alpha processing

I have an RGBA PNG file that is (I think) the capture of a signature from a digitizing tablet. Extracting the image, ALL the RGB triplets are 0,0,0 and the alpha channel values are non-zero if the pixel is to carry a tone in the final image. I get all of that.
This PNG only has IHDR, IDAT, and IEND chunks.
My first question is: are my RGB pixels considered the foreground or the background? What might be the proper terminology to describe this file/image?
What equation do I use to apply the alpha to the RGB?
Looking at the alpha values, I can see how to come up with a number, but what general equation would be used to generate the appropriate RGB value, avoiding divide-by-zero or overflow errors if my RGBs had started out with non-zero values?
I have been through the PNG spec and there's something I just don't get.
BTW, I am ultimately producing, in C, a PCL file intended for printing directly to a PCL LaserJet.
There is no foreground or background within a single image; the image you display last is the foreground image.
This link shows how to blend an image with alpha onto another image:
http://en.wikipedia.org/wiki/Alpha_compositing#Alpha_blending
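The "over" operator described there is straightforward to apply per channel; as a sketch in C for 8-bit values (blend_over is just an illustrative name; a background of 255 renders the black, alpha-carrying signature onto white paper):
#include <stdint.h>

/* Composite an 8-bit foreground channel over a background channel:
 * out = alpha*fg + (1 - alpha)*bg, scaled for 8-bit alpha. There is no
 * division by alpha, so fully transparent pixels (alpha == 0) are safe. */
static uint8_t blend_over(uint8_t fg, uint8_t bg, uint8_t alpha)
{
    return (uint8_t)((fg * alpha + bg * (255 - alpha) + 127) / 255);
}
In your case every foreground channel is 0, so the blended value reduces to bg * (255 - alpha) / 255, which is the tone to send to the printer.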

I have a dot bouncing around an image. Need to calculate angles of reflection off of groups of pixels (surface of objects)

Suppose we have an image (pixel buffer) that is in black and white, so each pixel is either black or white (not gray scale).
Now, somewhere in the middle of that image, place a green dot. It may have a radius of n for rendering purposes, but it is really just a point. Give the dot a randomly selected direction and speed, and start it moving. If the image is all white pixels, the dot will bounce off the edges of the image, wandering around the picture indefinitely. This is quite easy... just reverse either the rise or run of the dot's vector.
Next, suppose the image has some globs of black pixels. As the dot encounters these globs of black pixels, the angle of reflection needs to be calculated. This is also quite easy if the black pixels have a fixed slope, as in my sketch (the blue Xs represent black pixels). You can find the slope of the blue Xs and easily calculate the new vector.
But how about the case where the black pixels form really unfriendly surfaces? What are some approaches to figuring out this angle?
This is the subject that I am interested in.
There must be some algorithms that exist for this kind of purpose, but I never ran across any in school. I am not asking how to code this, rather approaches to writing the algorithm to do this. I have a few ideas that I'll try, but if there are some standard ways to do this that exist, I'd like to learn about them.
Obviously I'd like to start with Black and White then move into RGBA.
I am looking for any reference material on just this sort of subject. Websites, books, or other references are very very welcome.
Also, if there are different StackOverflow tags that could be good, let me know.
Thanks much!
Edit: More pics and information
Maybe I wasn't clear about what I meant by "unfriendly surfaces". In the previous picture, our blue Xs happened to just be a line. Picture a case where it is not a line, but rather a weird shape.
We start with our green pixel traveling at a slope of 2. Suppose its vector is 12 pixels per frame. It would have a projected path like this:
But suppose instead of a nice friendly line, we have this:
In my mind I can kind of see what is likely to happen if this were a ball and some walls.
Look for edge detection algorithms used in image processing. Some edge detectors also approximate the direction of edges.
You can think of the pixel neighborhood of the green dot, maybe somewhere between 3x3 and 7x7, as a small edge direction detection problem. One approach would take two passes at the pixels:
In the first pass, smooth the sharp black/white pixels using a Gaussian filter.
In the second pass, apply an edge detection operator, such as Sobel, Prewitt or Roberts to produce the X and Y derivatives of the pixels' intensity. You can then approximate the direction as:
angle = arctan(dx/dy)
The motivation for the smoothing pass is to give the edge detection operator information from farther-away pixels.
The Wikipedia page on the Canny edge detector has a good discussion on obtaining the direction (the "gradient") of an edge, including an example of a particular Gaussian filter you can use for smoothing.
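As a rough sketch of that two-pass idea (the Gaussian smoothing step is omitted here, the 3x3 neighborhood is assumed to be already extracted, and gradient_angle is just an illustrative name), estimating the local edge direction with Sobel might look like this, using atan2 rather than a plain arctan to avoid dividing by zero:
#include <math.h>

/* Estimate the gradient direction at the centre of a 3x3 neighborhood
 * p (0 = white, 1 = black) using Sobel derivatives. The gradient points
 * from white toward black, i.e. along the normal of the black surface. */
double gradient_angle(const int p[3][3])
{
    int gx = (p[0][2] + 2 * p[1][2] + p[2][2])
           - (p[0][0] + 2 * p[1][0] + p[2][0]);
    int gy = (p[2][0] + 2 * p[2][1] + p[2][2])
           - (p[0][0] + 2 * p[0][1] + p[0][2]);
    return atan2((double)gy, (double)gx);   /* radians, in -pi..pi */
}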
I am doing something similar with a ball and randomly generated backgrounds.
The filtering and edge detection are highly technical, but all other approaches using a 5x5 or 3x3 grid seem similarly difficult.
However, I think I may have a cheap way around this. Assuming a ball travelling in any direction, scan all leading edges of the ball - a semicircle. The further toward the edge of the ball the collision occurs, the closer to vertical the collision is. Again, I think this should allow you to easily infer the background normal, and from there the answer is fairly simple, as sketched below.
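A minimal sketch of that idea (the leading-edge scan is assumed to have already produced the contact pixel; reflect_velocity and Vec2 are illustrative names): with a circular ball, the surface normal at the collision can be approximated by the vector from the contact point back to the ball's centre, and the velocity is then mirrored about that normal with the standard reflection formula v' = v - 2(v.n)n:
#include <math.h>

typedef struct { double x, y; } Vec2;

/* Reflect the ball's velocity off a surface whose normal is approximated
 * by the direction from the contact pixel toward the ball centre. */
Vec2 reflect_velocity(Vec2 velocity, Vec2 ball_centre, Vec2 contact)
{
    /* Approximate normal: from the contact point toward the centre. */
    Vec2 n = { ball_centre.x - contact.x, ball_centre.y - contact.y };
    double len = sqrt(n.x * n.x + n.y * n.y);
    n.x /= len;
    n.y /= len;

    /* v' = v - 2 (v . n) n */
    double d = velocity.x * n.x + velocity.y * n.y;
    Vec2 out = { velocity.x - 2.0 * d * n.x, velocity.y - 2.0 * d * n.y };
    return out;
}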

comparing bmps for brightness

I have two BMP files of the same scene and I would like to determine if one is brighter than the other.
Similarly, I have a set of BMPs with different contrasts and another set of BMPs with different saturations.
How do I compare these images for brightness, contrast and saturation? These test images are saved by a tool provided by the sensor manufacturer.
I am using gcc 4.5.
To compare the brightness of two images you need to compare the grey values of the pixels (yes, one by one). In the RGB colour space the brightness (grey value) is the mean of R, G and B, so you have brightness = (R+G+B) / 3
Comparing the contrast and especially the saturation will prove to be not that easy; for a start you could have a look at HSL and HSV, but in general I'd suggest getting a good book on the image processing topic.
The answer of (R+G+B)/3 is really not even a good approximation of brightness (at least from what we know today)!
[BRIGHTNESS]
What you really SHOULD do is convert to another colour space that incorporates brightness as one of its channels, and compare using that channel. Look here!!!
Formula to determine brightness of RGB color
There are a couple of great answers there that talk about converting RGB into luminance, etc...
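For instance, one commonly cited option in those answers is the Rec. 601 luma weighting; as a small sketch (luma is just an illustrative name, 8-bit RGB assumed):
/* Perceptual brightness (luma) of an 8-bit RGB pixel using the common
 * Rec. 601 weights: the eye is most sensitive to green, least to blue. */
static double luma(unsigned char r, unsigned char g, unsigned char b)
{
    return 0.299 * r + 0.587 * g + 0.114 * b;
}
To compare two images, average luma() over all pixels of each and compare the two averages.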
[CONTRAST]
Contrast is a function of the spread of the pixel values throughout the full range of possible pixel values. One understands the contrast by building a histogram of all the pixels (where the x axis represents a pixel value, and the y axis represents how many pixels have that value), and analyzing the histogram to see whether there is a good distribution throughout the entire range, or not. Comparing contrast can be done in many ways, but a good starting point would be to find the pixel-value centre point (the average of the histogram data) of each image, and potentially some histogram width parameter (where, let's say, the width is about the centre point and is large enough to incorporate 90% of all pixels), and then compare the centre and width parameters of both images. This is ONLY a starting point.
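As a rough, hypothetical starting point along those lines (not a standard contrast metric; contrast_stats is an illustrative name), the "centre" and "width" can be taken as the mean and standard deviation of the grey values:
#include <math.h>
#include <stddef.h>

/* Summarise an 8-bit grey image by the mean ("centre") and standard
 * deviation ("width") of its pixel values; a larger deviation means the
 * values spread over more of the range, i.e. higher contrast. */
void contrast_stats(const unsigned char *grey, size_t count,
                    double *mean, double *stddev)
{
    double sum = 0.0, sum_sq = 0.0;
    for (size_t i = 0; i < count; ++i) {
        sum += grey[i];
        sum_sq += (double)grey[i] * grey[i];
    }
    *mean = sum / count;
    double var = sum_sq / count - (*mean) * (*mean);
    *stddev = sqrt(var > 0.0 ? var : 0.0);   /* guard tiny negative rounding */
}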
[SATURATION]
To compare saturation, one might convert the image to the HSL colour space. The S in HSL stands for Saturation. Comparing saturation within this colour space becomes exactly like comparing brightness as outlined above!!!
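As a minimal sketch (hsl_saturation is an illustrative name; the formula is the usual HSL definition, 8-bit RGB assumed):
#include <math.h>

/* HSL saturation of an 8-bit RGB pixel, in the range 0..1.
 * S = (max - min) / (1 - |2L - 1|) with L = (max + min) / 2,
 * and S defined as 0 for pure greys (max == min). */
static double hsl_saturation(unsigned char r, unsigned char g, unsigned char b)
{
    double rf = r / 255.0, gf = g / 255.0, bf = b / 255.0;
    double max = rf > gf ? (rf > bf ? rf : bf) : (gf > bf ? gf : bf);
    double min = rf < gf ? (rf < bf ? rf : bf) : (gf < bf ? gf : bf);

    if (max == min)
        return 0.0;                 /* grey: no saturation */

    double l = (max + min) / 2.0;
    return (max - min) / (1.0 - fabs(2.0 * l - 1.0));
}
Averaging this over all pixels of each image gives a comparable saturation figure, just as with the brightness comparison.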
