HLSL circular/spiral transition shader - WPF

I am developing some shaders for WPF, and so far I have managed to get a fade-out and a swipe shader working, but for this one, I have no idea where to start.
Could someone please hand me out a few tips on how to approach this problem?
What I am trying to achieve is the following:
Thank you

In my opinion the easiest way to build any complicated effect is to decompose the original effect into small parts and add them together. In your case the effect consists of 3 parts:
5 rings filled after each other
each ring is filled counterclockwise from the left
the filling of a ring is a circle at each end
With this in mind you can build a solution for each part separately and add the results together.
I assume that there will be a variable float progress; running from 0 to 1 which determines the progress of the transition.
Here are some starting points for each part:
For 1. you check the distance from the fragment's texture coordinate to the center of the screen and divide the maximum distance into 5 parts. While 0 <= progress < 0.2 the first ring is visible, while 0.2 <= progress < 0.4 the second, and so on.
For 2. you check the angle between the difference vector from the center to the fragment and the left vector, e.g. using atan2. Within each part (such as 0.0-0.2) you compare the local progress of that stage to the angle to determine the visibility, making the fragments appear in an angle-dependent way.
Part 3 might be the trickiest, as you will have to construct the centers of the ends of the progress ring and compute the distance to the fragment. If it is within the current ring thickness it is visible.
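A minimal C-style sketch of the per-pixel decision for parts 1 and 2 (part 3, the rounded ends, would add a distance test against the two end-cap centers). The names and constants are illustrative, and the same math maps directly onto HLSL intrinsics such as length and atan2:

#include <math.h>

#define NUM_RINGS 5

/* Returns 1 if the pixel at texture coordinate (u, v) is already covered
   by the transition, given progress in [0, 1]. */
int is_covered(float u, float v, float progress)
{
    const float PI = 3.14159265f;
    float dx = u - 0.5f, dy = v - 0.5f;
    float dist     = sqrtf(dx * dx + dy * dy);      /* distance to the center   */
    float max_dist = sqrtf(0.5f);                   /* distance to a corner     */
    float ring_w   = max_dist / NUM_RINGS;          /* thickness of one ring    */

    int   ring       = (int)(dist / ring_w);        /* which ring the pixel is in   */
    float stage_prog = progress * NUM_RINGS - ring; /* local progress of that ring  */

    if (stage_prog >= 1.0f) return 1;               /* ring already fully filled    */
    if (stage_prog <= 0.0f) return 0;               /* ring not started yet         */

    /* angle measured from the left vector (-1, 0); flip signs to change direction */
    float angle = atan2f(dy, -dx);                  /* in (-pi, pi]                 */
    if (angle < 0.0f) angle += 2.0f * PI;           /* map to [0, 2*pi)             */

    return angle <= stage_prog * 2.0f * PI;
}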
Hopefully these quick thoughts give you a rough starting point for your effect!

Transparency issue with SceneKit renderer on ARKit

I see an odd behaviour under certain conditions when I have two geometries one above the other and at least the first one has an alpha value != 0.
I am using a program for the SCNMaterial; the program is in Metal and does simple stuff, and the color output is of the form
float4(color.x * alpha, color.y * alpha, color.z * alpha, alpha)
The geometry doesn't really influence the behaviour, but for the sake of explanation I have an SCNPlane in the back and an SCNCylinder in front.
It all looks good while the camera is within roughly +/- 30 degrees of perpendicular, see below.
As you can see, there is a bluish background and then a cylinder a few centimeters in front of it; it is semi-transparent and we can clearly see the bluish background behind it.
Whenever the camera gets past +/- 30 degrees from perpendicular, we can see through the background because it is no longer rendered behind the cylinder, see the one on the left.
Basically, when the camera is roughly perpendicular the two layers are blended and give the correct result; once it passes that cone of angles only the top layer is rendered.
What am I missing here to make it render as it should?
I am not using any rendering priority, as far as I am aware.
Thank you.

How to display the tiny triangles or recognize them quickly?

What I am doing is a picking program. There are many triangles and I want to select the front, visible ones with a rectangular region. The main method is described below.
There are a lot of triangles and each triangle has its own color.
Draw all the triangles to a frame buffer.
Read the color of each pixel in the frame buffer; based on the color, we know which triangles are selected.
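A minimal sketch of the color-ID encoding this method relies on, assuming at most 2^24 triangles and an 8-bit-per-channel framebuffer (the function names are illustrative):

#include <stdint.h>

/* Pack a triangle index into an RGB color so every triangle gets a unique color. */
static void id_to_rgb(uint32_t id, uint8_t rgb[3])
{
    rgb[0] = (id >> 16) & 0xFF;
    rgb[1] = (id >> 8)  & 0xFF;
    rgb[2] =  id        & 0xFF;
}

/* Recover the triangle index from a pixel read back from the framebuffer. */
static uint32_t rgb_to_id(const uint8_t rgb[3])
{
    return ((uint32_t)rgb[0] << 16) | ((uint32_t)rgb[1] << 8) | (uint32_t)rgb[2];
}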
The problem is that some tiny triangles cannot be displayed in the final frame buffer, just like the green triangle in the picture. I think the triangle is too tiny and is ignored by the graphics card.
My question is: how can I display the tiny triangles in the final frame buffer, or how can I know which triangles are ignored by the graphics card?
Triangles are not skipped based on their size, but if no pixel center falls inside the triangle or lies on its top or left edge (this is referred to as coverage testing), the triangle does not generate any fragments during rasterization.
That does mean that certain really small triangles are never rasterized, but it is not entirely because of their size, just that their position is such that they do not satisfy pixel coverage.
Take a moment to examine the following diagram from the DirectX API documentation. Because of the size and position of the triangle I have circled in red, this triangle does not satisfy coverage for any pixels (I have illustrated the left edge of the triangle in green) and thus never shows up on screen despite having a tangible surface area.
If the triangle highlighted were moved about a half-pixel in any direction it would cover at least one pixel. You still would not know it was a triangle, because it would show up as a single pixel, but it would at least be pickable.
Solving this problem will require you to ditch color picking altogether. Multisample rasterization can fix the coverage issue for small triangles, but it will compute pixel colors as the average of all samples and that will break color picking.
Your only viable solution is to do point inside triangle testing instead of relying on rasterization. In fact, the typical alternative to color picking is to cast a ray from your eye position through the far clipping plane and test for intersection against all objects in the scene.
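For the ray-casting route, a standard ray/triangle test such as Möller-Trumbore is enough: intersect the pick ray with every candidate triangle and keep the closest hit. A self-contained C sketch (the types and names are illustrative, not from any particular library):

#include <math.h>
#include <stdbool.h>

typedef struct { float x, y, z; } vec3;

static vec3  v_sub(vec3 a, vec3 b)   { return (vec3){ a.x - b.x, a.y - b.y, a.z - b.z }; }
static vec3  v_cross(vec3 a, vec3 b) { return (vec3){ a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x }; }
static float v_dot(vec3 a, vec3 b)   { return a.x*b.x + a.y*b.y + a.z*b.z; }

/* Moller-Trumbore ray/triangle intersection: returns true and writes the ray
   parameter *t when the ray orig + t*dir hits the triangle (v0, v1, v2). */
bool ray_hits_triangle(vec3 orig, vec3 dir, vec3 v0, vec3 v1, vec3 v2, float *t)
{
    const float EPS = 1e-7f;
    vec3  e1 = v_sub(v1, v0), e2 = v_sub(v2, v0);
    vec3  p  = v_cross(dir, e2);
    float det = v_dot(e1, p);
    if (fabsf(det) < EPS) return false;            /* ray is parallel to the triangle */

    float inv = 1.0f / det;
    vec3  s   = v_sub(orig, v0);
    float u   = v_dot(s, p) * inv;
    if (u < 0.0f || u > 1.0f) return false;

    vec3  q = v_cross(s, e1);
    float v = v_dot(dir, q) * inv;
    if (v < 0.0f || u + v > 1.0f) return false;

    *t = v_dot(e2, q) * inv;
    return *t > EPS;                               /* hit must be in front of the origin */
}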
The usability aspect of what you seem to be doing is somewhat questionable to me. I doubt that most users would expect a triangle to be pickable if it's so small that they can't even see it. The most obvious solution is to let the user zoom in if they really need to selectively pick such small details.
On the part that can actually be answered on a technical level: To find out if triangles produced any visible pixels/fragments/samples, you can use queries. If you want to count the pixels for n "objects" (which can be triangles), you would first generate the necessary query object names:
GLuint queryIds[n]; // probably dynamically allocated in real code
glGenQueries(n, queryIds);
Then bracket the rendering of each object with glBeginQuery()/glEndQuery():
for (int i = 0; i < n; ++i) {
    glBeginQuery(GL_SAMPLES_PASSED, queryIds[i]);
    // draw object i here
    glEndQuery(GL_SAMPLES_PASSED);
}
Then at the end, you can get all the results:
for (int i = 0; i < n; ++i) {
    GLint pixelCount = 0;
    glGetQueryObjectiv(queryIds[i], GL_QUERY_RESULT, &pixelCount);
    if (pixelCount > 0) {
        // object i produced visible pixels
    }
}
A couple more points to be aware of:
If you only want to know if any pixels were rendered, but don't care how many, you can use GL_ANY_SAMPLES_PASSED instead of GL_SAMPLES_PASSED.
The query counts samples that pass the depth test, as the rendering happens. So there is an order dependency. A triangle could have visible samples when it is rendered, but they could later be hidden by another triangle that is drawn in front of it. If you only want to count the pixels that are actually visible at the end of the rendering, you'll need a two-pass approach.

Simple Flat Plane Tessellation Shader

Part 1:
So I want to create a basic tessellation program that takes a plane of quads and transforms it into a more, well, detailed/tessellated plane of quads. Such as the picture below. How much it gets tessellated would depend on user controls, passed in by a uniform (initially). However I am so new to tessellation shaders that I can't even figure out how to do this.
How is this typically done? Surely you shouldn't actually draw the plane of quads prior to the shader program, since from my understanding quads won't get tessellated this way; instead they get tessellated in a way like the picture below:
I believe the answer could be to draw a plane of points, and these points are then tessellated into more points, and those points are transformed into quads of the appropriate size in the geometry shader, I think? Alternatively, instead of converting points into quads, could I just draw quads between each four closest points (that would be much better)? Examples very much appreciated!
NOTE: Using GLSL > 4.0 & C only (No C++/Python)
Part 2:
After I get part 1 working, how would I make it so that certain quads are more tessellated than others, such as this?:
I want the parts closer to the camera to be more tessellated.
Part 3:
If I were able to get that far, the next part would be to alter the z-axis of points to make the plane into an interesting environment. This would be done by reading in a 2D sampler, and I know how to do that and all. However, if I am correct in Part 1 about using a plane of points, then I need to do more than just alter the points that are converted into quads, because quads essentially need to share vertices in order for there to be no gaps between them. How would that be done? Alternatively, if we draw quads between points, with each point being the appropriate height, then this wouldn't be an issue.
Part 1
Yes you're correct: generate a 'patch' as a simple grid of points, specify the tessellation levels as uniforms into the TCS (tessellation control shader) and generate the vertex data in the TES (tessellation evaluation shader).
Sounds complicated? Here's a nice tutorial I based my work on: http://antongerdelan.net/opengl/tessellation.html
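On the C side, the draw call only has to submit the grid as patches and feed the tessellation levels to the TCS as uniforms; the TCS/TES pair from the tutorial does the rest. A rough sketch, assuming a linked program whose TCS reads uniforms named tess_inner and tess_outer (those names are illustrative):

#include <GL/glew.h>   /* or whichever GL loader you already use */

/* Submit a grid of control points as quad patches. 'prog' is assumed to be a
   linked program containing vertex, TCS, TES and fragment stages. */
void draw_tessellated_grid(GLuint prog, GLuint vao, GLsizei num_control_points,
                           float tess_level)
{
    glUseProgram(prog);
    glUniform1f(glGetUniformLocation(prog, "tess_inner"), tess_level);
    glUniform1f(glGetUniformLocation(prog, "tess_outer"), tess_level);

    glPatchParameteri(GL_PATCH_VERTICES, 4);   /* 4 control points per quad patch */
    glBindVertexArray(vao);
    glDrawArrays(GL_PATCHES, 0, num_control_points);
}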
Part 2
What you are talking about here is LOD (level of detail). You would need to tessellate and render the higher polygon-count bottom-left corner of your mesh as a separate object.
Your suggested approach is correct: break the overall scene into 'chunks' and determine the LOD (i.e. the tessellation parameters) for each chunk separately, usually by some distance-to-camera algorithm.
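A common way to pick the per-chunk level is a simple clamped interpolation on the chunk's distance to the camera, for example (a sketch; the constants and names are illustrative):

/* Map a chunk's distance to the camera onto a tessellation level:
   close chunks get max_level, chunks at or beyond falloff_dist get min_level. */
float tess_level_for_chunk(float dist_to_camera, float min_level,
                           float max_level, float falloff_dist)
{
    float t = dist_to_camera / falloff_dist;
    if (t < 0.0f) t = 0.0f;
    if (t > 1.0f) t = 1.0f;
    return max_level + t * (min_level - max_level);   /* linear interpolation */
}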
Part 3
Another excellent tutorial which does exactly what you are after I believe: http://codeflow.org/entries/2010/nov/07/opengl-4-tessellation/
I used this approach to get a very highly detailed but memory and frame efficient terrain.
Hope this helps.

I have a dot bouncing around an image. Need to calculate angles of reflection off of groups of pixels (surface of objects)

Suppose we have an image (pixel buffer) that is in black and white, so each pixel is either black or white (not gray scale).
Now somewhere in the middle of that image, place a green dot. It may have a radius of n for rendering purposes, but it is really just a point. Give the dot a randomly selected direction and speed, and start it moving. If the image is all white pixels, the dot will bounce off the edges of the image, infinitely wandering around the picture. This is quite easy... just reverse either the rise or run of the dot's vector.
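For that wall-bounce case the reflection really is just a sign flip on one velocity component. A tiny sketch, assuming the dot's state is a position and a velocity in pixels per frame (names are illustrative):

/* Advance the dot one frame and bounce it off the image borders by flipping
   the matching velocity component (the "reverse the rise or run" case). */
void bounce_off_walls(float *x, float *y, float *vx, float *vy,
                      int width, int height)
{
    *x += *vx;
    *y += *vy;
    if (*x < 0.0f || *x >= (float)width)  { *vx = -*vx; }  /* left/right edge */
    if (*y < 0.0f || *y >= (float)height) { *vy = -*vy; }  /* top/bottom edge */
}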
Next, suppose the image has some globs of black pixels. As the dot encounters these globs of black pixels, the angle of reflection needs to be calculated. This is also quite easy if the black pixels have a fixed slope, as in my sketch (blue X represents black pixels). You can find the slope of the blue Xs and easily calculate the new vector.
But how about the case where the black pixels form really unfriendly surfaces? What are some approaches to figuring out this angle?
This is the subject that I am interested in.
There must be some algorithms that exist for this kind of purpose, but I never ran across any in school. I am not asking how to code this, rather approaches to writing the algorithm to do this. I have a few ideas that I'll try, but if there are some standard ways to do this that exist, I'd like to learn about them.
Obviously I'd like to start with Black and White then move into RGBA.
I am looking for any reference material on just this sort of subject. Websites, books, or other references are very very welcome.
Also, if there are different StackOverflow tags that could be good, let me know.
Thanks much!
Edit********** More pics and information
Maybe I wasn't clear what I meant by "unfriendly surfaces". In the previous picture, our blue X's happened to just be a line. Picture a case where it is not a line, but rather a weird shape.
We start with our green pixel traveling at a slope of 2. Suppose its vector is that of 12 pixels per frame. It would have a projected path like this:
But suppose instead of a nice friendly line, we have this:
In my mind I can kind of see what is likely to happen if this were a ball and some walls.
Look for edge detection algorithms used in image processing. Some edge detectors also approximate the direction of edges.
You can think of the pixel neighborhood of the green dot, maybe somewhere between 3x3 and 7x7, as a small edge direction detection problem. One approach would take two passes at the pixels:
In the first pass, smooth the sharp black/white pixels using a Gaussian filter.
In the second pass, apply an edge detection operator, such as Sobel, Prewitt or Roberts to produce the X and Y derivatives of the pixels' intensity. You can then approximate the direction as:
angle = arctan(dx/dy)
The motivation for the smoothing pass is to give the edge detection operator information from farther-away pixels.
The Wikipedia page on the Canny edge detector has a good discussion on obtaining the direction (the "gradient") of an edge, including an example of a particular Gaussian filter you can use for smoothing.
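Once you have the smoothed neighborhood, one concrete way to use the Sobel result is to treat the gradient as the surface normal at the point of impact and reflect the dot's velocity about it. A sketch in C, assuming a helper img(x, y) that returns the smoothed intensity near the collision point (the names are illustrative):

#include <math.h>

/* Assumed helper: smoothed intensity at pixel (x, y), e.g. 0 = black, 1 = white. */
extern float img(int x, int y);

/* Reflect the dot's velocity (vx, vy) off the surface at pixel (x, y).
   The Sobel gradient points from dark toward light, so it can serve as the
   (unnormalized) surface normal at the point of impact. */
void bounce(int x, int y, float *vx, float *vy)
{
    /* 3x3 Sobel derivatives of the smoothed neighborhood */
    float gx = -img(x-1, y-1) - 2*img(x-1, y) - img(x-1, y+1)
             +  img(x+1, y-1) + 2*img(x+1, y) + img(x+1, y+1);
    float gy = -img(x-1, y-1) - 2*img(x, y-1) - img(x+1, y-1)
             +  img(x-1, y+1) + 2*img(x, y+1) + img(x+1, y+1);

    float len = sqrtf(gx * gx + gy * gy);
    if (len < 1e-6f) { *vx = -*vx; *vy = -*vy; return; }  /* no usable edge: back off */

    float nx = gx / len, ny = gy / len;                   /* unit normal      */
    float d  = *vx * nx + *vy * ny;                       /* v . n            */
    *vx -= 2.0f * d * nx;                                 /* v' = v - 2(v.n)n */
    *vy -= 2.0f * d * ny;
}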
I am doing something similar with a ball and randomly generated backgrounds.
The filter and edge detection is highly technical, but all other processes using a 5x5 or 3x3 grid seem similarly difficult.
However, I think I may have a cheap way around this. Assuming a ball travelling in any direction, scan all leading edges of the ball - a semicircle. The further toward the edge of the ball the collision occurs, the closer to vertical the collision is. Again, I think this should allow you to easily infer the background normal, and from there the answer is fairly simple; see the sketch below.
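A rough sketch of that cheap approach in C, assuming an is_black(x, y) lookup into the image (names are illustrative): walk the leading rim of the ball, take the first black pixel found as the contact point, use the direction from that pixel back to the ball center as the surface normal, and reflect the velocity about it.

#include <math.h>

/* Assumed helper: nonzero if the pixel at (x, y) is black (an obstacle). */
extern int is_black(int x, int y);

/* Scan the leading semicircle of a ball of radius r centered at (cx, cy) moving
   with velocity (vx, vy). Returns 1 if a bounce happened. */
int bounce_ball(float cx, float cy, float r, float *vx, float *vy)
{
    float speed = sqrtf(*vx * *vx + *vy * *vy);
    if (speed < 1e-6f || r <= 0.0f) return 0;

    float hx = *vx / speed, hy = *vy / speed;   /* unit heading of the ball      */
    float step = 1.0f / r;                      /* ~1 pixel between rim samples  */

    for (float a = -1.5707963f; a <= 1.5707963f; a += step)
    {
        /* rotate the heading by 'a' to get a sample point on the leading rim */
        float dx = hx * cosf(a) - hy * sinf(a);
        float dy = hx * sinf(a) + hy * cosf(a);
        if (!is_black((int)(cx + dx * r), (int)(cy + dy * r))) continue;

        float nx = -dx, ny = -dy;               /* contact point -> center = normal */
        float d  = *vx * nx + *vy * ny;
        *vx -= 2.0f * d * nx;                   /* v' = v - 2(v.n)n */
        *vy -= 2.0f * d * ny;
        return 1;
    }
    return 0;
}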

Shadow and shading

I have read lots of ray tracer algorithms on the web, but I have no clear understanding of shading and shadows. Is the pseudocode below, written according to my understanding, correct?
for each primitive
    check for intersection
    if there is one
        do color be half of the background color
        Ishadow = true
        break
for each ambient light in environment
    calculate light contribution to the color
if ( Ishadow == false )
    for each point light
        calculate diffuse shading
        calculate reflection direction
        calculate specular light
    trace for reflection ray // (i)
    add color returned from (i), multiplied by some coefficient
    trace for refraction ray // (ii)
    add color returned from (ii), multiplied by some coefficient
return color value calculated up to this point
You should integrate your shadows with the normal ray-tracing path:
For every screen pixel you send a ray through the scene and eventually determine the closest object intersection. At that closest intersection point you first read out the pixel color (the texture of the object at that point) and calculate the reflection vector etc. (using the normal vector). In addition, you now cast a ray from that intersection point to each of the light sources in your scene: if such a ray intersects another object before hitting the light source, the intersection point is in shadow and you can adapt the final color of that point accordingly.
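In code the shadow test at a hit point boils down to one extra intersection query per light. A minimal C sketch, assuming a nearest_hit() routine exists elsewhere in the tracer (the types and names are illustrative):

#include <math.h>
#include <stdbool.h>

typedef struct { float x, y, z; } vec3;

/* Assumed to exist elsewhere in the tracer: returns true if the ray
   origin + t*dir hits any object, writing the closest hit's t into *t. */
extern bool nearest_hit(vec3 origin, vec3 dir, float *t);

/* Returns true if 'point' is in shadow with respect to the light at 'light_pos'. */
bool in_shadow(vec3 point, vec3 light_pos)
{
    vec3 to_light = { light_pos.x - point.x,
                      light_pos.y - point.y,
                      light_pos.z - point.z };
    float light_dist = sqrtf(to_light.x * to_light.x +
                             to_light.y * to_light.y +
                             to_light.z * to_light.z);
    vec3 dir = { to_light.x / light_dist,
                 to_light.y / light_dist,
                 to_light.z / light_dist };

    float t;
    /* Any object hit between the surface point and the light blocks the light. */
    return nearest_hit(point, dir, &t) && t < light_dist;
}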
The trouble with pseudocode is that it is easy to get "pseudo" enough that it becomes the same well of ambiguity that we are trying to avoid by getting away from natural languages. "Color be half of the background color?" The fact that this line appears before you iterate through your light sources is confusing. How can you be setting Ishadow before you iterate over light sources?
Maybe a better description would be:
given a ray in space
    find nearest object with which ray intersects
    for each point light
        if normal at surface of intersected object points toward light (use dot product for this)
            cast a ray into space from the surface toward the light
            if ray intersection is closer than light*
                light is shadowed at this point
*If you're seeing strange artifacts in your shadows, there is a mistake that is made by every single programmer when they write their first ray tracer. Floating point (or double-precision) math is imprecise and you will frequently (about half the time) re-intersect yourself when doing a shadow trace. The explanation is a bit hard to describe without diagrams, but let me see what I can do.
If you have an intersection point on the surface of a sphere, under most circumstances, that point's representation in a floating point register is not mathematically exact. It is either slightly inside or slightly outside the sphere. If it is inside the sphere and you try to run an intersection test to a light source, the nearest intersection will be the sphere itself. The intersection distance will be very small, so you can simply reject any shadow ray intersection that is closer than, say .000001 units. If your geometry is all convex and incapable of legitimately shadowing itself, then you can simply skip testing the sphere when doing shadow tests.
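One way to implement that fix is to nudge the shadow-ray origin a small step along the surface normal before tracing (the epsilon value below is illustrative and should be tuned to the scale of the scene); the equivalent alternative, as described above, is to keep the origin on the surface and reject any shadow-ray hit whose distance is below the same epsilon.

typedef struct { float x, y, z; } vec3;

/* Offset the shadow-ray origin a tiny step along the surface normal so the
   ray cannot immediately re-intersect the surface it starts on. */
vec3 shadow_ray_origin(vec3 hit_point, vec3 surface_normal)
{
    const float SHADOW_EPSILON = 1e-4f;   /* tune to the scale of your scene */
    vec3 o;
    o.x = hit_point.x + surface_normal.x * SHADOW_EPSILON;
    o.y = hit_point.y + surface_normal.y * SHADOW_EPSILON;
    o.z = hit_point.z + surface_normal.z * SHADOW_EPSILON;
    return o;
}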
