Part 1:
So I want to create a basic tessellation program that takes a plane of quads and transforms it into a more, well, detailed/tessellated plane of quads, such as in the picture below. How much it gets tessellated would depend on user controls, passed in by a uniform (initially). However, I am so new to tessellation shaders that I can't even figure out how to do this.
How is this typically done? Surely you shouldn't actually draw the plane of quads prior to the shader program, since from my understanding quads won't get tessellated this way; instead they get tessellated in the way shown in the picture below:
I believe the answer could be to draw a plane of points, and these points are then tessellated into more points, and these points are transformed into quads of the appropriate size in the geometry shader, I think? Alternatively, instead of converting points into quads, could I just draw quads between each four closest points (that would be much better)? Examples very much appreciated!
NOTE: Using GLSL > 4.0 & C only (No C++/Python)
Part 2:
After I get part 1 working, how would I make it so that certain quads are more tessellated than others, such as this?:
I want the parts closer to the camera to be more tessellated.
Part 3:
If I were able to get that far, the next part would be to alter the z-axis of points to make the plane into an interesting environment. This would be done by reading in a sampler2D; I know how to do that and all. However, if I am correct in Part 1 about using a plane of points, then I need to do more than just alter the points that are converted into quads, because quads essentially need to share vertices in order for there to be no gaps between quads. How would that be done? Alternatively, if we draw quads between points, with each point being the appropriate height, then this wouldn't be an issue.
Part 1
Yes, you're correct: generate a 'patch' as a simple grid of points, specify the tessellation levels as uniforms into the TCS (tessellation control shader) and generate the vertex data in the TES (tessellation evaluation shader).
Sounds complicated? Here's a nice tutorial I based my work on: http://antongerdelan.net/opengl/tessellation.html
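If it helps, the C-side setup boils down to just a few calls. Here's a minimal sketch; the program/VAO handles, the uniform names "tess_inner" / "tess_outer", and the GLEW include are placeholders for whatever your own code uses:

#include <GL/glew.h>

void draw_tessellated_grid(GLuint prog, GLuint vao, int num_points, float level)
{
    glUseProgram(prog);

    /* Each patch handed to the TCS consists of 4 control points (one quad). */
    glPatchParameteri(GL_PATCH_VERTICES, 4);

    /* User-controlled tessellation levels, read as uniforms by the TCS. */
    glUniform1f(glGetUniformLocation(prog, "tess_inner"), level);
    glUniform1f(glGetUniformLocation(prog, "tess_outer"), level);

    /* Draw the grid of control points as patches, NOT as GL_QUADS. */
    glBindVertexArray(vao);
    glDrawArrays(GL_PATCHES, 0, num_points);
}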
Part 2
What you are talking about here is LOD (or level of detail). You would need to tessellate and render the higher polygon-count bottom-left corner of your mesh as a separate object.
Your suggested approach is correct: break the overall scene into 'chunks' and determine the LOD (i.e. the tessellation parameters) for each chunk separately, usually by some distance-to-camera algorithm.
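As a rough sketch of what such a distance-to-camera algorithm could look like, here's a tiny helper that maps a chunk's distance to a tessellation level. The maximum level of 64 and the 100-unit falloff are arbitrary numbers to tune for your scene:

#include <math.h>

/* Map a chunk's distance from the camera to a tessellation level. */
float tess_level_for_chunk(const float chunk_center[3], const float cam_pos[3])
{
    float dx = chunk_center[0] - cam_pos[0];
    float dy = chunk_center[1] - cam_pos[1];
    float dz = chunk_center[2] - cam_pos[2];
    float dist = sqrtf(dx * dx + dy * dy + dz * dz);

    /* Closer chunks get higher levels; clamp to a sane range [1, 64]. */
    float level = 64.0f * (1.0f - dist / 100.0f);
    if (level < 1.0f)  level = 1.0f;
    if (level > 64.0f) level = 64.0f;
    return level;
}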
Part 3
Another excellent tutorial which does exactly what you are after I believe: http://codeflow.org/entries/2010/nov/07/opengl-4-tessellation/
I used this approach to get a very highly detailed but memory and frame efficient terrain.
Hope this helps.
I have a bunch of 3D points (an array), not in any particular order and not restricted to some axis/plane. Based on the coordinates of these points I want to order the array in clockwise order, like in the image. At the moment I am clueless where to start. One idea is to find the closest point for each point and somehow figure out the direction.
3Dave has already said this, but it completely depends on where the camera is.
There is no answer unless you specify the frustum.
Note that circles are 2D, not 3D objects. "Clockwise" relates to circles.
Assuming that you mean on a plane:
This is a problem with two parts.
The first part is incredibly difficult.
The second part is relatively easy.
First part: indeed, you are doing object recognition: you have to find a circle.
For this, investigate the existing technology for shape recognition, or read up on stuff like https://link.springer.com/article/10.1007/s11042-018-6167-2
For the second part (which is almost trivial once the first part is done): just get the coords of each point relative to the center of the circle you found, calculate the angle of each from the top, and sort them.
Cheap game-type solution
If you want the cheap solution, which you can use if the points are "reasonable" (see the sketch after this list):
find the centroid of all the points (it's just the average of all)
write each point as a vector from the centroid to the point
pick any one point as being the "top"
use something like this https://docs.unity3d.com/ScriptReference/Vector3.Angle.html to get the angle of each from the "top" one
voila! just put them in order
In practice you'll likely need these things also:
find the "plane" the points are on (find the "average plane" they are on, it's relatively easy to do this, look it up!)
make an axis through the centroid which is perpendicular to the plane
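Putting the cheap solution together, here is a minimal C sketch. It assumes the points lie roughly in the XY plane (i.e. the plane normal is +Z); for an arbitrary plane you'd first project the points onto the fitted plane as described above:

#include <math.h>
#include <stdlib.h>

typedef struct { float x, y, z; } Vec3;
typedef struct { Vec3 p; float angle; } AnglePoint;

static int by_angle(const void *a, const void *b)
{
    float d = ((const AnglePoint *)a)->angle - ((const AnglePoint *)b)->angle;
    return (d > 0.0f) - (d < 0.0f);
}

void sort_clockwise(Vec3 *pts, int n)
{
    /* 1. centroid = average of all points */
    Vec3 c = {0, 0, 0};
    for (int i = 0; i < n; ++i) { c.x += pts[i].x; c.y += pts[i].y; c.z += pts[i].z; }
    c.x /= n; c.y /= n; c.z /= n;

    /* 2. angle of each point around the centroid; atan2 increases
     *    counter-clockwise, so negate it to get a clockwise ordering */
    AnglePoint *tmp = malloc(n * sizeof *tmp);
    for (int i = 0; i < n; ++i) {
        tmp[i].p = pts[i];
        tmp[i].angle = -atan2f(pts[i].y - c.y, pts[i].x - c.x);
    }

    /* 3. sort by angle and copy back */
    qsort(tmp, n, sizeof *tmp, by_angle);
    for (int i = 0; i < n; ++i) pts[i] = tmp[i].p;
    free(tmp);
}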
I am trying to do skeletal animation in legacy OpenGL and thought I could use matrices on individual vertices. When I programmed it and it didn't work, I did some Googling to find this: https://www.talisman.org/opengl-1.1/Reference/glLoadMatrix.html
GL_INVALID_OPERATION is generated if glLoadMatrix is executed between the execution of glBegin and the corresponding execution of glEnd.
So now I'm stumped. Here is a diagram:
Bones are labeled in red. I'm trying to do skeletal animation, so there are two rectangles: one uses Bone 0 and the second uses Bone 1. Only specific vertices of the triangles that make up the second rectangle use the rotation matrix of Bone 1; the ones that don't use the rotation matrix of Bone 0, kind of making a snake, if that makes sense.
Since I cannot use glLoadMatrix for individual vertices in a triangle, what other way can I displace a vertex based on a stored matrix? Perhaps multiply some of the matrix values to the vertex? Not sure how to go about doing that. Any input is appreciated, thanks!
You mention the two rectangles Bone0 and Bone1. You need to draw them separately since they need to have separate transformation matrices. Two points of the two rectangles are coincident; your transformation matrix to draw Bone1 must ensure that they stay so:
glTranslatef(...);
glRotatef(...);     /* position rectangle Bone0 */
glBegin(GL_QUADS);  /* draw rectangle Bone0 */
glVertex3f(...);    /* draw it */
...
glEnd();
glPushMatrix();     /* save the transformation matrix */
glMultMatrixf(...); /*
                     * as per your drawing, this is not just a
                     * simple translate/rotate operation, but
                     * a translate/shear;
                     * you need to build that matrix manually
                     */
glBegin(GL_QUADS);  /*
                     * draw rectangle Bone1. Two of its vertices are
                     * coincident with two of rectangle Bone0's; your
                     * shear matrix must ensure that they stay so
                     */
glVertex3f(...);
...
glEnd();
glPopMatrix();      /* restore the saved matrix */
What you're trying to do is called skinning! And unfortunately it will involve a bit more effort than your approach. It is possible to do it within one glBegin/glEnd pair, which is generally preferable.
The easiest way is not to use OpenGL to transform your vertices. Use your favourite matrix math library to multiply the vertices with your bone matrices before they get passed to OpenGL. If the number of vertices is not too large, it won't slow you down much.
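For example, a tiny helper for this easy way might look like the sketch below (column-major 4x4 matrix layout, as legacy OpenGL's glLoadMatrixf/glMultMatrixf use). Inside a single glBegin/glEnd you pick the matrix for each vertex's bone and pass the transformed result to glVertex3f:

typedef struct { float x, y, z; } Vertex;

/* Multiply a vertex (as a point, w = 1) by a column-major 4x4 bone matrix. */
Vertex transform_vertex(const float m[16], Vertex v)
{
    Vertex out;
    out.x = m[0] * v.x + m[4] * v.y + m[8]  * v.z + m[12];
    out.y = m[1] * v.x + m[5] * v.y + m[9]  * v.z + m[13];
    out.z = m[2] * v.x + m[6] * v.y + m[10] * v.z + m[14];
    return out;
}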
The harder way is to implement a skinning shader. This book chapter provides a good introduction on how that is done. The principle is to upload multiple matrices to OpenGL, and give each vertex an index which says which matrix to multiply with. This will be much faster than the easy approach.
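The C side of that approach is roughly the sketch below. The uniform name "u_bones" and attribute location 3 are placeholders, the matching vertex shader (not shown) indexes the matrix array with the per-vertex bone index, and integer attributes need GL 3.0 or later, so this does mean leaving the legacy pipeline behind:

/* Upload a palette of bone matrices and give every vertex an integer bone index. */
void upload_skinning_data(GLuint prog, const float *bone_matrices, int num_bones,
                          GLuint bone_index_vbo)
{
    glUseProgram(prog);

    /* One 4x4 matrix per bone, e.g. "uniform mat4 u_bones[MAX_BONES];" in GLSL. */
    glUniformMatrix4fv(glGetUniformLocation(prog, "u_bones"),
                       num_bones, GL_FALSE, bone_matrices);

    /* Per-vertex bone index as an integer attribute at location 3. */
    glBindBuffer(GL_ARRAY_BUFFER, bone_index_vbo);
    glVertexAttribIPointer(3, 1, GL_UNSIGNED_INT, 0, (void *)0);
    glEnableVertexAttribArray(3);
}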
GPUs are fast because they are optimized for doing the same operation on a large set of data - the tradeoff for this is that you can't modify the state (such as changing the matrix) while a draw call is in progress.
Heh, have been doing a lot of math stuff since I posted my question and checked back just now to post an answer for anyone with the same question, and I noticed I already have some answers, so thanks for answering with your input!
Since I have figured out a solution, though, I thought I would post an answer along with these other two.
Basically, what I am doing with the rendering is per-vertex anyway: it reads the vertices of each triangle from a data buffer and so on. So it wasn't too much trouble to write a custom function that multiplies a matrix with a vertex: a copy of the vertex is loaded from the buffer, the matrix of whichever bone the vertex is mapped to is applied to it, and the result is used for rendering that particular vertex of the triangle.
Funny that I already had implemented what #Hannesh suggested and I just had to write the multiplier function. Very cool!
The harder way is to implement a skinning shader. This book chapter provides a good introduction on how that is done. The principle is to upload multiple matrices to OpenGL, and give each vertex an index which says which matrix to multiply with. This will be much faster than the easy approach.
Thanks again! I'll up-vote whenever I have the reputation to do so!
So, to start off, I'm not very good at computer graphics. I'm trying to implement a GUI toolkit where one of the features is being able to apply 3D transformations to 2D "layers" (a layer has only one Z coordinate; pre-transform, it's a two-dimensional axis-aligned rectangle).
Now, this is pretty straightforward, until you come to 3D transformations that would push the layer back, requiring splitting the layer into several polygons in order to render it correctly, as illustrated here. And because we can have transparency, layers may not get completely occluded, while still needing to be split.
So here is an illustration depicting the issue and the desired outcome. In this scenario, the blue layer (call it B) is on top of the red layer (R), while having the same Z position (but B was added after R). In this scenario, if we rotate B, its top two points will get a Z index lower than 0 while the bottom points will get an index higher than 0 (with the anchor point being the only point/line left as 0).
Can somebody suggest a good way of doing this on the CPU? I've struggled to find a suitable algorithm implementation (in C++ or C) that would be appropriate to this scenario.
Edit: To clarify myself, at this stage in the pipeline, there is no rendering yet. We just need to produce a set of polygons for each layer that would then represent the layer's transformed and occluded geometry. Then, if required, rendering (either software or hardware) is done, which is not always the case (for example, when doing hit testing).
Edit 2: I looked at binary space partitioning as an option of achieving this but I have only been able to find one implementation (in GL2PS), which I'm not sure how to use. I do have a vague understanding of how BSPs work, but I'm not sure how they can be used for occlusion culling.
Edit 3: I'm not trying to do colour and transparency blending at this stage. Just pure geometry. Transparency can be handled by the renderer, and overdraw is okay. In this case, the blue polygon can just be drawn under the red one, but with more complicated cases, depth sorting or even splitting up the polygons may be required (example of a scary case like that below). Although the viewport is fixed, because all layers can be transformed in 3D, creating a shape shown below is possible.
So what I'm really looking for is an algorithm that would geometrically split layer B into two blue shapes, one of which would be drawn "above" and one of which would be drawn below R. The part "below" would get overdraw, yes, but it's not a major issue. So B just needs to be split into two polygons so that it would appear to cut through R when those polygons are drawn in order. No need to worry about blending.
Edit 4: For the purpose of this, we cannot render anything at all. This all has to be done purely geometrically (producing 2D polygons). This is what I was originally getting at.
Edit 5: I should note that the overall number of quads per subscene is around 30 (average). Definitely won't go above 100. Unless the layers are 3D transformed (which is where this problem arises), they are just radix sorted by Z positions before being drawn. Layers with the same Z position are drawn in order in which they were added (first in, first out).
Sorry if I didn't make it clear in the original question.
If you "aren't good with computer graphics", Doing it on CPU (software rendering) will be extremely difficult for you, if polygons can be transparent.
The easiest way to do it is to use GPU rendering (OpenGL/Direct3D) with Depth Peeling technique.
CPU solutions:
Solution #1 (extremely difficult):
(I forgot the name of this algorithm).
You need to split polygon B into two - for example, using the plane of polygon A as a clip plane - then render the result using the painter's algorithm.
To do that you'll need to change your rendering routines so they no longer use quads, but textured polygons, plus you'll have to write/debug clipping routines that split the triangles in the scene in such a way that they no longer break the painter's algorithm.
Big problem: if you have many polygons, this solution can potentially split the scene into a very large number of triangles. Also, writing texture-rendering code yourself isn't much fun, so it is advisable to use OpenGL/Direct3D.
This can be extremely difficult to get right. I think this method was discussed in "Computer Graphics Using OpenGL 2nd edition" by "Francis S. Hill" - somewhere in one of its exercises.
Also check the Wikipedia article on Hidden Surface Removal.
Solution #2 (simpler):
You need to implement a multi-layered z-buffer that stores up to N transparent fragments per pixel, together with their depths.
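As a rough sketch of what one cell of such a buffer could look like (the layer count of 4 here is an arbitrary placeholder):

#define MAX_LAYERS 4

typedef struct {
    int          count;
    float        depth[MAX_LAYERS];     /* sorted: closest first */
    unsigned int rgba[MAX_LAYERS];
} PixelLayers;

/* Insert a fragment, keeping layers sorted by depth; the farthest
 * fragment is dropped when the cell is full. */
void insert_fragment(PixelLayers *px, float depth, unsigned int rgba)
{
    if (px->count == MAX_LAYERS && depth >= px->depth[MAX_LAYERS - 1])
        return; /* full, and this fragment is farther than everything stored */

    int i = (px->count < MAX_LAYERS) ? px->count : MAX_LAYERS - 1;
    while (i > 0 && px->depth[i - 1] > depth) {
        px->depth[i] = px->depth[i - 1];
        px->rgba[i]  = px->rgba[i - 1];
        --i;
    }
    px->depth[i] = depth;
    px->rgba[i]  = rgba;
    if (px->count < MAX_LAYERS)
        px->count++;
}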
Solution #3 (computationally expensive):
Just use ray tracing. You'll get a perfect rendering result (none of the limitations of depth peeling or CPU solution #2), but it'll be computationally expensive, so you'll need to optimize the rendering routines a lot.
Bottom line:
If you're performing software rendering, use Solution #2 or #3. If you're rendering on hardware, use a technique similar to depth peeling, or implement ray tracing on hardware.
--edit-1--
Required knowledge for implementing #1 and #2 is "line-plane intersection". If you understand how to split a line (in 3D space) into two using a plane, you can implement ray tracing or clipping easily.
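For reference, here is a small sketch of that building block: splitting a segment with a plane given as n . p = d (the names are mine, not from any particular library):

#include <math.h>

typedef struct { float x, y, z; } Vec3;

static float dot3(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

/* Returns 1 and writes the intersection point if the segment a-b crosses
 * the plane (n . p = d), 0 otherwise. */
int segment_plane_intersect(Vec3 a, Vec3 b, Vec3 n, float d, Vec3 *out)
{
    float da = dot3(n, a) - d;   /* signed distance of a from the plane */
    float db = dot3(n, b) - d;   /* signed distance of b from the plane */
    if (da * db >= 0.0f)
        return 0;                /* both endpoints on the same side */

    float t = da / (da - db);    /* parameter of the crossing point in [0,1] */
    out->x = a.x + t * (b.x - a.x);
    out->y = a.y + t * (b.y - a.y);
    out->z = a.z + t * (b.z - a.z);
    return 1;
}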
Required knowledge for #2 is the "textured 3D triangle rendering" algorithm. It is a fairly complex topic.
In order to implement the GPU solution, you'll need to find a few OpenGL tutorials that deal with shaders.
--edit-2--
Transparency is relevant because, in order to get transparency right, you need to draw polygons from back to front (farthest to closest) using the painter's algorithm. Sorting polygons properly is impossible in certain situations, so they must be split, or you should use one of the listed techniques; otherwise there will be artifacts/incorrectly rendered images in those situations.
If there's no transparency, you can implement a standard z-buffer or draw using hardware OpenGL, which is a fairly trivial task.
--edit-3--
I should note that the overall number of quads per subscene is around 30 (average). Definitely won't go above 100.
If you split polygons, it can easily go way above 100.
It might be possible to position the polygons in such a way that each polygon splits all the other polygons.
Now, 2^29 is 536870912; however, it is not possible to split one surface with a plane in such a way that the number of polygons doubles on every split. If one polygon is split 29 times, you'll get 30 polygons in the best-case scenario, and probably several thousand in the worst case if the splitting planes aren't parallel.
Here's a rough algorithm outline that should work:
1. Prepare a list of all triangles in the scene.
2. Remove back-facing triangles.
3. Find all triangles that intersect each other in 3D space, and split them along the line of intersection.
4. Compute screen-space coordinates for all vertices of all triangles.
5. Sort by depth for the painter's algorithm.
6. Prepare an extra list for new primitives.
7. Find triangles that overlap in 2D (post-projection) screen space.
8. For all overlapping triangles, check their rendering order. Basically, a triangle that is going to be rendered "below" another triangle should have no part that is above that triangle.
8.1. To do that, use the camera origin point and the triangle edges to split the original triangles into several sub-regions, then check whether the regions conform to the established sort order (prepared for the painter's algorithm). Regions are created by splitting the existing pair of triangles using the 6 clip planes formed by the camera origin point and the triangle edges.
8.2. If all regions conform to the rendering order, leave the triangles be. If they don't, remove the triangles from the list and add them to the "new primitives" list.
9. If there are any primitives in the new-primitives list, merge that list with the triangle list and go back to step 5.
By looking at that algorithm, you can easily understand why everybody uses Z-buffer nowadays.
Come to think of it, that's a good training exercise for universities that specialize in CG. The kind of exercise that might make your students hate you.
I am going to come out and give the simpler solution, which may not fit your problem: why not just change your artwork to prevent this problem from occurring?
In problem 1, just divide the polys in Maya or whatever beforehand. For the 3-lines problem, again, divide your polys at the intersections to prevent fighting. Pre-computed solutions will always run faster than on-the-fly ones - especially on limited hardware. From professional experience, I can say that it also scales reasonably well. It just requires some tweaking from the art side and technical reviews to make sure nothing is created "illegally." You may end up with more polys doing it this way than doing it on the fly, but at least you won't have to do a ton of math on CPUs that may not be up to the task.
If you do not have control over the artwork pipeline, this won't work, as writing some sort of converter would take longer than getting a BSP subdivision routine up and running. Still, KISS is often the best solution.
I am trying to do my own blob detection, which will receive real-time video and try to detect a white paper sheet, even if there is something written on the paper. I need to detect the paper and its corners, because what I really want is to draw an OpenGL polygon over the paper, with each corner of the paper being a corner of the polygon. Then I need the coordinates of the paper to do other stuff.
So I need to:
- detect a square white blob,
- get the coordinates of the corners,
- draw a polygon over the white sheet.
Any ideas how I can do that?
Much depends on context. For example, suppose that you:
know that the paper is always roughly centered (i.e. the image center W/2, H/2 is always inside the blob), and rotated by no more than 45 degrees (30 would be better)
have a suitable border around the sheet so that the corners never touch the edges of the FOV
are able (through analysis of local variance, or if you're lucky, check of background color or luminance) to say whether a point is inside or outside the blob
the inside/outside function never fails (except possibly in the close vicinity of a border)
then you could walk along a line between a point on the border (surely outside) and the center (surely inside), even by bisection, and find a point - an areal - on the edge.
Two edge points give a line (two areals give a beam), two lines give an intersection (two beams give a larger areal) - and there's your corner. You should carry along the detection uncertainty (areal radius) in order to validate corners (another, less elegant approach is to roughly calculate where the corner is, and pinpoint it with a spiral search or drunkard's walk).
This algorithm is amenable to parallelization and, as long as the hypotheses hold, should be really fast.
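A minimal sketch of the bisection step, assuming you already have an is_inside() test as described above (it is hypothetical here) and a tolerance in pixels:

#include <math.h>

typedef struct { float x, y; } Point2;

/* Walk the segment between a point known to be outside the sheet and one
 * known to be inside, and narrow down the edge crossing to within "tol". */
Point2 find_edge_point(Point2 outside, Point2 inside,
                       int (*is_inside)(Point2), float tol)
{
    while (fabsf(outside.x - inside.x) + fabsf(outside.y - inside.y) > tol) {
        Point2 mid = { (outside.x + inside.x) * 0.5f,
                       (outside.y + inside.y) * 0.5f };
        if (is_inside(mid))
            inside = mid;     /* the edge lies between outside and mid */
        else
            outside = mid;    /* the edge lies between mid and inside */
    }
    return inside;            /* within "tol" of the sheet's edge */
}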
All that said, it remains a hack -- I agree with unwind, why reinvent the wheel? If you have memory or CPU constraints (embedded systems, etc.), I believe there ought to be OpenCV and e-Vision "lite" ports also for ARM and embedded platforms.
(Sorry for my terminology - I'm monkey-translating from Italian. "Areal" is likely to correspond to your "blob"; a beam is the family of lines joining all pairs of points in two different blobs, line intensity being the product of the distance of a point from its areal's center.)
I am trying to do my own blob detection who will receive a real time video, and try to detect a white paper sheet.
Your first shot could be a simple flood-fill. That is, select a good threshold to binarize the image and apply the algorithm. The threshold can be fixed if you know the paper is always brighter than X and the background is always darker than this. Or this can be an adaptive threshold, for example Otsu's method. OpenCV offers this for free.
If you'd need to speed it up you could use a union-find data structure.
Finally, you'd need to come up with some heuristic for identifying the corners (e.g. the four extreme values in the x/y directions).
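One common variant of that extreme-value heuristic takes the blob pixels that are extreme along the two diagonals (x+y and x-y), which also works when the sheet is rotated. A rough sketch, assuming a binarized w*h mask where nonzero means paper:

#include <stdint.h>

typedef struct { int x, y; } Pixel;

/* corners[0..3]: min x+y, max x+y, min x-y, max x-y
 * (roughly top-left, bottom-right, bottom-left, top-right in image coords) */
void find_corners(const uint8_t *mask, int w, int h, Pixel corners[4])
{
    int best[4] = { 0 };
    int first = 1;

    for (int y = 0; y < h; ++y) {
        for (int x = 0; x < w; ++x) {
            if (!mask[y * w + x]) continue;
            int s = x + y, d = x - y;
            if (first || s < best[0]) { best[0] = s; corners[0] = (Pixel){x, y}; }
            if (first || s > best[1]) { best[1] = s; corners[1] = (Pixel){x, y}; }
            if (first || d < best[2]) { best[2] = d; corners[2] = (Pixel){x, y}; }
            if (first || d > best[3]) { best[3] = d; corners[3] = (Pixel){x, y}; }
            first = 0;
        }
    }
}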
Then I need [...] the coordinates of the corners [...]
Then you don't need blob detection, but corner detection or contour detection in the first place. OpenCV has some nice functionality for exactly this.
If you can't use it, I would suggest binarizing the image as above and using a Harris detector to find the corners of the object.
OpenCV's TBB support could also come in quite handy if you use it and have problems meeting your real-time requirements.
Here are my questions. I heard that OpenGL ignores vertices which are outside the viewing frustum and doesn't consider them in the rendering pipeline. Recently I ran into a post that said you should check this yourself, and that if a point is not inside, it is your duty to find that out, not OpenGL's! Now,
Is this true about OpenGL? Does it detect when a point is not inside, and avoid rendering it?
I am developing a grass scene which has about 4000 grass objects drawn on rectangles. I have awful FPS, and the only solution I came up with was to determine which grass objects are inside the viewport and render only those! My question here is: what is the best way to find out which rectangles are inside and which are not?
Please note that my question is mainly about rectangles, not points. Also, I need to sort the grass by distance, so it would be better if this could be done natively in client-side memory.
Please let me know if there are any effective and real-time ways to find out if any given mesh is inside or outside the frustum. Thanks.
Even if it is true that OpenGL does not show polygons outside the frustum (like any other 3D engine), it still has to process them to check whether they are inside or not, and that slows the FPS down. Usually some smart optimization algorithm is needed to avoid flooding the scene with invisible objects. Check out, for example, BSP trees + PVS or portals as a starting point.
To check if there is some bottleneck in the application, you can try gDebugger. If nothing is unreasonably wrong, optimizing so as to draw just the PVS (potentially visible set) is the way to go.
OpenGL won't render pixels ("fragments") outside your screen, so it has to clip somehow...
More precisely :
You submit your geometry
You make a Draw Call (glDrawArrays or glDrawElements)
Each vertex goes through the vertex shader, which computes the final position of the vertex in clip space. If you didn't write a vertex shader (= old OpenGL), the driver will create one for you.
The perspective division transforms these coordinates into Normalized Device Coordinates. Roughly, it means that the frustum of your camera is deformed to fit in a [-1,1]x[-1,1]x[-1,1] box.
Everything outside this box is clipped. This can mean completely discarding a triangle, or subdividing it if it straddles a clipping plane.
Each remaining triangle is rasterized into fragments
Each fragment goes through the fragment shader
So basically, OpenGL knows how to clip, but each vertex still has to go through the vertex shader. So submitting your entire world will work, of course, but if you can find a way not to submit everything, your GPU will be happier.
This is a tradeoff, of course. If you spend 10ms checking each and every patch of grass on the CPU so that the GPU has only the minimal amount of data to draw, it's not a good solution either.
If you want to optimize grass, I suggest culling big patches (5m x 5m or so). It's standard AABB-frustum testing.
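For reference, that AABB-frustum test is just six plane checks against the box corner farthest along each plane normal. A sketch, assuming the six planes were extracted from your view-projection matrix with their normals pointing inward:

typedef struct { float a, b, c, d; } Plane;          /* a*x + b*y + c*z + d */
typedef struct { float min[3], max[3]; } AABB;

int aabb_in_frustum(const AABB *box, const Plane planes[6])
{
    for (int i = 0; i < 6; ++i) {
        const Plane *p = &planes[i];
        /* "positive vertex": the box corner farthest along the plane normal */
        float x = (p->a >= 0.0f) ? box->max[0] : box->min[0];
        float y = (p->b >= 0.0f) ? box->max[1] : box->min[1];
        float z = (p->c >= 0.0f) ? box->max[2] : box->min[2];
        if (p->a * x + p->b * y + p->c * z + p->d < 0.0f)
            return 0;   /* completely outside this plane: cull the patch */
    }
    return 1;           /* not rejected: conservatively treat it as visible */
}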
If you want to optimize a more generic model, you can investigate quadtrees for "flat" models, and octrees and BSP trees for more complex objects.
Yes, OpenGL does not rasterize triangles outside the viewing frustum. But this doesn't mean it is optimal for applications: the OpenGL implementation still has to transform the vertex coordinates (using the fixed pipeline or vertex shaders); only then, having the normalized coordinates, does it finally know whether the triangle lies inside the viewing frustum.
This means that no pixels are rasterized in those cases, but the vertex data is processed all the same; it simply doesn't produce fragments for a non-visible triangle!
The OpenGL extension ARB_occlusion_query may help you, but its discussion section makes it clear:
Do occlusion queries make other visibility algorithms obsolete?
No.
Occlusion queries are helpful, but they are not a cure-all. They
should be only one of many items in your bag of tricks to decide
whether objects are visible or invisible. They are not an excuse
to skip frustum culling, or precomputing visibility using portals
for static environments, or other standard visibility techniques.
For the question regarding sorting meshes by depth, you should use the depth buffer: essentially, a mesh fragment is rendered only if its distance from the viewpoint is less than that of the previously written fragment at the same position. This saves you from having to sort meshes yourself. The depth buffer is essentially free, and it improves performance since it discards the more distant fragments.
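Using it is as simple as this (assuming your context was created with a depth buffer):

glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LESS);                                /* keep the closest fragment */
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);  /* clear it every frame */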
Yes. As others have pointed out, OpenGL has to perform a lot of per-vertex operations to determine whether a vertex is in the frustum. It must do this for every vertex you send it. In addition to the processing overhead that must take place, keep in mind that there is also overhead in transmitting those vertices from the CPU to the GPU. You want to avoid sending information to the GPU that it isn't going to use. Though the bandwidth between the CPU and GPU is quite good on modern hardware, there's still a limit.
What you want is a scene graph. Scene graphs are frequently implemented with some kind of spatial partitioning scheme, e.g. quadtrees, octrees, BSP trees, etc. Spatial partitioning allows you to intelligently determine which geometry is visible. Instead of doing this on a per-vertex basis (as OpenGL is forced to do), it can eliminate huge spatial subsets of geometry at a time. When rendering a complex scene, the performance savings can be enormous.