Transform shape built of contour splines to simple polygons - c

I've dumped glyphs from a TrueType file so I can play with them. They have shape contours that consist of quadratic Bézier curves and lines. I want to output triangles for such shapes so I can visualize them for the user.
Traditionally I might use libfreetype or scan-rasterise contours like these. But I want to produce extruded 3D meshes from the fonts and apply other distortions to them.
So, how do I polygonise shapes consisting of quadratic Bézier curves and lines? Many contours together form the shape. Some contours are additive and others are subtractive. The contours are never open; they always form a loop.
(Actually, I get only contour vertices from TTF glyphs; each vertex flags whether it is on the curve or not. Even though it is easy to decompose these into Bézier curves and lines, knowing the data is represented this way may be helpful for polygonizing the contours into triangles.)

This is simple. You need to implement boolean operations on your curves, then proceed by joining pairs of curves until you are left with a single curve.
First, you need to evaluate the curves and convert them to polylines (see the sketch after these steps).
Then you need to make sure that there is a vertex at every place where two contours intersect (this part can actually be tricky due to numeric errors; you can use the Bentley-Ottmann algorithm).
Finally, all you need to do is to traverse the curves and connect them in the correct order to perform the boolean operation, producing weakly simple polygons.
Such polygons can be triangulated using e.g. the ear clipping algorithm (which is slow, but rather simple to implement).
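For the evaluation step, here is a minimal sketch of flattening one quadratic Bézier segment into a polyline by uniform sampling. All the names are my own, not from the question; also note that in TrueType data, two consecutive off-curve points imply an on-curve point at their midpoint, so decompose each contour into (on, control, on) segments first:

    #include <vector>

    struct Vec2 { double x, y; };

    // Evaluate a quadratic Bezier curve at parameter t in [0, 1]:
    // B(t) = (1-t)^2 * p0 + 2t(1-t) * c + t^2 * p1
    static Vec2 quadBezier(Vec2 p0, Vec2 c, Vec2 p1, double t) {
        double u = 1.0 - t;
        return { u * u * p0.x + 2.0 * u * t * c.x + t * t * p1.x,
                 u * u * p0.y + 2.0 * u * t * c.y + t * t * p1.y };
    }

    // Flatten one curve segment into `steps` line segments, appending to a
    // polyline that is assumed to already contain p0.
    void flattenQuadBezier(Vec2 p0, Vec2 c, Vec2 p1, int steps,
                           std::vector<Vec2>& polyline) {
        for (int i = 1; i <= steps; ++i)
            polyline.push_back(quadBezier(p0, c, p1, double(i) / steps));
    }

Uniform sampling is the simplest choice; adaptive subdivision based on a flatness test gives fewer segments for the same visual quality.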
Hope this helps ...

Related

Maya export uvset from one model to another

Basically I want a way to export the UVs from one model to another as part of our pipeline, where rig and texture/lookdev models (created simultaneously) need to be merged.
I would like a solution other than importing the model into the scene and copying the UV set.
Something like an XML export.
Is there any way?
Thanks in advance
You can't really do this precisely unless the meshes are topologically identical. You can do a decent, but not perfect job with something like this:
for each triangle in the source model, derive a tangent-space matrix. That's the matrix which converts the world-space points of that triangle into the UV points of that face.
for each triangle in the target model, see if it matches one of the triangles in the source. "Match" will mean "has the same three corners", regardless of order (assuming you can fix up your normals in a separate step); see the sketch after these steps.
Where source triangles have exact matches in the target triangles, apply a planar projection that matches the tangent-space matrix.
where a target triangle doesn't have an exact match you'll have to guess; you can try finding the N closest matches and doing a weighted blend between all of their matrices, or something like that - but it will be hacky
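Here's a rough sketch of the order-independent corner matching; the epsilon, the Vec3 type and every name here are placeholders of mine, not Maya API calls:

    #include <algorithm>
    #include <array>
    #include <cmath>
    #include <cstdint>

    struct Vec3 { double x, y, z; };

    // Quantize a position so that nearly-coincident corners compare equal.
    // The epsilon is a placeholder; tune it to your scene's scale.
    static std::array<int64_t, 3> quantize(const Vec3& p, double eps = 1e-5) {
        return { (int64_t)std::llround(p.x / eps),
                 (int64_t)std::llround(p.y / eps),
                 (int64_t)std::llround(p.z / eps) };
    }

    // Order-independent key for a triangle: its three quantized corners, sorted.
    using TriKey = std::array<std::array<int64_t, 3>, 3>;

    static TriKey makeKey(const Vec3& a, const Vec3& b, const Vec3& c) {
        TriKey k = { quantize(a), quantize(b), quantize(c) };
        std::sort(k.begin(), k.end());
        return k;
    }

Build a std::map<TriKey, int> from the source triangles, then look up each target triangle's key; a hit tells you which source tangent-space matrix to use for the planar projection.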
This should be visually pretty close to the source, but all of the UV triangles will be disconnected; you'll need to merge coincident UVs to prevent seams.
It's a pretty non-trivial project, unfortunately.

Nurbs trimmed surface

I am reading NURBS surfaces from a STEP file, as well as their boundary curves. Now I want to tessellate those surfaces.
Every algorithm I have read about talks about boundary curves in parametric space: a curve with a parameter t that maps onto a 2D coordinate (u,v), the parametric coordinates of the surface.
The problem is that in the STEP file I have the boundary curves defined in world space. My question is: is there an efficient way to transform a curve on a surface from world space to parametric space?
The only way I can think of is to generate lots of points from that curve and then fit a new curve in parametric space, but I guess that there is a more efficient way to do this, knowing that the curve lies on the surface.
Thanks
If the 3D boundary curves are exactly the 3D mapping of the 2D boundary curves in the parametric domain (u,v), then perhaps there is a better way to compute these 2D boundary curves from the given 3D boundary curve. However, very often this is not the case. For a bi-cubic surface, the exact 3D boundary curve mapped from a 2D boundary curve of degree 3 is of degree 18. So, it is unlikely for any CAD software to represent these 3D boundary curves in exact form. Most of the time, they are just approximations that lie close enough to the surface within a certain tolerance.

So, if you do not have the information for the 2D boundary curves, in general you do need to do curve fitting in the parametric domain. The procedure would be to sample points from the 3D curve, project them onto the surface to find corresponding (u,v) values, then do curve fitting through these (u,v) values. Of course, there are special cases where you can use simplified algorithms, for example when the 3D curve matches an isoparametric curve of the surface.
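The projection step is usually done with Newton-style iteration. A minimal Gauss-Newton sketch, assuming you already have an evaluator S(u,v) for the surface and its partial derivatives Su, Sv (all names here are mine):

    #include <cmath>

    struct Vec3 { double x, y, z; };
    static Vec3 sub(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
    static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

    // Assumed to exist elsewhere: surface evaluator and its partial derivatives.
    Vec3 S(double u, double v);    // position at (u, v)
    Vec3 Su(double u, double v);   // dS/du
    Vec3 Sv(double u, double v);   // dS/dv

    // Gauss-Newton projection of point P onto the surface, refining an
    // initial guess (u, v) until the step size becomes negligible.
    void projectToSurface(Vec3 P, double& u, double& v, int maxIter = 20) {
        for (int i = 0; i < maxIter; ++i) {
            Vec3 r = sub(S(u, v), P);              // residual S(u,v) - P
            Vec3 su = Su(u, v), sv = Sv(u, v);
            // Solve the 2x2 normal equations (J^T J) d = -J^T r, with J = [su sv]
            double a = dot(su, su), b = dot(su, sv), c = dot(sv, sv);
            double g0 = -dot(su, r), g1 = -dot(sv, r);
            double det = a * c - b * b;
            if (std::fabs(det) < 1e-14) break;     // degenerate Jacobian; give up
            double du = (g0 * c - g1 * b) / det;
            double dv = (a * g1 - b * g0) / det;
            u += du; v += dv;
            if (std::fabs(du) + std::fabs(dv) < 1e-12) break;
        }
    }

A good initial guess (e.g. the (u,v) of the nearest point on a coarse sample grid) matters a lot; without it, the iteration can converge to the wrong spot on the surface.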

Occlusion culling 3D transformed 2D rectangles?

So, to start off, I'm not very good at computer graphics. I'm trying to implement a GUI toolkit where one of the features is being able to apply 3D transformations to 2D "layers". (A layer has only one Z coordinate; pre-transform, it's a two-dimensional axis-aligned rectangle.)
Now, this is pretty straightforward, until you come to 3D transformations that would push part of the layer back, requiring the layer to be split into several polygons in order to render it correctly, as illustrated here. And because we can have transparency, layers may not be completely occluded while still needing to be split.
So here is an illustration depicting the issue and the desired outcome. In this scenario, the blue layer (call it B) is on top of the red layer (R) while having the same Z position (but B was added after R). In this scenario, if we rotate B, its top two points will get a Z coordinate lower than 0 while the bottom points will get one higher than 0 (with the anchor line being the only part left at 0).
Can somebody suggest a good way of doing this on the CPU? I've struggled to find a suitable algorithm implementation (in C++ or C) that would be appropriate to this scenario.
Edit: To clarify, at this stage in the pipeline there is no rendering yet. We just need to produce a set of polygons for each layer that represent the layer's transformed and occluded geometry. Rendering (either software or hardware) is then done only if required, which is not always the case (for example, when doing hit testing).
Edit 2: I looked at binary space partitioning as an option for achieving this, but I have only been able to find one implementation (in GL2PS), and I'm not sure how to use it. I do have a vague understanding of how BSPs work, but I'm not sure how they can be used for occlusion culling.
Edit 3: I'm not trying to do colour and transparency blending at this stage. Just pure geometry. Transparency can be handled by the renderer, and overdraw is okay. In this case, the blue polygon can just be drawn under the red one, but with more complicated cases, depth sorting or even splitting up the polygons may be required (an example of a scary case like that is below). Although the viewport is fixed, because all layers can be transformed in 3D, creating a shape like the one shown below is possible.
So what I'm really looking for is an algorithm that would geometrically split layer B into two blue shapes, one of which would be drawn "above" and one of which would be drawn "below" R. The part "below" would get overdraw, yes, but that's not a major issue. So B just needs to be split into two polygons so that it appears to cut through R when those polygons are drawn in order. No need to worry about blending.
Edit 4: For the purpose of this, we cannot render anything at all. This all has to be done purely geometrically (producing 2D polygons). This is what I was originally getting at.
Edit 5: I should note that the overall number of quads per subscene is around 30 (average). Definitely won't go above 100. Unless the layers are 3D transformed (which is where this problem arises), they are just radix sorted by Z positions before being drawn. Layers with the same Z position are drawn in order in which they were added (first in, first out).
Sorry if I didn't make it clear in the original question.
If you "aren't good with computer graphics", Doing it on CPU (software rendering) will be extremely difficult for you, if polygons can be transparent.
The easiest way to do it is to use GPU rendering (OpenGL/Direct3D) with the depth peeling technique.
CPU solutions:
Solution #1 (extremely difficult):
(I forgot the name of this algorithm.)
You need to split polygon B in two, for example using polygon A as a clip plane, then render the result using the painter's algorithm.
To do that you'll need to change your rendering routines so they no longer use quads but textured polygons, plus you'll have to write/debug clipping routines that split the triangles present in the scene in such a way that they no longer break the painter's algorithm.
Big problem: if you have many polygons, this solution can potentially split the scene into an enormous number of triangles. Also, writing texture-rendering code yourself isn't much fun, so it is advised to use OpenGL/Direct3D.
This can be extremely difficult to get right. I think this method was discussed in "Computer Graphics Using OpenGL, 2nd edition" by Francis S. Hill, somewhere in one of the exercises.
Also check the Wikipedia article on Hidden Surface Removal.
Solution #2 (simpler):
You need to implement a multi-layered z-buffer that stores up to N transparent pixels per location, along with their depths.
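As a rough illustration of what such a structure could look like (an A-buffer-style per-pixel fragment list; the N-fragment cap mentioned above is left out for brevity, and all names are mine):

    #include <algorithm>
    #include <vector>

    // One stored fragment: depth plus premultiplied-alpha color.
    struct Fragment { float depth, r, g, b, a; };

    // Per-pixel list of transparent fragments.
    struct PixelLayers {
        std::vector<Fragment> frags;

        // Resolve by sorting back to front and applying the "over" blend.
        void resolve(float out[4]) const {
            std::vector<Fragment> sorted = frags;
            std::sort(sorted.begin(), sorted.end(),
                      [](const Fragment& x, const Fragment& y) {
                          return x.depth > y.depth;   // farthest first
                      });
            out[0] = out[1] = out[2] = out[3] = 0.0f;
            for (const Fragment& f : sorted) {
                // Each nearer fragment is composited over what is behind it.
                out[0] = f.r + out[0] * (1.0f - f.a);
                out[1] = f.g + out[1] * (1.0f - f.a);
                out[2] = f.b + out[2] * (1.0f - f.a);
                out[3] = f.a + out[3] * (1.0f - f.a);
            }
        }
    };

The resolve step is the painter's algorithm applied per pixel, which is exactly why the sort is always well-defined here.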
Solution #3 (computationally expensive):
Just use ray tracing. You'll get a perfect rendering result (none of the limitations of depth peeling or of CPU solution #2), but it'll be computationally expensive, so you'll need to optimize your rendering routines a lot.
Bottom line:
If you're performing software rendering, use Solution #2 or #3. If you're rendering on hardware, use a technique similar to depth peeling, or implement ray tracing on the hardware.
--edit-1--
The required knowledge for implementing #1 and #3 is line-plane intersection. If you understand how to split a line (in 3D space) in two using a plane, you can implement ray tracing or clipping easily.
The required knowledge for #2 is textured 3D triangle rendering (as an algorithm). It is a fairly complex topic.
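For the line-plane part, a minimal sketch of splitting a convex polygon (e.g. a triangle) into its front and back parts with respect to a plane; the types and names are my own:

    #include <cstddef>
    #include <vector>

    struct Vec3 { double x, y, z; };
    struct Plane { Vec3 n; double d; };   // points p satisfy dot(n, p) + d = 0

    static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
    static Vec3 lerp(Vec3 a, Vec3 b, double t) {
        return { a.x + t * (b.x - a.x), a.y + t * (b.y - a.y), a.z + t * (b.z - a.z) };
    }

    // Split a convex polygon into its parts in front of and behind a plane.
    // Vertices lying (almost) on the plane are emitted to both sides.
    void splitPolygon(const std::vector<Vec3>& poly, const Plane& pl,
                      std::vector<Vec3>& front, std::vector<Vec3>& back) {
        const double eps = 1e-9;
        for (std::size_t i = 0; i < poly.size(); ++i) {
            const Vec3& a = poly[i];
            const Vec3& b = poly[(i + 1) % poly.size()];
            double da = dot(pl.n, a) + pl.d;     // signed distances to the plane
            double db = dot(pl.n, b) + pl.d;
            if (da >= -eps) front.push_back(a);
            if (da <= eps)  back.push_back(a);
            // The edge crosses the plane: emit the intersection to both sides.
            if ((da > eps && db < -eps) || (da < -eps && db > eps)) {
                double t = da / (da - db);       // the line-plane intersection
                Vec3 p = lerp(a, b, t);
                front.push_back(p);
                back.push_back(p);
            }
        }
    }

The same t = da / (da - db) formula is the line-plane intersection you would use in a ray tracer.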
To implement the GPU solution, you need to find a few OpenGL tutorials that deal with shaders.
--edit-2--
Transparency is relevant because, in order to get transparency right, you need to draw polygons from back to front (from farthest to closest) using the painter's algorithm. Sorting polygons properly is impossible in certain situations, so they must be split, or you should use one of the listed techniques; otherwise, in certain situations there will be artifacts/incorrectly rendered images.
If there's no transparency, you can implement a standard z-buffer or draw using hardware OpenGL, which is a very trivial task.
--edit-3--
"I should note that the overall number of quads per subscene is around 30 (average). Definitely won't go above 100."
If you split polygons, the count can easily go way above 100.
It might be possible to position the polygons in such a way that each polygon splits all the others.
Now, 2^29 is 536870912; however, it is not possible to split one surface with a plane in such a way that each split doubles the number of polygons. If one polygon is split 29 times, you'll get 30 polygons in the best-case scenario, and probably several thousand in the worst case if the splitting planes aren't parallel.
Here's a rough algorithm outline that should work:
1. Prepare a list of all triangles in the scene.
2. Remove back-facing triangles.
3. Find all triangles that intersect each other in 3D space, and split them along their lines of intersection.
4. Compute screen-space coordinates for all vertices of all triangles.
5. Sort by depth for the painter's algorithm.
6. Prepare an extra list for new primitives.
7. Find triangles that overlap in 2D (post-projection) screen space.
8. For all overlapping triangles, check their rendering order. Basically, a triangle that is going to be rendered "below" another triangle should have no part that is above that triangle.
8.1. To do that, use the camera origin point and the triangle edges to split the original triangles into several sub-regions, then check whether the regions conform to the established sort order (prepared for the painter's algorithm). The regions are created by splitting the existing pair of triangles using the 6 clip planes formed by the camera origin point and the triangle edges (a sketch of building such a plane follows this list).
8.2. If all regions conform to the rendering order, leave the triangles alone. If they don't, remove the triangles from the list and add them to the "new primitives" list.
9. If there are any primitives in the new-primitives list, merge that list with the triangle list and go back to step 5.
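For step 8.1, constructing one of those clip planes is just a plane through three points. Reusing the Vec3, Plane and dot helpers from the split sketch above:

    // Reusing Vec3, Plane and dot from the split sketch above.
    static Vec3 sub(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
    static Vec3 cross(Vec3 a, Vec3 b) {
        return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
    }

    // Plane through the camera origin `eye` and a triangle edge (e0, e1).
    // Splitting an overlapping pair of triangles by the 6 such planes yields
    // the sub-regions whose painter's order can be verified independently.
    static Plane edgePlane(Vec3 eye, Vec3 e0, Vec3 e1) {
        Vec3 n = cross(sub(e0, eye), sub(e1, eye));
        return { n, -dot(n, eye) };
    }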
By looking at that algorithm, you can easily understand why everybody uses the Z-buffer nowadays.
Come to think of it, that's a good training exercise for universities that specialize in CG. The kind of exercise that might make your students hate you.
I am going to come out and give the simpler solution, which may not fit your problem: why not just change your artwork to prevent this problem from occurring?
In problem 1, just divide the polys in Maya or whatever beforehand. For the three-lines problem, again, divide your polys at the intersections to prevent fighting. Pre-computed solutions will always run faster than on-the-fly ones, especially on limited hardware. From professional experience, I can say that it also scales - well, it scales OK. It just requires some tweaking on the art side and technical reviews to make sure nothing is created "illegally". You may end up with more polys doing it this way than splitting on the fly, but at least you won't have to do a ton of math on CPUs that may not be up to the task.
If you do not have control over the artwork pipeline, this won't work, as writing some sort of converter would take longer than getting a BSP subdivision routine up and running. Still, KISS is often the best solution.

does glRotate in OpenGL rotate the camera or rotate the world axis or rotate the model object?

I want to know whether glRotate rotates the camera, the world axis, or the object. Explain how they are different with examples.
the camera
There is no camera in OpenGL.
the world axis
There is no world in OpenGL.
or the object.
There are no objects in OpenGL.
Confused?
OpenGL is a drawing system that operates on points, lines and triangles. There is no concept of a scene or a world in OpenGL. All there is are vertices, each of which has a set of attributes, and the state of OpenGL, which determines how vertices are turned into pixels.
The very first stage of this process is getting the vertex positions within the viewport. In the fixed-function pipeline (i.e. without shaders), each vertex position is first multiplied by the so-called "modelview" matrix; the intermediate result is used for lighting calculations and then multiplied by the "projection" matrix. After that, clipping and then normalization into viewport coordinates are applied.
Those two matrices I mentioned serve two purposes. The first one, the "modelview", is used to apply some transformation to the incoming vertices so that they end up in the desired spot relative to the origin. There is no difference between first moving geometry to some place in the world and then moving the viewpoint within the world, or keeping the viewpoint at the origin and moving the whole world in the opposite direction. All of this can be described by the modelview matrix.
The second one, the "projection" matrix, works together with the normalization process to behave like a kind of "lens", so to speak. With this you set the field of view (and a few other parameters, like shift, which you need for certain applications – don't worry about it).
The interesting thing about matrices is that they're non-commutative, i.e. for two given matrices M, N:
M * N != N * M   (for most M, N)
This ultimately means that you can compose a series of transformations A, B, C, D, ... into one single compound transformation matrix T by multiplying the primitive transformations onto each other in the right order.
The OpenGL matrix manipulation functions (which are obsolete, BTW) do just that. You have a matrix selected for manipulation (the matrix mode), for example the modelview M. Then glRotate effectively does this:
M *= R(angle,axis)
i.e. the active matrix gets multiplied by a rotation matrix constructed from the given parameters. Similarly for scale and translate.
Whether this appears to behave like moving a camera or like placing an object depends entirely on how and in which order those manipulations are combined.
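A minimal fixed-function sketch of exactly that; drawModel() is an assumed helper that issues the geometry, and the angles and distances are arbitrary:

    #include <GL/gl.h>

    void drawModel();   // assumed helper that issues the vertices

    // Case 1: translate first, then rotate. The modelview becomes T * R,
    // so the rotation acts in the model's own frame: the object spins in
    // place, 5 units in front of the viewer.
    void objectSpinsInPlace(float angle) {
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
        glTranslatef(0.0f, 0.0f, -5.0f);
        glRotatef(angle, 0.0f, 1.0f, 0.0f);
        drawModel();
    }

    // Case 2: rotate first, then translate. The modelview becomes R * T,
    // so the already-placed object is rotated about the eye point: this
    // looks exactly like turning the "camera" the opposite way.
    void cameraAppearsToTurn(float angle) {
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
        glRotatef(angle, 0.0f, 1.0f, 0.0f);
        glTranslatef(0.0f, 0.0f, -5.0f);
        drawModel();
    }

Same glRotatef call in both cases; only the order of the manipulations changes the interpretation.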
But for OpenGL there are just numbers/vectors (vertex attributes), which somehow translate into 2-dimensional viewport coordinates that get drawn as points, or filled in between as lines or triangles.
glRotate works on the current matrix, so it depends on whether that matrix is the camera one or a world-transformation one. To learn more about the current matrix, have a look at glMatrixMode().
Finding examples is just a matter of googling: I found this one, which in my opinion should help you figure out what's happening.

Triangulation of polygon

I'm trying to triangulate a polygon for use in a 3D model. When I try using the ear method on a polygon with points as dotted below, I get triangles where the red lines are. Since there are no other points inside these triangles, this is probably correct. But I want it to triangulate the area inside the black lines only. Does anyone know of any algorithms that will do this?
There are many algorithms to triangulate a polygon that do not need partitioning into monotone polygons first. One is described in my textbook Computational Geometry in C, which has code associated with it that can be freely downloaded from that link (in C or in Java).
You must first have the points in order corresponding to a boundary traversal. My code assumes counterclockwise, but of course that is easy to change. See also the Wikipedia article. Perhaps that is your problem, that you don't have the boundary points consistently organized?
The usual approach would be to split your simple polygon into monotone polygons using trapezoidal decomposition and then triangulate the monotone polygons.
The first part can be achieved with a sweep-line algorithm, and speed-ups are possible with the right data structure (e.g. a doubly connected edge list). The best description of this that I know of can be found in Computational Geometry. This and this also seem helpful.
Wikipedia suggests that you break the polygon up into monotone polygons. You can check whether the polygon is convex by simply checking that all interior angles are less than 180 degrees; any corner with an angle over 180 degrees is a reflex (concave) corner, and you need to break the polygon at such corners.
If you can use C++, you can use CGAL, and in particular the example given here, which can triangulate a set of non-intersecting polygons. This example works only if you already know the black segments.
You need to use the ear clipping algorithm, not Delaunay. See the following white paper: http://www.geometrictools.com/Documentation/TriangulationByEarClipping.pdf
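For reference, a naive ear clipping sketch for a simple, counter-clockwise polygon (my own minimal version, not the white paper's; that paper also covers holes, which this does not handle):

    #include <array>
    #include <vector>

    struct Pt { double x, y; };

    // Twice the signed area of triangle (o, a, b); positive if CCW.
    static double cross(const Pt& o, const Pt& a, const Pt& b) {
        return (a.x - o.x) * (b.y - o.y) - (a.y - o.y) * (b.x - o.x);
    }

    // Point-in-triangle test for a CCW triangle (boundary counts as inside).
    static bool pointInTri(const Pt& p, const Pt& a, const Pt& b, const Pt& c) {
        return cross(a, b, p) >= 0 && cross(b, c, p) >= 0 && cross(c, a, p) >= 0;
    }

    // Naive ear clipping of a simple, counter-clockwise polygon.
    // Returns triangles as index triples into `pts`. O(n^3) worst case.
    std::vector<std::array<int, 3>> earClip(const std::vector<Pt>& pts) {
        std::vector<int> idx;
        for (int i = 0; i < (int)pts.size(); ++i) idx.push_back(i);
        std::vector<std::array<int, 3>> tris;
        while (idx.size() > 3) {
            bool clipped = false;
            for (std::size_t i = 0; i < idx.size(); ++i) {
                int ia = idx[(i + idx.size() - 1) % idx.size()];
                int ib = idx[i];
                int ic = idx[(i + 1) % idx.size()];
                const Pt &a = pts[ia], &b = pts[ib], &c = pts[ic];
                if (cross(a, b, c) <= 0) continue;   // reflex/degenerate corner
                bool isEar = true;                   // no other vertex inside?
                for (int j : idx)
                    if (j != ia && j != ib && j != ic &&
                        pointInTri(pts[j], a, b, c)) { isEar = false; break; }
                if (!isEar) continue;
                tris.push_back({ia, ib, ic});
                idx.erase(idx.begin() + i);          // clip the ear at vertex ib
                clipped = true;
                break;
            }
            if (!clipped) break;                     // non-simple input; bail out
        }
        if (idx.size() == 3) tris.push_back({idx[0], idx[1], idx[2]});
        return tris;
    }

Note that ear clipping only works if the vertices are ordered along the boundary, which matches the advice above about a consistent boundary traversal.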
